Exploratory Data Analysis: Gamma Spectroscopy in Python (Part 2)


In the first part, I did an exploratory data analysis of the gamma spectroscopy data. We saw that with a modern scintillation detector, we can not only tell that an object is radioactive; using a gamma spectrum, we can also tell why it is radioactive and which isotopes the object contains.

In this part, we will go further, and I’ll show how to make and train a machine learning model for detecting radioactive elements.

Before we start, an important warning. All data files collected for this article are available on Kaggle, and readers can train and test their ML models without having real hardware. If you want to test real objects, do it at your own risk. I did my tests with sources that can be legally found and purchased, like vintage uranium glass or old watches with radium dial paint. Please check your local laws and read the safety guidelines about handling radioactive materials. The sources used in this test are not seriously dangerous, but they should still be handled with care!

Now, let’s start! I’ll show how to collect the data, train the model, and run it using a Radiacode scintillation detector. For those readers who don’t have Radiacode hardware, a link to the data source is added at the end of the article.

Methodology

This article will contain several parts:

  1. I’ll briefly explain what a gamma spectrum is and how we can use it.
  2. We’ll collect the data for our ML model. I’ll show the code for collecting the spectra using the Radiacode device.
  3. We’ll train the model and check its accuracy.
  4. Finally, I’ll make an HTMX-based web frontend for the model, and we will see the results in real time.

Let’s get into it!

1. Gamma Spectrum

This is a short recap of the first part; for more details, I highly recommend reading it first.

Why is the gamma spectrum so interesting? Some objects around us can be slightly radioactive. The sources vary from the naturally occurring radiation of granite in buildings to the radium in some vintage watches or the thorium in modern thoriated tungsten rods. A Geiger counter only shows us the number of radioactive particles that were detected. A gamma spectrum shows us not only the number of particles but also their energies. This is an important difference: it turns out that different radioactive materials emit gamma rays with different energies, and every material has its own “footprint.”

As a first example, I bought this pendant in a Chinese shop:

Image by author

It was advertised as “ion-generating,” so I already suspected that the pendant could be slightly radioactive (ionizing radiation, as its name suggests, can produce ions). Indeed, as we can see on the meter screen, its radioactivity level is about 1.20 µSv/h, which is 12 times higher than the background (0.1 µSv/h). It is not crazy high and is comparable to the level on an airplane during a flight, but it is still statistically significant 😉

However, by only observing this value, we cannot tell why the object is radioactive. A gamma spectrum will show us which isotopes are inside the object:

Image by author

In this example, the pendant contains thorium-232, and the thorium decay chain produces radium and actinium. As we can see on the graph, the actinium-228 peak is clearly visible in the spectrum.

As a second example, let’s say we have found this piece of rock:

Image source Wikipedia

This is uraninite, a mineral that contains a lot of uranium dioxide. Such specimens can be found in some areas of Germany, the Czech Republic, or the US. If we buy it in a mineral shop, it probably has a label on it. But in the field, that’s often not the case 😉 With a gamma spectrum, we can see a picture like this:

Image by author

By comparing the peaks with known isotopes, we can tell that the rock contains uranium but, for example, not thorium.

A physical explanation of the gamma spectrum is also fascinating. As we can see on the graph below, gamma rays are actually photons and belong to the same spectrum as visible light:

Electromagnetic spectrum, Image source Wikipedia

When some people think that radioactive objects glow in the dark, it’s actually true! Every radioactive material is indeed glowing with its own unique “color,” but in a part of the spectrum that is far beyond what the human eye can see.

A second fascinating thing is that only 10-20 years ago, gamma spectroscopy was available only to institutions and big labs (in the best case, some used crystals of unknown quality could be found on eBay). Nowadays, due to advancements in electronics, a scintillation detector can be purchased for the price of a mid-range smartphone.

Now, let’s return to our project. As we can see from the two examples above, the spectra of different objects are different. Let’s create a machine learning model that can automatically detect various elements.

2. Collecting the Data

As readers can guess, our first challenge is collecting the samples. I’m not a nuclear institution, and I don’t have access to calibrated test sources like cesium or strontium. However, for our task, it is not required, and some materials can be legally found and purchased. For example, americium is still used in smoke detectors; radium was used for painting watch dials before the 1960s; uranium was widely used in glass manufacturing before the 1950s, and thoriated tungsten rods are still produced today and can be purchased from Amazon. Even natural uranium ore can be bought in mineral shops; however, it requires a bit more safety precautions. And a good thing about gamma spectroscopy is that we don’t need to disassemble or break the items, and the process is generally safe.

The second challenge is collecting the data. If you work in e-commerce, it’s usually not a problem, and every SQL request will return millions of records. Alas, in the “real world,” it can be much more difficult, especially if you want to make a database of radioactive materials. In our case, collecting every spectrum requires 10-20 minutes. For each test object, it would be nice to have at least 10 records. As we can see, the process can take hours, and having millions of records is not a practical option.

For getting the spectrum data, I will be using a Radiacode 103G scintillation detector and the open-source radiacode library.

Radiacode detector, Image by author

A gamma spectrum can be exported in XML format using the official Radiacode Android app, but the manual process is too slow and tedious. Instead, I created a Python script that collects the spectra using random time intervals:

import datetime
import json
import logging
import random
import time

from radiacode import RadiaCode, RawData, Spectrum


def read_forever(rc: RadiaCode):
    """ Read data from the device """
    while True:
        interval_sec = random.randint(10*60, 30*60)
        read_spectrum(rc, interval_sec)

def read_spectrum(rc: RadiaCode, interval: int):
    """ Read and save spectrum """
    rc.spectrum_reset()

    # Read
    dt = datetime.datetime.now()
    filename = dt.strftime("spectrum-%Y%m%d%H%M%S.json")
    logging.debug(f"Making spectrum for {interval // 60} min")

    # Wait
    t_start = time.monotonic()
    while time.monotonic() - t_start < interval:
        show_device_data(rc)
        time.sleep(0.4)

    # Save
    spectrum: Spectrum = rc.spectrum()
    spectrum_save(spectrum, filename)

def show_device_data(rc: RadiaCode):
    """ Get CPS (counts per second) values """
    data = rc.data_buf()
    for record in data:
        if isinstance(record, RawData):
            log_str = f"CPS: {int(record.count_rate)}"
            logging.debug(log_str)

def spectrum_save(spectrum: Spectrum, filename: str):
    """ Save  spectrum data to log """
    duration_sec = spectrum.duration.total_seconds()
    data = {
            "a0": spectrum.a0,
            "a1": spectrum.a1,
            "a2": spectrum.a2,
            "counts": spectrum.counts,
            "duration": duration_sec,
    }
    with open(filename, "w") as f_out:
        json.dump(data, f_out, indent=4)
        logging.debug(f"File '{filename}' saved")


rc = RadiaCode()
read_forever(rc)

Some error handling is omitted here for clarity. A link to the full source code can be found at the end of the article.

As we can see, I randomly select a time interval between 10 and 30 minutes, collect the gamma spectrum data, and save it to a JSON file. Now, I only need to place the Radiacode detector near the object and leave the script running for several hours. As a result, 10-20 JSON files will be saved. I also need to repeat the process for every sample I have. As a final output, 100-200 files can be collected. It’s still not millions, but as we will see, it’s enough for our task.

3. Training the Model

When the data from the previous step is ready, we can start training the model. As a reminder, all files are available on Kaggle, and readers are welcome to make their own models as well.

First, let’s preprocess the data and extract the features we want to use.

3.1 Data Load

When the data is collected, we should have some spectrum files saved in JSON format. An individual file looks like this:

{
    "a0": 24.524023056030273,
    "a1": 2.2699732780456543,
    "a2": 0.0004327862989157,
    "counts": [ 48, 52, , ..., 0, 35],
    "duration": 1364.0
}

Here, the “counts” array contains the actual spectrum data. Different detectors may have different formats; a Radiacode returns the data in the form of a 1024-channel array. The calibration constants [a0, a1, a2] allow us to convert a channel number into the energy in keV (kiloelectronvolt).
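For example, with the calibration values above, channel 100 corresponds to an energy of about 24.52 + 2.27·100 + 0.000433·100² ≈ 256 keV.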

First, let’s make a method to load the spectrum from a file:

import json
from dataclasses import dataclass

import numpy as np


@dataclass
class Spectrum:
    """ Radiation spectrum measurement data """

    duration: int
    a0: float
    a1: float
    a2: float
    counts: list[int]

    def channel_to_energy(self, ch: int) -> float:
        """ Convert channel number to the energy level """
        return self.a0 + self.a1 * ch + self.a2 * ch**2

    def energy_to_channel(self, e: float) -> int:
        """ Convert energy to the channel number (inverse E = a0 + a1*C + a2 C^2) """
        c = self.a0 - e
        return int(
            (np.sqrt(self.a1**2 - 4 * self.a2 * c) - self.a1) / (2 * self.a2)
        )


def load_spectrum_json(filename: str) -> Spectrum:
    """ Load spectrum from a json file """
    with open(filename) as f_in:
        data = json.load(f_in)
        return Spectrum(
            a0=data["a0"], a1=data["a1"], a2=data["a2"],
            counts=data["counts"],
            duration=int(data["duration"]),
        )

Now, we can draw it with Matplotlib:

from typing import Optional

import matplotlib.pyplot as plt

def draw_simple_spectrum(spectrum: Spectrum, title: Optional[str] = None):
    """ Draw spectrum obtained from the Radiacode """
    fig, ax = plt.subplots(figsize=(12, 3))
    ax.spines["top"].set_color("lightgray")
    ax.spines["right"].set_color("lightgray")
    counts = spectrum.counts
    energy = [spectrum.channel_to_energy(x) for x in range(len(counts))]
    # Bars
    ax.bar(energy, counts, width=3.0, label="Counts")
    # X values
    ticks_x = [
       spectrum.channel_to_energy(ch) for ch in range(0, len(counts), len(counts) // 20)
    ]
    labels_x = [f"{ch:.1f}" for ch in ticks_x]
    ax.set_xticks(ticks_x, labels=labels_x)
    ax.set_xlim(energy[0], energy[-1])
    plt.ylim(0, None)
    title_str = "Gamma-spectrum" if title is None else title
    ax.set_title(title_str)
    ax.set_xlabel("Energy, keV")
    plt.legend()
    fig.tight_layout()


sp = load_spectrum_json("thorium-20250617012217.json")
draw_simple_spectrum(sp)

The output looks like this:

Thorium spectrum, image by author

What can we see here?

As mentioned before, from a standard Geiger counter, we can get only the number of detected particles. It tells us if the object is radioactive or not, but nothing more. From a scintillation detector, we can get the number of particles grouped by their energies, which is practically a ready-to-use histogram! Radioactive decay itself is random, so the longer the collection time, the “smoother” the graph.

3.2 Data Transform

3.2.1 Normalization
Let’s take a look at the spectrum again:

Here, the data was collected for about 10 minutes, and the vertical axis contains the number of detected particles. This approach has an obvious problem: the number of particles is not a constant. It depends on both the collection time and the “strength” of the source. It means that we may get not 600 particles like on this graph, but 60 or 6000. We can also see that the data is a bit noisy. This is especially visible with a “weak” source and a short collection time.

To eliminate these issues, I decided to use a two-step pipeline. First, I applied a Savitzky-Golay filter to reduce the noise:

from scipy.signal import savgol_filter

def smooth_data(data: np.array) -> np.array:
    """ Apply 1D smoothing filter to the information array """
    window_size = 10
    data_out = savgol_filter(
        data,
        window_length=window_size,
        polyorder=2,
    )
    return np.clip(data_out, a_min=0, a_max=None)

It is especially useful for spectra with short collection times, where the peaks are not so well visible.

Second, I normalized the NumPy array to the 0..1 range by simply dividing its values by the maximum.

The final “normalize” method looks like this:

def normalize(spectrum: Spectrum) -> Spectrum:
    """ Normalize data to the vertical range of 0..1 """
    # Smooth data
    counts = np.array(spectrum.counts).astype(np.float64)
    counts = smooth_data(counts)

    # Normalize
    val_norm = counts.max()
    return Spectrum(
        duration=spectrum.duration,
        a0 = spectrum.a0,
        a1 = spectrum.a1,
        a2 = spectrum.a2,
        counts = counts/val_norm
    )

As a result, spectra from different sources now have the same scale:

Image by author

As we can also see, the difference between the two samples is quite visible.

3.2.2 Data Augmentation
Technically, we are ready to train the model. However, as we saw in the “Collecting the Data” part, the dataset is pretty small – I may have only 100-200 files in total. The solution is to augment the data by adding more synthetic samples.

As a simple approach, I decided to add some noise to the original spectra. But how much noise should we add? I selected the 680 keV channel as a reference value because this part of the spectrum has no interesting isotopes. Then I added noise with 50% of the amplitude of that channel. An np.clip call guarantees that the data values are not negative (a negative number of detected particles does not make physical sense).

def add_noise(spectrum: Spectrum) -> Spectrum:
    """ Add random noise to the spectrum """
    counts = np.array(spectrum.counts)    
    ch_empty = spectrum.energy_to_channel(680.0)
    val_norm = counts[ch_empty]

    ampl = val_norm / 2
    noise = np.random.normal(0, ampl, counts.shape)
    data_out = np.clip(counts + noise, a_min=0, a_max=None)
    return Spectrum(
        duration=spectrum.duration,
        a0 = spectrum.a0,
        a1 = spectrum.a1,
        a2 = spectrum.a2,
        counts = data_out
    )

sp = load_spectrum_json("thorium-20250617012217.json")
sp = add_noise(normalize(sp))
draw_simple_spectrum(sp)

The output looks like this:

Image by author

As we can see, the noise level is not that big, so it doesn’t distort the peaks. At the same time, it adds some diversity to the data.

A more sophisticated approach can also be used. For example, some radioactive minerals contain thorium, uranium, or potassium in different proportions. It would be possible to combine spectra of existing samples to get some “new” ones.
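As an illustration only, here is a minimal sketch of such mixing (mix_spectra is a hypothetical helper; it assumes both spectra are already normalized and were recorded with the same calibration constants):

def mix_spectra(sp1: Spectrum, sp2: Spectrum, ratio: float) -> Spectrum:
    """ Blend two normalized spectra into a synthetic "new" sample """
    # Weighted sum of the channel counts; both arrays are 1024 channels long
    counts = ratio * np.array(sp1.counts) + (1.0 - ratio) * np.array(sp2.counts)
    return Spectrum(
        duration=sp1.duration,
        a0=sp1.a0, a1=sp1.a1, a2=sp1.a2,
        counts=counts / counts.max(),  # re-normalize back to 0..1
    )

# Hypothetical usage: a "mineral" that is 70% thorium and 30% uranium
# sp_mix = mix_spectra(sp_thorium, sp_uranium, ratio=0.7)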

3.2.3 Feature Extraction
Technically, we can use all 1024 values “as is” as an input for our ML model. However, this approach has two problems:

  • First, it’s redundant – we’re mostly interested only in particular isotopes. For example, on the last graph, there is a clearly visible peak at 238 keV, which belongs to Lead-212, and a less visible peak at 338 keV, which belongs to Actinium-228.
  • Second, it’s device-specific. I want the model to be universal. Using only the energies of the selected isotopes as input allows us to use any gamma spectrometer model.

Finally, I created this list of isotopes:

isotopes = [ 
    # Americium
    ("Am-241", 59.5),
    # Potassium
    ("K-40", 1460.0),
    # Radium
    ("Ra-226", 186.2),
    ("Pb-214", 242.0),
    ("Pb-214", 295.2),
    ("Pb-214", 351.9),
    ("Bi-214", 609.3),
    ("Bi-214", 1120.3),
    ("Bi-214", 1764.5),
    # Thorium
    ("Pb-212", 238.6),
    ("Ac-228", 338.2),
    ("TI-208", 583.2),
    ("AC-228", 911.2),
    ("AC-228", 969.0),
    # Uranium
    ("Th-234", 63.3),
    ("Th-231", 84.2),
    ("Th-234", 92.4),
    ("Th-234", 92.8),
    ("U-235", 143.8),
    ("U-235", 185.7),
    ("U-235", 205.3),
    ("Pa-234m", 766.4),
    ("Pa-234m", 1000.9),
]

def isotopes_save(filename: str):
    """ Save isotopes list to a file """
    with open(filename, "w") as f_out:
        json.dump(isotopes, f_out)

Only spectrum values for these isotopes will be used as input for the model. I also created a method to save the list into a JSON file – it will be needed to load the model later. Some isotopes, like Uranium-235, may be present in minuscule amounts and not be practically detectable. Readers are welcome to improve the list on their own.
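The matching isotopes_load method is used later in the article to restore this list; a minimal sketch, assuming the JSON layout written by isotopes_save, could look like this:

def isotopes_load(filename: str) -> list:
    """ Load the isotopes list saved by isotopes_save """
    with open(filename) as f_in:
        # JSON stores tuples as lists, so convert them back
        return [(name, energy) for name, energy in json.load(f_in)]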

Now, let’s create a method that converts a Radiacode spectrum to a list of features:

from typing import List

def get_features(spectrum: Spectrum, isotopes: List) -> np.array:
    """ Extract features from the spectrum """
    energies = [energy for _, energy in isotopes]
    data = [spectrum.counts[spectrum.energy_to_channel(energy)] for energy in energies]
    return np.array(data)

Practically, we converted a list of 1024 values into a NumPy array with only 23 elements, which is a good size reduction!
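As a quick sanity check, we can run the extraction on the thorium file from the earlier example:

sp = normalize(load_spectrum_json("thorium-20250617012217.json"))
features = get_features(sp, isotopes)
print(features.shape)
#> (23,)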

3.3 Training

Finally, we are ready to train the ML model.

First, let’s combine all files into one dataset. Practically, it depends on the samples you have and may look like this:

import glob
from typing import Tuple

all_files = [
    ("Americium", glob.glob("../data/train/americium*.json")),
    ("Radium", glob.glob("../data/train/radium*.json")),
    ("Thorium", glob.glob("../data/train/thorium*.json")),
    ("Uranium Glass", glob.glob("../data/train/uraniumGlass*.json")),
    ("Uranium Glaze", glob.glob("../data/train/uraniumGlaze*.json")),
    ("Uraninite", glob.glob("../data/train/uraninite*.json")),
    ("Background", glob.glob("../data/train/background*.json")),
]

def prepare_data(augmentation: int) -> Tuple[np.array, np.array]:
    """ Prepare data for training """
    x, y = [], []
    for name, files in all_files:
        for filename in files:
            print(f"Processing {filename}...")
            sp = normalize(load_spectrum_json(filename))
            for _ in range(augmentation):
                sp_out = add_noise(sp)
                x.append(get_features(sp_out, isotopes))
                y.append(name)

    return np.array(x), np.array(y)


X_train, y_train = prepare_data(augmentation=10)

As we can see, our y-values contain names like “Americium.” I will use a LabelEncoder to convert them into numeric values:

from sklearn.preprocessing import LabelEncoder


le = LabelEncoder()
le.fit(y_train)
y_train = le.transform(y_train)

print("X_train:", X_train.shape)
#> (1900, 23)

print("y_train:", y_train.shape)
#> (1900,)

I decided to use an open-source XGBoost model, which is based on gradient tree boosting (original paper link). I will also use GridSearchCV to find the optimal parameters:

from xgboost import XGBClassifier
from sklearn.model_selection import GridSearchCV


bst = XGBClassifier(n_estimators=10, max_depth=2, learning_rate=1)
clf = GridSearchCV(
    bst,
    {
        "max_depth": [1, 2, 3, 4],
        "n_estimators": range(2, 20),
        "learning_rate": [0.001, 0.01, 0.1, 1.0, 10.0]
    },
    verbose=1,
    n_jobs=1,
    cv=3,
)
clf.fit(X_train, y_train)

print("best_score:", clf.best_score_)
#> best_score: 0.99474

print("best_params:", clf.best_params_)
#> best_params: {'learning_rate': 1.0, 'max_depth': 1, 'n_estimators': 9}

Last but not least, I need to save the trained model:

isotopes_save("../models/V1/isotopes.json")
# GridSearchCV fits a clone of the estimator, so save the best one found
clf.best_estimator_.save_model("../models/V1/XGBClassifier.json")
np.save("../models/V1/LabelEncoder.npy", le.classes_)

Obviously, we need not only the model itself but also the list of isotopes and the labels. If we change something, the data will not match anymore, and the model will produce garbage, so model versioning is our friend!

To verify the results, I need data that the model didn’t “see” before. I had already collected several XML files using the Radiacode Android app, and just for fun, I decided to use them for testing.

First, I created a method to load the data:

import xmltodict

def load_spectrum_xml(file_path: str) -> Spectrum:
    """ Load the spectrum from a Radiacode Android app file """
    with open(file_path) as f_in:
        doc = xmltodict.parse(f_in.read())
        result = doc["ResultDataFile"]["ResultDataList"]["ResultData"]
        spectrum = result["EnergySpectrum"]
        cal = spectrum["EnergyCalibration"]["Coefficients"]["Coefficient"]
        a0, a1, a2 = float(cal[0]), float(cal[1]), float(cal[2])
        duration = int(spectrum["MeasurementTime"])
        data = spectrum["Spectrum"]["DataPoint"]
        return Spectrum(
            duration=duration,
            a0=a0, a1=a1, a2=a2,
            counts=[int(x) for x in data],
        )

It has the same spectrum values that I used in the JSON files, with some extra data that is not required for our task.

Practically, this is an example of data collection. This Victorian creamer from the 1890s is 130 years old, and trust me, you cannot get this data by using an SQL request 🙂

Image by author

This uranium glass is slightly radioactive (the background level is about 0.08 µSv/h), but it’s at a safe level and cannot do any harm.

The test code itself is simple:

# Load model
bst = XGBClassifier()
bst.load_model("../models/V1/XGBClassifier.json")
isotopes = isotopes_load("../models/V1/isotopes.json")
le = LabelEncoder()
le.classes_ = np.load("../models/V1/LabelEncoder.npy")

# Load data
test_data = [
    ["../data/test/background1.xml", "../data/test/background2.xml"],
    ["../data/test/thorium1.xml", "../data/test/thorium2.xml"],
    ["../data/test/uraniumGlass1.xml", "../data/test/uraniumGlass2.xml"],
    ...
]

# Predict
for group in test_data:
    data = []
    for filename in group:
        spectrum = load_spectrum_xml(filename)
        features = get_features(normalize(spectrum), isotopes)
        data.append(features)

    X_test = np.array(data)
    preds = bst.predict(X_test)
    preds = le.inverse_transform(preds)
    print(preds)

#> ['Background' 'Background']
#> ['Thorium' 'Thorium']
#> ['Uranium Glass' 'Uranium Glass']
#> ...

Here, I also grouped the values from different samples and used batch prediction.

As we can see, all results are correct. I was also going to make a confusion matrix, but at least for my relatively small number of samples, all objects were detected properly.
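For readers who collect more test files, a minimal confusion-matrix sketch using scikit-learn could look like this (the y_true labels here are hypothetical and must match the order of the groups in test_data):

from sklearn.metrics import confusion_matrix

# Expected label for every file, in the same order as the flattened test_data
y_true = ["Background", "Background", "Thorium", "Thorium",
          "Uranium Glass", "Uranium Glass"]

y_pred = []
for group in test_data:
    features = [
        get_features(normalize(load_spectrum_xml(f)), isotopes) for f in group
    ]
    y_pred.extend(le.inverse_transform(bst.predict(np.array(features))))

print(confusion_matrix(y_true, y_pred, labels=le.classes_))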

4. Testing

As a final part of this article, let’s use the model in real time with a Radiacode device.

The code is almost the same as at the beginning of the article, so I’ll show only the important parts. Using the radiacode library, I connect to the device, read the spectra once per minute, and use these values to predict the isotopes:

import logging
import time

import numpy as np
from radiacode import RadiaCode, RealTimeData, Spectrum
from sklearn.preprocessing import LabelEncoder
from xgboost import XGBClassifier


le = LabelEncoder()
le.classes_ = np.load("../models/V1/LabelEncoder.npy")
isotopes = isotopes_load("../models/V1/isotopes.json")
bst = XGBClassifier()
bst.load_model("../models/V1/XGBClassifier.json")


def read_spectrum(rc: RadiaCode):
    """ Read spectrum data """
    spectrum: Spectrum = rc.spectrum()
    logging.debug(f"Spectrum: {spectrum.duration} collection time")
    result = predict_spectrum(spectrum)
    logging.debug(f"Predict: {result}")

def predict_spectrum(sp: Spectrum) -> str:
    """ Predict the isotope from a spectrum """
    features = get_features(normalize(sp), isotopes)
    preds = bst.predict([features])
    return le.inverse_transform(preds)[0]

def read_cps(rc: RadiaCode):
    """ Read CPS (counts per second) values """
    data = rc.data_buf()
    for record in data:
        if isinstance(record, RealTimeData):
             logging.debug(f"CPS: {record.count_rate:.2f}")


if __name__ == '__main__':
    logging.basicConfig(
        level=logging.DEBUG, format="[%(asctime)-15s] %(message)s",
        datefmt="%Y-%m-%d %H:%M:%S"
    )

    rc = RadiaCode()
    logging.debug(f"ML model loaded")
    fw_version = rc.fw_version()
    logging.debug(f"Device connected:, firmware {fw_version[1]}")
    rc.spectrum_reset()
    while True:
        for _ in range(12):
            read_cps(rc)
            time.sleep(5.0)

        read_spectrum(rc)

Here, I read the CPS (counts per second) values from the Radiacode every 5 seconds, just to ensure that the device works. Every minute, I read the spectrum and use it with the model.

Before running the app, I placed the Radiacode detector near the object:

Image by author

This vintage watch was made in the 1950s, and it has radium paint on the digits. Its radiation level is about 5 times the background, but it is still within a safe level (and it is actually 2 times lower than what everyone gets on an airplane during a flight).

Now, we can run the code and see the results in real time:

As we can see, the model’s prediction is correct.

Readers who don’t have Radiacode hardware can use the raw log files to replay the data. The link is added at the end of the article.

Conclusion

In this article, I explained the process of making a machine learning model for predicting radioactive isotopes. I also tested the model with some radioactive samples that can be legally purchased.

I also made an interactive HTMX frontend for the model, but this article is already too long. If there is public interest in this topic, it will be published in the next part.

As for the model itself, there are several ways to improve it:

  • Adding more data samples and isotopes. I’m not a nuclear institution, and my choice (from not only financial or legal perspectives, but also considering the free space in my apartment) is limited. Readers who have access to other isotopes and minerals are welcome to share their data, and I will try to add it to the model.
  • Adding more features. In this model, I normalized all spectra, and it works well. However, this way, we lose the information about the radioactivity level of the objects. For example, uranium glass has a much lower radiation level compared to uranium ore. To distinguish these objects more effectively, we can add the radioactivity level as an additional model feature (see the sketch after this list).
  • Testing other model types. It looks promising to use a vector search to find the closest embeddings. It can also be more interpretable, and the model can show several of the closest isotopes. A library like FAISS can be useful for that. Another way is to use a deep learning model, which can also be interesting to test.
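As a rough sketch of the second idea, the overall count rate could be appended to the feature vector before training (get_features_v2 is a hypothetical helper, not part of the model trained above):

def get_features_v2(spectrum: Spectrum, isotopes: List) -> np.array:
    """ Isotope peak values plus the total count rate as an extra feature """
    # The count rate must be computed from the raw counts, before normalization
    cps = np.sum(spectrum.counts) / spectrum.duration
    peaks = get_features(normalize(spectrum), isotopes)
    return np.append(peaks, cps)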

In this article, I used a Radiacode radiation detector. It’s a nice device that allows making some interesting experiments (disclaimer: I don’t have any profit or other commercial interest from its sales). For those readers who don’t have Radiacode hardware, all collected data is freely available on Kaggle.

The full source code for this article is available on my Patreon page. This support helps me to buy equipment and electronics for future tests. Readers are also welcome to connect via LinkedIn, where I periodically publish smaller posts that are not big enough for a full article.

Thanks for reading.
