Adjusting accelerators with help from machine learning

A computer-generated image based on a generative diffusion process shows 2D projections of a particle accelerator beam. The process starts from pure noise, and signals from the accelerator adaptively guide it, so each successive version is a little clearer. Credit: Alexander Scheinker, Los Alamos National Laboratory

Banks of computer screens stacked two and three high line the walls. The screens are covered with numbers and graphs that are unintelligible to an untrained eye. But they tell a story to the operators staffing the particle accelerator control room. The numbers describe how the accelerator is speeding up tiny particles to smash into targets or other particles.

However, even the best operator can't fully track the minuscule shifts over time that affect the accelerator's machinery. Scientists are investigating how to use computers to make the tiny adjustments necessary to keep particle accelerators running at their best.

Researchers use accelerators to better understand materials and the particles that make them up. Chemists and biologists use them to study ultra-fast processes like photosynthesis. Nuclear and high energy physicists smash together protons and other particles to learn more about the building blocks of our universe.

Compact accelerators can be particularly useful for broader applications in society. Medical scientists and doctors use accelerators in cancer therapy, while manufacturers use them to produce semiconductors for electronics. Other applications include sterilizing medical devices, analyzing historical artifacts, and hardening lightweight materials for cars.

Unfortunately, the performance of particle accelerators is prone to drifting over time. They have hundreds of thousands of components. Some of these components are incredibly complex. Influences from outside, like vibrations and temperature changes, can affect how the machinery functions.

As various parts shift, they have a domino effect on the components downstream of them. By the time the accelerator produces the particle beam, tiny shifts may have added up to a significant change, much as individual cars slowing down can lead to a traffic jam. Over time, the beam becomes less precise and less useful.

To fix this issue, operators need to "retune" accelerators back to their optimum parameters. These retuning periods limit how much time the accelerators are available to scientists. In addition, technicians can't adjust the accelerators in real time while scientists are taking experimental data.

On top of all of that, the beams are incredibly complex. They exist in a space that scientists can't measure quickly or even directly. Operators are limited to looking at the beam position in one dimension. Considering that the beam actually exists in six dimensions (three position coordinates, plus the momentum along each), the operators miss out on a lot of data.
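To get a feel for how little of the beam a single diagnostic captures, consider a minimal sketch in Python. It simulates a toy bunch of particles in six phase-space coordinates and reduces it to the kind of one-dimensional profile an operator actually sees. Every name and distribution here is invented for illustration, not drawn from any real machine.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Toy beam: 10,000 particles, each with 6 phase-space coordinates
# (x, y, z positions and px, py, pz momenta). Purely illustrative.
beam = rng.normal(size=(10_000, 6))

# What a typical diagnostic reports: a 1D histogram of one coordinate.
profile_x, edges = np.histogram(beam[:, 0], bins=50)

# The full 6D density on the same 50-bin grid would have 50**6 cells --
# vastly more information than any single 1D projection retains.
print(f"1D profile bins: {profile_x.size}")
print(f"equivalent full 6D grid cells: {50**6:,}")
```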

To deal with these issues, scientists have developed complex controls and diagnostics. Special algorithms adapt how a particle accelerator operates to compensate for changes over time. A number of systems use these algorithms, including the Linac Coherent Light Source (LCLS), a DOE Office of Science user facility at SLAC National Accelerator Laboratory. But these methods have a big challenge. Because the algorithms rely on feedback from the accelerator, they can end up "stuck" at a locally good setting without ever finding the true optimal conditions.
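That failure mode is easy to reproduce with any purely local, feedback-driven optimizer. The sketch below uses a greedy hill-climbing loop as a stand-in for such feedback algorithms (it is not the actual control code at LCLS or anywhere else) on a made-up quality signal with two peaks: started near the smaller peak, the tuner settles there and never reaches the true optimum.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def beam_quality(knob):
    # Made-up quality signal: a small local peak near knob = -1.5
    # and the true optimum near knob = +2.0.
    return np.exp(-((knob + 1.5) ** 2)) + 2.0 * np.exp(-0.5 * (knob - 2.0) ** 2)

knob = -2.0                                  # start near the wrong peak
for _ in range(500):
    trial = knob + rng.normal(scale=0.05)    # small local adjustment
    if beam_quality(trial) > beam_quality(knob):
        knob = trial                         # keep only improving moves

print(f"settled at knob = {knob:.2f}, quality = {beam_quality(knob):.2f}")
# Typically settles near knob = -1.5 (quality about 1.0) and never
# reaches the true optimum near knob = +2.0 (quality about 2.0).
```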

At Los Alamos National Laboratory, physicist Alexander Scheinker develops new ways to use machine learning to improve particle accelerators' performance. Credit: Alexander Scheinker, Los Alamos National Laboratory

Machine learning—a type of artificial intelligence—has the potential to help. With machine learning, computers could act as "virtual observers" that support human technicians. Machine learning applications search for patterns in data and then make predictions. Scientists "teach" machine learning applications by giving them sets of training data.

From this data, the application learns to identify the relationship between the data and the results. While a human operator recognizes a problem based on past experience, a machine learning application recognizes a problem based on what it "saw" in its training data. Some accelerators at CERN, the particle physics laboratory in Switzerland, are already using this type of application.
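Stripped to its core, that kind of learning is fitting a function from past observations to outcomes. The sketch below does it with an ordinary least-squares fit on synthetic data; real accelerator models are far more elaborate, and every variable here (the settings, the beam size, the weights) is a made-up stand-in.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# Hypothetical training data: 200 past observations of three
# accelerator settings (e.g., magnet currents) and the resulting
# beam size. Entirely synthetic, for illustration only.
settings = rng.normal(size=(200, 3))
true_weights = np.array([0.8, -0.3, 0.5])
beam_size = settings @ true_weights + rng.normal(scale=0.05, size=200)

# "Training": least-squares fit of weights relating settings to outcome.
weights, *_ = np.linalg.lstsq(settings, beam_size, rcond=None)

# "Prediction": estimate the beam size for a setting never seen before.
new_setting = np.array([0.1, -0.2, 0.4])
print(f"predicted beam size: {new_setting @ weights:.3f}")
```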

But machine learning applications are only as good as their training data. That data reflects the accelerator's original characteristics. Unfortunately, as the accelerator's machinery shifts, the data no longer describes the machine accurately. To solve this problem, scientists would have to continuously retrain the model. That defeats the entire point: they just end up running into a different variation of their original issue.

The best solution may lie in combining the two approaches. Researchers and engineers at DOE's Los Alamos National Laboratory and Lawrence Berkeley National Laboratory are developing a new machine learning technique for compact particle accelerators. The technique uses real-time data from the accelerator's diagnostics to continuously tweak the model. It then uses this data to guide an advanced generative AI process known as diffusion, detailed on the arXiv preprint server.
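A heavily simplified caricature of that loop is sketched below: start from pure noise and repeatedly denoise a small "beam image," nudging each step toward agreement with a conditioning signal that stands in for live diagnostics. The real method uses trained neural networks as the denoiser; this toy pulls toward an analytic target purely to show the guided, iterative flow, and every number in it is invented.

```python
import numpy as np

rng = np.random.default_rng(seed=3)

# Hypothetical conditioning signal: a beam-centroid reading (column,
# row) standing in for live accelerator diagnostics.
target_col, target_row = 12.0, 20.0

image = rng.normal(size=(32, 32))            # step 0: pure noise
rows, cols = np.mgrid[0:32, 0:32]

# Guidance target: a smooth spot at the measured centroid. A real
# diffusion model would instead use a trained neural denoiser here.
ideal = np.exp(-((cols - target_col) ** 2 + (rows - target_row) ** 2) / 20.0)

for step in range(100):
    image += 0.1 * (ideal - image)           # guided "denoising" step
    # Inject noise that shrinks over the schedule, so each successive
    # version of the image comes out a little clearer.
    image += rng.normal(scale=0.05 * (1 - step / 100), size=image.shape)

peak = np.unravel_index(image.argmax(), image.shape)
print(f"reconstructed beam peak near (row, col) = {peak}")  # ~ (20, 12)
```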

The process creates virtual views of accelerators' beams as they change with time. One of its machine learning tools, an autoencoder, takes a set of complex inputs with many dimensions, compresses them into a much simpler internal representation, and then expands that representation back into a complex output that reflects the system.
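The published work uses a variational autoencoder; its linear cousin, principal component analysis via the SVD, shows the same compress-then-reconstruct idea in a few lines. The data here is synthetic: 64-dimensional "measurements" that secretly vary along only three underlying directions.

```python
import numpy as np

rng = np.random.default_rng(seed=4)

# 500 synthetic 64-dimensional "measurements" with only 3 hidden
# degrees of freedom, plus a little noise -- a stand-in for
# high-dimensional beam data with simpler underlying structure.
latent = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 64))
data = latent @ mixing + rng.normal(scale=0.01, size=(500, 64))

# Linear encode/decode via the SVD: project onto the top 3 principal
# directions and back. (The actual work uses a variational
# autoencoder; this is its simplest linear analogue.)
mean = data.mean(axis=0)
_, _, vt = np.linalg.svd(data - mean, full_matrices=False)

def encode(x):
    return (x - mean) @ vt[:3].T             # 64 numbers -> 3 numbers

def decode(z):
    return z @ vt[:3] + mean                 # 3 numbers -> 64 numbers

reconstruction = decode(encode(data))
print(f"worst reconstruction error: {np.abs(reconstruction - data).max():.3f}")
```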

In addition to compact accelerators, these methods can also be applied to large-scale machines such as FACET-II. On the FACET-II accelerator system at SLAC, the model produced 15 different two-dimensional projections of the six-dimensional beam (one for each possible pair of the six coordinates) at five different locations, 75 views in all. That scale is overwhelming for a human, but the machine learning system needs it. The data allows the system to learn the possible changes over time, as well as the changes' relationships with each other and with the underlying physics.
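The count of 15 is just combinatorics: six coordinates taken two at a time give 15 distinct axis pairs, which the snippet below enumerates.

```python
from itertools import combinations

axes = ["x", "y", "z", "px", "py", "pz"]
pairs = list(combinations(axes, 2))          # every distinct axis pair
print(len(pairs), "projections per location,",
      5 * len(pairs), "across five locations")
# -> 15 projections per location, 75 across five locations
```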

Scientists also demonstrated the adaptability of this approach, in research appearing in Scientific Reports, by showing that the same generative diffusion method can be used at the European X-ray Free-Electron Laser (European XFEL). There, they used the method to create megapixel-resolution virtual views of intense electron beams.

So far, this method seems promising. On accelerators where operators can take detailed measurements of the beam as it runs, researchers have been collecting data and comparing the application's predictions to those measurements. With this information, they can further train the application.

In the future, human operators of particle accelerators may get some help from their computer counterparts. This assistance will allow scientists to make more and better discoveries than ever before.

More information: Alexander Scheinker, cDVAE: Multimodal Generative Conditional Diffusion Guided by Variational Autoencoder Latent Embedding for Virtual 6D Phase Space Diagnostics, arXiv (2024). DOI: 10.48550/arXiv.2407.20218

Alexander Scheinker, Conditional guided generative diffusion for particle accelerator beam diagnostics, Scientific Reports (2024). DOI: 10.1038/s41598-024-70302-z

Journal information: Scientific Reports, arXiv

Provided by US Department of Energy