Unlocking Sustainable AI: The Game-Changing Tsetlin Machine Approach

Mikhail Tsetlin, creator of learning automata theory, in 1961. (Courtesy of Lisa Tsetlin)

Japan has unveiled plans to construct the world's fastest supercomputer. Powering such a machine with current technology would require the energy output of 21 nuclear power plants.

Already, the energy consumption of AI-powered data centers is a significant global concern, and access to energy is likely to become, or arguably should become, a roadblock for AI development. Data centers currently consume about 3% of global electricity and require substantial amounts of water. The recently launched xAI data center in Memphis, Tennessee, named Colossus, exemplifies the issue: its AI training system uses 150MW of electricity and 1 million gallons of water daily.

Noel Hurley, CEO of Literal-Labs and former Arm executive, highlights the critical nature of this problem. He argues that energy and efficiency are the biggest obstacles to AI growth, as data centers are already consuming significant portions of countries' energy resources. The market and economics have been able to adapt to more expensive chips, but we're reaching a limit on energy consumption.

The root of this energy crisis lies in the fundamental structure of neural networks. At their core, these networks rely on large matrix multiplication operations, which are computationally expensive and energy-intensive, making the chips that run them costly and power-hungry. As Hurley puts it, “Having multiplication at the heart of neural networks is the cause of the problems that we have today.”
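
To see why, consider a toy example. A single fully connected layer is just a matrix-vector multiplication, so every pass through it costs one multiply-accumulate per weight. The Python sketch below is purely illustrative, with made-up layer sizes:

```python
import numpy as np

# A toy fully connected layer: 1,000 inputs, 1,000 outputs.
# Every forward pass costs one multiply-accumulate per weight.
inputs = np.random.rand(1000)           # activations from the previous layer
weights = np.random.rand(1000, 1000)    # 1,000,000 learned parameters

outputs = weights @ inputs              # a single matrix-vector multiplication
print(f"{weights.size:,} multiplications for one layer, one input")  # 1,000,000
```

Modern models stack hundreds of such layers and billions of parameters, which is why multiplications dominate both the silicon cost and the power bill.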

Curve-Jumping Product

To address this issue, a radical rethinking of AI fundamentals is necessary. Guy Kawasaki explains the concept of a “curve-jumping product” in his book “The Art of the Start 2.0”. In Kawasaki’s words, “Entrepreneurship is at its best when it alters the future, and it alters the future when it jumps curves.” Whilst there are modern examples, Leonardo da Vinci is arguably the greatest creator of innovations that skipped marginal improvements to create transformational shifts in technology and science.

Back to modern times. Literal-Labs is developing a potential curve-jumping solution for AI with its Tsetlin machine toolkit. This method, which originated in the Soviet Union in the 1960s, has been revitalized and combined with propositional logic by Norwegian and UK researchers. As Hurley explains, “We spent five years looking at start-ups doing neural networks, and came to the conclusion that these companies were just fiddling around the edges. Hence, we have taken a completely different approach.”

Tsetlin Machine Approach

The Literal-Labs approach replaces multiplication with large banks of if-then statements, look-up tables, and Tsetlin automata. The technique uses voting algorithms to determine which statements to include, and the company says it consumes thousands of times less energy than traditional neural networks while delivering up to 1,000x faster inferencing. Literal-Labs is currently finalizing external benchmarking to provide verified figures on energy usage and efficiency compared to neural networks.
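
To make the idea concrete, here is a much-simplified sketch of clause-based voting in Python. It is a hypothetical toy, not Literal-Labs' implementation, and the feature names are invented: each "clause" is an if-then rule over boolean features, and the clauses simply vote on the outcome, with no multiplication anywhere:

```python
# Much-simplified sketch of Tsetlin-machine-style inference (hypothetical,
# not Literal-Labs' code): each clause is an AND over boolean literals,
# and the clauses vote for or against the class.
x = {"temp_high": True, "vibration": False, "pressure_high": True}

# Each clause: (features that must be True, features that must be False, vote)
clauses = [
    (["temp_high", "pressure_high"], [], +1),   # votes FOR "anomaly"
    (["vibration"], [], +1),                    # votes FOR "anomaly"
    ([], ["temp_high", "vibration"], -1),       # votes AGAINST "anomaly"
]

votes = 0
for must_true, must_false, vote in clauses:
    if all(x[f] for f in must_true) and all(not x[f] for f in must_false):
        votes += vote

print("anomaly" if votes > 0 else "normal")     # the majority vote decides
```

Because every step is a boolean test or an integer increment, the arithmetic maps onto cheap logic operations rather than power-hungry multiplier circuits.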

Real-World Applications & Explainability

Literal-Labs has initially focused on IoT and edge applications, such as anomaly detection (including leaks, machine health, and predictive maintenance) where processor capability is limited. The efficiency of the Tsetlin approach allows AI to be used on existing, less powerful hardware in the field.

Aside from efficiency, another key advantage of the Tsetlin machine approach is its improved explainability. Unlike the "black box" nature of neural networks, which use millions or billions of parameters and non-linear processes, the linear boolean logic of Tsetlin machines allows for easier tracing of the decision tree, addressing a major criticism of current AI systems. As a result, the explainability of the Tsetlin machine approach has attracted interest from finance and insurance companies seeking more transparent AI solutions.
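
Continuing the toy illustration above (again, an assumed example rather than the company's code), the explanation for a prediction can be read straight off the rule that fired:

```python
# Hypothetical example: a Tsetlin-style clause is a readable boolean rule,
# so the "why" behind a prediction can be printed directly.
clause = {"must_true": ["temp_high", "pressure_high"], "must_false": ["vibration"]}
x = {"temp_high": True, "pressure_high": True, "vibration": False}

if all(x[f] for f in clause["must_true"]) and all(not x[f] for f in clause["must_false"]):
    rule = " AND ".join(clause["must_true"] + ["NOT " + f for f in clause["must_false"]])
    print("Fired rule: IF " + rule)
    # -> Fired rule: IF temp_high AND pressure_high AND NOT vibration
```

No such direct trace exists for a neural network, where the same decision would be diffused across millions of floating-point weights.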

Accuracy & Complexity

While the Tsetlin machine approach offers significant energy efficiency improvements, it has limitations. It can be slightly less accurate than neural networks on certain benchmarks. However, Hurley argues that the key question is whether the accuracy is sufficient for the specific application, not whether it is 100% accurate. Furthermore, the approach is less effective on complex, high-dimensional raw data such as images and audio, and on other unprocessed inputs.

The development of neural networks has had a significant head start over the Tsetlin approach, with research dating back to Frank Rosenblatt's first trainable neural network in 1957. In contrast, the Tsetlin approach saw little development between the 1960s and 2018, which explains the current dominance of neural networks in the field. The energy demands of neural networks have only recently become critical, so until now little effort has gone into finding more efficient AI methodologies.

With electricity generation a major driver of climate change, and droughts already spreading across the globe, the power demands of AI data centers are not helping human survival on this planet. Although not a complete solution to the AI energy crisis, the Tsetlin machine approach offers a promising alternative for many applications. As the AI industry grapples with scaling responsibly, innovations like this could be key to balancing powerful AI capabilities with environmental stewardship. By addressing both energy efficiency and explainability, the Tsetlin approach may become an essential tool in creating a more sustainable AI ecosystem.