At UnicMinds, because we're passionate computer science engineers who've seen the world move from mainframes to AI, we'll never forget the great people who contributed to the world's computational movement. And the man who led the charge was Alan Turing. The Turing Machine is the original model of computation that started the world on this journey. But, if you think critically, you will see that nature itself is the obvious computational machine, working in front of us all along yet remaining in the background.
Foundational Impact & The Nobel Prize
There is great pleasure in thinking about concepts at a fundamental level, and that is what scientists and academicians do. Plenty of people outside this fraternity also think about problems at a fundamental level, but scientists and academicians do it as their passion.
The Nobel Prize is awarded to those who have a lasting impact on the future of humanity and life in general through ground-breaking work in physics, chemistry, physiology or medicine, literature, and peace. Learn more about the beginning of the Nobel Prize here.
Usually, the Nobel Prize in Physics is awarded to particle physicists (people who study very small things) or astrophysicists (people who study very large things). This year, however, it went to two researchers working on artificial intelligence (AI). Now, that might make you wonder: what has AI got to do with physics?
Why does AI fall under Physics?
The development of neural networks and the basic models of artificial intelligence were originally inspired by the physical principles and laws of natural systems. The early models, such as Hopfield neural networks and Boltzmann machines, that eventually led to modern generative AI are all rooted in the laws of natural systems.
We all know that the basis of computation lies in 1s and 0s, binary numbers. However, not many know that the basis of binary numbers lies in physics: in magnets! Each magnet has its own north and south poles, and even if you break a magnet in two, each piece will have its own north and south pole. An electron in an atom is the smallest magnet, with a spin orientation just like a bar magnet. Each electron's spin is either aligned or anti-aligned with a given direction, and it cannot be anything in between. This binary behavior is what drives all magnetic storage, such as hard drives. And just as the Earth's magnetic field makes compass needles point north, neighboring electron spins may align or anti-align with each other, and each arrangement has an overall energy of the system.
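To make this concrete, here is a tiny sketch in Python of how the overall energy of a chain of binary spins depends on whether neighbors align. The chain length and the coupling constant J are illustrative assumptions, not physical values.

```python
# A minimal sketch: the energy of a tiny chain of binary spins (an Ising-style toy).
# The chain length and the coupling constant J are illustrative assumptions.

def energy(spins, J=1.0):
    """Energy is lower when neighboring spins align (for J > 0)."""
    return -J * sum(spins[i] * spins[i + 1] for i in range(len(spins) - 1))

aligned      = [+1, +1, +1, +1]   # all spins point the same way
anti_aligned = [+1, -1, +1, -1]   # every neighbor disagrees

print(energy(aligned))        # -3.0: the lowest-energy, "preferred" arrangement
print(energy(anti_aligned))   #  3.0: the highest-energy arrangement for this chain
```

This preference for low-energy arrangements is exactly the ingredient that the neural-network models below borrow from physics.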
But while this explains how physics and binary systems in computing are related, how is all of this linked to artificial intelligence?
Coincidentally, neurons in the brain also behave like discrete binary units, either excited or inhibited, or at least that is how we understand neurons at this point. Because this resembles networks of spins, it inspired physicists to apply spin-network models to the neurons of the brain and to the fundamental workings of the brain. That is how Dr. John Hopfield simulated a model of our brain's memory and computation in the computational world, giving birth to the Hopfield Neural Network (HNN). An HNN can be trained, and it can store and recall memories, which is fundamental to computational neuroscience.
As in a magnet, a Hopfield network starts with the neurons (or spins) set to the input pattern, and then iteratively lets each neuron flip whenever flipping reduces the total energy. In this way, the Hopfield network functions as a recurrent neural network (RNN), because the next state depends on the previous one. These updates are guaranteed to stop eventually at a local energy minimum, so the input pattern settles into the stored memory closest to it.
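To make the idea concrete, here is a minimal Hopfield-style sketch in Python. The pattern, its size, and the corrupted probe are illustrative assumptions, not the exact network from the prize-winning papers.

```python
import numpy as np

# A minimal Hopfield-style sketch: store one pattern with the Hebbian rule,
# then recover it from a corrupted copy by flipping neurons so that the
# network's energy never increases.

pattern = np.array([1, 1, -1, -1, 1, -1, 1, -1])   # the stored "memory" (illustrative)
n = len(pattern)

# Hebbian learning: neurons that are "on together" get positive connections.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

state = pattern.copy()
state[:3] *= -1                                    # corrupt the first three neurons

# Asynchronous updates: each neuron aligns with the field from its neighbors,
# which can only lower (or keep) the energy E = -0.5 * state @ W @ state.
for _ in range(5):
    for i in range(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(np.array_equal(state, pattern))              # True: the memory is recalled
```

Even though three of the eight neurons start out wrong, each update only ever lowers the energy, so the network settles back into the stored memory.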
However, this original neural network is not very useful for true machine learning and intelligence, because HNNs are deterministic. You still wouldn't call it true intelligence. If ChatGPT could only recite Wikipedia or news articles verbatim, it wouldn't be very helpful. In the same way, Hopfield networks, which follow strict, deterministic rules, aren't effective for most real-world ML tasks. While Hopfield networks can store memories, they have some key problems:
- The number of weights and biases must be significantly larger than the size of the memory (and the weights are real numbers, not just 1s and 0s).
- Memory retrieval may fail when some memories are too similar.
- As a generative model, it is ineffective because it can only recall its memories exactly as stored.
The solution? Heat the system up. This is the foundation of a Boltzmann machine.
HNNs vs. Boltzmann Machines
Unlike HNNs, Boltzmann machines are not constantly looking to minimize the energy. They are no longer exact memory models with definite 1-or-0 spins; instead, each neuron flips with a certain probability. This is similar to how nature conserves energy overall, yet thermal noise distributes that energy randomly across subsystems. The more evenly the energy is spread out, the better the entire system can explore different possibilities; this is the Second Law of Thermodynamics, or the principle of maximum entropy.
Maximizing probabilities in Boltzmann machines is equivalent to minimizing free energy. Thus, Boltzmann machines are literally borrowing principles from thermodynamics (the same principles that shape leaves, snowflakes, and shells) to create complex networks capable of generalization. This ability to generate new data makes Boltzmann machines some of the first generative AI models, with fully rigorous, data-based stochastic training and inference.
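Here is a minimal sketch in Python of the stochastic update at the heart of this idea. The weights, temperature, and network size are illustrative assumptions: instead of always flipping to lower the energy, each neuron turns on with a probability given by the Boltzmann distribution, so a higher temperature lets the network explore more states.

```python
import numpy as np

# A minimal sketch of a stochastic (Boltzmann-style) neuron update.
# The weights, temperature, and network size are illustrative assumptions.

rng = np.random.default_rng(0)

n = 6
W = rng.normal(0, 0.5, size=(n, n))
W = (W + W.T) / 2                    # symmetric couplings, like spin-spin interactions
np.fill_diagonal(W, 0.0)
state = rng.choice([0, 1], size=n)   # binary neurons: on (1) or off (0)

def gibbs_sweep(state, W, T):
    """Update every neuron with the Boltzmann probability sigmoid(field / T)."""
    for i in range(len(state)):
        field = W[i] @ state                      # input from all the other neurons
        p_on = 1.0 / (1.0 + np.exp(-field / T))   # hotter T -> closer to a coin flip
        state[i] = 1 if rng.random() < p_on else 0
    return state

# At a high temperature the network wanders freely; at a low temperature it
# settles into low-energy configurations, just like a cooling magnet.
for T in (5.0, 0.1):
    print(T, gibbs_sweep(state.copy(), W, T))
```

Run hot, the network samples a wide variety of states; cooled down, it behaves much more like a Hopfield network and settles into low-energy configurations. That trade-off between exploring and settling is what makes the machine useful as a generative model.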
And, that’s how AI research is now intertwined with the fundamental laws of nature.
Hope this is useful, thank you.
At UnicMinds, we don’t believe that it is ever too early to teach a concept to kids, and we teach the intuition of complex topics in simple language to inspire children into foundational thinking to solve problems. After all, it is all about solving problems!
You may like to read: Coding Terms for Kids, AI Basics for Kids, & Operating Systems Explained to Kids