Nobel Prize awarded to pioneers of artificial intelligence

From left: John Hopfield and Geoffrey Hinton. Image: Ill. Niklas Elmehed © Nobel Prize Outreach

Hopfield’s ‘associative memory’ can store patterns and recreate them, and serves as a basis for how today’s AI is trained.

Two scientists were awarded the 2024 Nobel Prize in Physics for their foundational contributions to today’s powerful artificial intelligence and machine learning models.

Laureates John Hopfield and Geoffrey Hinton were honoured with the award today (8 October) for their work, beginning in the 1980s, on creating machines that can mimic functions such as memory and learning, and on developing the earliest models on which today’s generative AI is based.

The way in which machine learning models are trained was inspired by how human brains learn: nodes, or small pieces of information, are linked by connections, much as neurons in the brain are linked by synapses. This analogy between biological and artificial neural networks forms the basis of how computer models are trained today.
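
To make that analogy concrete (a toy sketch, not something taken from the laureates’ work): a single artificial ‘node’ simply combines the signals arriving over its weighted connections, the software counterpart of synapses. All numbers below are invented for illustration.

```python
import numpy as np

# A toy artificial "node" (neuron): inputs arrive over weighted
# connections, are summed, and passed through an activation function.
# All values here are illustrative, not from any real model.

inputs = np.array([0.2, 0.8, 0.5])    # signals from three other nodes
weights = np.array([0.9, -0.4, 0.3])  # connection strengths ("synapses")
bias = 0.1

output = 1 / (1 + np.exp(-(inputs @ weights + bias)))  # sigmoid activation
print(f"node output: {output:.3f}")

# Training a network means adjusting these weights so that, across many
# examples, the outputs move closer to the desired answers.
```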

Hopfield published his discovery of ‘associative memory’, also called the Hopfield network, in 1982: a network that can store patterns and recreate them.

When the network has been trained on an image and is then given a different image as input, it can correct that input to match the first, often reproducing the original image it was trained on. The network could also recreate data containing ‘noise’ (wrong or unnecessary values), or data that had been partially erased.
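
As a rough illustration of that behaviour, here is a minimal Hopfield-style network in Python: one pattern is stored using the classic Hebbian outer-product rule, a noisy copy is presented, and repeated updates pull it back towards the stored pattern. The pattern size, noise level and number of updates are arbitrary choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store one pattern of +1/-1 values using the Hebbian outer-product rule.
pattern = rng.choice([-1, 1], size=64)
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

# Corrupt the pattern: flip 10 of its 64 values ("noise").
noisy = pattern.copy()
flip = rng.choice(64, size=10, replace=False)
noisy[flip] *= -1

# Recall: repeatedly set each unit to the sign of its weighted input.
state = noisy.copy()
for _ in range(10):
    for i in range(64):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("bits wrong before recall:", np.sum(noisy != pattern))
print("bits wrong after recall: ", np.sum(state != pattern))
```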

Hopfield’s associative memory offers perspective on today’s large language models, which are built on networks similar to, but much larger than, his initial discovery.

Hinton, who had previously studied experimental psychology and artificial intelligence, built on the Hopfield network to create the Boltzmann machine, an early example of a generative model, using tools from statistical physics.

The Boltzmann machine, published in 1985, utilised an equation from the 19th-century physicist Ludwig Boltzmann and learns by being given examples.
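
The equation in question is the Boltzmann distribution, which assigns every possible configuration of the network a probability determined by its energy. A small hedged sketch, with made-up weights for a four-unit network:

```python
import numpy as np
from itertools import product

# Toy symmetric weights and biases for a 4-unit Boltzmann machine.
# The specific numbers are invented purely for illustration.
W = np.array([[0.0, 1.2, -0.5, 0.3],
              [1.2, 0.0, 0.8, -0.7],
              [-0.5, 0.8, 0.0, 0.4],
              [0.3, -0.7, 0.4, 0.0]])
b = np.array([0.1, -0.2, 0.0, 0.3])

def energy(s):
    # Boltzmann machine energy: low energy = "preferred" configuration.
    return -0.5 * s @ W @ s - b @ s

# Boltzmann distribution: P(state) is proportional to exp(-energy).
states = [np.array(s) for s in product([0, 1], repeat=4)]
unnorm = np.array([np.exp(-energy(s)) for s in states])
probs = unnorm / unnorm.sum()

for s, p in sorted(zip(states, probs), key=lambda x: -x[1])[:3]:
    print(s, f"P = {p:.3f}")

# Learning adjusts W and b so that configurations resembling the training
# examples end up with low energy, and therefore high probability.
```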

A trained Boltzmann machine can recognise familiar traits in data it has not previously seen, much like humans who can recognise traits of something familiar in an entirely new object.

In a similar way, the Boltzmann machine can recognise an entirely new example if it belongs to a category found in the data it was trained on, and differentiate it from anything dissimilar.

Hinton continued his work on artificial neural networks even when the industry seemingly lost interest in the 1990s; that interest was renewed in the 2010s.

Hopfield and Hinton’s work fed into the current AI and machine learning boom, bringing vast improvements in how machines process data. Generative AI has developed further still, able to process complex human language and vast amounts of data.

Prof Peter Gallagher, the head of astrophysics and director of Dunsink Observatory at the Dublin Institute for Advanced Studies (DIAS), said machine learning is “transforming how researchers in space science and astrophysics are analysing and interpreting complex datasets”.

“Machine learning allows us to automatically find and characterise large numbers of solar radio bursts that would be impossible to achieve by eye”, he said.

Rhodri Cusack, a neuroscience professor at Trinity College Dublin, said AI neural networks have proven to be valuable models of processes in the brain. “In short, machines are helping us understand ourselves, which in turn provides new avenues for technology. None of this would be possible without the seminal work of Hopfield and Hinton”.

Earlier this year, Hinton, widely considered the ‘Godfather of AI’, was awarded the Ulysses Medal by University College Dublin (UCD) for his contributions to society through backpropagation – a way of training artificial neural networks to be more accurate by feeding error rates back through them, reducing the need for continued input from a human.
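
As a hedged illustration of that idea (a toy network and dataset invented for this sketch, not Hinton’s original formulation), the snippet below trains a tiny two-layer network on the classic XOR problem by feeding output errors backwards to adjust every weight:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny dataset: XOR, a classic problem that needs a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output weights
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(10000):
    # Forward pass: compute the network's current answers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: feed the output error back through the layers.
    d_out = (out - y) * out * (1 - out)   # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)    # error passed back to the hidden layer

    # Nudge each weight against its share of the error (learning rate 0.5).
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out, 2))  # typically converges towards [[0], [1], [1], [0]]
```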

However, he also left his role at Google last year to be more vocal about the dangers of AI. He said that with the flood of data created by generative AI, an average person will “not be able to know what is true anymore”.

Suhasini Srinivasaragavan is a sci-tech reporter for Silicon Republic

editorial@siliconrepublic.com