Two AI researchers, John Hopfield and Geoffrey Hinton, received the Nobel Prize in physics on Tuesday for their work building artificial neural networks that can memorize information and recognize patterns in ways that mimic the human brain.
Their research in the 1980s laid the foundation for the last decade’s explosive progress in artificial intelligence and today’s ubiquitous recommendation algorithms and generative AI systems. Both men have since said that progress needs to be constrained for the sake of humanity.
In its announcement of the prize, the Nobel committee emphasized how far the field has come since Hopfield published his seminal paper in 1982, in which he described a neural network with fewer than 500 possible parameters. Today, tech companies are churning out generative AI systems with billions or even trillions of parameters.
The Hopfield network was a collection of 30 interconnected digital nodes, each of which could take a value of 1 or 0; together, they could be programmed to record patterns representing the pixels of black-and-white images. It drew on equations from physics that describe how the spins of atoms in a material affect one another in order to calculate how the relationships between the nodes in the network represented the images. In effect, the network could be programmed to store memories of certain images. And when it was fed a new image that was fuzzy or incomplete, it could calculate its way back to the most similar image in its memory.
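The recall process described above can be sketched in a few lines of code. This is a minimal illustration, not Hopfield's original formulation: it uses the ±1 node values common in the later literature rather than the 1/0 values mentioned above, the standard Hebbian storage rule, and NumPy for the linear algebra.

```python
import numpy as np

def train(patterns):
    # Hebbian storage: each stored pattern strengthens the links
    # between nodes that share the same value in that pattern.
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def recall(W, state, sweeps=10):
    # Repeatedly flip each node toward the value its neighbors
    # favor; the state settles into the nearest stored memory.
    for _ in range(sweeps):
        for i in range(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store one 30-node pattern (a tiny "image"), then recall it
# from a corrupted copy.
rng = np.random.default_rng(0)
memory = rng.choice([-1, 1], size=(1, 30))
W = train(memory)

noisy = memory[0].copy()
noisy[:5] *= -1              # corrupt 5 of the 30 "pixels"
restored = recall(W, noisy)
print(np.array_equal(restored, memory[0]))  # True: the memory is recovered
```

Feeding the network a corrupted pattern and letting it settle is exactly the "calculate its way back" behavior the committee highlighted: the flipped pixels are pulled back to the stored values because the majority of each node's connections still vote for the original image.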
Hinton built on Hopfield’s work by designing neural networks that could not only remember and recreate patterns but also be taught to recognize similar patterns in entirely different data—for example, the patterns that make one picture of a dog resemble another but not a picture of a cat. In 1985 he published a paper introducing this network, named a Boltzmann machine after physicist Ludwig Boltzmann, who developed statistical equations for calculating the collective properties of a system composed of many different components.
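The statistical idea behind the Boltzmann machine can be shown with a toy example. This is a hedged sketch, not the trained network from Hinton's 1985 paper: it uses a hypothetical two-node machine with hand-picked weights, the standard Boltzmann energy function, and Gibbs sampling to show that low-energy configurations are visited most often.

```python
import numpy as np

def energy(W, b, s):
    # Boltzmann's energy function: configurations with low energy
    # are the ones the network statistically "prefers".
    return -0.5 * s @ W @ s - b @ s

def gibbs_step(W, b, s, rng):
    # Resample each node in turn; a node switches on with a
    # probability given by the Boltzmann distribution over its
    # two possible values.
    for i in range(len(s)):
        p_on = 1.0 / (1.0 + np.exp(-(W[i] @ s + b[i])))
        s[i] = 1 if rng.random() < p_on else 0
    return s

# A two-node toy machine whose single connection rewards the
# nodes for being on together (illustrative weights only).
rng = np.random.default_rng(0)
W = np.array([[0.0, 2.0], [2.0, 0.0]])
b = np.zeros(2)

s = np.array([0, 0])
counts = {}
for _ in range(5000):
    s = gibbs_step(W, b, s, rng)
    key = (int(s[0]), int(s[1]))
    counts[key] = counts.get(key, 0) + 1

# The jointly-on state has the lowest energy, so the sampler
# visits it far more often than the other three states.
print(energy(W, b, np.array([1, 1])))  # -2.0
print(max(counts, key=counts.get))     # (1, 1)
```

Training a real Boltzmann machine adjusts the weights so that the low-energy states correspond to patterns in the data; this sketch only shows the sampling side, i.e., how Boltzmann's statistics determine which collective states the network settles into.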
In 2023, Hopfield was one of the most notable signatories on a letter calling for AI companies to pause the development of generative AI systems more powerful than OpenAI’s GPT-4. And Hinton has recently been talking a lot about his concerns that AI is advancing too rapidly for humans to control.
He’s estimated that humans could build an artificial intelligence that exceeds our own intelligence in the next five to 20 years and that “It’ll figure out ways of manipulating people to do what it wants.” He was one of several big names in the field to sign an open letter this year calling for California Governor Gavin Newsom to enact a law that would have held large tech companies liable for building AI models that caused catastrophic losses of life or property damage. Newsom ultimately vetoed the bill under pressure from tech companies, including Hinton’s former employer, Google.
“In the next few years we need to figure out if there’s a way to deal with that threat,” Hinton said in an interview with the Nobel Prize committee following the award announcement. “So I think it’s very important right now for people to be working on the issue of how will we keep control. We need to put a lot of research effort into it. I think one thing governments can do is force the big companies to spend a lot more of their resources on safety research.”