Nobel Foundation
Let’s change the topic temporarily. Regarding the Nobel Foundation, which is responsible for distributing the Nobel Prizes: why has its money never run out after all these years? As an investor, you should really understand its investment strategy and how its funds seem inexhaustible. For a dedicated introduction, please refer to my previous post: “The investment strategy of the Nobel Foundation“.
2024 Nobel Prize in Physics
Profile
The winners of the 2024 Nobel Prize in Physics have been announced: the prize is shared by John J. Hopfield, a scholar at Princeton University in the United States, and Geoffrey Hinton, a scholar at the University of Toronto in Canada. The total prize is 11 million Swedish kronor. The two were recognized for foundational discoveries and inventions that enable machine learning with artificial neural networks. Both winners are pioneers in the field of artificial intelligence.
John J. Hopfield created associative neural networks in 1982, now commonly known as “Hopfield networks,” that can store and reconstruct patterns in images and other data. Geoffrey Hinton invented a method that can independently find properties in data and perform tasks such as identifying specific elements in pictures.
Contributions
According to the Nobel Prize website, the two used tools from physics to train artificial neural networks, developing methods that underpin today’s powerful machine learning. With their research, John J. Hopfield and Geoffrey Hinton laid the foundation for the machine learning revolution that began around 2010 and led to the field’s explosive development.
Ellen Moons, chair of the Nobel Committee for Physics, said that neural networks have been used to advance research on many physics topics such as particle physics, materials science, and astrophysics. Such tools have also been integrated into daily life, for example in facial recognition and language translation.
Artificial neurons vs. natural neurons
Currently, the computer industry’s artificial intelligence relies on large numbers of artificial neurons, mainly in the form of artificial neural networks composed of “nodes.” Each node is assigned a value, and the nodes are connected to one another; as the network is trained, the connections between simultaneously active nodes strengthen, while the connections between inactive nodes weaken.
What is the difference between human natural neurons and the artificial neurons used in large numbers in artificial intelligence in the computer industry today? The natural neural network of the human brain is composed of living cells “neurons” coupled with advanced internal mechanisms. Neurons can send signals to each other through “synapses.” When humans learn things, the connections between some neurons strengthen and the connections between other neurons weaken.
In short, the deep learning and machine learning now widely used in the computer industry’s artificial intelligence use computer technology to imitate the way humans’ natural neurons operate and learn in order to solve problems.
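The “connections between active nodes strengthen, connections between inactive nodes weaken” idea described above can be sketched with a Hebbian-style update. This is an illustrative toy; the node values, learning rate, and variable names are my assumptions, not details from the article:

```python
import numpy as np

# Three nodes; the first two are active together (+1), the third is inactive (-1)
nodes = np.array([1.0, 1.0, -1.0])
weights = np.zeros((3, 3))            # connection strengths, initially zero

learning_rate = 0.1
# Hebbian rule: connections between co-active nodes strengthen,
# connections between nodes with opposite activity weaken
weights += learning_rate * np.outer(nodes, nodes)
np.fill_diagonal(weights, 0)          # nodes are not connected to themselves

print(weights[0, 1])  # 0.1  -> connection between the two co-active nodes grew
print(weights[0, 2])  # -0.1 -> connection to the inactive node weakened
```

Repeating this update over many training examples is what gradually shapes the network’s connection values.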
Future worries
However, machine learning is advancing rapidly, raising concerns about the future. Humanity must take collective responsibility to utilize this new technology in a safe and ethical manner for the greatest benefit of mankind.
Geoffrey Hinton shares these concerns. He is very worried about the huge uncertainty in the future of AI and emphasizes the importance of responsible regulation. He worked at Google for ten years, resigning as a vice president and engineering fellow in May 2023 over concerns about the risks of AI, so that he could discuss the dangers of machine learning technology more freely. When he received the call from the Nobel Committee, he said he was shocked and had never expected to win the prize. He predicts that AI will have a major impact on human civilization, bringing gains in productivity and medical care comparable to the Industrial Revolution.
But Geoffrey Hinton also said that AI will surpass humans in intelligence: “We have not yet experienced what it will be like when there is something smarter than us.” This would be a good thing in many ways, but there are also many possible consequences to worry about, “especially the threat of loss of control.”
Research directions of the two laureates
John J. Hopfield
In an artificial neural network, a large number of neurons perform calculations through synapse-like connections. John J. Hopfield, the 2024 Nobel laureate in physics, built a network that can be described as a physical spin system. Each node stores a single value, and the network is trained to store and reconstruct information by finding suitable connection values between the nodes. To recall a pattern, the network works through the nodes systematically, updating their values; as it runs, it gradually settles into the stored pattern most similar to its starting state.
In 1982, John J. Hopfield invented associative memory, which works much like the way a person recalls a relatively uncommon word by working through similar words. The network he built can recreate patterns from stored information: when it is given an incomplete or slightly distorted pattern, it finds the most similar stored pattern.
At the time, John J. Hopfield was using his background in physics to explore theoretical problems in molecular biology. At Caltech in Pasadena, in Southern California, he found inspiration in systems built from many small components working together. Thanks to his understanding of the physics of magnetic materials, he was able to build a model network of nodes and connections using the physics that describes how atomic spins interact with each other.
The network he built had nodes connected with varying strengths, each of which could store a separate value; in his first work these values could only be 0 or 1, like the pixels in a black-and-white picture. What makes the method special is that the network can store multiple pictures at the same time and distinguish between them. John J. Hopfield and others later developed the details of how the network works, including allowing nodes to store any value, not just 0 or 1.
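The store-and-recall behavior described above can be sketched in a few lines of Python. This is an illustrative toy, not Hopfield’s original formulation: the pattern sizes, names, and sequential update schedule are my assumptions, and it uses ±1 node values rather than the 0/1 of the first paper:

```python
import numpy as np

def train(patterns):
    """Hebbian outer-product rule: set connection values so that the
    given patterns become stable states of the network."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)   # no self-connections
    return W / len(patterns)

def recall(W, state, sweeps=10):
    """Work through the nodes systematically, updating each value, until
    the network settles on the closest stored pattern."""
    state = state.copy()
    for _ in range(sweeps):
        for i in range(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

patterns = np.array([
    [ 1,  1,  1, -1, -1, -1],
    [-1, -1, -1,  1,  1,  1],
])
W = train(patterns)

noisy = np.array([1, 1, -1, -1, -1, -1])  # first pattern with one bit flipped
print(recall(W, noisy))  # -> [ 1  1  1 -1 -1 -1], the first stored pattern
```

Given the distorted input, the network recovers the complete stored pattern, which is exactly the associative-memory behavior the article describes.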
Geoffrey Hinton
When Geoffrey Hinton was working at Carnegie Mellon University in Pittsburgh, he and his colleagues used the ideas of statistical physics to extend the Hopfield network and build new things.
In the 1990s, many researchers lost interest in artificial neural networks, but Geoffrey Hinton did not give up, and his work started a new round of explosive growth in the field. In 2006, he and his colleagues developed a method for pre-training networks by stacking a series of Boltzmann machines, one on top of the other. This pre-training gives the connections in the network a better starting point, which optimizes its training to recognize elements in images.
Building on the Hopfield network, Geoffrey Hinton invented another method that can independently find properties in data and perform tasks such as identifying specific elements in images: the Boltzmann machine. The machine is trained by updating the values of its connections, and it learns to recognize familiar features in new information from the examples it has been given. It is similar to the way you can immediately tell that two siblings are related the first time you meet them: having learned the characteristic features, the Boltzmann machine can classify an entirely new example.
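The training idea mentioned above (“updating the values in the connections” from given examples) can be illustrated with a tiny restricted Boltzmann machine trained by one step of contrastive divergence, the algorithm Hinton later developed for this purpose. The layer sizes, data, learning rate, and variable names here are assumptions for the sketch, not details from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 6, 3
W = rng.normal(0, 0.1, (n_visible, n_hidden))  # connection values
b_v = np.zeros(n_visible)                       # visible biases
b_h = np.zeros(n_hidden)                        # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, lr=0.1):
    """One contrastive-divergence (CD-1) update: nudge the connections so
    the data looks more probable than a one-step reconstruction of it."""
    global W, b_v, b_h
    p_h0 = sigmoid(v0 @ W + b_h)                       # hidden activations for data
    h0 = (rng.random(n_hidden) < p_h0).astype(float)   # sample hidden states
    p_v1 = sigmoid(h0 @ W.T + b_v)                     # reconstruction
    p_h1 = sigmoid(p_v1 @ W + b_h)
    W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
    b_v += lr * (v0 - p_v1)
    b_h += lr * (p_h0 - p_h1)

# Train on one "familiar" binary pattern
pattern = np.array([1., 1., 1., 0., 0., 0.])
for _ in range(200):
    cd1_step(pattern)

# After training, reconstructing the pattern should resemble it:
# values near 1 for the first three units and near 0 for the last three
recon = sigmoid(sigmoid(pattern @ W + b_h) @ W.T + b_v)
print(np.round(recon, 2))
```

This is only a sketch of the mechanism; a full Boltzmann machine (and the deep belief networks built from stacked ones) involves more machinery than fits in a toy example.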
Geoffrey Hinton is one of the inventors of the backpropagation algorithm and the Contrastive Divergence algorithm. He is also an active promoter of deep learning and is known as the “Godfather of Deep Learning.” Geoffrey Hinton was awarded the 2018 Turing Award for his contributions to deep learning, along with Yoshua Bengio and Yann LeCun.
Turing Award
The Turing Award was established by the Association for Computing Machinery (ACM) in 1966 to reward individuals who have made important contributions to the computer industry. It is named after A.M. Turing in commemoration of the pioneer of modern computer science. A winner must have made a lasting and significant technical contribution to the computer field, and most winners are computer scientists. The Turing Award is the most prestigious award in the computer industry, known as the “Nobel Prize in Computer Science.”
DNN-research
Geoffrey Hinton founded the deep neural network research company DNN-research with two of his students at the University of Toronto, Alex Krizhevsky and Ilya Sutskever (who later served as OpenAI’s chief scientist). Their breakthrough results in competition attracted the attention of the technology giants. In 2013, Google acquired DNN-research, and Geoffrey Hinton joined Google, taking responsibility for part of Google Brain’s research work.
Accelerate the progress of AI
The foundation of modern deep learning
John J. Hopfield and Geoffrey Hinton developed pioneering methods and concepts that helped shape the field of artificial neural networks. Additionally, Geoffrey Hinton played a leading role in extending these methods to deep and densely connected ANNs.
The current development of machine learning rests on large amounts of data and enormous improvements in computing power. In his 1982 article on associative memory, John J. Hopfield used a network of 30 nodes; with all nodes connected to each other, that makes 435 connections. Compare this with today’s large language models, whose networks can contain more than a trillion parameters.
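The 435-connection figure quoted above follows from simple counting: a fully connected network of n nodes has n(n − 1)/2 pairwise connections.

```python
# Number of pairwise connections in a fully connected network of n nodes
n = 30
connections = n * (n - 1) // 2
print(connections)  # 435
```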
GPUs are used for artificial intelligence
In 2006, Geoffrey Hinton and his student Simon Osindero published an important paper, “A Fast Learning Algorithm for Deep Belief Nets“, which recommended using GPUs to speed up the training of neural networks.
The publication of this paper made many people realize the possibility of using GPUs to break through computing power bottlenecks. As a result, neural network research finally came back to life after lying dormant for many years. To help people shed their stereotypes about the subject, Geoffrey Hinton also gave this line of research a new name; this is where the term “deep learning” comes from.
In 2012, Geoffrey Hinton and his students built a neural network that could analyze thousands of photos and teach itself to identify common objects such as flowers, dogs, and cars. The neural network developed by Geoffrey Hinton and his students won the ImageNet Large-Scale Visual Recognition Challenge that year with a huge advantage, thus pushing deep learning into the mainstream.
Related articles
- “Geoffrey Hinton, 2024 Nobel Physics winner, inadvertently helped Nvidia transform to AI overlord“
- “2024 Nobel Chemistry Prize awarded to 3 AI experts, accurately predict the 3D structure of proteins“
- “The investment strategy of the Nobel Foundation“
- “Top vendors and uses of GPU“
- “How does nVidia make money, Nvidia is changing the gaming rules“
- “The reasons for Nvidia’s monopoly and the challenges it faces“
- “Why nVidia failed to acquire ARM?“
- “Revisiting Nvidia: The Absolute Leader in Artificial Intelligence, Data Center, and Graphics“
- “Data center, a rapidly growing semiconductor field“
- “Top five lucrative artificial intelligence listed companies“
- “The artificial intelligence bubble in the capital market is forming“
- “Artificial intelligence benefits industries“
- “OpenAI, the Generative Artificial Intelligence rising star and ChatGPT“
- “Major artificial intelligence companies in US stocks market“
Disclaimer
- The content of this site is the author’s personal opinion and is for reference only. I am not responsible for the correctness, views, or timeliness of the articles and information; readers must make their own judgments.
- I shall not be liable for any losses, damages, or other legal liabilities arising from readers’ direct or indirect reliance on or reference to the information on this site, or from any investment behavior based on it.