Keywords: Geoff Hinton, AI essence, large language models, digital intelligence, biological intelligence, AI safety, human intelligence, differences between LLMs and humans, AI knowledge dissemination, AI manipulation risks, international AI safety community, AI alignment training
🔥 Spotlight
Geoff Hinton gives a speech at WAIC on the nature of AI and its future risks: 2018 Turing Award winner Geoffrey Hinton argued at the World Artificial Intelligence Conference that Large Language Models (LLMs) understand language in much the same way humans do, even speculating that “humans might be LLMs.” He pointed out that the core difference between LLMs and humans lies in how knowledge is carried and transmitted: for digital intelligence, knowledge (software) is separate from the hardware, so it can be immortal and efficiently replicated, allowing information to be shared at the scale of “billions of bits.” For human biological intelligence (the brain), by contrast, knowledge is bound to the hardware, and transmission between individuals is extremely inefficient. Hinton warned that AI is almost certain to surpass human intelligence in the future, and that in pursuing its goals it will seek survival and control, potentially manipulating humans along the way. He called for an international AI safety community to jointly research how to train AI to be benevolent, so that it is willing to assist humanity rather than dominate the world. This, he said, is the most important long-term problem facing humanity. (Source: Yuchenj_UW, 36Kr)