If the Godfather of AI tells you to “train to be a plumber,” you know you have to pay attention; at least, that’s what got me hooked. In a recent conversation, Geoffrey Hinton discussed the various possibilities of the coming era of superintelligent AI, and if you are wondering how that conversation went, you are not alone! During the interview, Hinton touched on the big questions surrounding AI: its potential dangers, societal risks, and what it all means for humanity. Let’s dive into the key moments and insights shared by the legendary AI researcher.
On the podcast “Diary of a CEO,” hosted by Steven Bartlett, a popular British entrepreneur, Geoffrey Hinton talks about his current mission: “to warn people about the dangers of AI.” Now that humanity is at the juncture of producing machines more intelligent than humans themselves, he feels leaders and technologists are not doing enough to establish proper guardrails to control these machines.
Besides this, here are a few points from the conversation that stood out for me:
Geoffrey Hinton pushed for AI models based on neural networks for more than two decades, attempting to copy the human brain’s ability to learn and recognize patterns. In the early years, his ideas were met with skepticism and dismissal. But now, his work defines the very essence of the human–machine relationship. Hinton, along with his brilliant students, played a central role in building what we know as deep learning, the backbone of modern AI. When asked how he came to foresee AI surpassing human intelligence, Hinton was candid about his delayed realization. “I was slow to recognize it,” he admitted. The breakthrough moment for AI came with the release of ChatGPT. Now, it exhibits capabilities beyond anything anyone expected, like understanding the nuances of human emotions and even emulating them.
In his words, AI has developed capabilities that are fundamentally different from biological intelligence. With its ability to learn faster, process more data, and operate at scales unimaginable for humans, AI could soon surpass human intelligence at every level. If that day comes, AI could make decisions for us without any human input, and without needing to justify those decisions.
We could start with the joke, but it’s no laughing matter. When asked about career prospects in the world of AI, Geoffrey Hinton casually mentioned becoming a plumber. The joke was more of a forewarning, rooted in the growing concern that AI could take over most intellectual jobs. It echoed a deeper truth: as AI becomes more advanced, mundane intellectual tasks that once required human intervention may no longer need it. Even in jobs that involve critical thinking, creativity, and nuanced decision-making, humans would be competing against LLMs. This essentially means humans would be left fighting for a few critical jobs.
Hinton explains that AI presents two primary kinds of risks. The first is the immediate danger that comes from people misusing AI. Cyberattacks, fraud, and manipulation could all become more efficient and dangerous with AI. The second, and far more insidious, risk is what happens when AI becomes much smarter than us. At that point, Hinton argues, we wouldn’t even be able to understand what’s happening or how to control it. For example, AI could drive the next wave of election manipulation: data gathered from chatbots could be used to target each voter individually with tailored ads. Social media channels like YouTube and Facebook could use LLMs to inflame people’s emotions for or against any agenda.
One of the most pressing concerns Hinton raised is how AI will affect jobs. While many people think of AI as something that will complement human workers, Hinton warns that AI will eliminate a significant portion of jobs, especially those involving repetitive intellectual tasks. He mentions how his niece, a health service worker, now uses AI to handle complaints much faster. This is a prime example of AI replacing tasks that previously required a lot of human effort.
In Hinton’s view, jobs that require intellectual labor will become obsolete. Work like healthcare and manual labor may be safe for now, but even that could change if physically intelligent robots find their way into our day-to-day lives.
When it comes to AI in warfare, Hinton’s concerns turn to “lethal autonomous weapons.” These machines, capable of making their own decisions about who to kill, could reshape the nature of warfare. With minimal human involvement, wars could become more frequent and less protested, as nations would avoid the human cost of sending soldiers to die on battlefields. While these autonomous weapons might not be smarter than humans, they are designed to operate without human input. That in itself makes them dangerous, because they could be used in aggressive ways that are hard to predict or control. Although such weapons would be an expensive investment, countries with the budgets to build them would gain an edge over any army made up only of humans.
Hinton goes even further, explaining that once AI reaches a level of superintelligence, where it is more capable than humans in every conceivable way, it will no longer be bound by human needs or desires. The question is not whether it can become smarter than us, but whether we can ensure it doesn’t decide to eliminate us. A superintelligent AI could see humanity as a hindrance to its goals and take actions that could spell the end of our civilization. This may sound like the plot of Terminator 3, but it’s a real concern for Hinton, who is actively involved in AI safety research. The challenge now is to find ways to build AI that’s safe, secure, and always under our control.
Hinton acknowledges that there are regulations being put in place, particularly in Europe, but he criticizes them for being inadequate to tackle the full range of threats AI poses. Some regulations, for example, exclude military uses of AI, which is where the real danger lies. Hinton also highlighted the risks of AI in the hands of governments and corporations, which could use it to manipulate elections, control populations, and create widespread inequality.
Despite the growing concerns, Hinton is skeptical that we will be able to slow down the pace of AI development, as the technology is too valuable and too entrenched in our global systems. He sees it as an unstoppable force: a technological revolution that has already gone too far.
Note: You can hear the entire conversation here.
This conversation left me with a sense of urgency. The risks he outlined are too real to ignore. While AI holds incredible potential for improving our lives, from enhancing healthcare and education to boosting efficiency, it also harbors significant dangers that we, as a society, are woefully unprepared to handle. From job displacement to autonomous weapons, AI’s impact could radically change the world in ways we’re not ready for.
Hinton’s call to action is clear: we need to put more resources into AI safety, and fast. Whether or not we will be able to control the future of AI is uncertain. But if we don’t at least try, we could be looking at a future where AI’s dominance leads to catastrophic consequences.
As I reflect on this conversation, I’m left with more questions than answers, but one thing is certain: we are standing at the precipice of something bigger than any of us can fully understand. AI may be the greatest gift humanity has ever created, or it may be our greatest mistake. The choice, and the responsibility, lie with us.