Google's AI visionary says we'll 'expand intelligence a millionfold by 2045' thanks to nanobots, the tech will resurrect the dead, and we're all going to live forever
AI is undoubtedly the biggest technology topic of the last decade, with mind-bogglingly vast resources from companies including Google, OpenAI and Microsoft being poured into the field. Despite that, the results so far are somewhat mixed. Google's AI answers are often just straight-up dumb (and, incidentally, are behind a 50% increase in the company's greenhouse gas emissions over the last five years), AI imagery and videos are filled with obvious errors, and the chatbots… well, they're a bit better, but they're still chatbots.
One man, however, predicted both this level of interest and certain elements of how AI is developing. The Guardian has a new interview with Ray Kurzweil, a futurist and computer scientist best-known for his 2005 book The Singularity is Near, with the "Singularity" being the melding of human consciousness and AI. Kurzweil is an authority on AI, and his current job title is remarkable: he is "principal researcher and AI visionary" at Google.
The Singularity is Near predicted that AI would reach the level of human intelligence by 2029, while the great merging of our brains with AI will occur around 2045. Now he's back with a follow-up called The Singularity is Nearer, a title which doesn't need much explanation. Strap yourself in for a dose of what some might call techno-futurism, while others may prefer the term dystopian madness.
Kurzweil stands by his 2005 predictions, and reckons 2029 remains an accurate date for both "human-level intelligence and for artificial general intelligence (AGI), which is a little bit different. Human-level intelligence generally means AI that has reached the ability of the most skilled humans in a particular domain and by 2029 that will be achieved in most respects." He reckons there may be a few years beyond this where AI can't surpass "the top humans in a few key skills like writing Oscar-winning screenplays or generating deep new philosophical insights," but eventually "it will."
The real nightmare fuel comes with Kurzweil's notion of the Singularity, which he views as a positive thing and about which he makes some absolutely wild claims. "We’re going to be a combination of our natural intelligence and our cybernetic intelligence and it’s all going to be rolled into one. Making it possible will be brain-computer interfaces which ultimately will be nanobots—robots the size of molecules—that will go noninvasively into our brains through the capillaries. We are going to expand intelligence a millionfold by 2045 and it is going to deepen our awareness and consciousness."
Claiming that your field is going to "expand intelligence a millionfold" is the kind of total hubris that belongs at the start of a bad science fiction novel, and it's a claim so abstract as to be essentially meaningless. We don't even understand how our own brains work, so the notion that they can be both replicated and altered to the whims of people like Kurzweil strikes me as deeply unattractive. Let's be clear: we are talking about changing people's brains and physiology by injecting them with nanomachines. I somehow don't think that's all going to go as swimmingly as some advocates claim.
The AI visionary acknowledges, "People do say 'I don’t want that'" and then argues, "they thought they didn’t want phones either!" Kurzweil returns to the theme of phones when discussing accessibility, and the notion that AI advancements will disproportionately benefit the rich: "When [mobile] phones were new they were very expensive and also did a terrible job [...] Now they are very affordable and extremely useful. About three quarters of people in the world have one… this issue goes away over time."
Live forever
Hmm. Kurzweil has a chapter on "perils" in the new book, but seems quite relaxed about the possibility of doomsday scenarios. "We do have to be aware of the potential here and monitor what AI is doing. But just being against it is not sensible: the advantages are so profound. All the major companies are putting more effort into making sure their systems are safe and align with human values than they are into creating new advances, which is positive."
I straight-up do not believe that, and I do not trust these big tech companies or their research teams to prioritise safety over AI advancement. Nothing in tech has ever worked this way, and even though it's now somewhat dated, the Silicon Valley philosophy of "move fast and break things" seems to perfectly encapsulate the current AI craze.
Kurzweil's life and work are all bound up with this technology, of course, so you would expect him to be making the optimistic case. Even so, the following is where I check out: immortality.
"In the early 2030s we can expect to reach longevity escape velocity where every year of life we lose through ageing we get back from scientific progress," says Kurzweil. "And as we move past that we’ll actually get back more years. It isn’t a solid guarantee of living forever—there are still accidents—but your probability of dying won’t increase year to year. The capability to bring back departed humans digitally will bring up some interesting societal and legal questions."
AI is going to raise the dead! I really have heard it all now. As for Kurzweil himself: "My first plan is to stay alive, reaching longevity escape velocity. I take about 80 pills a day to help keep me healthy. Cryogenic freezing is the fallback. I’m also intending to create a replicant of myself [an afterlife AI avatar], which is an option I think we’ll all have in the late 2020s. I did something like that with my father, collecting everything that he had written in his life, and it was a little bit like talking to him."
The phrase "a little bit" is doing a lot of heavy lifting there, because what Kurzweil means is that the replicant of his father was not, in fact, like his father. The interview ends on the note that "it is not going to be us versus AI: AI is going inside ourselves."
Well. Kurzweil is a hugely respected figure, and holds significant sway within the AI field. I'm just blown away by how much of this he seems to think is desirable, never mind achievable, and by the breezy way in which he dismisses the manifold potential problems with this technology. In 10 years we'll be increasing our life expectancy with nanobots, and in 20 we'll all be some sort of human-hardware hybrid, with our brains dominated by software we don't understand and don't control on a personal level. Oh, and we'll be resurrecting the dead as digital avatars.
AI is a technology that is currently defined not by what it can do, but by what its advocates promise it will be able to do. And who knows, Kurzweil may well turn out to be right about everything. But personally speaking, I quite like being me, and I have no real desire to bring dead relatives back to life through ghoulish software approximations. Some might call this playing god, but I prefer to put it another way. This whole philosophy is as mad as a badger in a cake shop, and will end just as well.