The Age of AI: And Our Human Future should have been a better book than it is. Given the brilliance and reputations of its co-authors, it should have been a compelling look at the capabilities of artificial intelligence, today and in the future. It should have been a cohesive account of what those capabilities reveal about the limitations of both humans and machines in the coming century. Above all, two leading experts on U.S.–China relations, former Secretary of State Henry Kissinger and former Google CEO Eric Schmidt, should have weighed in on how the U.S. can retain its lead over Beijing in AI technology, which Schmidt has conceded elsewhere we are in danger of losing.
Instead, bringing these two minds together results in what I can only describe as dysergy rather than synergy. MIT dean Daniel Huttenlocher must have found himself in a tough spot trying to negotiate between two powerful personalities pushing distinct, and ultimately conflicting, visions. The result is a book that gives us a greatly inflated view of the possibilities of intelligent machines and a very cramped view of humanity, while remaining virtually silent on the real threat to our human future, which is not AI but China.
Still, The Age of AI does do an important service in dispelling some of the more overheated fears about what artificial intelligence can and will do to expand the capacity of computers to mimic and even exceed the capabilities of humans—a future that some have greeted with alarm (Max Tegmark, Nick Bostrom), and some with enthusiasm (Ray Kurzweil, Robin Hanson).
Just to clarify: Artificial intelligence is the branch of computer science that deals with the simulation of intelligent, i.e., human-like, behavior in computers—such as planning activities, moving around in a physical environment, recognizing objects and sounds, speaking, translating, etc. Machine learning (ML) is the subset of AI that uses algorithms to curate data, learn from it, and make a determination or prediction about certain things or events. AI’s capabilities are built on machine learning; AI uses those determinations to solve problems faster and more efficiently than human beings can, such as finding the winning move in a chess game or identifying which chemical molecules can create a new antibiotic (which MIT researchers did in 2020, naming it halicin after the computer HAL in 2001: A Space Odyssey).
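To make the distinction concrete, here is a minimal sketch of what “learning from data and making a determination” looks like in practice. The example is my own, written in Python with the open-source scikit-learn library (nothing here comes from the book): the program is shown a handful of labeled examples, finds the pattern, and then classifies a case it has never seen.

```python
# A minimal illustration of machine learning: fit a model to labeled
# examples, then ask it to classify a new case it has not seen before.
# (Hypothetical toy data, chosen only to make the pattern obvious.)
from sklearn.tree import DecisionTreeClassifier

# Each row is an example described by two numeric features;
# each label says which of two classes that example belongs to.
X_train = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.3]]
y_train = ["class_a", "class_a", "class_b", "class_b"]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)           # "learn" the pattern in the data

print(model.predict([[0.15, 0.85]]))  # a determination about a new case
```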
The authors point to AI’s capacity for startling insights, “such as identifying drug candidates and [devising] new strategies for winning games” as signposts toward a new future both for computers and human beings. But they also point out that despite AI’s seemingly daunting capabilities, it’s still left “to humans to divine their significance and, if prudent, integrate those insights into existing bodies of knowledge.”
In addition, “AI cannot reflect upon what it discovers.” This means “the significance of its actions is up to humans to decide,” which leaves a wide space for AI’s human operators to exercise control and to “regulate and monitor the technology.”
AI and ML programs also make mistakes. Visual-recognition applications are especially tricky; Google Photos once labeled pictures of African Americans as gorillas, while another system identified a school bus as an ostrich. Operators can teach the machine not to make the same mistake; that’s why it’s called machine learning. (Google’s blunt fix was simply to remove “gorilla” as a label.) But AI is not self-correcting. “The algorithms, training data, and objectives for machine learning are determined by the people developing and training the AI, thus they reflect those people’s values, motivations, goals, and judgments,” the authors write. “Even as machine-learning technologies become more sophisticated, these limitations will persist.”
All this hardly sounds like Terminator-style machines replacing humans and taking over the world. The mood darkens, however, when the authors get to Artificial General Intelligence (AGI), a hypothesized stage at which machines will be able to equal or even exceed the intellectual capacity of humans, or perhaps unite themselves into a single super-AGI beyond the capability of any human being to control or even understand. They predict that “AI will progress at least as fast as computing power has, yielding a million-fold increase in fifteen to twenty years. Such progress will allow the creation of neural networks that, in scale, are equal to the human brain.”
Equal in scale, maybe. But in capability? That’s a much bigger leap, and one that looks less and less plausible the more we learn about how AI and ML really work.
All ML, the workhorse of AI, is driven by a computer’s ability to recognize patterns in sets of noisy data—whether those data consist of sounds, images, electrical pulses, airline passenger manifests, or financial transactions. The mathematical representation of these data is called a tensor. As long as data can be converted into a tensor, it’s ready for ML and its more sophisticated cousin, Deep Learning. Deep Learning builds algorithms inspired by the structure and function of the brain’s neural networks for the purpose of constructing a predictive model. It learns from training data and then corrects and validates its initial model against separate testing datasets.
What operators end up with is a prediction curve based on the recognition of past patterns (e.g., given the pervasiveness of X with Y in the past, we can expect XY to appear again in the future). The more data, the better the model: Patterns that may be undetectable in tens of thousands of examples can suddenly be obvious in the millionth or ten millionth example. AI pioneer Oren Etzioni, who is cited with approval by our authors, says that machine learning creates high-capacity statistical models. Another AI scientist, Yoshua Bengio, has explained that deep-learning networks “tend to learn statistical regularities in the dataset rather than higher-level abstract concepts.”
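The “more data, better model” point is easy to see in code. The sketch below is my own illustration, not anything from the book (it assumes scikit-learn and uses synthetic data): the same simple statistical model is fit to progressively larger samples, and its accuracy on held-out examples typically climbs as the sample grows.

```python
# Fit the same learning algorithm on ever-larger slices of a synthetic
# dataset and score it on held-out examples; accuracy generally improves
# with more data. (Exact numbers will vary from run to run.)
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=50_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (100, 1_000, 10_000):
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(n, round(model.score(X_test, y_test), 3))
```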
This is not thinking, or anything remotely like it. And yet the authors proceed as if it were, concluding: “Whether we consider it a tool, a partner, or a rival, [AI] will alter our experience as reasoning beings and permanently change our relationship with reality.” They even assert that AI “hastens dynamics that erode human reason as we have come to understand it.”
This is because the authors have embraced a distorted view of the primacy of reason in human affairs. They quote French and German thinkers such as Descartes, Kant, and Montesquieu, but not figures from the Scottish Enlightenment such as Adam Smith and David Hume—who understood that our passions and moral sentiments are far more important to our lives as human beings than reason alone. It was Hume who asserted that “reason is, and ought only to be the slave of the passions”—an inconvenient quotation that appears nowhere in this book. Nor do they mention the traditional Judeo-Christian view that “the divine light of reason” (to quote Augustine) may support but hardly defines what makes us truly human, namely, our soul.
In the end, what definitively distinguishes us from machines isn’t our intelligence but our subjective states of consciousness, the origins of which neuroscientists are just beginning to understand. This is why the neuroscientist Anil Seth’s new book, Being You, dismisses fears that AGI is just around the corner. Those fears, he points out, rest on the false assumption that consciousness and intelligence are intimately linked and “that consciousness will just come along for the ride.”
Instead, the life of the human mind or consciousness proceeds on the basis of informed guesses punctuated by intuitive leaps—“Eureka” moments are not just crucial for major scientific discoveries but for everyday life (“I saw this young woman through the shop window, and I just had to meet her”). Logical reasoning and the patterns of the past are of no help here. After all, I’ve seen plenty of young women through shop windows before, but something made me choose that one (and then make her my wife a year and a half later).
Compared with these moments, the workings of AI/ML will always seem mechanical and plodding. As the Dutch computer scientist Edsger Dijkstra put it, “the question of whether machines can think is about as relevant as the question of whether submarines can swim.” Yet because machine intelligence can mimic human intelligence, people fantasize that it somehow threatens human intelligence itself. It’s true that operators are now writing algorithms that come without an explanation facility for the user, and the reasons why a deep neural network does what it does can be hard to unravel. It’s sometimes impossible to ask the computer how it arrived at a particular conclusion, especially if the conclusion is wrong (e.g., mistaking a picture of a bus for an ostrich). These are developments that should give anyone pause, not just computer scientists, given the extent to which we are relying on these machines. But these problems arise because AI/ML is getting more complicated and sophisticated, not because it has suddenly moved to a qualitatively higher level of thinking. Nor does AI shift easily from one endeavor to another: an AI trained to play chess can’t play Go without further programming, and its performance quickly sinks, unlike humans, who make such shifts all the time.
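As for the missing “explanation facility”: the sketch below (my own, using scikit-learn’s small MLPClassifier on synthetic data as a stand-in for the far larger deep networks at issue) shows the problem in miniature. The trained network produces a confident answer, but inspecting its internals turns up nothing more legible than matrices of learned weights.

```python
# A small neural network will happily classify an input, but its
# "reasoning" is nothing more than arrays of learned weights.
# (Synthetic data; a toy stand-in for much larger deep networks.)
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(X, y)

print(net.predict(X[:1]))      # a confident answer...
print(net.coefs_[0].shape)     # ...backed by a 20-by-64 matrix of weights,
print(net.coefs_[0][:2, :5])   # which explains nothing to a human reader
```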
The real breakthrough will come when operators figure out how to install their own general intuitive sense into their machines. So far, no one has figured out how to do that. There is no indication anyone ever will.
_____________
Unfortunately, The Age of AI’s misleading view of the role of reason in history and human affairs leads to an even more misleading—one might even say dangerous—view of the future.
Here the authors turn to a bad historical analogy when they compare the threat of AI and AI-driven weapons in the 21st century to the threat of nuclear weapons in the 20th. They even suggest the threat could be worse, because “AI’s capacity for autonomy and separate logic generates a layer of incalculability” that can be applied to all existing weapon systems, including nuclear weapons, and because “delegation of critical decisions to machines may grow inevitable,” including decisions affecting life and death that may be “opaque to human reason.” From this perspective, the rise of Skynet (in the Terminator movies) and HAL doesn’t seem so far-fetched after all. Even worse, unlike today’s existing international agreements concerning the use and proliferation of nuclear weapons, “efforts to conceptualize a cyber balance of power” and AI nonproliferation are “in their infancy, if that.”
Relying on the analogy of nuclear weapons and nuclear deterrence as a way to understand the threat that AI may pose in the wrong hands is not surprising, given Kissinger’s lifelong interest and expertise in the subject. But AI is not a discrete technology, as nuclear weapons were. Unlike nuclear weapons, AI and ML have already become all-pervasive and are in widespread use. Thanks to user-friendly AI frameworks such as Orange and SageMaker, building your own AI application has become relatively easy, while the cloud gives ordinary users direct access to top-rank hardware. At the same time, the proliferation of open-source data, from weather and census data to marketing surveys and university research, provides endless grist for the thousands of AI mills already out there.
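How low has the barrier become? The sketch below is my own illustration, not the authors’ (it uses scikit-learn and its bundled “digits” dataset as a stand-in for the open data sources just mentioned): a serviceable handwritten-digit recognizer in roughly ten lines.

```python
# With an open-source library and a freely available dataset, a working
# digit recognizer takes only a few lines. (Accuracy figure is typical,
# not guaranteed.)
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")  # typically ~0.97
```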
In fact, the more data the model can digest, the better it gets. That’s why in the age of AI, access to data will become the decisive instrument for exercising command over AI/ML’s future, whether in financial markets or on the battlefield. And that’s why establishing control over access to the avalanche of data that will come through 5G wireless technology is as important as control over AI itself.
The one country that understands this is China.
The U.S. has been the center of AI research going back to the 1950s. The danger is that we are losing that leadership to a country that has no compunctions about the dark side of ML and AI.
President Xi Jinping has set aside $150 billion to make China the first AI-driven nation, which includes building a massive police-surveillance apparatus powered by Big Data and artificial intelligence. Trains in China now require national IDs to buy tickets, which allows the government to block human-rights activists or anti-corruption journalists from traveling. In Xinjiang, home of China’s oppressed Uyghur Muslim minority, the government uses AI-sifted Big Data and facial recognition to scrutinize anyone entering a mosque or even a shopping mall, thanks to the thousands of checkpoints requiring a national ID check-in.
Even more alarming, when Google’s DeepMind used AI to defeat a world-class human champion at Go, China’s national game, in 2017, the People’s Liberation Army realized that AI had the potential to give it an insurmountable edge on the battlefield, including enhanced command and control, hypersonics, swarm technology for thousands of UAVs, object- and facial-recognition targeting software, and AI-enabled cyber deterrence. By law, virtually all the work that Chinese companies do in AI research and development is also available to the Chinese military and intelligence services to shape their future force posture, while Chinese telecom-equipment giant Huawei will make sure that the 5G networks it builds around the world provide access to endless supplies of data to make the Chinese AI juggernaut stronger and better.
The authors of The Age of AI are strangely muted about China. This is odd, since Kissinger has been an expert on China going back to his historic visit in 1971, and, as chair of the U.S. National Security Commission on Artificial Intelligence, Eric Schmidt has publicly warned that China is poised to replace the U.S. as the world’s “AI superpower” and that the U.S. “is not prepared to defend or compete in the AI era.”
The authors admit that Chinese digital technology such as AI and digital platforms such as TikTok are being used as extensions of Beijing’s policy objectives, including its military ambitions. But they also insist that the private sector’s relationship with the Chinese Communist Party is “complex and varied in practice,” a variability that eludes other Western observers of China. Certainly President Xi and his cohorts have no qualms about the ethics of proliferating AI research and development, which is why the authors’ desire to look for ways to integrate restraints on AI “into a responsible pattern of international relations” seems not only quaint but out of date. We aren’t going to get this djinn back in its bottle, and their proposal of handing the future of AI over to “the leadership of a small group of respected figures from the highest levels of government, business, and academia” seems like the worst of all possible solutions.
For whatever reasons, the authors prefer to write about China as if we were all facing the same dilemmas about AI, and as if we could all ultimately agree on how to address them. The truth is, we can’t. Instead, the U.S. must embrace the AI arms race with China and seize the dual-use advantages AI offers, including helping to build future quantum computers that can break every existing public-key encryption system. (Quantum is the one technology that bears any comparison to the destructive potential of nuclear weapons.)
Two undeniable truths stand out at the end of this journey. The first is that nuclear nonproliferation worked (more or less) because the U.S. dominated the technology from the start and was able to force other countries, including Russia and China, to abide by the rules or face consequences. The same will be true of AI nonproliferation, if such a project is even possible.
The second is that the real check on the abuse of AI/ML isn’t international agreements but the moral judgments of the builders and operators. Those judgments need to be shaped by Western values, including the belief in protecting freedom as a matter of national security. Otherwise, “the values, motivations, goals, and judgments” that drive AI in the 21st century will be those of the Chinese Communist Party, while we dither about who should sit next to the president of Harvard on an AI oversight commission.
All that said, there is no doubt that The Age of AI will be a foundational document in any debate and discussion about where this technology will take us. But like the current state of AI itself, it’s hardly the final word on the subject—and hardly the key to the future it wants to be.