Over three days in February 2011, the American public was introduced to the amazing capabilities of artificial intelligence (AI) during the telecast of the Jeopardy! IBM Challenge. IBM’s Watson AI platform faced Ken Jennings, the holder of the record for the game show’s longest winning streak, and Brad Rutter, Jeopardy!’s all-time money winner. This much-publicized human-versus-machine competition turned out to be no contest at all, as Watson easily beat its human challengers with a score of $77,147 to Jennings’s $24,000 and Rutter’s $21,600.
With this remarkable victory, the public learned that what was once considered science fiction was suddenly scientific reality: computers could interact with humans in intelligent conversation. Watson wasn’t a linear program following a series of pre-planned fixed steps, like a dishwasher. It was an intelligent machine that could absorb spontaneous human communication and respond sensibly. And most impressive of all, it could do so with far greater speed and skill than the long-running game show’s two greatest champions.
An Unsettling Concern
This new reality, while incredibly awe-inspiring, was also unsettling for many. Now that we had first-hand experience with the quality and speed of AI, we could understand why some were concerned that, sometime in the near future, superintelligent robots might develop to the point that they could overtake or even subjugate humans. However, this thinking may be shortsighted because it reflects the prevailing hierarchical mindset, which assumes power is a function of being in charge and, therefore, expects humans and machines to behave as separate and competing entities embroiled in a battle to see who comes out on top. If this hierarchical mindset continues to shape how we think and act, this concern could eventually morph into what many would consider a clear and present danger.
With the emergence of the Internet of Things (IoT) and expected developments in robotics, 3D printing, machine learning, and deep learning, the capabilities and the speed of AI are likely to grow exponentially over the next decade. The proliferation of sensors in everyday life will accelerate the expansion of the network effect as the world becomes increasingly hyper-connected and everything and everyone is woven into a single global network. With the ability to process large volumes of data at the speed of Google searches, AI systems will be able to recognize weak signals and identify patterns across the data that would normally escape even the brightest experts among us. AI holds the potential to become a powerful extension of human intelligence.
At the same time, however, a new and unwelcome consequence of our increasingly hyper-connected world is also likely to grow exponentially: the relentless rise in hacking and massive data breaches. Until we transition IT systems control structures from linear hierarchical architectures to more robust network architectures, the expansion of hyper-connected networks will give hackers many more opportunities to breach systems. If we are too slow in making this necessary shift, the day will likely come when one madman will be able to shut down an electrical grid, use the IoT to spread a deadly virus, or possibly unleash a weapon of mass destruction. And if this were to happen, perhaps the concerns about an independent-minded AI might materialize should it harness its formidable intelligence and exercise its power of judgment to stop the madness or even eliminate the threat.
So just as Jennings and Rutter were easily defeated by IBM’s Watson, if human intelligence and machine intelligence remain separate entities, then these fears may prove true. However, it doesn’t have to be that way, as another highly publicized contestant who lost to AI discovered.
A Creative Partnership
Garry Kasparov is considered by many to be the greatest chess player of all time. During his active career between 1986 and 2005, he was the world’s #1 ranked chess player for 225 out of 228 months. Like Jennings and Rutter, he knows how it feels to lose to a machine: Kasparov has the distinction of being the first world champion chess player to lose a match to a computer, when he was defeated by IBM’s Deep Blue in 1997.
Kasparov, however, had a somewhat counter-intuitive and creative reaction to his drubbing. He decided to apply the age-old adage, “If you can’t beat them, join them,” and engaged in an interesting experiment in which he paired human chess masters with machines to compete against other machines in a series of matches. In every instance, the human-machine combination defeated the solitary machine. What we learn from Kasparov that we didn’t see on Jeopardy! is that humans can become more powerful, and perhaps even more human, when they collaborate with machines rather than compete against them.
In his book The Innovators, Walter Isaacson asserts that the most important development of Digital Age innovation is the emergence of a new form of human-machine symbiosis that is dramatically transforming the essential orientation of all systems from programming to learning. This insight is significant because it reinforces the pressing need for IT systems builders to shift their control architecture from hierarchically programmed structures to networked learning structures, especially if we are serious about curbing the epidemic of data breaches.
Humans Become More Machine-like
The notion of a human-machine symbiosis is not a new creation of the Digital Age. This phenomenon traces its roots as far back as the Hunter Gatherer Age when humans first built tools to ease the burden of physical work. This symbiosis, which incrementally evolved through the Agrarian Age over several thousand years, catapulted in both form and scale with the emergence of the Industrial Revolution. The proliferation of mechanical inventions, the advent of mass production, and the rise of bureaucracies and corporations reformulated the fundamental dynamics of the human-machine symbiosis. Rather than machines being merely tools, as they were throughout the Agrarian Age, the machine became the dominant metaphor for the worldview that defined the context of everyday social and economic life in industrialized societies.
This mechanistic worldview was reflected in the fundamental organizational design principles of Frederick Taylor, whose Scientific Management model became the template for the command-and-control structures that have defined the practice of management for well over a century. Accordingly, the basic orientation of this management approach is prescribed programming, where workers are expected to carry out fixed plans, and where controls and incentives are put in place to make sure employees don’t deviate from the program.
This ultimate form of top-down hierarchical architecture often resulted in Borg-like entities where large numbers of people interacted with each other in rigidly prescribed ways. Most of us have been so socialized into this mechanistic worldview that we fail to recognize that the human-machine symbiosis of the last two centuries has favored the machine over the human. That’s why the fundamental dynamics of all systems are grounded in programming. In many ways, at least in our social architecture, this pervasive orientation toward programming resulted in humans unwittingly becoming more machine-like.
Machines Become More Humanlike
However, with the recent emergence of the phenomenon of Digital Transformation, we are witnessing a radical reformulation of the human-machine symbiosis as machines are becoming more humanlike. Rather than being a threat to humanity, this reformulation of the human-machine symbiosis could very well be the renaissance of humanity.
Isaacson observes that today’s computer technology “augments human intelligence by being tools both for personal creativity and for collaborating.” This means that as machines become more humanlike, we have the opportunity to partner with machines in ways that will greatly accelerate our capacity to learn. Isaacson notes that “no matter how fast computers progress, artificial intelligence may never outstrip the intelligence of the human-machine partnership.” Consequently, the symbiotic relationship that combines the strengths of both humans and machines could usher in a new era of enhanced human learning and intelligence. Isaacson points to the example of the Google search engine, which rapidly collates the individual judgments of billions of people to provide sensible search results.
It isn’t the intelligent malevolent machine that presents the greatest danger to human civilization; it is the singular malevolent individual who misuses the power of singular control to wreak havoc in our hyper-connected world. A human-machine symbiosis built on a platform of networked collective intelligence could enhance the human experience far beyond our wildest expectations by eliminating the capability for single individuals to engage in large-scale coercive actions and by mitigating any concerns about AI overtaking humanity. But this will only be possible if we complete the tasks of Digital Transformation and invent the new tools needed to embrace a new mindset, create a new economy, and build a new world that leverages the “power of many” and eliminates the “power of one.”