Is it possible for AI (artificial intelligence) to create intelligence greater than itself and surpass human intelligence?

As our species embraced the power of rational thinking, we began to develop machines that could simulate rational thought. Beginning as an adjunct to human intelligence, the field of artificial intelligence (AI) expanded its purview to create computers that could outperform humans in chess, Jeopardy! and a host of other cerebral challenges. Some computer scientists now believe that quantum computers may eventually yield offspring of superior intelligence, leading to a sequence of generations of computers that far surpass their intellectually inferior human creators. Those who subscribe to this belief, called “strong AI,” argue that computers may ultimately develop emotions and identities, evolving in a sequence that reverses the evolution of human cognition and culminating in a machine like HAL, depicted in the 1968 film 2001: A Space Odyssey.

Throughout more than 99.9% of its evolution, Homo sapiens was unaware of the power of reasoning and abstract thought. Only in this most recent sliver of time has our species gradually become self-aware. While conscious of the awesome power of the neocortex, we are also acutely aware of the imminent threat of self-annihilation posed by the older limbic brain beneath it.

In 1997, the world witnessed the defeat of world chess champion Garry Kasparov by IBM’s Deep Blue. On July 25, 2007, Jonathan Schaeffer announced that he had succeeded in solving the game of checkers. Using clever computer algorithms that reduced the number of positions to be searched from the 10²⁰ possible board configurations to 10¹⁴, he was able to prove that checkers (like tic-tac-toe) will always end in a draw if neither player makes a mistake, i.e., a less than optimal move. The sheer number-crunching capability of computing machines made it inevitable that humans would look to them to perform cognitive tasks at, or well beyond, their own level.
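To make the idea of “solving” a game concrete, here is a minimal minimax sketch in Python. It is an illustration only, not Schaeffer’s actual method (his proof combined forward search with enormous endgame databases); but on a game as small as tic-tac-toe, exhaustive search completes in seconds and certifies that perfect play by both sides yields a draw, i.e., a game value of 0.

```python
# Minimal exhaustive game-tree search (minimax) for tic-tac-toe.
# The board is a 9-character string, '.' marking empty squares.
from functools import lru_cache

# The eight winning lines: rows, columns, diagonals.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that player has completed a line, else None."""
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value with `player` to move: +1 if X wins, -1 if O wins, 0 for a draw,
    assuming optimal play by both sides."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if '.' not in board:
        return 0  # board full, no winner: draw
    nxt = 'O' if player == 'X' else 'X'
    results = [value(board[:i] + player + board[i + 1:], nxt)
               for i, s in enumerate(board) if s == '.']
    # X maximizes the value; O minimizes it.
    return max(results) if player == 'X' else min(results)

print(value('.' * 9, 'X'))  # prints 0: tic-tac-toe is a draw under perfect play
```

The memoization (`lru_cache`) plays the same role, in miniature, as the search-space reductions in the checkers proof: identical positions reached by different move orders are evaluated only once.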

As Toffler had predicted, changes are occurring at an exponential rate in the first decades of the 21st century. Ray Kurzweil observed: 

Artificial intelligence is all around us … If all the AI systems decided to go on strike tomorrow, our civilization would be crippled: We couldn’t get money from our bank, and indeed, our money would disappear; communication, transportation, and manufacturing would all grind to a halt. Fortunately, our intelligent machines are not yet intelligent enough to organize such a conspiracy.

Though artificial intelligence is ubiquitous in today’s world, some AI researchers suggest that the dream of a sentient computer like the HAL 9000 presented in the movie 2001: A Space Odyssey will never become a reality. Roger Penrose argues that consciousness is distinct from algorithmic processing and that human thought has a non-algorithmic dimension not accessible to computers of any processing power:

There is as much mystery and beauty as one might wish in the precise Platonic mathematical world, and most of this mystery resides with concepts that lie outside the comparatively limited part of it where algorithms and computations reside.

American biologist Edward O. Wilson argues that computers lack the lifetime of interactions that a human accumulates as unknown knowns in the unconscious mind, and therefore could never successfully mimic human thinking:

To be human, the artificial mind must imitate that of an individual person, with its memory banks filled with a lifetime’s experience–visual, auditory, chemoreceptive, tactile, and kinesthetic, all freighted with nuances of emotion. And social: There must be intellectual and emotional exposure to countless human contacts. And with these memories, there must be meaning, the expansive connections made to each and every word and bit of sensory information given [in] the programs. Without all these tasks completed, the artificial mind is fated to fail Turing’s test. Any human jury could tear away the pretense of the machine in minutes. Either that, or certifiably commit it to a psychiatric institution.

The jury is still deliberating the case of strong AI vs. weak AI. Is there a distinct demarcation between consciousness and intellectual processing, or will artificial intelligence ultimately develop a self-awareness as its power increases? If the latter, machines could develop self-interest, and with it all of the politics that characterize human instincts.
