Since time immemorial, technology has altered the way humans interact with the environment, paving the way toward progress (and leaving countless casualties in its wake). The Agricultural and Industrial Revolutions made our interactions with the environment more efficient, while the Information Revolution changed our relationship with knowledge. The next one will eclipse the past three. The revolution of Artificial Intelligence (AI) will, for the first time, see machines mimic and best humans in cognitive capabilities, not merely physical ones.
The how and when of this fourth revolution is an enormous topic that would require many volumes (or databases) to cover. Intuitive explanations can be found at Wait But Why and in Yuval Noah Harari’s Homo Deus and Richard Susskind’s The Future of the Professions. Here’s a simplified story of what's to come.
Planet of the Rabbits
Imagine that the earth is populated by rabbits rather than humans as its main inhabitants. They go about their lives chasing carrots and digging holes. Now imagine that you are a single human being in this sea of cuteness (as ENIAC was in 1946). However, you begin your life with intelligence less than that of the average bunny and subsequently live at their whim. One day, the rabbits develop a method of injecting brain cells into your skull, slowly ‘enhancing’ you over time (much as computers evolved from the IBM PC to the Macintosh 128K to the K computer). This continues until one day, you realise that you have become smarter than every rabbit alive (in AI terms, this is known as the ‘singularity’). You are now a fully-fledged human of the 21st century, able to imagine, create and destroy things that your fluffy overlords can’t begin to comprehend. Add to that the ability to create copies of yourself and leverage the vast resources of an interconnected global system. How would you use this power? More importantly, what would you do to the rabbits?
The developed human in this story is AI, and the rabbits are us. If you concluded that you would take the world into your own hands - a rational course of action - there’s no reason to believe that AI wouldn’t do the same. So far, the only scientific explanation for human dominance on earth is our superior intelligence. We have yet to prove that our species possesses a divine right, soul or heavenly mandate. If intelligence is truly the only thing that sets us apart from ‘lesser’ creatures, then we had better watch out. Because once a computer outsmarts us, we lose not only our edge, but also our place in history.
Building upon this foundation, this article compares the magnitude of the threat posed by AI with that of two of humanity’s other pressing concerns - climate change and disease.
It’s Getting Hot in Here
Climate change is the new talk of the century. While skeptics remain, their arguments are losing traction in the face of an overwhelming scientific consensus. Because the global ecosystem is a 'tragedy of the commons' resource whose protection has hitherto clashed with economic development, safeguarding it has become an enormous challenge for humankind as a whole.
But how does the threat of global warming, rising sea levels and ravaging bushfires compare to that of a supercomputer? There are two factors that make climate change a milder threat to humanity as a whole.
The first is time. While potentially devastating to the human race, climate change is a slow process, operating on a timescale of decades and centuries - a blip in the timeline of history, but enough to afford humans many opportunities to pause and collectively reflect on their actions, especially as the effects of a changing world become more tangible. In other words, while we may not recover fully, the damage is salvageable and the risk of complete extinction is low. In contrast, the threat from AI would be instant. Once the singularity is reached, all of the computer’s actions are at its own whim and humans will completely lose control. Unlike climate change, the process of reaching the singularity cannot be reversed, even partially. There is no turning back.
The second is the relentless progress of history. Heraclitus once said that the only constant is change. That is true because it is within our nature as a species to strive toward strength, knowledge and advancement - an iron rule of history. As such, projects that contradict progress are a fool’s errand. For most of human history, climate change has been a natural byproduct of progress. Our hunter-gatherer ancestors set forests on fire to hunt more efficiently, agricultural-era farmers used fire to convert forests to fields, and industrial capitalists burnt millions of tonnes of coal to power cities. If an environmental activist had tried to convince any of these three groups to consider the ozone layer, she would have been laughed out of the room.
In the two decades since the Kyoto Protocol, environmental awareness has gained traction for several key reasons. First, the movement stems largely from developed countries, where the average standard of living is high enough that environmental protection no longer runs contrary to the basic necessities of life. And while the intentions of activists may be pure, it is difficult to ignore the hypocrisy of developed countries demanding that developing countries eliminate the very source of progress that cemented their own positions on the world stage. Second, and perhaps most importantly, as the cost of renewable energy continues to fall, the incentive to protect the climate is slowly shifting into alignment with economic incentives. In other words, mother nature has finally been granted a seat on the bandwagon of progress.
It’s because of this second reason that I believe humans will be able to look back in a hundred years and acknowledge climate change as a challenge that was successfully overcome.
World War Z
Countless movies depict a post-apocalyptic world brought about by a deadly strain of virus. From Resident Evil to 28 Days Later, and I Am Legend to World War Z, popular culture would have you believe that Earth’s doomsday scenario is irrevocably predetermined. However, a logical assessment of the war between viruses and medicine shows that this is unlikely to be the case.
Infectious diseases have been around since the dawn of time, and their scourge has led to the deaths of countless sentient beings. In the Middle Ages, the Black Death resulted in the death of a quarter of the global population. For an inhabitant of that era, it would have seemed that the war against this unknown, invisible and cruel invention of nature was certainly lost. But today, things are different. The advent of modern medicine ushered in an era of victories against diseases of all kinds. The invention of the thermometer and the discoveries of X-ray imaging and penicillin, for example, have revolutionised the way humans view treatment. Thanks to the thousands of researchers in laboratories around the world, the causes of suffering are no longer unknown. If I catch a cold in winter, I know it’s because the lower temperatures support the virus’s proliferation while weakening my immune system. More importantly, I know how to mitigate the risk of it happening again. The miracle of modern medicine has imbued us not just with knowledge, but also the power to manipulate nature.
But what about the potential of a super-virus developing? It’s certainly possible, but the chances that we can’t contain it are slim. Foremost, this is because, in the war between nature and medical science, the former is random while the latter is targeted. Diseases evolve according to the laws of nature, through random mutation. While a flu virus may find a weak spot in the human immune system by chance, it is unlikely to repeat this feat on a systematic basis. Medical science, on the other hand, is targeted and cumulative. Once a new virus surfaces, researchers work out what they are dealing with and develop a treatment for that specific threat. Moreover, they have the benefit of building upon a plethora of research on how similar viruses were dealt with in the past.
As an illustration, imagine a blind boxing match, where both contestants have a piece of cloth wrapped around their eyes. In the blue corner is the virus champion, ‘V’. He is muscular, sizeable and confident from years of defeating lesser opponents. However, he has only one strategy: to punch randomly in all directions, hoping to hit the opponent in a soft spot. In the red corner is team science’s ‘S’, who, while young and inexperienced, is a quick learner. Though he starts out without a strategy, S begins to use auditory cues to determine his opponent’s distance, swing trajectory and the ring’s boundaries. He builds on this information until eventually he fights as though he is not blind at all, hitting V in the jaw with every punch. Who do you think will win? Sure, S may take a few big hits, but if history is any guide, he is yet to be knocked out. He is only ever improving, whereas V is still air-punching like an insane oaf.
Rabbits, climate, and zombies. Is the future truly so bleak? Despite all we read and hear, I nonetheless remain optimistic. Perhaps, just maybe, we can find a way to co-exist with super-machines. Am I irrationally optimistic? Perhaps all humans are, which is why we may not see the truth until it is too late - nor the irony that AI may simultaneously be humanity’s greatest achievement and its greatest threat.