"Intelligence is the most dangerous thing in the universe." — Erik Hoel
From my backyard I can see the distinctive silhouette of a soaring mountain range, aptly named The Flatirons. I can reach the base of the mountains within five minutes, and from there choose from one of several peaks to ascend. This particular morning, observing the mountains during a winter sunrise, I feel an irresistible urge to climb them.
During my trek up the mountain, I witness thousands of different species: plants, animals, and fungi. And I know that there are probably millions more that I cannot see with my naked eye: insects, bacteria, parasites, viruses, and so on. But regardless, all of these species submit to me during my climb, some of them cowering and fleeing in fear, some of them literally being trampled underfoot.
As a human, I am godlike in my journey through this wilderness.
After a couple of hours, I reach Bear Peak, the tallest of the set, and find there a permanent marker that serves as a seal of humanity's dominance over this mountain, and everything thereupon. It lists the name of the peak, its geographical coordinates, and the year it was conquered. This place, like all of planet Earth, is now under the rule of humanity.
At least, for the time being… but that may not always be true.
Lawbreakers
Whatever it is that separates humans from the other species—whether it be the soul, or consciousness, or language, or divine favor—we must agree that the ultimate result of that "spark" is a qualitatively higher intelligence than all other forms of life.
This places us in a separate category, accompanied by certain godlike abilities, but also some heavy responsibilities. All species, whether plant or animal, multicellular or unicellular, are bound by the immutable laws of science. All except us. As humans, we may still be bound by the laws of Physics (like gravity, inertia, friction, etc.), but because of our intelligence, we have broken the laws of Biology, enabling us to disrupt the normal order of life on Earth.
For example, all animals derive their power from the anatomy of their bodies—the size and shape of their muscles and wings and teeth—and from the vicissitudes of weather—sun, wind, rain, thermal columns. But humans can command an external power source: fire.
When we discovered fire we broke the law of Internal Power, and thereby unlocked dozens of new branches on the tree of technology. We can use fire to cook food, which provides a much denser source of nutrients than raw food. As a result, we don't need to spend all our time and energy digesting leaves like our apelike cousins. We can use fire to keep us warm, enabling us to venture outside of our normal habitat and enter more extreme latitudes. We can use fire to smelt and forge metals, to create steam engines, and to fuel the motors of cars, planes, and even rocket ships.
But these technologies did not develop overnight, nor would they be possible without the collective effort of millions of people. And this requires breaking a few more laws.
First, we need more people. We started as several thousand, so how did we get to over 8,000,000,000? Well, thanks to our technologies, we were able to break the law of Equilibrium. All other forms of life are bound by the laws of supply and demand: if a certain population grows too big, then it starts to run out of food, or it becomes an overabundant source of food for its predators. But we are not bound by those constraints. We can kill our predators with bronze-tipped spears (or guns), and we can create even more food than exists in the wild with the technologies of farming and husbandry.
Evolution
But mere numbers are not enough; what we really need is for all of those people to coordinate. Most species follow a pattern of a few behaviors learned from their local population, and the rest of their actions are driven by their genetic code. But billions of individuals cannot possibly have similar enough genetics. So how will they organize collective action? It’s theoretically impossible.
But we've broken another law of Biology, the law of Evolution. For other forms of life, if they want to change as a species, they have to do so gradually, over time, through the process of random hereditary alterations, and then passing the successful traits down to their progeny.
But humans also have Memetics, which is the process by which traits are passed through imitation. You admire Napoleon, then you become like him. You like the idea of Capitalism, or of Christianity, then you adopt those values. And so on. Thus humans can evolve much more rapidly, even multiple times within a generation.
And now because of our Intelligence, we can break the laws of Power and Equilibrium and Evolution, and thus we have dominated the entire world. We choose which species get to flourish (dogs, cats, chickens, cows, even rats), and which approach extinction (tuna, pandas, bison). We have even coerced the earth to submit to our purposes, extracting whatever materials we find valuable from its flesh, whether fossil fuels or precious metals.
Malice Aforethought
But Intelligence is not without its drawbacks, and we’ve broken another law of Biology that is quite disturbing.
We forget that animals are incredibly violent by nature, but only because they have to be. Animals have two options: kill or be killed. And by "kill" I also mean eating plants, but more pointedly, eating other animals. Bloodshed is not uniquely human; animals do it all the time. Just watch a few clips from any nature documentary.
But animal bloodshed is different, because they are bound by the law of Dispassion. Animals do not relish the thought of killing; they act out of pure instinct. Humans, by contrast, can commit murder: “The act of deliberate killing of a person without moral justification, especially with malice aforethought.”
Allow me to illustrate the point with a quote from The Point of Honor by Joseph Conrad, which details the story of two men who dueled several times over the course of their decades-long relationship:
Lieutenant Feraud crouched and bounded with a tigerish, ferocious agility—enough to trouble the stoutest heart. But what was more appalling than the fury of a wild beast accomplishing in all innocence of heart a natural function, was the fixity of savage purpose man alone is capable of displaying. Lieutenant D'Hubert in the midst of his worldly preoccupations perceived it at last. It was an absurd and damaging affair to be drawn into. But whatever silly intention the fellow had started with, it was clear that by this time he meant to kill—nothing else. He meant it with an intensity of will utterly beyond the inferior faculties of a tiger.
Only humans can hate. Only humans can kill out of disgust, or loathing, or jealousy, or castigation. Our Intelligence enables us to create a narrative around the act of killing, breaking the law of Dispassion, and infusing bloodshed with malice.
Dethroned
The malice of humanity reached its peak during the Cold War, when it was a very real possibility that the entire race could be wiped from the earth with less than 24 hours' notice. Tensions have cooled significantly in the past 50 years, helping us to forget this traumatic possibility. But I was grateful when I watched Oppenheimer the other week, as it was a good reminder that this is still a threat to our species.
There are still over 10,000 nuclear weapons primed and ready to be fired, and those nukes are owned by eight (maybe nine) different actors: the US, Russia, the UK, France, China, India, Pakistan, North Korea, and probably Israel. Notice the radical difference between the cultures and values of each of these nations. If any one of them gets pissed off, for whatever reason, things could escalate very quickly.
But there’s something looming on the horizon that makes me even more terrified. Although nukes are incredibly destructive, and the humans that control them are incredibly volatile, at least we understand the nature of what a nuclear war could look like. This very visceral fear of mutually assured destruction has kept our hands off the nuclear trigger for the last several decades.
But now there’s AI, and the problem is we don't really know what we've created, or what it's capable of. This could be viewed optimistically, if we were talking about ANY other technological development. But Artificial Intelligence is categorically different from any other type of technology. Why? Because its very name includes the thing we’ve been discussing all along. It’s not just a tool, but a potential successor to our species.
And what do we even mean by Intelligence? If we're talking about IQ scores and standardized tests (often the way we talk about each other), then GPT crushes every possible exam out there. If we talk about the Turing test, then GPT passes that. If we're talking creativity, just ask it to compose a sonnet, or summarize Hegel (in the words of Socrates), or write flash fiction. If we're talking emotional intelligence, it may not be the most charismatic, but it definitely is better than talking to a lot of other humans you might meet on the street.
OK, so granted that AI is more Intelligent than us, perhaps it lacks the consciousness that we have. But how can we be sure that consciousness will not develop from unconsciousness? How did we become conscious? If we were to go back in time to the formation of the earth and find just a hot, barren rock, would we not conclude that it would be impossible for consciousness to arise from such inhospitable conditions? And yet it did.
The 1872 fictional travelogue Erewhon by Samuel Butler is the most compelling argument I’ve read that stirs my concern about AI's development. In it, the narrator says:
It would be rash to say that no other [kinds of consciousness] can be developed, and that animal life is the end of all things. There was a time when fire was the end of all things: another when rocks and water were so.
Right now, it seems that humanity is the end of all things. My hike up into the Flatirons made me feel as if we had already "conquered" nature. Whatever we want to do to the earth, and the other species, we can do.
But it was not always that way. In the beginning, before the planets were even formed, everything just burned. Fire was the end of all things, as the sun and all the other stars in the universe just kept burning endlessly like torches in the distance. For billions of years, everything was fire. Yet no longer. Then there were rocks, and then water. And a few billion years later, animals, and eventually humans.
So is it not also possible that we will similarly be replaced?
And I think Butler’s argument is even more powerful because of how prescient it is. Writing over 150 years ago, he was shocked by the rapid development of such rudimentary technologies as the steam engine. He was distraught to see how these machines had already evolved to have a mouth (whistling via steam to make noise) and a stomach (requiring regular "feedings" of coal to keep them going).
He even argued that it might one day be possible for machines to develop ears, with which to hear each other, or even to develop their own language which we could not discern, but they could. But this is exactly what we have with binary and machine code. We use high-level programming languages and compilers to translate human instructions into code that the software, and then the hardware, can execute. But we cannot read it ourselves. And AI is a similar black box. We know in theory how it works, but cannot see the inner workings.
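Butler's image of a machine language legible to machines but not to us maps neatly onto modern compilation. As a toy illustration (my example, not Butler's), even a single readable line of Python is translated into raw bytecode that we cannot read at a glance:

```python
# A readable human instruction...
src = "x = 2 + 3"

# ...compiled into the interpreter's internal "machine language".
code = compile(src, "<example>", "exec")

# The result is a string of opaque bytes: perfectly meaningful to the
# machine, but not legible to us without special tooling.
print(list(code.co_code))
```

Running this prints a list of small integers (the exact values vary across Python versions), and a compiler for a language like C goes a step further still, all the way down to processor instructions.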
And perhaps the most surprising image is the one he paints of all the humans who are already in "bondage" to the machines: "How many spend their whole lives, from the cradle to the grave, in tending them by night and day?" This, again, in 1872, in Victorian England.
But how much worse is it now? Do we control our devices, or do they control us? How many hours per day do we tend to our phones, our laptops, our widescreens, our cars? Who serves whom?
Ominously, Butler points out that just because machines have no "reproductive system," this does not mean they cannot proliferate through more parasitic endeavors. Just as flowers use honeybees to spread their pollen, and grains use animals to spread their seed, perhaps machines use humans to spread their progeny. How many devices have already filled the planet, both in our hands and in our landfills?
I’ll admit that his reasoning is a bit antiquated, and yet it’s also still eerily unsettling. What if machines continue to proliferate, and AI continues to evolve, and we are no longer the most dominant life-form on Earth?
Pay Attention
The biggest threat we ever faced as a species was the threat of nuclear holocaust, but we survived that because of the enormous amount of discussion the topic generated. People were aware of the issue, were terrified about it, wrote stories about it, talked about it, wrote their congressmen about it. We need to do the same.
In my world of emergency medicine, there is a crucial first step in any disaster scenario. It doesn’t matter whether it’s a hazardous-materials leak, an uncontrolled fire, a terrorist attack, or an act of God like a hurricane, earthquake, or flood. Before you do anything, before you make a plan, before you call for backup, the most important step is this:
Recognize that This. Is. A. Disaster.
It may seem obvious, but in the heat of the moment, our judgment is clouded. We want to react immediately. But a disaster is fundamentally different from any other emergency, and requires an entirely different approach, and you can’t act appropriately if you don’t realize the gravity of the situation. You'll be attending to minor problems like cuts and bruises while the world (literally) crumbles around you. You'll be missing the forest for the trees.
Are we in a disaster right now?
I don't know. I don't think so, at least not yet.
But the rapid pace of AI development right now signals that the world is going to be changing in massive ways that we cannot even begin to anticipate. And I think that is the recognition that we need to have. Whether it's good or bad, something revolutionary is happening right now.
We need to pay attention. We need to be engaged. We need to be talking about these developments and their applications and their implications. We need to think about what role each of us is going to play in this pivotal moment.
If we don’t, then it’s possible that AI could supplant humanity as the end of all things. And then we might experience the end of all things.
This essay was originally published in a significantly different form on March 29, 2023, under the title “Scary Smart.”