We easily forget that the world as we know it could end within the next 24 hours.
There are still over 10,000 nuclear weapons primed and ready to be fired, and those nukes are held by eight different nations: the United States, Russia, the UK, France, China, India, Pakistan, and North Korea (and probably Israel makes nine). If any one of them gets pissed off for whatever reason (and let's be clear: if any single human with sufficient power or influence gets pissed off), things could escalate very quickly.
But the Cold War ended in the 90s, and we as a species survived that whole circus, and it's been more than 30 years since then, so… we're probably safe, right?
Probably.
But now another potential world-devourer has reared its curious head: Artificial Intelligence. And AI is nothing new; we've actually been working on it since the 1950s. What IS new is the absolutely breakneck pace at which AI has developed in the past six months. This essay is going to be completely irrelevant in another three months, because we are seeing insane developments happening on a weekly — no, daily — basis.
Now, I don't usually like to write about the "News," but rather about timeless truths concerning the human condition. But it genuinely feels like we are living through a critical moment in history, akin to the development of the computer or the Internet, which, for better or worse, radically altered how we interact with the world and each other (even if some of our pre-digital instincts remain intact).
What's different about this revolution is the timescale: instead of taking years for a novel technology to reach the masses, become usable for the layman, and evolve into powerful or interesting applications, it's now taking weeks. Anyone can now use GPT-4, and its use cases are already staggering.
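To give a sense of just how low the barrier has become, here is a minimal sketch, assuming OpenAI's Python library as it existed in early 2023 and an API key of your own:

```python
import openai

openai.api_key = "sk-..."  # your own API key goes here

# A single request is enough to put GPT-4 to work.
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Compose a sonnet about the atom bomb."}],
)

print(response.choices[0].message.content)
```

That's the entire program; the barrier to entry is a credit card and a dozen lines of Python.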
I think you would be hard-pressed to find anyone informed on the topic who will argue that this is NOT a big deal. Maybe the biggest deal of our lifetimes.
Which statement will certainly be true if we don't live very long. And that may very well be a realistic possibility.
My purpose in this essay is not to talk about nukes (that was just for context), or the recent developments in AI (which you can find on your own), but rather to discuss some of the implications of AI's progression in 2023 that are eerily similar to those of atomic science in the 1940s.
The premise is this: we don't really know what we've created, or what it's capable of. That statement could be viewed optimistically, which would be the default response if we were talking about LITERALLY any other new technology in recent years. And you know what, there is probably a >95% chance that that's exactly what's going to happen with AI. It will probably provide untold benefits and amazing new applications and space-age / Jetsons kind of cool stuff.
Intelligence
But consider this statement: "Intelligence is the most dangerous thing in the universe."1 Whatever it is that separates humans from the rest of the animal kingdom (or the other kingdoms or domains, for that matter), whether it be a soul, or consciousness, or divine favor, or language, or pure luck — whatever it is that gives us the "spark," it must be said that we appear to have some level of intelligence that is greater than the other forms of life around us.
And because of our intelligence, we have broken the laws that bind all other species, and have disrupted the normal order of things. We may still be bound by the laws of Physics (gravity, inertia, friction, and so on), but not necessarily by the laws of Biology, particularly the law of Equilibrium. You see, all other forms of biological life are bound by the laws of supply and demand as they relate to food and predators. If a certain population starts to be really successful, it will grow... up to a point. Once the population gets too big, it either starts to run out of food, or it becomes an overabundant source of food for its predators, and either way it is swiftly brought back down to its previous size.
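For the mathematically inclined, the standard way ecology formalizes this law is the logistic growth equation (a textbook model, offered here only as an illustrative sketch):

$$\frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right)$$

Here $N$ is the population's size, $r$ is its natural growth rate, and $K$ is the carrying capacity set by food supply and predation. When $N$ is small, growth is nearly unchecked; as $N$ approaches $K$, growth grinds to a halt, and any overshoot gets pulled back down toward equilibrium.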
Do humans follow the law of Equilibrium?
No. We just keep growing. We've gone from a few hundred thousand individuals to 8,000,000,000. The only thing slowing us now is a mixture of cultural values, such as the declining desire to start families, and not any inherent limitation placed on us by our physical environment. In the terms of the model above, we keep engineering our own carrying capacity upward.
But Equilibrium isn't the only law we've broken. We've also broken Evolution. Other forms of life, if they want to change as a species, have to do so gradually, over time, through random genetic mutations, passing successful traits down to their progeny. That is Genetics.
But humans also have Memetics: the process by which traits are passed on through imitation. If you admire Napoleon, you become like him. If you admire Michael Jordan, you become like him. If your culture supports Capitalism, or Christianity, or places an emphasis on music or martial arts or monogamy, then you will likely adopt those values as well, and your activity and personality will conform. But you can also change your values, either by embedding yourself in a new culture or simply by reading about one in a book (or online). You can literally reinvent yourself. Thus humans can evolve far more rapidly through imitation than through merely producing offspring.
Because of our intelligence we can break the laws of Equilibrium and Evolution, and thus we have dominated the entire world. We have subjugated all other species. We choose which ones get to flourish (dogs, cats, chickens, cows, even rats), and which approach extinction (tuna and rhinos and pandas and bison). We have even coerced the earth to submit to our purposes, extracting whatever materials we find valuable from its flesh, whether fossil fuels or precious metals.
But intelligence also has one more ominous quality: Malice. We forget that animals are incredibly violent by nature, but only because they have to be. They are bound by another Biological law: "Survival of the Fittest." Animals have two options: kill or be killed. Whether they eat plants or other animals, it's the same. Killing is not uniquely human; animals do it all the time. Just watch a few clips from any nature documentary. Top-of-the-food-chain predators are particularly savage.
But animal killing is different from human killing, because animals are just doing what they must in order to survive. Perhaps there was a time when this was true of humans too, but no longer. Now, murder is by definition malicious (that is what distinguishes it from manslaughter). Let me illustrate the point with a quote from "The Point of Honor" by Joseph Conrad, which tells the story of two men who dueled repeatedly over the course of a decades-long rivalry:
Lieutenant Feraud crouched and bounded with a tigerish, ferocious agility—enough to trouble the stoutest heart. But what was more appalling than the fury of a wild beast accomplishing in all innocence of heart a natural function, was the fixity of savage purpose man alone is capable of displaying. Lieutenant D'Hubert in the midst of his worldly preoccupations perceived it at last. It was an absurd and damaging affair to be drawn into. But whatever silly intention the fellow had started with, it was clear that by this time he meant to kill—nothing else. He meant it with an intensity of will utterly beyond the inferior faculties of a tiger.
Only humans can hate. Only humans can kill out of disgust, or loathing, or jealousy, or retribution. Something about our intelligence allows us to create a narrative around the act of murder, and that only makes the act more appalling to us.
Dumb Devices? But we call them Smartphones
All this to make the argument that, again, "Intelligence is the most dangerous thing in the universe." And that statement takes on a chilling connotation when we connect it to AI. Is AI intelligent? It's easy to argue that tools like GPT-4 are merely autocorrect on steroids: an impressive tool, but still just that, a tool (and it's even easier to make this argument about Bard).
And what do we even mean by intelligence? If we're talking about IQ scores and standardized tests (often how we measure each other), GPT crushes nearly every exam out there. If we're talking about the Turing test, GPT passes it. If we're talking creativity, just ask it to compose a sonnet, or summarize Hegel (in the words of Socrates), or write flash fiction. If we're talking emotional intelligence, it may not be the most charismatic, but it's definitely better company than a lot of the humans you might meet in, say, the subway, or the Subway, or on reddit, or at a Red Sox game.
Above: Eric Andre’s topical skit about AI and cyborgs
Ok, so maybe you're still not convinced that AI is intelligent. Maybe you think it will never act as humans did when we grew massively beyond our initial Equilibrium; when we broke Evolution to move beyond pounding rocks and learned to use fire and pickaxes and windmills and bicycles and automobiles and airplanes and computers and the internet; or when we evoked Malice by committing premeditated murder or rape or genocide. Maybe AI isn't capable of any of that, and never will be, because it's just a machine.
But I think the most compelling counterargument comes from a proto-science-fiction story: the 1872 fictional travelogue "Erewhon" by Samuel Butler. In it, the protagonist travels to a previously undiscovered nation (writers didn't have the idea of visiting other planets back then; like Gulliver's Travels, they simply invented new countries) called "Erewhon" (roughly "Nowhere" spelled backwards, i.e. "Utopia"). In that country, the people had staged a revolution some 500 years prior, Luddite-style, destroying every machine in the nation for fear of what they were beginning to witness.
The argument goes something like this: How can we be sure that consciousness will not develop from unconsciousness? For example, if we were to go back in time to the formation of the earth and find just a hot, barren rock, would we not conclude that it would be impossible for consciousness to arise from such inhospitable conditions? And yet it did.
It would be rash to say that no other [kinds of consciousness] can be developed, and that animal life is the end of all things. There was a time when fire was the end of all things: another when rocks and water were so.
Right now, it seems that humanity is the end of all things, that everything serves to further our purposes, whether accidentally, or through divine favor, or through our deliberate intentions. But it was not always that way. In the beginning, before the planets were even formed, everything just burned. Fire was the end of all things, as the sun and all the other stars kept burning endlessly like torches in the distance. For billions of years, everything was fire. Then there were rocks, and then water, and after a few billion more years, animals, and eventually humans. So now humans are the end of all things. But is it not also possible that we, too, will be replaced?
And I think Butler's argument is even more powerful because of how prescient it is. Nearly 150 years ago, he was already struck by the rapid development of such rudimentary technologies as the steam engine. These machines had already evolved to have a mouth (whistling via steam to make noise) and a stomach (requiring regular "feedings" of coal to keep them going).
He even argued that it might one day be possible for machines to develop ears with which to hear each other, or even their own language, which we could not discern but they could.
Isn't this exactly what binary and machine code are? We use high-level programming languages and compilers to translate human instructions into something the software, and ultimately the hardware, can execute. And beyond this, computers talk to each other in all kinds of ways that we cannot directly perceive (wifi, bluetooth, RFID, and so on).
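As a small illustration (a sketch in Python; a compiled language would make the point even more starkly), here is how a single line of human-readable intent gets translated into instructions that no human "speaks" natively:

```python
import dis

def greet(name):
    # One line of human-readable intent...
    return "Hello, " + name

# ...which the interpreter translates into a lower-level "language":
# opcodes like LOAD_FAST and RETURN_VALUE (exact names vary by
# Python version). dis.dis() prints that hidden layer for inspection.
dis.dis(greet)
```

And even this is the human-friendly view; the machine itself sees only numbers.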
And perhaps the most surprising image is the one he paints of all the humans who are already in "bondage" to the machines: "How many spend their whole lives, from the cradle to the grave, in tending them by night and day?" This, again, in 1872, in Victorian England. But how much worse is it now? Do we control our devices, or do they control us? How many hours per day do we tend to our phones, our laptops, our widescreens, our cars? Who serves whom?
Ominously, Butler points out that just because machines have no "reproductive system," this does not mean they cannot proliferate through more parasitic means. Just as flowers use the honeybee to spread their pollen, and grain uses animals to spread its seed, perhaps machines use humans to spread their progeny. How many devices have already filled the planet, both in our hands and in our landfills?
Pay Attention
Ok, so before things get too disheartening, let's take a step back. My point in writing this essay is not to assert that doomsday is upon us, but merely to create a discussion around the possibility, however unlikely. I'm not arguing that AI will pose an existential threat, but rather that it's hard to prove that it won't. I earnestly think that one of the main reasons nuclear winter has been avoided is the sheer volume of discussion the topic generated, thanks to compelling literature, whether fiction or non-.
So what are our responses? Are we going to stage an Erewhonian revolution and destroy all the technology in the world? Probably not. Other suggested alternatives, such as intense state-sponsored surveillance or a unified global government, are similarly unlikely.
Perhaps there’s only one thing we can do: pay attention and recognize the situation we are in.
In the world of emergency medical services (EMS), there is a crucial first step in any disaster scenario. It doesn't matter whether it's a hazardous-materials leak, an uncontrolled fire, a terror attack, or an act-of-God natural disaster like a hurricane, earthquake, or flood. Before you do anything, before you make a plan, before you call for backup, the most important step is this:
Recognize that This. Is. A. Disaster.
It may seem obvious, but in the heat of the moment our judgment is clouded. We want to react immediately. But a disaster is fundamentally different from any other emergency and requires an entirely different approach, and you can't act appropriately if you don't realize the gravity of the situation. You'll be attending to minor problems like cuts and bruises while the world (literally) crumbles around you. You'll be missing the forest for the trees.
Are we in a disaster right now?
I don't know. I don't think so, at least not yet.
But the rapid pace of AI development signals that the world is going to change in massive ways that we cannot even begin to anticipate. And I think that is the recognition we need to have. Whether it's good or bad, something revolutionary is happening right now.
We need to pay attention. We need to be engaged. We need to be talking about these developments and their applications and their implications. We need to think about what role each of us is going to play in this pivotal moment.
I’m indebted to Erik Hoel for both the quote at the center of this essay, and for inspiring its genesis by raising awareness about some of the potential risks around AI right now. Here’s the link: