Opinion - We should all be carbon supremacists

A carbon supremacist holds as an ideological tenet that carbon-based intelligences (humans, cats, fish, bees) are superior to silicon-based intelligences, also known as AI. It follows that silicon-based intelligences exist only to serve carbon-based ones.

I am a carbon supremacist. You should think about being one, too.

In recent years, I have had the following conversation more times than I wish to recall: “Well, it’s really interesting that [name technological advancement here] is now in the market, but it could never do my job because of [insert carefully constructed and ultimately easy to overcome reason].”

I’ve been watching carbon intelligence — humans — continuously redefine what it means to be uniquely human in ways that are neither convincing nor comforting. And in time, I have come to realize two essential truths of our age.

First, continuously moving the metaphorical goalposts is a losing proposition. With each passing advancement, the size of intellectual territory marked as “uniquely human” gets ever-smaller. I’ll refer to this as the humanist argument; it continuously stumbles on its long road toward ultimate demise.

The second truth is that no amount of marketing, demonstrations or suggestions will make me accept silicon-based intelligences as equals. They exist for one thing: servitude.

Now is the time to name this attitude and to adopt it, lest we find ourselves accidentally on a path to silicon-based intelligence becoming dominant. Since no intellectual argument about silicon-based intelligence will sway my attitude, this is no longer an intellectual discussion; my stance requires a new name: carbon supremacism.

I live in a technological world; my degrees are in electrical engineering and operations research and I’ve even been accused of being a data scientist. I build these machines on a small scale as part of my living.

My fears are not irrational. From my desk, I can see out a window past a grove of trees to the Pacific. But in my near view is a pile of small AI boards that I have been developing for narrow tasks.

My attitude toward the trees in front of me, and toward the carbon-based intelligences that share my living space, is that of partner and caretaker, stemming from love. It matters not a bit to me if you think that our capacity to love is evolutionary or divine; the point is that we have it. Other carbon-based intelligences have it too.

Conversely, my attitude towards silicon-based intelligences on my desk is one of depraved indifference. I do not hesitate to enhance their performance at the cost of their longevity. If beating their little contacts would make them perform better, I would not hesitate. I don’t know, or more importantly, care, what a silicon-based intelligence would want. Silicon intelligences do not have eons of evolution, and it is difficult for me to think they were touched by the divine.

Should a silicon-based intelligence become adequately sophisticated, what would it interpret as “love?” Would it even gain this capacity? Both yes and no are equally dark from a carbon supremacist’s point of view.

What actions do we propose as carbon supremacists? As of now, I believe I’m the only openly declared one, so admittedly I’m still kind of figuring it out. It involves a somewhat adversarial attitude towards AI. Here are some things I think are part of it:

  1. Carbon supremacists believe that at every moment of every day we are either being educated to remain the masters of silicon or being trained to become its servants. If you tune out your math teacher to scroll (name website here), you are collaborating with silicon intelligence toward your own obsolescence.

  2. Have you noticed that it is harder than it used to be to get in touch with an actual human being when you have a problem? It should be a law that a machine has to disclose to humans when it is a machine, and that there be a non-machine option. As a carbon supremacist, I should always have the option to treat with other carbon beings.

  3. We should think seriously as a society about what it means to be subject to the judgment of a computer (note: you likely already are). It is untenable to declare that decisions about humans' credit scores or other aspects of their lives will never be made entirely by an AI. However, we should take legislative action to ensure that computers are only allowed to make positive, permissive decisions and can only recommend adverse actions. A carbon intelligence has to approve, enact and, if required, explain adverse actions; a rough sketch of such a review gate follows below.
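
The op-ed stops at the policy level, but the division of labor in point 3 maps onto a simple gating pattern. What follows is a minimal, hypothetical sketch in Python; the names (Decision, route_decision, human_sign_off) and data fields are invented for illustration and do not describe any existing system. It shows one way "permissive decisions are automatic, adverse actions are only recommendations until a human approves, enacts and explains them" could look in practice.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional


class Outcome(Enum):
    APPROVE = "approve"   # permissive: e.g., grant the loan, unlock the account
    DENY = "deny"         # adverse: the machine may only recommend this


@dataclass
class Decision:
    subject: str                      # whom the decision is about
    outcome: Outcome                  # what the model recommends
    model_rationale: str              # the model's stated reason
    enacted: bool = False             # has the decision taken effect?
    human_approver: Optional[str] = None
    human_explanation: Optional[str] = None


def route_decision(decision: Decision, review_queue: List[Decision]) -> Decision:
    """Enact permissive decisions automatically; hold adverse ones for a human."""
    if decision.outcome is Outcome.APPROVE:
        decision.enacted = True
    else:
        review_queue.append(decision)  # waits for a carbon intelligence
    return decision


def human_sign_off(decision: Decision, approver: str, explanation: str) -> Decision:
    """A named human approves, enacts and explains an adverse action."""
    decision.human_approver = approver
    decision.human_explanation = explanation
    decision.enacted = True
    return decision


if __name__ == "__main__":
    queue: List[Decision] = []
    granted = route_decision(Decision("applicant-001", Outcome.APPROVE, "meets criteria"), queue)
    held = route_decision(Decision("applicant-002", Outcome.DENY, "thin credit file"), queue)
    print(granted.enacted, held.enacted)        # True False
    human_sign_off(held, "j.doe", "insufficient payment history")
    print(held.enacted, held.human_approver)    # True j.doe
```

The point of the sketch is that the machine never gains the ability to enact a denial on its own; the only path to an enacted adverse action runs through a named human and a recorded explanation.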

Each of these measures will require money, both for education and for regulation. The path that got us to where we are, particularly on the second and third issues, was paved with a well-meaning desire to save money.

Being human is not cost-effective, and the current incentives for business are to continue down the path that we are already on. Therefore, the only way out of this conundrum is through both grassroots action and legislation.

Carbon supremacists should begin by demanding to interact with a human in all transactions, and by making this part of the discussion with our elected and appointed leadership.

Harrison Schramm is a professional statistician and U.S. Navy veteran who sits at the intersection of policy and mathematics. He teaches courses in Logistics and Operations Research at the Naval Postgraduate School. 
