Thursday, May 29, 2025

AI and being human in 2025

Anytime you take up a tool, and consider its use, imagine it in your worst enemy's hand, with the sharpest part of it pressed against your neck.

By almost any objective measure, this is the greatest time to be alive in all of human history. Global poverty is at a historic low, and even what it means to be poor today represents a standard of living better than what many people had throughout history. Even by recent standards, life is better for most people than it was a mere 25 years ago, to say nothing of 100 years ago.

But the 21st century is not without its own challenges. Despite the objective improvements, life feels precarious these days. Many of the most notable technological developments of the last decade have been failures, scams, or both. Theranos. Wearables. Smart homes. Blockchain. Cryptocurrency. The ongoing degradation of social media and the general enshittification of many other services.

While it might be a good time to be alive, it feels like the experience of being human has been compromised. 

What does it mean to be human? 


Tools

When I was young, I was told the defining characteristics of humanity were its development and use of tools, enabled by the opposable thumb and a large and complex brain.

When you think of “technology” these days, what do you think of? Apps? Smartphones? The internet, whatever that is these days? The miracle of mRNA vaccines?

These are all iterations or updates of ideas and products that have been around for a long time. We think we are living in a wondrous present filled with futuristic innovation, but the reality is that most of the 21st century so far has been updates of old ideas, perhaps enabled at new scale or with new shamelessness, or more likely just new marketing.

The beginning of the 20th century was a different story. No time in recorded history could match the euphoria and curiosity about the future that gripped the Western world around 1900. The preceding decades had produced the telephone, the light bulb, and the automobile (soon joined by new assembly-line mass production techniques), and in 1903 came the airplane. Humans could fly.

In 1909, Fritz Haber demonstrated a process for making ammonia from its elements, which Carl Bosch then industrialized. The Haber-Bosch process was a milestone in industrial chemistry and helped feed the world.

These were not incremental improvements on existing ideas and technology (the way a cellular phone or a compact disc would later be), but an explosive expansion beyond what had previously been assumed to be the limits of human possibility.

By 1915, every single one of those innovations would be transformed into a horrific weapon of war, and deployed to kill human beings on a scale that was likewise previously unimagined and beyond what people had assumed possible.

The fist picks up the club, then the sword, then the gun. These are ways of applying force at a distance. We end up in 2025, where combat often means pushing a button to kill a person you cannot even see. Or all the people.

We have built tools that exceed our ability to wield them, or even to comprehend them. Aside from weapons of total destruction, we have created cheap food with little nutritional value but plenty of calories and tooled ourselves into an obesity epidemic. We built the greatest information collection and distribution system in human history, made it open to all, and then filled it with lies, manipulation, advertising, and a soupçon of sentimentality.

Technology always has consequences, and it is always used in unanticipated ways. Not all of them are good.

And now there is artificial intelligence, or AI. The dream of AI is that humans can create a computer program that is “intelligent.” But what does that even mean? 

The hilarious and sad thing is we don’t know. And I mean this in both the sense of “we don’t know what a thinking, intelligent program does, or how to make one” but also “we don’t even know how we think, or what intelligence in humans -- or any other creature -- is.”

That has not stopped over $300 billion of investment in the United States alone over the last 5 years. The combination of weird optimism and desperate greed strikes me as particularly human.


AI isn’t new

There’s a Greek myth about Talos, a big bronze robot that protects Crete by flinging boulders at ships. The story dates back to 700 BC or so. It is somewhat telling that even back then, artificial beings were imagined as weapons of war.

I think our modern conception of AI dates back 100 years, to R.U.R. -- Rossum’s Universal Robots -- written in 1920 by Karel Čapek. It is where the word “robot” comes from: it derives from the Czech word “robota”, which means, more or less, forced labor. The name “Rossum” plays on the Czech “rozum”, meaning “reason” or “intellect”.

The play covers all the issues you can imagine. After wondering what it means to be human, and realizing they are vastly more capable than their human creators and masters, the robots rise up and take over the world, displacing humanity. It is The Terminator. It is Blade Runner. It is a critique of capitalism and the use of technology for war.


But this is art, this is speculation. When I say AI isn’t new, I also mean there has already been technology that could arguably pass the Turing Test. I am speaking of Eliza.

Eliza is a computer program created in the mid-1960s by Joseph Weizenbaum. Weizenbaum was trying to create a program that worked like a person-centered therapist in the style of Carl Rogers. Person-centered therapy (PCT), or Rogerian therapy, has the therapist reflect a client’s statements back to them. (You can try Eliza here.)
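
To see how little machinery the trick requires, here is a toy sketch of Eliza-style reflection in Python -- my own illustration, not Weizenbaum’s actual program. It matches a pattern, swaps the pronouns, and hands the statement back as a question.

```python
import random
import re

# Words to swap so "my job" reflects back as "your job".
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "i'm": "you're", "you": "I", "your": "my"}

# A few pattern -> response rules; the real Eliza script had many more.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?",
                      "Would it really help you to get {0}?"]),
    (r"i am (.*)",   ["How long have you been {0}?",
                      "Why do you think you are {0}?"]),
    (r"(.*)",        ["Please tell me more.",
                      "How does that make you feel?"]),
]

def reflect(fragment):
    """Swap first- and second-person words in a matched fragment."""
    return " ".join(REFLECTIONS.get(word, word)
                    for word in fragment.lower().split())

def respond(statement):
    """Return a Rogerian-style reflection of the client's statement."""
    for pattern, responses in RULES:
        match = re.match(pattern, statement.lower().strip(" .!?"))
        if match:
            reply = random.choice(responses)
            return reply.format(*(reflect(g) for g in match.groups()))

print(respond("I am unhappy with my job"))
# e.g. "How long have you been unhappy with your job?"
```

That is more or less the whole trick: no understanding, just pattern matching and mirrors.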

Weizenbaum thought it was somewhat silly and limited -- he had programmed Eliza, he knew how it worked, how little it actually did, and how Eliza absolutely did not and could not think. And yet, his secretary tried it, and quickly asked Weizenbaum to leave the room, because she was legitimately “connecting” with the machine. As far as the secretary was concerned, she was having an important, enlightening conversation about private matters. 

Weizenbaum had survived the early years of Nazi Germany. He endured a challenging childhood and, later, a divorce in which he lost custody of his child. He got into computers and psychoanalysis. By the early 1960s, he was working for General Electric, where he built computers for the Navy that launched missiles and computers for banks that processed checks.

He was asked to join the MIT faculty not long after, where he worked with legends like Marvin Minsky and John McCarthy -- the man who coined the term “artificial intelligence”.

Weizenbaum was no dummy, and no booster of technology. He spent a lot of time thinking about what it means to be human. By the 1970s, Weizenbaum called AI “an index of the insanity in our world.” 

He saw the so-called computer revolution as going backwards, the wrong direction. He felt it “strengthened repressive power structures instead of upending them. It constricted rather than enlarged our humanity, prompting people to think of themselves as little more than machines.”

By ceding so many decisions to computers, he thought, we had created a world that was more unequal and less rational, in which the richness of human reason had been flattened into the senseless routines of code.

Weizenbaum’s thinking had been influenced by his experiences in psychotherapy. Unlike his colleague Marvin Minsky, he did not see the human mind as a “meat machine”; he saw it as something fundamentally unknowable, opaque, deep, and strange.

Weizenbaum knew Eliza was a trick, and he could see how this trick would lead people to see computers as having actual judgment and being deserving of credibility. Weizenbaum knew they had neither.


Randomness

I have been working with computers as musical devices since the mid-1980s. Initially, one thing people loved about computers, sequencers, and drum machines was that they kept perfect time. Give them a tempo, and they’ll stay locked to it forever, like a metronome, perfectly repeating what you asked them to play.

But artists realized this didn’t feel right. It felt cold, inhuman, too perfect. Real drummers and real players were better, or at least different. New music was made that leaned into this perfection and coldness, but many people wanted something that felt more human.

Programmers responded. They added a feature called “humanize”. This feature would randomly vary the timing and loudness of the notes. The more you increased the value of “humanize”, the more frequently the variations happened, and the more extreme they became. 
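
A minimal sketch in Python of what such a feature amounts to -- the jitter ranges here are my own illustrative guesses, not any particular sequencer’s:

```python
import random

def humanize(notes, amount):
    """Apply 'humanize'-style random slop to a sequence of notes.

    notes:  list of (time_ms, velocity) pairs, velocity 0-127 as in MIDI.
    amount: 0.0 (machine-perfect) to 1.0 (extreme), scaling how far
            timing and loudness are allowed to drift.
    """
    max_shift_ms = amount * 30   # up to +/- 30 ms of timing slop
    max_vel_step = amount * 20   # up to +/- 20 steps of velocity slop
    humanized = []
    for time_ms, velocity in notes:
        time_ms += random.uniform(-max_shift_ms, max_shift_ms)
        velocity += random.uniform(-max_vel_step, max_vel_step)
        humanized.append((max(0.0, time_ms),
                          int(min(127, max(1, velocity)))))
    return humanized

# A bar of straight eighth notes at 120 BPM, all equally loud...
straight = [(i * 250.0, 100) for i in range(8)]
# ...made "human" by making them randomly sloppier.
sloppy = humanize(straight, amount=0.5)
```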


This is what the programmers thought being a human was -- like the machine, but randomly sloppier, with the extreme settings producing something comical and unlistenable. “Being human” was reduced to being an inferior version of the machine.

It turns out that teaching machines to play music with “human feel” is far more difficult than people thought. Because it isn’t about variations in timing or volume, random or otherwise. It’s not just that “feel” and “swing” exist. It is how they exist and what drives them. AI cannot “feel the music”. All it can do is analyze the output of people who did feel the music, and perform a shallow interpolation.


Creativity

Alban Berg wrote strange music. He was a disciple of Arnold Schoenberg, but Berg wanted to wed Schoenberg’s new 12-tone system to German expressionist ideas, to get at the unconscious, to get at intense feelings. In order to do this, he created something genuinely new and controversial. To some, his masterpiece opera Wozzeck is bizarre garbage. To others, it is sublime. But all agree nothing like it had existed before.

One can look at all the major 20th century art movements -- impressionism, expressionism, Dada, surrealism, futurism, cubism, minimalism, and so on -- and see them as mere reactions or responses to what came before. To some degree, that is true, but it ignores the passion, the fury, the desire that fueled the need to create something that felt new and unprecedented. It also ignores the obsessiveness, silliness, darkness, and other feelings that drove artists to create all those things the way they did.

AI in its current state is incapable of generating anything new. AI cannot create from a place of feeling or memory, for it has none. AI did not have a traumatic childhood or a loving and supportive family or a failed first marriage or money problems or substance abuse issues. AI has never looked at a painting and felt peace or beauty, or felt challenged or disturbed. It has never felt anything at all.

I have been writing songs for almost 40 years. I am still learning how to do it. A good chunk of my work involves interpolation, or being inspired by the work of others. But even when I am writing in the style of someone else, my own strange self seeps through. I cannot help it. I will put in my own feelings, my own twists, my own obsessions and favorite ideas.

The machine cannot do that. It has no inspiration, no favorites, no artists it loves, no artists it hates, no feelings or memories attached to anything it has ever ingested or “created”. It has as much pride, satisfaction, or frustration in producing art as a vending machine has in dropping a soda can into the pick-up slot.

It may not matter. I have written before about the sorry state of art and art literacy in our modern times. When it seems like all media is merely a way for internet celebrities and major corporations to monetize their brands, perhaps greatness doesn’t matter, and instead custom-fit pandering content is the future.

But AI is never going to make Bruce Springsteen’s “Nebraska”. AI is never going to make Bon Iver’s “For Emma, Forever Ago” or Low’s “Double Negative”. AI is never going to create Citizen Kane or Five Easy Pieces or Birdman. It might create something like those things after they’re out, but AI is never going to do it first. AI is never going to rebel against the status quo, it is never going to write something because it has to, because its brain is on fire with ideas and it is unable to sleep.


(Emotions &) Relationships

Some argue that our relationships are the defining characteristic of being human.  When you think about your life so far, it is possible you think about physical objects -- the house you grew up in, your favorite stuffed animal, your musical instrument. But I would bet that you think most about people, the times you had, the relationships you had, and how they made you feel.

AI is never going to be happy to see you. It will never be jealous of your success. It will not miss you when you’re gone. It won’t get misty about the old days.

I have learned the hard way that relationships require vulnerability to be truly intimate. You can never have a real relationship with AI, because AI has no vulnerability. It has nothing to lose. It has no warmth to give, no deeper layers to show, no stories to tell.

AI will never trust you more or less, and it will never respect you more or less. AI is never going to have a bad day or a bad week and need your comfort or understanding or help. 

AI is no more capable of relationships, in either direction, than a toaster is.


Death

Humans are considered to be unique among animals in that we understand and can conceptualize the idea of our own death. You are going to die. So is everyone you ever loved or knew. Reckoning with this inescapable reality is something we all must do at some point in our lives. 

In my own experience, it is significant and transformational. Six years ago, I had cancer. I was extremely fortunate and survived -- for now -- with some physical damage and scars.

Last year, I had to put my father into assisted living. He is 81. He has vascular dementia and has been declining rapidly for the last 2 years. It is unlikely he will be around much longer. He is already diminished mentally and physically, fading away before my eyes.

These 2 events and the close contact with the Grim Reaper changed me. You have likely already had a few encounters with mortality, and you will have more, if you are lucky. You may have to bury your own parents some day. It will change you, and memories of them will haunt your life and your dreams.

You are still young, and yet, I am sure you think about death differently now than you did when you were a child, or 10 years ago. I was your age once. Now I am 55 and the realization that I have perhaps 20 or 30 years remaining -- and that the last third of those probably won’t be much fun -- is sobering, and gives my every remaining day a sense of urgency.

AI does not die. It has no conception of death. It knows no sadness or melancholy at the thought that everyone it has ever known is fated to oblivion. AI cannot cry because it misses its dead friends. AI will not change its life or thinking or perspective or philosophy in any profound way because of what it thinks or experiences, because it cannot think and does not experience. AI won’t rue or celebrate the wasted days and years of its youth. It has no concerns about losing its vision or ability to walk or its memory.

AI does not miss its mother. It cannot miss its friends. And it cannot translate any of that feeling or knowledge into any insights or changes in who it is, or its behavior. 

As humans, we do. We have no choice.  


Therapy

Last year, I started grad school to become a therapist. More than one of my friends has asked me if I am worried about AI therapists -- the offspring of Eliza and LLMs, the next wave of chatbots -- taking all the jobs or displacing human therapists. One of my friends is already using their own ChatGPT instance as a kind of therapist.

In this past semester, we studied Carl Rogers -- the man whose particular therapy modality was used as the basis for Eliza. Rogers himself said the only thing that mattered was the relationship between the therapist and the client. The therapist needed to do 3 things: show empathy, view the client with “unconditional positive regard”, and be “congruent” -- to live by their values. Doing that, and reflecting what the client says, is enough for the client to have a positive outcome.

Decades of research have validated Rogers, showing consistently that the best predictor of good therapeutic outcomes is not what kind of therapy is used, or how long, or anything other than the quality of the relationship between the therapist and the client.

Here’s what I know. No AI can be a Rogerian therapist. Machines cannot show empathy, for they have none. Empathy is the ability to understand and share the feelings of another person. Empathy means being aware of another person's emotional state, seeing things from their perspective, and imagining yourself in their place. Machines are unaware. They have no imagination. All they can do is repeat what other people have said.

AI cannot view the client with “unconditional positive regard”. It can fake it and use positive words, but machines have no regard. Their gaze is as vacant as that of a corpse. And AI cannot live by any values, for AI has neither life nor values.

Sure, you can have a therapy-like experience with a chatbot. It is not the same. There are some people who may find it better or more useful for themselves in the short term. Some people prefer artificial fruit-flavored candy to actual fruit. Some people would rather masturbate than have actual sex with an actual human. It is not the same. 

Weizenbaum said that it would be a “monstrous obscenity” to let a computer act as a therapist in a clinical setting.

Humans have an innate and strong cognitive bias towards anthropomorphism. Modern technology is exploiting that bias. Studies have shown that the level of anthropomorphism in AI products affects consumers' purchase intentions and brand evaluations, and that anthropomorphic design cues, like human-like voices or appearances, can increase trust in robots and other agents. Thus companies implement these features, so you will trust their robot agents, think favorably of their brand, and buy more stuff. It is manipulation of your cognitive bias for their ends, and that’s before we get to their actual platforms.


The Platforms

So let’s talk about those platforms for a moment, and how they affect us as humans.

Back in the pre-smartphone, pre-internet days, life was a little more difficult. But it was also yours. If you had a computer and bought a program, you owned your copy. It would run forever, bugs and all, unchanging. If you bought a calculator, it was useful forever.

Many products you buy today are integrated hardware and software, and often rely on a companion app or remote server to keep working. If the manufacturer stops supporting your thing, either because they went out of business or just decided they didn’t want to deal with it anymore, it will eventually stop working. This has happened with stereo equipment, mobile phones, home security systems, music recording and playback software, and more.

I still have books, vinyl albums, and compact discs I bought in the 80s. They are mine, and I can use them any time I want (assuming I have a turntable or CD player). But the books on my Kindle, the music I stream from Spotify, the games I buy, excuse me, license from Steam -- though I paid for them in transactions designed to look like purchases, they are not mine. If the companies who own the platforms decide to, they can deactivate individual pieces of content or the entire platform, leaving me with nothing.

Today, all of our life activities are tracked by apps and platforms, which mediate the experience, show advertising, take a cut of the action, and mine and sell our data. Your smartphone itself is a platform on a platform, with a multi-billion dollar company making and selling the hardware and operating system, which report back all sorts of data. The company can and will grant law enforcement access to your phone. The phone itself runs on a network operated by another multi-billion dollar company, a monopoly in all but name. That network company may be funneling data, intentionally or not, to governments and corporations.

You open your phone to read the news. You almost certainly aren’t looking at an actual website for a news provider anymore (if one can call the Washington Post, owned by billionaire Jeff Bezos; CNN, owned by Warner Bros. Discovery; or the once-great New York Times “news” these days).

You are probably looking at a social media feed. A multi-billion dollar company called Meta owns Facebook and Instagram. X, formerly Twitter, is owned by billionaire and world’s richest doofus Elon Musk, who treats it as his personal vanity website. TikTok is owned by China-based ByteDance, over which the Chinese government exerts considerable control. YouTube is owned by Alphabet, which also owns Google and its suite of products, including Gmail, and the biggest advertising business in the world.

All of those companies control what you see. They are effectively unregulated and can do whatever they want. Once, social media was nothing but what you and your friends posted. Now, those are the increasingly rare oases in a vast desert of advertisements and “suggested posts” from professional influencers. You literally don’t know what you’re missing, or what you’re getting. What you’re mostly getting is content selected to promote engagement, which is often achieved by activating your negative emotions -- rage, despair, sadness. The robots are pushing our buttons, and they are good at it.

Go out to get some food, and your phone will track you. If you disable those features (which requires you to disable other useful things), your credit card will do just fine. Your purchase data is aggregated by the 3 big credit reporting bureaus, all of whom have built shadow dossiers on all of us, with our social security numbers, credit ratings, purchase histories, former addresses, and more. That data is packaged and sold all the time, including back to the big tech companies, who use their own sophisticated tools to match it up with their own shadow profiles of you.

If you use Apple Pay or similar features, other companies get a cut and a chance at the data, too. And of course, you want to put in your phone number or loyalty number to save a few bucks, even as doing so shares more of you with the technosphere.

At your job, if you’re lucky enough to still have one, you are working on yet more platforms. Slack. Microsoft Office. Google Suite. Workday. The Atlassian family. Salesforce. On the plus side, corporations demand autonomy and privacy, so your data (or rather, the data belonging to your employer) isn’t shared. On the minus side, nearly all of these products are ugly, tedious, and don’t connect to each other. 

Even romance is not immune. What was once an unthinkable invasion of privacy is now routine: dating is mediated by platforms. Tinder. Hinge. Bumble. The first two are owned by a single company, Match Group, which also owns Match, PlentyOfFish, Meetic, OkCupid, OurTime, and Pairs. Not everybody looks for companionship this way, but the most recent data shows that half of people under 30 have used dating apps. Your dates are monetized and data-scraped. You have no idea what kind of influence these companies are exercising. The machine, the platform, the billionaires have become intermediaries for love and connection.

Free time? You’re watching streaming channels, where the content is ever-changing and disappearing. We have more choices and more content than ever before, but it all feels more trivial, weightless, and superficial than ever.

Our lives are mediated by these platforms now, and these platforms are owned either by faceless, soulless corporations, or worse, mercurial billionaires who think they know best.

AI is likely to be one more, pushed into everything until it is inescapable.


What about simulation?

Some may think that the experiences I cite above can be simulated or emulated. Let’s just program the AI to act sad or worried, or otherwise try to seem human.

But the key thing here is the AI is not actually sad or worried or happy. It is executing a pantomime of those things, based on someone else’s observations of the superficial, external signs of those emotions. It is like a sociopath practicing smiling and frowning in a mirror -- the AI doesn’t feel the feelings, but it can put on a mask that might help you suspend your disbelief.

Creators of AI will absolutely add this sociopathic mask, because, as we’ve said, it helps you buy what the AI is selling, in every respect. But this is deception and dishonesty. It is a used car salesman repeating your name to you because Dale Carnegie’s 1936 book said it made people like you. It is the phoniest of smiles pasted over the robot’s face.

Simulation is also inherently reductive. The question is “what is the least amount of work we have to do to create something good enough?”

Simulation isn’t intelligence. I can simulate being wise by reading the answers out of the back of a textbook. All it shows is that I know how to repeat what someone else figured out or said.


It’s not all bad

Only fools make predictions. AI is still in its early days, and it is possible that it will get better and find a wide range of uses that don’t just lead to impoverishment of the human experience and manipulation of our bank balances. 

There are some clear areas where AI is promising, mostly tasks where humans are required to learn and analyze a vast amount of complicated material, or where humans are required to, more or less, act like robots themselves, doing limited, repetitive tasks. You’d be surprised how many of these jobs there are.

An example of the former is medical imaging. Many medical test results depend on a highly skilled human looking at slides or other imagery and deciding whether or not something is abnormal. Humans learn how to do this by looking at a whole bunch of slides and being told which is good and which is bad. AI with computer vision can be trained on every image ever taken, and can make determinations much faster and at scale. AI can even give you a confidence score about how certain it is, and for edge cases, those highly skilled humans can step in to validate.
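
In code, that division of labor might look something like the following sketch. The model interface (`model.predict`) and the thresholds are assumptions for illustration, not any real product’s API.

```python
def triage(images, model, low=0.10, high=0.90):
    """Split imaging results into machine-decided and human-review queues.

    Assumes model.predict(image) returns a probability in [0, 1] that
    the image is abnormal -- a stand-in interface, not a real library.
    """
    auto_normal, auto_abnormal, needs_human = [], [], []
    for image in images:
        p_abnormal = model.predict(image)
        if p_abnormal <= low:
            auto_normal.append(image)      # confidently normal
        elif p_abnormal >= high:
            auto_abnormal.append(image)    # confidently abnormal: flag it
        else:
            needs_human.append(image)      # edge case: an expert reviews
    return auto_normal, auto_abnormal, needs_human
```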

An example of the latter is something like being a customer support representative, where your job is to follow a script to the letter based on user information, to be professional and cheerful, and not react to rude or emotional people. AI with voice synthesis will be great at this, and it will be even more infuriating to deal with than a clueless human operator.

Another example of the latter is much lawyering. Many lawyers are paid lots of money to essentially fill out special legal Mad-Libs or forms. AI is going to be great at this, though it is unlikely the various bar associations will ever approve its use. So lazy lawyers will quietly have AI do all the work and sign their own names to it. Things won’t get better, but they may get a lot cheaper. Don’t go into this kind of law if you’re thinking about it!


Conclusion

This is a little dark. I’m sorry about that. I do think our current moment is a bit dark, though. While it is possible to recognize that generally things are better than they have been, when we get to specifics, it feels kind of grim. 

Because the tools we build and take up aren’t inert. They amplify who we are, for better or worse, and they change us. To paraphrase Nietzsche, when you wield a tool, the tool also wields you. We can see it in language, where the limitations of texting and phone keyboards combine with decimated attention spans to produce clipped, nuance-free communication ever more reliant on acronyms and symbols. Stripped of contextual cues and robustness, our messages are misunderstood more frequently.

Perhaps we are seeing it in other levels of life, too. We are practically living inside the machine now, with our robots and platforms in every corner of our lives. 

As I consider what artificial intelligence means now and tomorrow, I can’t help but wonder if the real revelation is not about the nature of the machine’s supposed intelligence, but our own. 

How willing we are to convince ourselves that we are smart and special, even as we demonstrate how quickly we will reduce our options and our lives to the binary choices offered by our own creations.

I believe we, you, can still make a difference and make choices in the work we do and the lives we lead that bring us to a more enlightened, more intelligent, more human place. 

--

This is an adaptation of a talk I gave for Mark Delong's graduate-level technology seminar at Duke University on 12/03/2024.