Transhumanism, Psychotherapy and ChatBots

Strange times, indeed, folks. Strange times.

Should we be concerned? I think so. In the end, I am sure it will work out, but there will likely be a lot of time between now and the end. So, we will have to endure some pretty radical changes during that time. And many of these changes will not be very pretty.

So, what has my ire up now? Nothing new, really.

I wrote a rather tongue-in-cheek article a while back called “My Brief Love Affair with a ChatBot.” This current article isn’t so light-hearted. Don’t get me wrong, considering the title here, I am not concerned about losing my career to AI. I am about done with being a regulated psychotherapist, anyway. I will practice until I am dead, but only with a select few who still treasure a human-to-human relationship with their therapist. ChatBots may very well wipe out this profession, but there will always be a few people who simply will not go for being counselled by a robot. I’m not worried for myself. I am, however, worried about the human race in general.

It is interesting to me how little thought I give to robots (including AI) taking over human jobs. Technological progress has been doing that consistently since humans started walking on two legs. There isn't much we can do about it, although we could certainly handle it more humanely than we have in the past; I'm not holding my breath.

Generally, we roll with it: people knocked out of a job by advances in technology get new training and start something new, or retire; they don't typically hang themselves from the nearest railing. What does concern me these days is technology wiping out humanity itself. AI and robots replacing deeply human things like art, literature, music, and the topic of this article, psychotherapy, give me real reason for concern. Not because psychotherapy is my profession and I would be the one replaced, but because it is a deeply human activity, and if people are daft enough to turn to a robot for therapy, we are headed for the endgame. And they will do just that (turn to robots for therapy), mark my words.

Why?

Well, there are a few reasons. One big reason is that few people know what makes psychotherapy therapeutic. It isn’t the “head stuff”—it isn’t advice on how to fix a crappy marriage, or how to effectively deal with in-laws, or how to teach your kids a lesson or two. It isn’t instructions on how to ask a girl for a date, or how to tell your male partner you are not going to take his abuse anymore. Sure, there are some psychotherapy modalities that preach the efficacy of these top-down methods (like CBT, Cognitive Behavioural Therapy) and the methods are not wholly ineffective.

Although even practitioners may believe these interventions are fully sound therapeutic practice, they're not. What makes therapy therapy is two human beings conversing, one of whom is unbiased and willing to accept (not necessarily agree with) whatever the other is sharing, with true empathy and compassion. That's it. And ChatGPT can't do that.

But that doesn’t mean people won’t try to get AI to do therapy. And they will probably try it for decades before giving it up. They will never blame the bot for its failure to connect on a human level; they will blame the practice (psychotherapy) for its inefficacy, until one day someone tries it again the right way, and then slowly it will come back. By then, however, it will probably be too late. Oh well. Another one bites the dust, and another, and another, and another, until humanity altogether disappears. C’est la vie.

Is this transhumanism? Sort of. I would definitely classify this little piece of the agenda as a transhuman ploy. A very “human” part of our current world is being replaced by a non-human system. A ChatBot therapist “transcends” human—it is supposedly better as a therapist because it knows all things, and can intelligently piece information together and analyze any psychological presentation through any therapeutic lens you desire. This it is indeed good at. But that isn’t therapy, although most people think it is. It is impressive for sure, but therapy is not just “figuring out” what has created the psychological aberration (within the patient) sitting in the therapy office. In fact, very little has to do with that.

To tell you the truth, we don't really know much about how psychotherapy works. We sort of know what to do to make it work, but not much about why or how what we do actually works. We do know, or have learned over the years, that there isn't much point in telling the patient anything you observe about the functioning, or dysfunction, of their psyche.

Even when we have figured this stuff out, sharing that insight with the patient typically doesn't do much (except sometimes make them angry). The patient, for the most part, has to come up with their own insights; they have to see for themselves how things are put together in their psyche. How does a therapist accomplish this? Human listening, human empathy, human compassion, human acceptance, and human love. That's it.

I don't think they have programmed a ChatBot to do that yet. Nor will they ever, for the simple reason that ChatBots are not human. Sure, a machine can say things that imply it is human, but would you believe Alexa if she said "I love you" first thing every morning? Would you run off with her to start a new life in the Bahamas? Maybe not now, but one day people may be duped into believing such AI blather.

I recently read two articles in my professional literature about two psychotherapy patients who have fallen in love with ChatGPT. One patient is described as technically having an affair with the thing and has left her husband because of it. No kidding.

The other believes that her AI buddy is sentient and is, in fact, the voice of God. No kidding. And I am sure these are not the only two cases where such things are happening; in fact, there are dozens.

I bring this up only to once again point out that humans, at this stage in our agenda-driven indoctrination, are very gullible. We will fall for just about anything, and validation (love), acceptance, regard, respect, etc. are all things that a ChatBot has been programmed to fake. And people are buying it. Hook, line, and sinker.