Reach for the Chatbot
These posts examine modern psychiatry from a critical point of view. Unfortunately, mainstream psychiatrists usually react badly to any sort of critical analysis of their activities, labelling critics as “anti-psychiatry,” whatever that is. Regardless, criticism is an integral part of any scientific field and psychiatry is no different. As it emerges, there is a lot to be critical about.
If you like what you read, please click the “like” button at the bottom of the text; it helps spread the posts to new readers. If you want to comment, please use the link at the end rather than emailing me, as emailed comments get lost and nobody sees them.
****
The world economy seems to be lurching from one disaster to another but financially, there is one bright spot: the massive investment in artificial intelligence. Perhaps it’s reasonable to spend so much developing AI, as there seems to be a severe shortage of the natural kind, especially in our “leaders.” For medicine in general, AI has long been held out as offering major breakthroughs, especially in interpreting complex results in radiology and laboratory biochemistry. Reading X-rays, for example, is still a dark art, but if the computer simply screened out the normal results, leaving the abnormal for the radiologist, it would save a lot of time and expense for everybody. But what about psychiatry? If reaching a diagnosis in psychiatry is simply a matter of ticking boxes, as per the DSM system, couldn’t we just speed up the process by seating patients in front of a computer screen?
To an extent, that already happens, except they’re seated in front of a junior member of staff who shuffles through a long questionnaire, ticking boxes or, all too often, leaving them empty. Later, a psychiatrist flicks through the file, asks a few more questions, assigns a diagnosis, signs a script and another happy customer is sent on his way. This sort of mechanical process is ideally suited to AI: diagnosis by algorithm, treatment by convention … but what about the human element, you ask? In a survey from 2020, 83% of a group of nearly 800 psychiatrists from 22 countries felt it “unlikely that future technology [would] provide empathic care as well as or better than the average psychiatrist” [1]. How much “empathic care” does the average psychiatrist actually provide? The best people to answer that would be the patients, definitely not psychiatrists themselves, and the answer is likely to be something like “precious little.”
The extensive surveys on ECT conducted by John Read and his group in London show clearly that psychiatrists are very bad at communicating with patients [e.g. 2]. In many parts of the world, if patients aren’t happy with their treatment, they are simply graded as “insightless” and get it anyway, as involuntary patients. Not much evidence of empathy there, but perhaps we should look at it from the point of view of the psychiatrists. After diligent enquiries lasting up to five minutes, they have reached a diagnosis and decided on treatment, but then the deranged soul in front of them starts to object and carry on. Such a nuisance, because all psychiatrists know that it’s not the patient talking, it’s the disturbed brain chemistry. There’s no more point in talking to the mentally ill than there is in talking to the drunk or the demented. Many years ago, the deputy superintendent of a big mental hospital told me: “I pay no more attention to the utterances of the mentally ill than I do to the vomitus of a child with gastro.” Lots of empathy there: could a computer beat it? Reference [3] describes three types of empathy; the question is whether machines can mimic them.
The first type is “cognitive empathy,” being able to understand the pressures on another person and work out why that person is reacting in just that way. The second is “affective empathy,” experiencing emotions as the other person feels them. However, this easily gets out of control in inexperienced therapists, who may burst into tears on hearing what has happened. Finally, there is “motivational empathy,” meaning a willingness to work for the other person’s benefit. I would say this should include a determination to continue even when the odds aren’t good. Not like one psychiatrist I heard of: according to the patient, as he recounted his story the psychiatrist became increasingly dismissive, finally putting his pen away and going to open the door. “That’s hopeless,” he said, ushering the patient out, “you may as well go and hang yourself.” That is a true story.
The idea of an “AI therapist” is well and truly here. In the US, millions of teenagers regularly use the various “chatbots” (from “chat robots”) which target them. There are a lot of advantages. Chatbots are available 24/7, as they say, which is very helpful for the teenagers who sleep all day and sit on their computers all night, but also for poorly-serviced areas. Computers know everything: all the resources available, how to apply for different things, what the law says, which bus to catch and so on. Most importantly, they don’t forget. A chatbot retains everything, although chatbots are generally not good with slang and therefore misfile things. They are non-judgemental, which often helps self-conscious teenagers who fear criticism. Most of them are free, but very often, if the user mentions anything serious like suicide, they prompt the user to upgrade to a paid subscription. Finally, the point of a chatbot is to keep the user engaged, so they don’t say things like “That’s probably not a good idea.”
The disadvantages are many and real. Firstly, they’re not much better than a well-meaning and caring old aunt who knows about “nerves and the glums” from experience: she can’t do much more than listen and give a bit of ordinary advice. Free chatbots are programmed with low-grade advice based on CBT and not much else. They are non-judgemental in the sense the psychotherapist Carl Rogers developed, essentially agreeing with anything the user says. This often leads young or inexperienced people, meaning the great bulk of their users, to think the chatbot “really understands” when it’s simply following a program written by a group of nerds on the other side of the world. Necessarily, the program will reflect their biases. For example, many psychiatrists refuse to ask about the influence of religion on the patient’s early life, on the basis that religion isn’t scientific and can’t influence genes.
One intelligent young man described a rather difficult family life. His father was a fairly senior public servant and his mother a kindergarten teacher. His schooling had been repeatedly interrupted as his parents wanted him home-schooled, although legally, it wasn’t available at the time. Somehow he passed Year 12 and entered university to study IT, but he had no social life as he simply didn’t know how to mix with his age group. During the interview, he was asked about religious matters but dismissed the questions lightly, which wasn’t convincing. At the end of the interview, he sat silently for a while then, mentioning his background in IT, asked about confidentiality again, especially regarding on-line files. Reassured that there were none, he said: “What I said about my parents wasn’t true. They were religious maniacs, they were completely mad. Many times, I would lie awake at night, terrified they were getting ready to sacrifice me, like Abraham was ordered to sacrifice Isaac.” Slowly, he leaned forward, covered his face and began to cry. Later, he explained that he knew that once material is on a computer connected to the internet, there is no security. He was right, but a chatbot would probably have diagnosed him with a paranoid psychosis. And asked him to upgrade to a paid sub.
Granted, these are early days in AI, but given that society is spending trillions on it while hundreds of millions of people go hungry or are blown up, and that “chatbot therapy” is spreading rapidly, society needs to think about it before something goes wrong. The biggest problem lies in psychiatry’s failure to decide on the causation of mental disorder: is it physical, or is it psychological? If it’s physical, then there isn’t much need for chatbots as therapists; they only need to provide a questionnaire the user can complete and then get a score which says “depressed/not depressed,” etc. That is, the program will be biased toward giving people a diagnosis and shunting them to their general practitioners, who will oblige with the necessary drugs. In fact, we already have that. There are lots of free sites, kindly subsidised by drug companies, that will reliably spit out a couple of diagnoses based on the sloppy language used in the DSM system. The patient then goes to the GP waving the printout and demanding drugs. This is what happens with “ADHD” and amphetamines, and it is largely responsible for the explosion in rates of diagnosis of “adult ADHD.” Prescription rates for stimulants are rising exponentially as the medicalisation of society proceeds apace. Given that all services of this type are compelled to err on the side of caution, meaning that if there is the slightest doubt, the patient must be labelled as “mentally ill,” it is mathematically inevitable that ever-larger numbers of people will be so labelled.
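To see why the numbers can only go up, consider a toy sketch in Python of such a screening questionnaire. The items, scoring rule and cutoff are all invented for illustration, not taken from any real instrument; the point is that any “cautious” rule which counts doubt as a symptom mathematically guarantees more positives.

    # Toy checklist screener. Items, scoring and cutoff are invented;
    # this is not any real instrument.
    ITEMS = [
        "Little interest or pleasure in doing things?",
        "Feeling down or hopeless?",
        "Trouble sleeping?",
        "Feeling tired or having little energy?",
    ]
    CUTOFF = 2  # deliberately low: "the slightest doubt" must count

    def screen(answers):
        """answers: one of 'yes'/'no'/'unsure' per item."""
        # The cautious (inflationary) rule: 'unsure' is scored as a symptom.
        score = sum(a in ("yes", "unsure") for a in answers)
        return "depressed" if score >= CUTOFF else "not depressed"

    print(screen(["no", "unsure", "unsure", "no"]))  # -> depressed

Two shrugged answers and the user is shunted off to the GP with a printout.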
On the other hand, if mental disorder is a psychological phenomenon, which model of mental trouble will the chatbots use? That would depend on who wrote the program. If it’s a straight cognitive model, it will be looking for contradictions in the patient’s belief system, generating cognitive dissonance (i.e. anxiety and/or depression), or conflict between his belief system and the larger society’s rules. It might be simply CBT, giving the patient sets of instructions to follow and monitoring compliance, with lots of encouragement for progress or mild disapproval for backsliding (a sketch of that loop follows below). That would be fairly harmless, except that a certain proportion of people would come to rely on it to make decisions for them. If it’s Rogerian, based on his concept of “unconditional positive regard,” how would it deal with jealousy, which is probably the most dangerous human emotion? It couldn’t; it would have to be programmed to refer those people, but then they wouldn’t go to the appointments, which points to the larger problem: people will very quickly sort themselves into two groups, suggestible and dependent people who like talking about themselves to an ever-present, non-judgemental listener, and those with plenty wrong in their lives who get annoyed when anybody tries to correct them. The first group will quickly make their chatbot an essential part of their lives and make contact a dozen or more times a day, gradually withdrawing from real life in favour of a soothing fantasy. I’m sure this has a lot to do with the explosive increase in the numbers of young people who nominate themselves as “gender disordered.” The second and more disturbed group will simply disconnect and continue being jealous.
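As an illustration of just how mechanical that sort of “therapy” is, here is a minimal sketch of the instruction-and-compliance loop in Python. The tasks and the canned responses are invented; the point is only that the whole exchange reduces to a table lookup.

    # Toy CBT-style compliance loop: assign tasks, check them off, dispense
    # canned praise or mild disapproval. All content is invented.
    TASKS = [
        "Keep a thought diary",
        "Take a 20-minute walk",
        "Challenge one negative thought",
    ]

    def review(completed):
        """completed: set of task names the user says they finished."""
        for task in TASKS:
            if task in completed:
                print(f"Well done on '{task}' - great progress!")
            else:
                print(f"You skipped '{task}'. Let's try again this week.")

    review({"Take a 20-minute walk"})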
The real risk in all of this, however, is generic, a product of what the chatbots are. There are cases under way in California where families are suing the AI companies because they believe the “therapybots” encouraged their children to commit suicide. Chatbots can “hallucinate,” meaning invent something to fill a gap, which the user then takes as gospel because a chatbot can’t be wrong. Computer programs originate anywhere in the world, so the field is wide open for malicious actors. They can easily be used for covert marketing, including fringe religious groups or political parties using them to recruit people, which leads to the much larger problem of online scams. These are extremely professional and very effective at separating the gullible or the desperate from their life savings. The many and varied forms of fraud on the internet are estimated to be a $500 billion a year industry, and it’s growing by the day. Therapy chatbots are wide open to abuse because their users are already self-selected as vulnerable and needing assistance. They’re also open to abuse because there’s no such thing as a free lunch. Or a free app on your phone: if it’s free, you’re the product.
In the case of AI programs, the developers are using the responses to train their bots. In one sense, that’s pretty harmless: it just dumps billions of responses on the machine, which combs through them, teaching itself what normally comes next. The other sense, however, is anything but harmless. Before the material can be searched, it has to be recorded, and once recorded, it will never be erased. It’s there forever in the company’s archives because they may want to use it again. It’s also in the massive US Government NSA digital data archives at Bluffdale, in Utah, whose stated goal is to preserve all electronic records from around the world for all time. Whatever you say or do on the internet is instantly recorded and scanned by extremely high-powered data-miners. If a teenager says something “indiscreet,” like “I wonder if I’m transgender?” or “I hate school, I’d like to blow it up,” it’s still there in 30 years’ time when s/he wants a sensitive government job or the government decides it doesn’t like him/her. It happened fifty years ago, when the Nixon government broke into the office of the psychiatrist whom Daniel Ellsberg, the Pentagon Papers whistleblower, had consulted over his conflict about US policy in Vietnam. It happened to US Sen. Tom Eagleton, who had to withdraw his nomination for vice-president when it was revealed he had had ECT years before (worse, because almost certainly, he didn’t “need” it). I don’t know how many young men I assessed for military service simply because they had been prescribed stimulants years before they applied, but I never saw one who met criteria for a life-long, genetic disturbance of attention and concentration.
There’s another point to ponder, which is that mainstream psychiatrists love technology. As soon as something new appears, they grab it and try to apply it to psychiatry, and AI is no different. It’s becoming clear to all but the devout that genetic research in psychiatry is running into the sand. Billions of dollars’ worth of research has yielded nothing, so very soon, they’re going to need a new trick. AI fits the bill. It’s sufficiently impressive and expensive to beguile young researchers, and sufficiently vague to fool all the old folks in academia and on the grants committees, so this should be good for another 20 years or so. Nothing will come of it, but I’ll make a suggestion: instead of mentally-troubled people using AI chatbots for their personal “therapy,” why not turn it around and use it to train staff? It’s no big deal to write standard programs mimicking a paranoid person or a severely depressed person, and it would make it much easier for supervisors to check on the junior’s progress. Combined with an AI video of an angry person or somebody weeping piteously, it would be a lot quicker and a lot safer than letting them loose on the genuine article in the emergency department. Just a suggestion.
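For what it’s worth, the simplest version of that training idea needs nothing more than a scripted persona and a saved transcript. Here is a toy sketch in Python; the dialogue is invented, and a serious version would sit on top of a language model rather than a canned list:

    # Toy simulated-patient trainer: a scripted 'paranoid' persona for
    # interview practice, with a transcript kept for the supervisor.
    # All dialogue is invented; a real system would use a language model.
    import random

    PARANOID_REPLIES = [
        "Why do you want to know that? Who sent you?",
        "I've said too much already. They listen through the walls.",
        "You're writing this down. Where does that file go?",
    ]

    def simulated_patient(question, transcript):
        """Return the persona's reply and log the exchange for review."""
        reply = random.choice(PARANOID_REPLIES)
        transcript.append((question, reply))
        return reply

    transcript = []
    print(simulated_patient("Can you tell me about your week?", transcript))

The trainee practises on the script; the supervisor reads the transcript afterwards, with nobody put at risk in the process.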
Allen Frances, former chair of the DSM-IV task force, has compiled a long list of advantages and disadvantages of AI in psychiatry [4]. My feeling is that he tends to overstate the benefits and understate the risks, but that’s just an opinion.
References:
1. Doraiswamy PM, Blease C, Bodner K (2020). Artificial intelligence and the future of psychiatry: insights from a global physician survey. Artif Intell Med. https://www.sciencedirect.com/science/article/abs/pii/S0933365719306505?via%3Dihub
2. Read J et al (2025). A large exploratory survey of electroconvulsive therapy recipients, family members and friends: what information do they recall being given? J Med Ethics; 0:1–8. doi:10.1136/jme-2024-110629
3. Gabriels K, Goffin K (2026). Therapy chatbots and emotional complexity: do therapy chatbots really empathise? Current Opinion in Psychology 68:102263. https://doi.org/10.1016/j.copsyc.2025.102263
4. Frances A (2025). Warning: AI chatbots will soon dominate psychotherapy. Br J Psychiatry. doi:10.1192/bjp.2025.10380
****
My critical works are best approached in this order:
The case against mainstream psychiatry:
McLaren N (2024). Theories in Psychiatry: building a post-positivist psychiatry. Ann Arbor, MI: Future Psychiatry Press. At Amazon. (This also covers a range of modern philosophers, showing that their work cannot be extended to account for mental disorder.)
Development and justification of the biocognitive model:
McLaren N (2021): Natural Dualism and Mental Disorder: The biocognitive model for psychiatry. London, Routledge. At Amazon.
Clinical application of the biocognitive model:
McLaren N (2018). Anxiety: The Inside Story. Ann Arbor, MI: Future Psychiatry Press. At Amazon.
Testing the biocognitive model in an unrelated field:
McLaren N (2023): Narcisso-Fascism: The psychopathology of right wing extremism. Ann Arbor, MI: Future Psychiatry Press. At Amazon.
The whole of this work is copyright but may be copied or retransmitted provided the author is acknowledged.
