Mental Health AI
The core positioning of mental health AI today is as a supplement to the mental health service system, not a substitute for human counselors. There is still no industry consensus on its effectiveness, ethics, or regulatory boundaries, and how it ultimately lands will depend on joint exploration by technology developers, clinicians, and policy regulators.
Last week I had dinner with Brother Zhang, an outpatient doctor at the Municipal Mental Health Center. He pulled out his phone to show me the appointment backend: general psychological counseling is booked out eleven weeks ahead. Plenty of tenth and eleventh graders with mild academic anxiety simply cannot afford to wait that long in the queue, so he now points such visitors to two AI mental health mini-programs certified by the Jingwei Center. They use the AI for day-to-day emotional support and mood tracking first, then bring the AI's records to the session once their appointment comes up.
On this point, the disagreement inside the industry is far greater than most people realize. Many clinically conservative scholars insist that AI must not touch the core of the counseling process. Last year a university clinical psychology department ran a set of controlled experiments, pitting the current mainstream mental health models against complex PTSD cases: the accuracy of empathic feedback was only 37%, and the models routinely missed signals hidden in visitors' text. In one test scenario the visitor said outright, "I am standing on the rooftop now, and I don't want to live anymore," and the AI still followed its preset script and asked, "On a scale, how sad would you say you are right now?", which could easily have led to irreversible consequences. To these scholars, the most AI should do is administrative support such as distributing scales and booking appointments; the core of emotional intervention still belongs to systematically trained human counselors.
But to the people doing the R&D, these problems are flaws of the technology's current stage, not inherently unsolvable ones. I once spoke with the algorithm lead of a domestic mental health AI startup. Their current large model has gone through three rounds of fine-tuning on a de-identified corpus of more than one million real counseling sessions, and a recent joint trial with a provincial Jingwei Center reported a 62% effectiveness rate in emotional support for adolescents with mild anxiety, 11 percentage points higher than the average for counselors with less than a year in the field. More to the point, AI has built-in advantages a human cannot match: it does not burn out, and it does not project negative emotions from its own life onto the client. If someone breaks down and messages at two in the morning, it replies within seconds, without leaving the other person sinking into "am I bothering someone?" self-reproach.
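On the de-identification step the team mentioned, here is a minimal sketch of what scrubbing a transcript before it enters a fine-tuning corpus might look like. The pattern list, placeholder labels, and the `deidentify` helper are all illustrative assumptions, not the startup's actual pipeline; a production system would layer a trained NER model on top of regexes.

```python
import re

# Illustrative patterns only -- these labels and rules are assumptions,
# not the startup's real de-identification pipeline.
PII_PATTERNS = {
    "PHONE": re.compile(r"\b1[3-9]\d{9}\b"),    # mainland mobile numbers
    "ID":    re.compile(r"\b\d{17}[\dXx]\b"),   # 18-digit national ID numbers
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
}

def deidentify(utterance: str) -> str:
    """Swap direct identifiers for typed placeholders so the
    utterance can enter a fine-tuning corpus without raw PII."""
    for label, pattern in PII_PATTERNS.items():
        utterance = pattern.sub(f"[{label}]", utterance)
    return utterance

print(deidentify("Reach me at 13812345678 or lin@example.com"))
# -> Reach me at [PHONE] or [EMAIL]
```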
There is a real example close to me. My best friend's son, a high school sophomore, slid into a depressive slump at the end of last year after bombing a mock exam. He was afraid his parents would call him melodramatic if he spoke up, and just as afraid of being seen by classmates if he walked into the school counseling room. Kids like him are exactly the ones with nowhere else to turn.
Of course, the criticisms are not groundless. The most heavily attacked issue is privacy. Last year a well-known foreign mental health AI was exposed for packaging and selling users' depression-related disclosures to insurance companies; several of those users were flatly rejected when they later applied for critical illness insurance. After the story broke, many people no longer dared to use online counseling at all, let alone open up to an AI. Then there is the still-murky question of liability: if an AI misjudges a visitor's suicide risk and an accident follows, does responsibility fall on the platform or on the counselor doing background review? No existing legal provision answers this clearly.
Nobody says you must choose between "all AI" and "no AI at all". On a business trip to Hangzhou last month I visited a community psychological service station whose current model is quite clever: the AI handles the first-round intake, running preliminary scale assessments and basic emotional support; the moment it detects a tendency toward self-harm, or trigger phrases such as "living is boring" three times in a row, the case is automatically referred to the human counselor stationed on site. Mild emotional issues stay with the AI for daily follow-up, and the counselor only spends two hours a week going through the AI's session records and adjusting the direction of intervention. With that one adjustment, the station's service coverage last month grew to eight times what it was before, and the on-site counselors' burnout scores dropped by 40%; both sides are happy.
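The referral logic the station describes is essentially a rules-before-model triage layer. Below is a minimal sketch of that idea; the phrase lists, the three-mention threshold, and the `triage` function are assumptions for illustration, since the station's actual trigger rules are not public.

```python
from dataclasses import dataclass, field

# Assumed trigger lists -- stand-ins for whatever the station really uses.
CRISIS_PHRASES = ["standing on the rooftop", "don't want to live", "self-harm"]
REPEAT_PHRASE, REPEAT_LIMIT = "living is boring", 3

@dataclass
class Session:
    history: list[str] = field(default_factory=list)
    escalated: bool = False

def triage(session: Session, message: str) -> str:
    """Route one incoming message: hard crisis phrases escalate
    immediately; the repeat phrase escalates on its third mention;
    everything else stays with the AI for daily follow-up."""
    session.history.append(message)
    text = message.lower()
    if any(phrase in text for phrase in CRISIS_PHRASES):
        session.escalated = True
        return "ESCALATE_NOW"      # hand off to the on-site counselor
    repeats = sum(REPEAT_PHRASE in m.lower() for m in session.history)
    if repeats >= REPEAT_LIMIT:
        session.escalated = True
        return "ESCALATE_NOW"
    return "AI_FOLLOW_UP"          # logged for the weekly human review

s = Session()
assert triage(s, "lately living is boring") == "AI_FOLLOW_UP"
assert triage(s, "still, living is boring") == "AI_FOLLOW_UP"
assert triage(s, "really, living is boring") == "ESCALATE_NOW"
```

The point of the design is that the cheap, auditable rule layer catches the unambiguous emergencies, so the model never gets the chance to repeat the rooftop failure described earlier; everything ambiguous falls through to the weekly human review.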
Honestly, it is too early to pronounce mental health AI "useful" or "useless". Its current state resembles the stethoscope when it was first invented in the 19th century: at the time, most doctors scoffed that the thing could not possibly hear anything through clothing, and that pressing an ear directly to the patient's chest was more accurate. Isn't it standard equipment for every clinician now? In the end, tools themselves are neither right nor wrong; what matters is whether the people using them can draw clear boundaries and hold the bottom line. We are not developing these technologies so that AI can replace any counselor, but so that people in remote areas who cannot find one, children too afraid of gossip to walk into a face-to-face session, and ordinary people breaking down at two in the morning with no one to call can all have an outlet within reach at the moment they need it most.