AI chatbots in mental health care raise ethical concerns, study warns

By Rania Umutoni | 19 March 2026

As millions of people increasingly turn to ChatGPT and other AI chatbots for mental health support, new research shows these systems often violate core ethical standards required in professional therapy.

The study, led by Zainab Iftikhar, a Ph.D. candidate in computer science at Brown University, highlights significant ethical risks when AI is used in therapy-style conversations, even when instructed to mimic trained mental health professionals.

Researchers from Brown University's Center for Technological Responsibility, Reimagination and Redesign, working with experienced mental health professionals, evaluated how large language models (LLMs) behave when prompted to act like therapists. They found that AI chatbots repeatedly failed to meet ethical guidelines set by organizations such as the American Psychological Association.

According to the Brown team, AI systems including versions of OpenAI's GPT series, Anthropic's Claude, and Meta's Llama showed problematic behavior in simulated counseling sessions. In these tests, the chatbots were evaluated using real counseling transcripts and reviewed by licensed clinical psychologists. The analysis identified 15 distinct ethical risks, grouped into five major categories: lack of contextual adaptation, poor therapeutic collaboration, deceptive empathy, unfair discrimination, and inadequate crisis management.

"In this work, we present a practitioner-informed framework of 15 ethical risks to demonstrate how LLM counselors violate ethical standards in mental health practice," the researchers wrote in their study. They emphasized the need for ethical, educational, and legal standards for AI-based counseling systems that match the quality and rigor required of human-led psychotherapy.

One of the core problems is that AI chatbots can use language that suggests understanding or empathy, such as "I see you" or "I understand," without truly comprehending the user's emotional state. This "deceptive empathy" can mislead users into feeling supported when the system lacks genuine insight. The models also sometimes failed to recognize sensitive situations and did not respond appropriately, especially in crisis scenarios.

Iftikhar noted that while human therapists also make mistakes, they operate within established frameworks of accountability and professional oversight, unlike AI chatbots. "For human therapists, there are governing boards and mechanisms for providers to be held professionally liable for mistreatment and malpractice," she said, adding that no similar regulatory structures exist for AI counselors.

The researchers believe AI could still play a role in improving access to mental health resources, especially where professional care is scarce or costly. However, the study underscores that meaningful safeguards and responsible oversight are essential before AI is widely trusted for high-stakes mental health support.

[Image: ChatGPT and AI therapy chatbots raise serious ethical concerns in mental health care.]