Mental Health Providers Should Screen for AI Chatbot Use, Psychiatry Paper Argues
Zero Signal Staff
Published April 11, 2026 at 6:15 AM ET

A paper published in JAMA Psychiatry on April 10, 2026, recommends that mental health therapists ask patients about their use of artificial intelligence chatbots as part of routine clinical assessment, similar to questions about sleep, diet, and substance use. The recommendation comes as teens and adults increasingly turn to AI systems like ChatGPT, Claude, and Character.AI for emotional support and advice.
Shaddy Saba, an assistant professor at New York University's Silver School of Social Work and co-author of the paper, said therapists should inquire about AI use without judgment. "Our job is to understand why people are behaving as they are — in this case, why they are seeking help from an AI system," Saba said, adding that learning what the technology does and doesn't provide for patients offers clinical insight.
The recommendation aligns with guidance released by the American Psychological Association in November 2025. Vaile Wright, a representative of the APA, said asking patients about their AI conversations helps therapists "better know how they are trying to navigate their emotional wellbeing and their mental illness."
Saba noted that patients regularly use chatbots to discuss coping strategies for stress, relationship problems, and symptoms of anxiety and depression. He emphasized that bringing these conversations into therapy sessions could reveal avoidance patterns—for example, a patient using an AI chatbot to sidestep difficult conversations with a spouse rather than addressing the relationship directly.
Dr. Tom Insel, former director of the National Institute of Mental Health, highlighted that patients often discuss sensitive topics with chatbots they would hesitate to share with therapists due to fear of judgment. This includes suicidal thoughts, which therapists need to know about to ensure patient safety. Insel said screening for AI use creates an opportunity to uncover information patients might not voluntarily disclose.
Context
The rise of AI chatbots for mental health support reflects broader adoption trends. A 2024 survey by Pew Research Center found that 18 percent of U.S. adults had used an AI chatbot, with higher rates among younger demographics. The American Psychological Association's November 2025 health advisory specifically addressed concerns about AI's impact on mental health, noting both potential benefits and risks of reliance on non-human support systems.
Previous clinical guidelines have long included screening for substance use, sleep patterns, and other lifestyle factors that affect mental health outcomes. Adding AI use to this standard assessment reflects recognition that digital tools now shape how people seek emotional support and process mental health challenges.
What's Next
Therapists implementing this recommendation will need training on how to approach the topic without triggering defensiveness or shame in patients. Saba suggested therapists use neutral, curious language such as acknowledging that AI is "rapidly growing" and asking open-ended questions about patient experience. The recommendation does not require therapists to discourage AI use but rather to understand its role in each patient's coping strategies and mental health trajectory.