AI in Behavioral Health: Supporting the Human Touch with Ethical Technology
By Michael McAlpin, Co-Founder & Chief Revenue Officer, Alleva
The landscape of behavioral healthcare is rapidly evolving, with artificial intelligence (AI) emerging as a transformative force. While the integration of AI for behavioral health promises exciting opportunities to enhance practice and address complex challenges, it’s crucial to remember that its primary role should be to augment treatment, not replace it. As Michael McAlpin, Co-Founder and CRO of Alleva, says, echoing the sentiment of leading psychological organizations, “technology should support the bond between clinician and patient, not undermine it.”
Addressing Pain Points: Where AI Can Truly Help
Clinicians often face significant administrative burdens, such as extensive documentation and inconsistent charting, which can lead to burnout. Behavioral health AI tools offer meaningful relief in these areas and highlight the many benefits of AI in healthcare when designed ethically and responsibly.
Smart Charting Support:
AI EMR and AI EHR features can assist with transcription and SOAP notes, streamlining the documentation process. This can substantially reduce clinician charting time; one case study, for example, showed a 32% reduction, freeing up valuable time for clinician-patient engagement.
Predictive Analytics:
AI can surface patterns such as missed appointments and rising stress indicators, which can be crucial for relapse prevention in addiction recovery and may help reduce 90-day relapse rates.
Personalized Care:
AI can recommend personalized care based on a patient’s history, tailoring treatment pathways. This is an example of what AI can do for behavioral health clinicians who need timely, data-driven insights while staying focused on the human connection.
However, the key ethical safeguard across all of these applications is that AI assists, supports, and flags, while clinicians make every decision. There must always be a “human-in-the-loop” for every AI output.
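As a concrete illustration, consider a minimal sketch of that safeguard: AI output lands in a review queue as an inert suggestion, and nothing reaches the patient record until a clinician explicitly accepts or edits it. Everything here (the class names, the example risk flag, the workflow) is a hypothetical illustration, not a description of any particular product.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISuggestion:
    """A draft AI output: a note summary, a risk flag, a care recommendation."""
    patient_id: str
    kind: str        # e.g., "draft_soap_note" or "relapse_risk_flag"
    content: str
    rationale: str   # transparency: why the model produced this output

@dataclass
class ReviewDecision:
    clinician_id: str
    accepted: bool
    final_content: Optional[str] = None  # the clinician may edit before accepting

class ReviewQueue:
    """AI suggestions stay inert until a clinician signs off (human-in-the-loop)."""

    def __init__(self) -> None:
        self.pending: list[AISuggestion] = []
        self.chart: list[str] = []  # stand-in for the patient record

    def submit(self, suggestion: AISuggestion) -> None:
        # The AI can only propose; nothing reaches the chart here.
        self.pending.append(suggestion)

    def review(self, suggestion: AISuggestion, decision: ReviewDecision) -> None:
        # Only an explicit clinician decision can write to the record.
        self.pending.remove(suggestion)
        if decision.accepted:
            self.chart.append(decision.final_content or suggestion.content)

# Example: a hypothetical model flags elevated relapse risk from missed visits.
queue = ReviewQueue()
flag = AISuggestion(
    patient_id="p-001",
    kind="relapse_risk_flag",
    content="Elevated relapse risk: 3 missed appointments in the last 30 days.",
    rationale="Missed-appointment pattern exceeded the configured threshold.",
)
queue.submit(flag)
queue.review(flag, ReviewDecision(clinician_id="c-042", accepted=True))
print(queue.chart)  # the flag is charted only because a clinician accepted it
```

The design choice that matters is structural: the AI writes only to the queue, and only the review step, which carries a clinician’s identity and decision, writes to the chart.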
Ethical Foundations and Preserving Human Connection
For AI to be truly beneficial in behavioral health, it must be built and implemented on a strong ethical foundation. This includes core principles such as:
Transparency:
Understanding how AI tools work and influence outcomes.
Accountability:
Ensuring clear lines of responsibility for AI’s outputs.
Equity:
Designing tools that do not perpetuate or amplify existing biases and are accessible to diverse populations. Bias testing is required for personalized care recommendations.
Consent:
Obtaining explicit informed consent from patients regarding data usage and the application of behavioral health AI tools in their care. The tool should provide guidance or require attestation that informed consent has been obtained.
Compassion:
Maintaining the human element at the core of care delivery.
Data privacy and security are paramount. Any AI tool must attest that it is HIPAA compliant and/or compliant with the applicable data privacy laws and regulations in the jurisdiction of practice. This includes providing a business associate agreement (BAA) and ensuring personal and user data are encrypted.

Companies should also clearly communicate what personal data they collect (e.g., name, email, IP address, location, protected health information) and whether data is shared with third parties or sold, offering users the option to opt out where applicable. If user data is used to train the AI model, this should be disclosed, and users should have the right to delete, correct, or amend their data. Notably, HIPAA was enacted nearly 30 years ago, and its current regulations may not adequately cover protected health information (PHI) collected by companies that do not meet the definition of covered entities.
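One way to make those disclosure obligations concrete is to track them as structured data when evaluating a vendor. The sketch below is a hypothetical illustration; the field names and the screening rule are assumptions for the example, not a compliance standard or legal advice.

```python
from dataclasses import dataclass, field

@dataclass
class DataPractices:
    """Hypothetical disclosure record an evaluator could require from a vendor.

    Mirrors the questions raised above: what is collected, with whom it is
    shared, whether it trains the model, and what rights users retain.
    """
    hipaa_baa_in_place: bool                  # business associate agreement signed
    encrypted_at_rest: bool
    encrypted_in_transit: bool
    data_collected: list[str] = field(default_factory=list)
    shared_with_third_parties: bool = False
    sold_to_third_parties: bool = False
    opt_out_available: bool = False
    used_for_model_training: bool = False
    training_use_disclosed: bool = False
    user_can_delete_or_amend: bool = False

    def gaps(self) -> list[str]:
        """Return the disclosures that would block adoption under the
        principles above (a toy screening rule, not legal advice)."""
        problems = []
        if not self.hipaa_baa_in_place:
            problems.append("no business associate agreement")
        if not (self.encrypted_at_rest and self.encrypted_in_transit):
            problems.append("data not fully encrypted")
        if self.used_for_model_training and not self.training_use_disclosed:
            problems.append("undisclosed use of user data for model training")
        if not self.user_can_delete_or_amend:
            problems.append("no right to delete, correct, or amend data")
        return problems

vendor = DataPractices(
    hipaa_baa_in_place=True,
    encrypted_at_rest=True,
    encrypted_in_transit=True,
    data_collected=["name", "email", "IP address", "location", "PHI"],
    used_for_model_training=True,
    training_use_disclosed=False,
)
print(vendor.gaps())
# ['undisclosed use of user data for model training',
#  'no right to delete, correct, or amend data']
```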
The American Psychological Association (APA) emphasizes the necessity for the psychology field to adapt and develop standards for the effective and responsible use of mental health technologies. This requires updating practice standards to cover the ethical creation, dissemination, training, implementation, evaluation, and continuous improvement of digital tools, always ensuring safety, quality, and cultural responsiveness.
A critical recommendation from the APA is the need for interdisciplinary collaboration, involving human factors psychologists, IT professionals, and AI specialists, to ensure comprehensive standards for mental health technology service delivery.
The Role of Psychologists in an AI-Enabled Future
AI isn’t meant to replace clinicians but to empower them and their clients. While the increasing use of mental health technologies presents an opportunity to expand access to care, it also raises concerns about maintaining the quality of care and the invaluable empathetic and targeted treatment that comes from years of psychological training.
The APA’s Mental Health Technologies Advisory Committee explicitly addresses the concern of “job loss” and the potential for psychologists to be “swapped out for lesser credentialed providers” due to technology. It highlights the risk of AI models drafting psychological assessment reports in seconds from raw scores. The APA advocates that these tools should not be developed without clinicians’ oversight and should serve to aid clinicians in expanding access and quality of care, rather than replacing them. The goal is to reinforce the value that psychological training and expertise bring and to protect the standards of practice from encroachment by technology.
Furthermore, the APA emphasizes that training and education must adapt to include technological skills as a core competency in pre- and post-doctoral programs. This should cover the history, science, and ethics of AI for behavioral health technologies for intervention and assessment, as well as supervised direct care experiences using these tools. This ongoing education is crucial given the rapid pace of technological change.
AI as a Complement: The Jevons Paradox in Practice
Some economists suggest that AI’s ability to make workers more productive could, paradoxically, increase the demand for human labor — a concept known as the Jevons paradox. Just as jets made pilots more productive, leading to more demand for air travel and thus more pilots, AI could make behavioral health professionals more efficient, potentially increasing demand for their services as costs are lowered and access is expanded. This demonstrates what AI can do for behavioral health clinicians when implemented thoughtfully.
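To see the arithmetic behind the paradox, assume a simple constant-elasticity demand model in which the quantity of care demanded scales as price raised to the power of negative elasticity. The numbers below (a 30% efficiency gain, an elasticity of 1.5) are purely illustrative assumptions:

```python
# Jevons-paradox arithmetic under a constant-elasticity demand model
# (quantity ~ price ** -elasticity). All numbers are illustrative assumptions.

efficiency_gain = 0.30   # AI cuts clinician time per session by 30%
elasticity = 1.5         # assumed price elasticity of demand for care (> 1)

hours_per_session = 1.0 - efficiency_gain        # 0.70 of the old time
relative_price = hours_per_session               # cost falls with clinician time
relative_demand = relative_price ** -elasticity  # sessions demanded: ~1.71x

total_clinician_hours = relative_demand * hours_per_session
print(f"Sessions demanded: {relative_demand:.2f}x")            # ~1.71x
print(f"Total clinician-hours: {total_clinician_hours:.2f}x")  # ~1.20x
```

Under these assumptions, a 30% reduction in clinician time per session yields roughly 20% more total clinician-hours demanded; if demand were inelastic (elasticity below 1), total hours would shrink instead. Whether demand for behavioral health care is elastic enough for the paradox to hold is an empirical question.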
Indeed, the benefits of AI in healthcare include the potential to improve job quality for clinicians by reducing mundane and repetitive tasks, which can lead to increased job satisfaction, autonomy, and engagement. It can also augment workers’ capabilities and increase access to work for differently-abled individuals. However, careful change management and clear communication are essential so that these tools reduce burden rather than introducing new scrutiny or stress for clinicians.
As healthcare executives grapple with multimillion-dollar AI investment strategies, they are balancing rapid adoption with the need for rigorous oversight and alignment with human-centric patient care. Investments in AI EMR and AI EHR solutions, for example, have shown improved accuracy, outcomes, and patient experience precisely because the technology augments world-class providers without replacing the critical relationship between clinician and patient.
The Path Forward
The integration of AI into behavioral health is not about replacing the deeply human connection and expertise that defines effective care. Instead, it’s about leveraging technology to empower clinicians and clients alike. As Michael McAlpin, Co-Founder of Alleva, states, “Use it to treat the whole person — not just the chart.” By embracing AI for behavioral health thoughtfully, ethically, and with a steadfast commitment to the irreplaceable value of human professionals, we can build a future where mental health care is more accessible, efficient, and profoundly human.
Works Cited
American Psychological Association. “Companion Checklist: Evaluation of an AI-Enabled Clinical or Administrative Tool.” APA-evaluating-artificial-intelligence-tool-checklist.pdf, n.d., pp. 2–7.
American Psychological Association. “Recommendations from the American Psychological Association’s Mental Health Technologies Advisory Committee to Advance Psychological Practice and Research.” APA-virtual-summit-position-paper.pdf, n.d., pp. 1–12.
Congressional Budget Office. “Artificial Intelligence and Its Potential Effects on the Economy and the Federal Budget.” Dec. 2024, pp. 1–54.
Dyrda, Laura. “Healthcare’s Billion-Dollar AI Challenge.” Becker’s Hospital Review, 1 May 2025.
NPR. “Why the AI World Is Suddenly Obsessed with a 160-Year-Old Economics Paradox.” Planet Money, NPR, 4 Feb. 2025.
“Overview.” AI & SOCIETY, n.d.
“Summary of Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark.” Life 3-Summary.pdf, 2017, pp. 1–6.
Tony Blair Institute for Global Change. “The Impact of AI on the Labour Market.” 8 Nov. 2024, pp. 1–102.
Alleva is a leading behavioral health EHR platform built to support the clinicians and organizations changing lives. Designed by industry professionals, our cloud-based solution simplifies documentation, strengthens compliance, and promotes whole-person care—so providers can focus on what matters most: helping people heal.