February 12, 2025
Understanding the Ethics of AI in ABA Therapy

Artificial intelligence (AI) has been transforming the medical field for years, but interest in its applications has recently surged, bringing both excitement about its potential in applied behavior analysis (ABA) and hesitation.

To date, AI has been used in healthcare to support clinicians with documentation, analyze medical images, assist with diagnostics, predict patient outcomes, and more. AI has shown promise in enhancing patient care, particularly in ABA care delivery. In this field, AI has the potential to analyze complex behavioral patterns, customize intervention strategies, ensure compliance in records and documentation, enhance data collection and analysis, and support decision-making in treatment planning. However, certified clinicians and therapists have specific ethical foundations and analytic responsibilities to uphold that might not apply to all members of the healthcare field. When deciding how and when to deploy AI-backed tools, clinical leaders, BCBAs, and RBTs must follow a framework that maintains these ethical standards.

Bringing clinicians closer to their learners

At the core of every decision to use a digital health tool such as AI should be a key question: Will this tool bring me closer to the learner and help refine the therapy I am providing? For AI skeptics and enthusiasts alike, this question can guide BCBAs and RBTs toward using the right tools at the right times to deliver the best possible care. Being curious about the potential of these tools to enhance care is just as important as being skeptical of their shortcomings. The ABA field should be open to innovation and willing to explore the potential of AI without relinquishing the analytic responsibilities of BCBAs and RBTs. Clearly, it's a balance.

With this in mind, BCBAs and RBTs must ensure that the use of AI in ABA follows the ethical guidelines outlined by the Behavior Analyst Certification Board (BACB). To use AI ethically in ABA, it's important that clinical leadership:

  • Be transparent about how AI is being integrated into your clinical practice (and where it’s not being used).
  • Maintain confidentiality and data security by fostering open dialogue across stakeholder groups.
  • Work in tandem with AI as a support to provide effective treatment, not as a replacement.

The first standard in the BACB Ethics Code reads: “1.01 Being Truthful – Behavior analysts are truthful and arrange the professional environment to promote truthful behavior in others.” From clinical directors to BCBAs, parents to learners, technologists, and beyond, transparency and open communication surrounding the use of AI in care delivery is essential. 

The importance of ongoing dialogue

Sharing excitement, hesitation, concerns, and opportunities surrounding the use of AI in ABA therapy will give clinical leadership, BCBAs, and RBTs a seat at the table during critical conversations on the use of AI in the field. Each facet of healthcare weighs its own standards and ethical considerations, so there is no one-size-fits-all approach to using AI. What works in oncology may not work in dermatology, and what works in mental health care may not work in autism and IDD care. Because of this, the industry must continue to discuss both the pros and cons of leveraging AI so that the people delivering care on the ground, the BCBAs, RBTs, and beyond, have a voice in driving the technological evolution of the field forward.

By continuing to engage in discussions surrounding the use of AI in ABA, the industry can gain greater insight into these tools, share knowledge, and work together to navigate the exciting and complex landscape of AI in ABA. With thoughtful consideration and ethical guidance, AI can become a powerful ally in our mission to improve lives through behavioral science.

Photo: Gerd Altmann, Pixabay


David Stevens is the company’s Head of AI, where he leads the integration of artificial intelligence into CentralReach’s behavioral health practice management and clinical solutions as well as CR’s internal customer success operations and other internal functions. Prior to CentralReach, David acted as Chief Architect and CEO of Chartlytics, a Precision Teaching software that was acquired by CentralReach in 2018. David continues to manage the operations of the Chartlytics business, assisting users with digital Standard Celeration Charting and practices to accelerate outcomes for clients with autism and other developmental disabilities. An accomplished technologist and entrepreneur, David developed and sold Conduit, an enterprise software company, in 2010. He studied Computer Science at the Pennsylvania State University.

This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in healthcare on MedCity News through MedCity Influencers. Click here to find out how.
