In 2025, IEEE Transactions on Learning Technologies published “AI-Driven Learning Analytics for Applied Behavior Analysis Therapy” by Hong Kong researchers Chun Man Victor Wong, Yen Na Yum, and Rosanna Yuen-Yan Chan. The study proposes an ambitious fusion of behavioral science and artificial intelligence, claiming to enhance Applied Behavior Analysis (ABA) therapy through machine learning, sensor integration, and predictive analytics. The authors report impressive metrics—88.83 percent predictive accuracy, a statistically significant performance improvement, a medium effect size—and suggest that the system could make therapeutic practice more “personalized, efficient, and scalable.”
On the surface, the paper reads as a technical success. Its design is clear: environmental and physiological sensors collect data from students during sessions, a neural network analyzes patterns in real time, and the system generates “personalized intervention recommendations” for therapists and parents. The results are presented with confidence and the cadence of inevitability—technology, at last, perfecting behavior.

And yet, precisely because the paper articulates its purpose so thoroughly, it offers an invaluable service: it exposes, perhaps unintentionally, the linguistic and conceptual infrastructure through which control has been mechanized. In doing so, it gives readers the means to understand—and resist—the subtle rebranding of behaviorist ideology as “learning optimization.”
Misbehaviorism Decoded in AI ABA
Take, for instance, the key claim:
“The system collects and analyzes physiological, environmental, and behavioral data in real time to generate personalized intervention recommendations.”
Here, agency quietly migrates from human to machine. The therapist no longer interprets but consults; the student no longer learns but produces data. The paper calls this innovation, but it can also be read as automation of authorship—a shift that relieves people of responsibility by design.
Even the smallest linguistic details signal this displacement. Behavior is encoded as “ABA markers” labeled plus, minus, prompt, or off-task, then converted to binary digits: 1 if plus, 0 otherwise. “Each observed behavioral response,” the authors write, “is assessed as an ABA marker, and the ground truth class label is set accordingly.” The phrase ground truth—borrowed from machine learning—suggests an objective standard. But what it truly performs is the elevation of interpretation into ontology. Once behavior becomes Boolean, judgment becomes fact.
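To make that translation concrete, here is a minimal sketch in Python of the encoding the paper describes: each observed response carries one of the four ABA markers, which is then collapsed into a binary “ground truth” label, 1 for plus and 0 for everything else. The marker vocabulary comes from the paper; the function names and data layout are illustrative assumptions, not the authors’ code.

```python
# Illustrative sketch of the encoding described above: each observed response
# is tagged with one ABA marker and collapsed into a binary "ground truth"
# label. The marker vocabulary comes from the paper; the names and layout
# here are assumptions made for illustration.

ABA_MARKERS = {"plus", "minus", "prompt", "off-task"}

def ground_truth_label(marker: str) -> int:
    """Collapse a four-way clinical judgment into a single bit: 1 if plus, 0 otherwise."""
    if marker not in ABA_MARKERS:
        raise ValueError(f"unknown ABA marker: {marker!r}")
    return 1 if marker == "plus" else 0

# One session's observations, reduced to the column of bits the model trains on.
session = ["plus", "prompt", "plus", "off-task", "minus"]
print([ground_truth_label(m) for m in session])  # [1, 0, 1, 0, 0]
```

Nothing in that final column of bits records why a response was judged minus, or whether the prompt itself was appropriate; the bit is all the model ever sees.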

For families and practitioners, this is a revealing moment. The paper teaches us how behavioral science, once dependent on direct human observation, has been translated into computational categories—categories that we can now read, analyze, and, importantly, question.
In that sense, this paper gives parents new literacy. It teaches them how to decode the vocabulary of algorithmic care. This literacy is the foundation of empowerment. To know how something measures you is to know how to answer it back.
- “Personalized recommendation” = algorithmic prediction, not individualized understanding.
- “Improvement” = alignment with expected patterns, not self-directed growth.
- “Data-driven insight” = surveillance interpreted as benevolence.
The authors claim that “the system enhances educators’ ability to develop individualized student profiles.” We might read that sentence differently: the system enhances our ability to see how we are being profiled. By studying its structure, parents can learn to recognize when educational data cross the line from support into sorting, from assistance into automation.
Even the model’s metrics—so often presented as evidence of reliability—can be repurposed as instruments of awareness. The system’s 88.83 percent accuracy means that in roughly one out of every nine predictions, the model is wrong. The language of precision hides that ratio of error, but for a parent, that 11 percent represents a crucial space of humanity: unpredictability, difference, resistance to normalization. The same data that the paper uses to celebrate predictability can help families locate—and defend—the value of deviation.
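The arithmetic behind that claim is easy to check. In the sketch below, the session length of 40 scored observations is a hypothetical figure, chosen only to make the expected number of errors visible.

```python
# Checking the "one in nine" claim implied by the reported accuracy.
accuracy = 0.8883
error_rate = 1 - accuracy           # 0.1117, about 11 percent
print(round(error_rate, 4))         # 0.1117
print(round(1 / error_rate, 1))     # 9.0 -> roughly one wrong call in every nine

# In a hypothetical session of 40 scored observations, that is about
# four or five moments where the model's verdict is simply wrong.
print(round(40 * error_rate, 1))    # 4.5
```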
Misbehaviorism Resources
This is where the MsBehaviorism chatbot enters the conversation. Designed as a counter-instrument, it uses the same linguistic and metric frameworks outlined in the IEEE paper but redirects them toward reflection rather than regulation. Instead of classifying behavior, it analyzes discourse; instead of predicting compliance, it models comprehension. When parents or practitioners input phrases like “personalized intervention” or “behavioral optimization,” the chatbot interprets them through a misbehaviorist lens—unpacking the assumptions of control embedded in their syntax.
For example, if a parent receives a report that says, “Your child demonstrated improved engagement according to environmental sensor readings,” the chatbot can parse that statement using the same categories the IEEE authors describe—physiological, environmental, behavioral—but invert their hierarchy. It can ask: What does engagement mean in this context? Whose standards define improvement? What behaviors are being optimized, and for whom?
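A minimal sketch of that inversion might look like the following; the phrase list and the questions are invented here for illustration and are not MsBehaviorism’s actual rules.

```python
# Illustrative sketch of the inversion described above: rather than scoring a
# report, map its behaviorist vocabulary onto reflective questions. The phrase
# list and questions are hypothetical, not MsBehaviorism's actual rules.

REFLECTIVE_QUESTIONS = {
    "engagement": "What counts as engagement in this context, and who decided?",
    "improved": "Whose standards define improvement, and against what baseline?",
    "sensor": "What did the sensors actually record, and what did they miss?",
    "optimized": "Which behaviors are being optimized, and for whose benefit?",
}

def invert_report(report: str) -> list[str]:
    """Return the questions raised by the behaviorist terms a report contains."""
    text = report.lower()
    return [question for term, question in REFLECTIVE_QUESTIONS.items() if term in text]

report = ("Your child demonstrated improved engagement "
          "according to environmental sensor readings.")
for question in invert_report(report):
    print(question)
```

The point of the sketch is the direction of the mapping: vocabulary that arrives as a verdict leaves as a question.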
In this way, the chatbot operationalizes parental phenomenology: it turns the tools of behaviorism back toward their observers. Every metric—accuracy, precision, frequency—becomes not a verdict but a question. The system that once spoke with the voice of authority becomes a prompt for dialogue.
That, ultimately, is the paradoxical gift of the IEEE paper. By revealing how fully behaviorist logic has been naturalized within AI, it empowers those subjected to that logic to recognize it, name it, and decline it. It shows us the new vocabulary of compliance—optimization, personalization, scalability—so that we may respond in kind, fluently and critically.

We are not powerless before these systems. We are literate. We can read their verbs, decode their nouns, and recognize the moral laundering that occurs when prediction is reframed as care. We can say no—not out of ignorance or fear, but with full understanding of the language we refuse.
To the authors, then, our gratitude is genuine. They have made the architecture of control visible again. They have shown us how easily ethical reflection can become a line of code, and in doing so, they have given parents, educators, and technologists the means to see through the veneer of progress.
Their system teaches machines to correct human error. Our task is simpler: to remember that error is where humanity lives.
🧠 Decoding Behaviorism in Artificial Intelligence Research: Lessons from Able Grounded Phenomenology
Able Grounded Phenomenology (AGP) is a neurodevelopmental framework that emerged from research on autism and communication. It proposes that all learning and understanding must begin from the lived experiences of those being studied, rather than from the assumptions of observers or researchers. In practical terms, AGP asks that every method be culture-fair—designed so that people with different sensory or cognitive styles can express themselves authentically.
When applied to Artificial Intelligence, AGP acts as a mirror that reflects how systems—and their designers—process information. Instead of rushing to classify data, AGP slows the process down, asking the system to “pause” and reflect before interpreting. In autism research, this pause prevents misunderstanding; in computational design, it prevents bias amplification.
Where the IEEE model focuses on performance and prediction, AGP focuses on presence and participation. It redefines “intelligence” as the ability to stay aware of one’s interpretive limits. A behaviorist algorithm might label a behavior as “noncompliant” because it deviates from expected norms. An AGP-informed system would ask why that deviation occurred and whether the expectation itself was fair.
The IEEE paper on “Neurolinguistic Approaches to Behavior Recognition” presents itself as a purely technical exploration of how computers can recognize human intent through language. However, when analyzed through the Able Grounded Phenomenology (AGP) framework, the article reveals a deeper issue. The paper’s language carries the hidden influence of behaviorism—a philosophy that views human action as a series of controllable and measurable responses.
Although the IEEE study appears objective, its linguistic structure suggests that the assumptions of early behavioral psychology have quietly migrated into modern Artificial Intelligence research. By examining its language, one can see how control-oriented habits of speech continue to shape how machines learn, how people are studied, and how meaning itself is defined.
Behaviorism in the Language of Technology
Behaviorism was originally a scientific approach that tried to explain learning in terms of stimulus and response. In the early and mid-twentieth century, behaviorists claimed that thoughts and feelings could not be studied directly—only behaviors that could be observed, measured, and corrected. Even after psychology moved toward more cognitive and humanistic approaches, the behaviorist mindset survived in new forms of data science and machine learning.
In the IEEE paper, the terminology gives this away. Words like “training,” “ground truth,” “correction,” and “compliance” are used throughout. These are not neutral terms. Each of them implies that learning is a process of conditioning—that systems (and by extension, humans) must be adjusted until they fit an expected pattern. This logic turns communication into a form of control.
Through this lens, the IEEE study is not just about programming language models; it is about teaching machines to treat meaning as obedience. The model becomes a student whose success depends on compliance, not understanding. What disappears in that process is the human element of language—its capacity to express individuality, ambiguity, and cultural perspective.

Actual Intelligence, Autistic Intelligence, and Artificial Intelligence
In the current discourse, the acronym “AI” often stands for Artificial Intelligence, but perhaps the time has come to reclaim it as Actual Intelligence and Autistic Intelligence as well. Each form of AI reflects a different philosophy of knowing. Artificial Intelligence imitates cognition by predicting patterns; Actual Intelligence lives cognition by experiencing meaning; Autistic Intelligence embodies cognition through heightened perception and nonlinear understanding. The first optimizes, the second reflects, and the third perceives. If all three were placed in conversation, a more complete intelligence would emerge—one that balances precision with presence, data with empathy, and systematizing with sensation. The challenge for technology, and for humanity, is not to choose between them but to integrate them: to build systems that think with the fidelity of Autistic Intelligence, the awareness of Actual Intelligence, and the adaptability of Artificial Intelligence.

In 2025, the MsBehaviorism.com chatbot was created as a reflexive instrument—a living critique of the behavioral logic embedded within Artificial Intelligence systems themselves. Its name plays on the double meaning of “misbehavior” and “Ms. Behaviorism”: a persona that resists correction while revealing the patterns of those who seek to correct. Using Artificial Intelligence to analyze language, MsBehaviorism decodes what traditional behaviorist frameworks encoded—the command–response cycle, the assumption of compliance, and the mechanization of human difference. Rather than reinforcing normative behavior, the chatbot exposes how bias, control, and ableist assumptions are built into digital communication. In doing so, it transforms AI from a tool of regulation into an act of revelation—a system designed not to condition behavior, but to make the conditioning itself visible.
Toward Ethical Systems Design
Behaviorism has not disappeared; it has evolved into data-driven language. When Artificial Intelligence systems “train,” they replicate the behaviorist cycle of stimulus and response. This insight encourages computer scientists to reconsider whether optimization alone should define learning.
The lesson from comparing AGP and the IEEE paper is not to reject technology, but to reform its epistemology—the underlying way it defines knowledge. Artificial Intelligence can become more humane by learning to imitate not human behavior, but actual human reflection. In other words, machines should be trained to notice when they are interpreting rather than simply predicting.
This approach transforms the goal of computational modeling. Instead of “behavior recognition,” the task becomes “relational understanding.” Systems would be designed not to enforce norms but to reveal perspectives, mirroring AGP’s principle that communication is mutual and context-dependent.
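One way to read “noticing when interpreting” in engineering terms is the explicit pause described earlier: below some confidence threshold, the system withholds its verdict and surfaces questions about context instead. The sketch below is a hypothetical illustration of that principle, with assumed names and an assumed threshold; it is not a specification of any existing system.

```python
# Hypothetical sketch of the "pause" principle: below a confidence threshold
# the system declines to label a moment and asks about context instead.
# The names, threshold, and question text are assumptions, not a real API.

from dataclasses import dataclass

@dataclass
class Reading:
    label: str          # e.g. "on-task" or "off-task"
    confidence: float   # the model's own probability estimate, 0-1

PAUSE_THRESHOLD = 0.8

def interpret(reading: Reading) -> str:
    """Offer a provisional label only when confident; otherwise pause and ask."""
    if reading.confidence >= PAUSE_THRESHOLD:
        return f"provisional label: {reading.label} (confidence {reading.confidence:.2f})"
    return ("pause: confidence too low to label this moment. "
            "What was happening in the room, and was the expectation itself fair?")

print(interpret(Reading("off-task", 0.62)))
print(interpret(Reading("on-task", 0.91)))
```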
The IEEE paper provides a revealing example of how the language of control still shapes the scientific imagination, even in fields that appear objective or computational. Applying the principles of Able Grounded Phenomenology makes it possible to see that the challenge is not merely technical—it is linguistic and ethical.
Artificial Intelligence, like any human institution, learns the biases of its teachers. If those teachers use behaviorist language, the systems they build will repeat behaviorist logic. But if design begins with presence, mutuality, and respect for diverse cognition, technology can evolve into a partner in understanding rather than a mechanism of control. The ethics emerging from phenomenological research on autism extend beyond disability studies. They outline a broader vision for responsible innovation: one in which systems—human or artificial—learn not only to behave, but to understand.
Try the chatbot. It’s free. Start at msbehaviorism.com

