In April 2025, behaviorist Cody Morris and others published a paper with a modest-sounding goal: to examine how behavior-analytic journals cite a single study. The study in question was Kupferstein’s 2018 paper reporting elevated rates of post-traumatic stress symptoms among individuals exposed to Applied Behavior Analysis (ABA). The Morris paper does not test Kupferstein’s findings. It does not attempt replication. It does not analyze PTSD prevalence, trauma mechanisms, or long-term outcomes. Instead, it asks a narrower, procedural question: how do behavior analysts reference this study when they cite it?
At first glance, this appears to be an exercise in bibliographic housekeeping. Fields routinely examine citation practices to improve clarity and prevent misrepresentation. When analyzed, however, the paper performs a different kind of work. It demonstrates how behaviorists identify an epistemic disruption in their field, reclassify it as a problem, and then deploy their own methodological framework, behaviorism itself, to regulate the disruption.
What the PTSD Study Did—and Did Not Do
Kupferstein’s 2018 study surveyed 460 participants about their own experiences with early childhood interventions. Online data collection made participation easier for a wider range of respondents and treated accessibility as a research practice rather than an afterthought. The recruitment materials were written in accessible, plain language, aiming for roughly a fifth-grade readability level, one of the conditions required for Institutional Review Board approval, where the ethical standard is not elegance but comprehensibility, especially when a study involves vulnerable human participants and emotionally charged subject matter.
The largest group of respondents turned out to report exposure to ABA therapies, a result that could not be presumed in advance. Recruitment came through more than one pathway. Roughly half of the sample was recruited through the Simons Foundation, which maintains a database of formally diagnosed autistic people who had indicated they were willing to be contacted for research. The remainder were recruited through email distribution lists, flyers, and social media posts. This mixed recruitment approach was built to measure reported experience across a reachable population. It was not built to select a clinic cohort or verify treatment protocols.
Methodologically, Kupferstein’s paper was a prevalence study. It surveyed autistic adults and caregivers of autistic children and analyzed rates of PTSD symptoms in relation to reported intervention histories. It did not evaluate ABA efficacy, and it did not argue for or against ABA as a practice. It did not propose policy changes or clinical bans. It did not claim that ABA causes PTSD. It reported an association in a self-report sample and, importantly, recommended future research to examine long-term outcomes of early childhood interventions offered to autistic people. In other words, the study asked an empirical question about outcomes, not a normative question about interventions.
In 1966, humanistic psychologist Abraham Maslow wrote that if “the only tool you have is a hammer, it is tempting to treat everything as if it were a nail.” Behaviorists with doctorates in Applied Behavior Analysis (ABA) are, almost by definition, practitioners of a behavioral intervention protocol. Where social science researchers use the scientific method to solve problems, behaviorists use behaviorism to conduct behavior interventions. Maslow’s carpenter with a hammer tends to apply familiar solutions to all problems, even when they are inappropriate; it is a cautionary saying about narrow thinking and the need for diverse tools to tackle varied challenges, rather than hammering everything into place. For behaviorists, PTSD research becomes a problem to be hammered into extinction.

Step One: Establishing the Problem
Kupferstein’s 2018 study was a prevalence study. The first and most consequential move in the Morris et al. (2025) paper is the reclassification of that prevalence research as a problem requiring management rather than a finding requiring evaluation. This reclassification is accomplished through framing, not additional research.
In the abstract, Morris et al. open by invoking Andrew Wakefield’s retracted vaccine paper as the paradigmatic example of misinformation with lasting societal harm. Wakefield is associated explicitly with ethical violations, retraction, and the persistence of false beliefs despite correction. This establishes, at the outset, what “misinformation” means in the paper: work that exceeds the bounds of legitimate scientific disagreement and therefore warrants active countermeasures. In 2010, the UK’s General Medical Council (GMC) found Andrew Wakefield guilty of serious professional misconduct, dishonesty, and a “callous disregard” for the distress of the children involved in his research.
Andrew Wakefield lost his medical license in the UK, but he was not sent to jail or formally exiled. He voluntarily moved to the United States and continues to be a prominent figure in the anti-vaccination movement. In the next breath of the abstract, following the Wakefield sentence, Morris frames Kupferstein’s PTSD research as “one such source of misinformation about behavior analysis.” Kupferstein’s study is not first described in methodological terms, nor evaluated on its evidentiary merits. It is placed directly into the same category as Wakefield, before any analysis begins.
From that point forward, the paper no longer treats Kupferstein’s work as data. It treats it as a stimulus with undesirable effects. Once the research is defined as misinformation, the relevant question is no longer whether its findings are accurate, incomplete, or worthy of further investigation. The relevant question becomes how the field should respond to its presence.
Step Two: Citation Becomes Behavior
Once the study is framed as a problem, the paper moves on to a new task. The authors begin looking not at Kupferstein’s research, but at how their colleagues talk about it. They review a small group of articles from ABA journals that mention the PTSD study and pay close attention to how those mentions are written.
What they are looking for is a particular kind of signal. Does the author pause to warn the reader? Do they add language suggesting the study is questionable or should not be taken too seriously? Citations that include this kind of caution are treated as responsible. Citations that do not are treated as more concerning.
At this stage, citation becomes the unit of analysis. Morris et al. operationalize citation as an observable professional behavior and code articles according to how they reference PTSD research. Articles are sorted into three response types:
- those that label the study as misinformation
- those that present it as a critique of ABA accompanied by qualifying language
- those that present it as a critique of ABA without such qualification
What matters here is not whether citations accurately describe the study’s methods or situate it appropriately within prevalence research. What matters is whether they include language that constrains interpretation—language that signals to the reader that the study should not be treated as legitimate evidence.
The standard for such language is not independently derived. It closely tracks the methodological objections raised in a 2018 letter to the editor authored by behaviorist Justin Leaf and associated colleagues, which is treated as the authoritative corrective reference. Alignment with that response functions as the benchmark for appropriate scholarly conduct. Failure to reference or allude to this corrective framework is not treated as scholarly disagreement. It is treated as insufficient qualification to be policed through the trade journal’s review process.

Step Three: Colleagues as Contributors to the Problem
The review yields a pattern that is notable less for disagreement than for its distribution:
- 70% of reviewed articles cite Kupferstein as a critique of ABA without qualifying language
- 20% cite it explicitly as misinformation
- 10% cite it as a critique accompanied by qualifying language
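The coding scheme behind these figures amounts to a simple categorical tally. As a minimal sketch, assuming hypothetical labels that stand in for Morris et al.’s three response types (the label names and counts here are illustrative reconstructions of the reported percentages, not their actual dataset):

```python
from collections import Counter

# Hypothetical codes for ten reviewed articles; labels are assumed stand-ins
# for the three response types, not Morris et al.'s real coding data.
codes = (
    ["critique_unqualified"] * 7   # cited as a critique, no qualifying language
    + ["misinformation"] * 2       # cited explicitly as misinformation
    + ["critique_qualified"] * 1   # cited as a critique with qualifying language
)

counts = Counter(codes)
total = len(codes)

# Convert raw counts into the percentage distribution reported above.
distribution = {label: round(100 * n / total) for label, n in counts.items()}
print(distribution)
```

With these assumed counts, the tally reproduces the 70/20/10 split described in the review.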
At a purely descriptive level, these figures indicate diversity in how the field references the study. Within the logic of the paper, however, this variation is reframed as a secondary problem. The concern articulated by Morris et al. is that citation without qualifying language may lend legitimacy to a source already designated as misinformation.
Here, the role of qualifying language becomes clear. It does not serve as a marker of scholarly balance or methodological nuance. It functions as a protective signal, a cue intended to constrain how the cited work is interpreted. A citation that includes qualifying language is acceptable not because it advances understanding, but because it limits uptake. It ensures that the research does not operate as credible evidence within the reader’s interpretive frame. Seen this way, qualifying language operates as a compliance indicator. Thus, colleagues who cite the PTSD study without performing this signaling function are framed by Morris et al. as inadvertently sustaining the problem.
Step Four: Regulating the Field Through Citation Norms
Morris et al. suggest several reasons why authors might omit qualifying language, including lack of awareness, avoidance of controversy, or assumptions of shared knowledge. These explanations are presented neutrally. But functionally, they normalize the idea that unqualified citation is an undesirable state, regardless of intent.
What matters is not why the behavior occurs.
What matters is that it occurs—and that it may have unintended consequences.
The solution proposed is not further debate about PTSD prevalence or methodological disagreement. It is a modification of citation behavior. The proposed remedy is behavioral regulation. Morris et al. recommend that authors:
- add qualifying language,
- redirect readers to corrective sources (such as Leaf’s letter to the editor),
- or explicitly label the work as problematic.
This is a familiar sequence within behaviorist intervention. When an undesirable response persists, the environment is adjusted so that the response becomes less likely. Intent is irrelevant; contingencies are sufficient. Citation norms, editorial guidance, and professional expectations become the tools through which the field of behaviorism regulates itself in print.
Step Five: Image Management as Outcome
The cumulative effect of this process is not refutation or synthesis, but reduction of exposure. If the PTSD study is consistently framed as misinformation, if citations within trade journals are constrained, and if unqualified references are discouraged, the influence of the research diminishes without ever being directly engaged.
This functions as a hands-off intervention protocol similar to excluding a journal from major indexing databases. The work remains published and technically accessible, but its circulation, citability, and downstream influence are systematically reduced through environmental controls rather than substantive debate.

The proposed censorship intervention is not framed as punishment. It is framed as professional responsibility. But structurally, it mirrors extinction logic: remove reinforcement pathways until the stimulus no longer elicits a response. In other words: if I don’t see it in my industry’s trade journal, I don’t have to deal with it. What is being managed, ultimately, is not a dataset or a set of findings. It is the field’s public image and epistemic authority. Behaviorists here are people with eyes who “see not” (Jeremiah 5:21). Those who are “blind” in this sense are the hardest to reach, because they reject understanding, unlike those who genuinely lack sight or knowledge.
In closing, the method of behaviorism reveals itself. Taken as a whole, the Morris et al. (2025) paper demonstrates how a behaviorist framework extends from managing individual stimuli to regulating collective practice.
- Research becomes behavior
- Citation becomes compliance
- Colleagues become subjects of correction
The result is not open inquiry, but stabilized authority, maintained through procedural control rather than epistemic engagement. This is not a controversy; it is a method, applied consistently. Morris’s paper shores up the status quo and was then upcycled into a conference presentation to earn peer validity. By his own rubric, Cody Morris is grossly averse to misbehaviorism. Here the carpenter with the hammer calls for a change in behaviorist conduct: he asks his colleagues to neutralize this body of knowledge and keep it out of their industry manuals.
In the spirit of behaviorism, Cody Morris makes extraordinary efforts to prevent behaviorist students from exposure to this content. The nail in the coffin is the misbehavior of his colleagues. Misbehaviorism recalls Ray Bradbury’s Fahrenheit 451, a dystopian society where firemen burn books (which ignite at 451°F) to suppress knowledge, critical thinking, and individuality, fostering government control and mindless entertainment via “parlor walls” (giant TVs).
- Also read Cody’s doctoral dissertation “research” from 2019: When Data Collectors Become the Experiment: What Counts as Research in ABA When the Outcome Is Five Seconds?