In June 2025, MIT released a preprint: ChatGPT could cause “brain damage.” That alone was enough for newspapers to run apocalyptic headlines. The reality: a fragile study of 54 students, with inconsistent data and structural bias. The alleged risk was not born from AI, but from the collapse of academic standards.
The study was conducted on a small scale: only 54 participants, students with little motivation for the essay-writing task. What the EEG captured was ordinary cognitive offloading, the natural effect of a tool automating part of a tedious effort. The effect disappears outside the artificial context of the lab. Yet the narrative was sold as “proof that LLMs corrode the brain.”
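To put “small scale” in numbers, here is a minimal power sketch. It assumes the 54 students were split evenly across three conditions, about 18 per group, and assumes a medium effect size (Cohen’s d = 0.5); both numbers are illustrative assumptions, not figures taken from the preprint.

```python
# Minimal power sketch (illustrative assumptions, not the study's own analysis):
# ~18 participants per condition, "medium" effect size d = 0.5.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Statistical power of a two-sided, two-sample t-test with 18 subjects per group
power = analysis.solve_power(effect_size=0.5, nobs1=18, alpha=0.05,
                             ratio=1.0, alternative="two-sided")
print(f"Power with n=18 per group, d=0.5: {power:.2f}")  # roughly 0.3

# Per-group sample size needed to reach the conventional 80% power
n_needed = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05,
                                ratio=1.0, alternative="two-sided")
print(f"Per-group n for 80% power at d=0.5: {n_needed:.0f}")  # roughly 64
```

At around 30% power, a sample this size is more likely to miss a real medium-sized effect than to detect it, and whatever it does detect tends to come out exaggerated.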
The overlooked comparison is even more revealing: when the same tests were performed with Google, the brain patterns were similar. But no journalist dared claim that the search engine causes “cognitive atrophy.” The choice of enemy is selective: AI makes headlines, Google doesn’t. Amplified by the press, the study became another example of academia supplying ammunition for convenient narratives.
Another detail exposes the contradiction: the MIT team itself used language models to analyze the very data underpinning their accusation against ChatGPT. A methodological vicious circle — AI used to validate the hypothesis that AI is harmful. This point was ignored by the coverage, obsessed with the image of “brains at risk.”
The problem is systemic. American universities compete for headlines, not rigor. Fragile preprints become academic currency because they accelerate funding, notoriety, and conference invitations. The logic is perverse: the more alarmist the claim, the faster the paper circulates. Scientific journalism, lacking filters, amplifies the noise without checking context. The result: the public receives “laboratory evidence” packaged as consolidated fact.
There is also a psychological factor: popular fascination with brain imagery. Any EEG graph creates the illusion of undeniable science. This visual aesthetic provides a veneer of legitimacy, even when the data is statistically weak. The same aesthetic was weaponized to inflate the narrative that ChatGPT threatens human cognition.
The consequence is twofold: on one hand, the technology is unjustly cast as the villain; on the other, academia’s credibility erodes. The public, already skeptical of universities, perceives the haste and the contradiction. The irony is that the very narrative “AI damages brains” may be more harmful than the tool itself. By fabricating fear, it slows adoption and sabotages serious debate on the real risks: privacy, concentration of power, and institutional dependency.
The MIT case exposes the hidden architecture: it is not the machine that threatens cognition, but academia degrading its own function. Instead of offering validated knowledge, it exports fragile narratives packaged as science. The press, complicit, turns hypothesis into dogma.
The lesson is not about AI and brains. It’s about how fragile narratives generate more impact than solid data.
Academia lost rigor. Journalism lost its filter. Only the narrative survives.