Parachutes vs. antidepressants
- Why the new study on antidepressant withdrawal gets it all wrong
In 2018, the prestigious medical journal BMJ published a sensational study. Researchers set out to test whether parachutes reduce the risk of death when jumping from an aircraft. In other words: is the parachute necessary?
The study design was simple. They compared two groups – one jumping with a parachute and one without – and assessed the number of deaths in each.
The result? No significant difference. The authors concluded that parachutes offered no protective effect.
There was just one small detail – one that only became apparent on reading the study’s Methods section: all participants had jumped from an aircraft parked on the ground, at an average height of 0.6 meters. And as the researchers themselves cautioned, one should be careful in extrapolating their findings to jumps from greater heights, such as the 9,000 meters most airplanes fly at, thus “diminishing the applicability of the results to clinical practice.”
Meant as satire, the study exposed a very real problem in science: when researchers draw conclusions based on data detached from reality, we end up with findings that are also detached from reality.
Fast forward to one week ago. On July 9, 2025, a strikingly similar study was published in JAMA Psychiatry by Michail Kalfas and colleagues. Only this time, no satire was intended.
The team of twenty (!) researchers examined data from 50 randomized clinical trials (N=17,828) reporting on withdrawal symptoms after stopping antidepressants. The conclusion? Coming off antidepressants does not cause significant withdrawal effects.
It’s a reassuring message. But dig into the methods section, and a different picture emerges.
Their primary outcomes were based on just 11 trials comparing withdrawal symptoms in people stopping vs. continuing antidepressants. And while most real-life patients take antidepressants for years and years, the study participants in Kalfas’ meta-analysis had been on them for only eight weeks in six of the trials, 12 weeks in four, and 26 weeks in just one. That’s like jumping off a curb and saying it proves the safety of skydiving without a parachute.
Enough said: methodologically speaking, this study is utter garbage. Yet it is garbage with real consequences.
The danger lies not in the data, but in how it is interpreted and communicated. A meta-analysis of short-term trials, designed for drug approval and never intended to measure withdrawal symptoms, cannot tell us how withdrawal unfolds after years of use. This kind of methodological sleight of hand has consequences: when flawed evidence is presented as definitive, it shapes clinical attitudes and public health messaging. Doctors may downplay the risk of withdrawal, misdiagnose it as relapse, or discourage patients from trying to taper at all. And patients may be denied genuine informed consent when starting treatment, often unaware that stopping could be harder, and sometimes far harder, than expected.
Perhaps more importantly, the study says little to nothing about what happens when long-term users try to come off their medication. These are the people filling online forums and support groups, battling unexpected insomnia, anxiety, inner restlessness, dizziness, panic, nausea, physical aches and pains, suicidal thoughts, and emotional volatility for weeks, months, or years after stopping – because they weren’t warned. These people are not represented in the Kalfas data, yet the paper’s title, abstract, and discussion generalize far beyond the studied population.
Once again, we are jumping from stationary aircraft and declaring the skies safe.