Science • Oct 15, 2025
The Paradox of Harmful Information
Risk and progress in the sciences.

A pointed critique of someone’s job performance. Details of how a publicly traded company is poorly run. The first account of Harvey Weinstein’s abusive behavior. The fact that a widely-heralded government initiative is failing due to mismanagement.
All of this information would be very helpful for public decision-making. Yet it is personally harmful to the people or organizations involved. As a result, it will likely never see the light of day.
The paradox of harmful information is this:
When information is globally helpful (i.e., beneficial to governments, markets, and society at large) but locally harmful (i.e., personally or professionally damaging to the actors involved), then the people who are best suited to share this information are overwhelmingly incentivized to keep quiet. They, or people close to them, will almost certainly be harmed by sharing, and they will generally not incur any of the global benefits of the information being made public.
By contrast, the kind of person who does share personally harmful information is often motivated by spite, jealousy, misinformation, etc. As a result, the harmful information that does come our way is often wrong, while the harmful information that would actually help people make good decisions is far too rare.
This paradox means that we rarely learn about prominent failures (even when insiders know full well about them), and we are also unable to know what to trust when harmful information does come out (e.g., if someone has just been accused of sexual assault, is that the next Weinstein, or someone whose colleagues were jealous, or someone guilty of nothing more than a misunderstanding?). Most importantly, the paradox of harmful information contributes to a culture where it is acceptable, even expected, to pretend that everything is going well, all of the time, even in fields where there is statistically no way for that to be true.
Some harmful information (like sexual assault accusations) rightly comes with moral blame; that information will always be harmful to the individual involved, and so the paradox of harmful information will always be somewhat in play.
But in many cases, harmful information is only harmful because of our punishing cultural norms around failure. If someone tests a bold new scientific theory, starts a great company that fails due to rookie mismanagement errors, or makes understandable errors on the job, they are not morally to blame.
Indeed, we should celebrate people who took on a gnarly problem, knowing that the chance of success was slim, did everything they could to come out on top, and failed nonetheless. But when we view any failure as shameful, stories of ordinary, understandable, and important failure become subject to the paradox of harmful information, harming us all.
We believe that the paradox of harmful information can be solved by a number of approaches that encourage truthful people to come forward, including stronger legislative protections and financial rewards for whistleblowers, as well as changed cultural norms around speaking up. The purpose of this essay is to make the paradox of harmful information more transparent by unpacking examples of suppressed but useful harmful information, explaining why harmful information is simultaneously invaluable and undersupplied, and then offering potential solutions that would help the many benefit from inside information that is potentially harmful to the few.
There is undersupplied harmful information in every sector, but the paradox of harmful information is especially damaging to scientific progress. With better incentives, we could vastly accelerate that progress by channeling our energies in the right direction.
What Is the Paradox of Harmful Information?
Many scientific initiatives launch to great fanfare, only to fade into the historical record with no legible results. The public almost never hears any official evaluation of what various initiatives actually accomplished, let alone which particular elements failed or why. When was the last time you read an article about a failed research program?
Yet information about the success and failure of various initiatives definitely exists: from chemists knowing that a specific investigator's popular work is actually a house of cards, to national-level healthcare initiatives that are privately acknowledged to have failed. This object-level information would be incredibly valuable for improving future initiatives, deciding which scientists should receive further funding, and generally not wasting scientific funds. Instead, scientific failure today is so illegible that well-meaning actors can't even know if they are repeating the mistakes of the past.
For example, in 2012, the National Plan to Address Alzheimer’s Disease promised to “prevent and effectively treat Alzheimer’s and related dementias by 2025.” It is now 2025, and we are nowhere near being able to prevent or effectively treat Alzheimer’s (let alone both).
What went wrong? Is the field just more technically challenging than anyone knew in 2012? Did the program fund too many of the same old researchers doing the same old things? Did too many neuroscientists descend upon the field because that’s where all the money was? [One prominent Alzheimer’s scientist suggested candidly (but anonymously) that his field was better when there was less money in it, because now every neuroscientist just puts the word “Alzheimer’s” in their proposal.] Or — as now seems likely — was the entire field infected with widespread academic fraud?
Many people surely know more about this initiative’s failure than they are publicly sharing. But we will likely never know the full story. There will almost certainly never be an honest public conversation about the national initiative’s failure to achieve its objective.
After all, too many politicians, bureaucrats, and researchers built their careers around the initiative and don't want to be affiliated with a failure. All of the people who worked on the initiative continue to interact with each other professionally, so anybody who breaks ranks will likely suffer both professionally and socially. And the rewards for breaking ranks are unlikely to outweigh the costs: a report dissecting the program might make a splash among a niche group, but it would be unlikely to boost the authors' status or careers.
Even at DARPA, which is widely celebrated for funding "high risk, high reward research," there is significant pressure not to do anything that fails, and not to talk about failure after it happens. As Adam Russell (a former leader at DARPA, IARPA, and ARPA-H) puts it in his article "How I Learned to Stop Worrying and Love Intelligible Failure":
There’s a perfectly good reason ARPAs don’t glorify failure or prioritize intelligibility: doing so invites all kinds of criticism, and that can be tough for organizations that, by design, lack careerists who can defend their institution. Especially in conditions of low trust, it’s far more comfortable to avoid scrutiny; an agency can’t be attacked for what it isn’t set up to know. But that avoidance ultimately does ARPAs a disservice. If an organization can’t learn from failure, then it can’t quantify—much less communicate—how failure contributes to its mission. And if the organization doesn’t build in feedback loops to reward turning failure into insight, expect people to make safer—if still flashy—bets.
Russell’s point extends far beyond DARPA and all of the scientific organizations that are named after it (or hope to imitate it). We all need far more informed discussions about the results of scientific research, with blunt analysis of when and why major efforts failed to live up to expectations. Where is the honest and independent assessment of the National Alzheimer’s Project Act? The BRAIN Initiative? The National Nanotechnology Initiative? We are aware of insiders who have a negative impression of one or more of these initiatives, but who are unwilling to say so publicly so as to avoid controversy.
And if harmful information is so hidden in the scientific community, then how much more harmful information is suppressed in society writ large? How many people are aware that their immediate boss was dishonest in reporting results up to the Vice President or President of the company, yet never come forward? How many insiders know that their company or hospital is defrauding Medicare Advantage, but are afraid to be a whistleblower?
The same is true in the startup world. People with insider knowledge (VCs, founders, and employees) rarely, if ever, want to be fully transparent about how a given company is doing. Why would they? All the incentives (equity valuations, glory, social respect) align to make sure that anyone with even the slightest affiliation with a given startup will brag that the company is "crushing it!" so that the company in question can raise another round. However, the truth comes for everyone eventually. Every year, some of what were once the hottest startups in the Valley suddenly fold.
In Silicon Valley, harmful information is particularly difficult to share because startup outcomes are so uncertain and variable: you never know who will make it big, or who you will need to raise money from in the future, so people are hesitant to burn bridges under any circumstances.
The same uncertainty curtails the flow of harmful information in science as well: you never know who will be writing you a tenure letter, who might do a stint as an NSF program officer, or who might be reviewing your paper. As a result, the few people who do share harmful information are often acting irrationally as the result of a professional beef, which casts the validity of all harmful information into doubt.
Why Can’t Insiders Share?
Ironically, the people who possess true, potentially harmful information — and who are the most responsible and careful about it — are precisely the people who have many reasons not to share that information. The closer someone is to the “source” of harmful information, the closer they probably are — socially and professionally — to the person who would be harmed if that information were made public. They might even be the person who would be harmed, or worry about “going down with the ship” if harmful information came out against their colleagues.
Among the many reasons that responsible insiders often refuse to share harmful information:
They worry about whether they have gotten all the information correct, or whether they are missing something. Why take the risk if you are not totally certain?
They might still have a job at the same organization, and they would rightly worry that if they are seen as stabbing a boss or colleague in the back, they might get fired.
A major government or corporate initiative was likely sponsored by powerful people. Anyone who knows such an initiative was a failure might not want to risk launching a public attack on powerful people.
Outside of outrageous scandals, routine failures often aren’t a popular story. It’s better to move on.
In science, it's often somewhat ambiguous whether a project or initiative even was a failure. Calling something a "failure" or an "underperformance" could be seen as just stirring up trouble.
Most people don’t like negative people very much (outside of investigative journalists who make a career out of scandals). Being known as a person who calls out failures could impact your future career and relationships.
In some circles, there’s an implicit code: “If I make you look good, you’ll make me look good.” After all, it can be way more profitable if everyone implicitly agrees not to point out bad actors.
In addition to actual secrets, most NDAs cover disparaging information and many employment contracts come with non-disparagement clauses. Thus, on top of all of the social risks, potential information-sharers might be worried about lawsuits.
Negative information is often a market for lemons (a phrase that hearkens back to George Akerlof's 1970 classic in the economics literature, which explained why used cars are often worse than advertised). Yes, a few people who point out failures are willing to selflessly incur all of the personal risks of sharing harmful information. But in many cases, people who share harmful information are wrong. Perhaps they misunderstood the situation, are congenital liars, have a personal grudge, are motivated by ideology rather than truth, or are sociopaths who don't care about personal consequences.
As a result, even when people do share true harmful information, it's easy to dismiss them as an unreliable source. This creates an adverse selection problem: to the extent that people sharing harmful information get dismissed as crackpots (often correctly!), people with true but harmful information become even less likely to share it, to avoid association with an ever-worsening group. Like the market for used cars, this downward spiral means that the people promoting a particular view (or car) are more and more likely to be pushing an inaccurate one.
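To make the dynamic concrete, here is a minimal toy simulation (our illustration, not part of the original lemons argument). It assumes a pool of honest insiders who only come forward when the odds of being believed outweigh a fixed personal cost, a smaller pool of motivated crackpots who share regardless, and an audience whose trust tracks the accuracy of recent reports; all of the parameter values are arbitrary.

```python
# Toy model of the adverse-selection spiral described above (illustrative only;
# the pool sizes, cost, and update rule are assumptions, not data).

def simulate(rounds: int = 10,
             honest_pool: int = 100,       # insiders with true harmful information
             crackpots: int = 30,          # people who share regardless of accuracy
             personal_cost: float = 0.5):  # credibility needed before speaking up is "worth it"
    trust = 0.9  # audience's initial belief that a damaging report is accurate
    for step in range(rounds):
        # Honest insiders come forward only to the degree that expected credibility
        # exceeds their personal cost; crackpots always come forward.
        willingness = max(0.0, (trust - personal_cost) / (1.0 - personal_cost))
        honest_reports = honest_pool * willingness
        accurate_share = honest_reports / (honest_reports + crackpots)
        # The audience's trust drifts toward the actual accuracy of what it hears,
        # which in turn lowers honest participation in the next round.
        trust = 0.5 * trust + 0.5 * accurate_share
        print(f"round {step:2d}: honest reports={honest_reports:5.1f}, "
              f"accurate share={accurate_share:.2f}, trust={trust:.2f}")

if __name__ == "__main__":
    simulate()
```

Under these assumed numbers, trust and honest participation fall together each round until only the crackpots are left reporting, which is exactly the downward spiral the lemons logic predicts.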
Discussing (and Accepting) Failure
What could we do to increase the supply of true-but-harmful information that would be a net benefit to society or to organizations?
Laws protecting and even rewarding whistleblowers are one such tactic. Indeed, such laws already exist: the federal False Claims Act provides that if you blow the whistle on fraud that led to federal payments, you can receive a portion of the proceeds from a lawsuit. In one famous case, a whistleblower in an academic fraud case was due to receive 30% of an astonishing $112.5 million settlement (roughly $33.75 million) between Duke University and the federal government.
However, the False Claims Act is far too limited and difficult to deploy in most cases. The False Claims Act allows the whistleblower to file a federal lawsuit (which can then be taken over by the federal government itself), and then to share in any ultimate award. But unless the whistleblower is already independently wealthy, he or she will need to find a lawyer willing to take on the case for free, and then bear the risk of a multi-year lawsuit whose outcome will be hard to predict.
We could radically expand the possibilities for whistleblowers by promising a bounty of 30% for reporting academic fraud or other research misconduct to the federal government, without expecting the whistleblower to figure out how to file a lawsuit and then wait several years. Of course, federal agencies would then have to set up a system to rapidly judge whether whistleblower complaints were valid (rather than born of a grudge or a misunderstanding).
However, we suspect that the formal legal system barely scratches the surface of harmful information that is being suppressed by cultural practices. When admitting failure is extremely professionally risky, harmful information is unlikely to ever be made public under any circumstances, even when strong legal protections exist for whistleblowers.
For scientific research in particular, we desperately need better ways to get people to talk about failure openly and honestly. As Nobel Laureate Robert Lefkowitz has said, “Science is 99 percent failure, and that’s an optimistic view.” But the scientific establishment (journals, funders, universities) keeps presenting an endless stream of success stories. It’s long past time to stop pretending that this narrative is even remotely believable.
Further, we can't expect to see as much actual scientific success if no one is ever willing to admit that a grant, project, or major initiative failed. When no one can even mention a failure, everyone is incentivized to hide the truth, pick easy problems, look the other way, p-hack, and even commit fraud to pretend that a failure was a success.
The main obstacle to beneficial harmful information being made public is likely a cultural hostility to failure. Beyond technocratic solutions like new laws, we need a culture that accepts failure, or even celebrates it in some cases, and that in any event discusses it honestly and without embarrassment. Cultural change may be difficult, but not impossible. Scientific norms have updated before: indeed, while some today might think of peer review as an indispensable part of science, peer review is actually relatively new and didn’t become common until the mid-20th century.
But for deep cultural change to occur, the top leaders at science funders and universities will have to explicitly call for, and then actually reward, honest discussions of scientific failure (failure due to ambition, to be sure, rather than dishonesty or incompetence). The leaders' follow-through is key. Rhetoric is easy, but most people will change their behavior only if they see their leaders taking actions like hiring a new scientist whose previous publications announced prominent failures, or promoting (rather than firing) mid-level managers who correctly identified their own division's strategy as a failure.
When it comes to science, the only way to guarantee success is to study marginal questions or to fake the results. But if you’re not regularly failing, you’re not at the scientific frontier. Rather than the paradox in which no one talks about legitimate failure (while the few public discussions about failure are driven by crackpots), we need a wholesome culture of aiming for the stars and being willing to admit when we didn’t hit them. That’s the sweet spot for scientific innovation.

About the Authors
Stuart Buck is the Executive Director of the Good Science Project, a nonpartisan think tank focused on improving science. He is on X @stuartbuck1.
• • •
Ben Reinhardt is the CEO of Speculative Technologies, a nonprofit industrial research lab empowering research misfits and unlocking new paradigms in materials and manufacturing. He is on X @Ben_Reinhardt.