Real scientists are overjoyed to find experimental results that disprove something they or others previously believed. But especially in the social sciences, some results, no matter how well-supported, will be greeted less enthusiastically.
You can summarize the situation with a chart like this:
If a result is “popular” (e.g. it appears to promote a cause that everyone likes), then it’s okay to publicize and promote it. Similarly, the scientific establishment will happily publicize a finding that something bad is associated with an unpopular cause.
Here are some examples where the bias is clear-cut:
Evidence that death counts should be revised up or down for historical events such as the Holocaust, the Chinese Great Leap Forward, or the US-Iraq War.
Good vs. bad outcomes in children raised by same-sex vs. fundamentalist Christian parents.
Safety and efficacy of routine public health interventions such as vaccination, public sewage control, fluoridated water, or oral contraceptives.
Racial or sex differences in just about anything.
If it’s not obvious to you which “result” is more likely to receive publication and support, you are not qualified to be a professional scientist.
Some of this is presumably explained by Carl Sagan’s dictum that “extraordinary claims require extraordinary evidence,” itself a rewording of Laplace’s principle that “the weight of evidence for an extraordinary claim must be proportioned to its strangeness.”
But who defines “strangeness”? Research outcomes that seem pretty obvious to me might not seem that way to you.
Let’s look at a few examples, drawn from the science of men and women, COVID death rates, and recycling effectiveness.