The gender industry and its minions love to cite data to support their predatory butchery of vulnerable people, but when one digs into the methodologies applied to these so-called “research studies,” one finds little behind their “data” other than smoke, mirrors, and bias error.
Across the board, gender-industry studies invalidate themselves via at least one (and usually multiple) of the following errors:
Sample sizes were too small to be representative of whole groups.
Survey questions were obviously designed to elicit answers that confirm researchers’ bias.
Studies were conducted or funded by transgender activism organizations.
Subjects were recruited from transgender activism organizations or clubs.
Surveys relied on subjects self-reporting about their own mental health status.
Voices of transgender desisters and regretters were consistently excluded from results.
Verifiable data on suicide was not measured.
Studies did not use control groups to get valid measurements of actual differences between groups.
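The first flaw in the list, unrepresentative sample sizes, can be illustrated with a short simulation. The numbers below (a 30% outcome rate, samples of 15 versus 1,500) are invented purely for illustration and do not come from any actual study; the point is only that estimates from tiny samples swing wildly from one draw to the next:

```python
import random
import statistics

random.seed(0)

# Hypothetical population where 30% of individuals have some binary outcome.
population_rate = 0.30

def sample_estimate(n: int) -> float:
    """Estimate the outcome rate from one random sample of size n."""
    hits = sum(1 for _ in range(n) if random.random() < population_rate)
    return hits / n

# Repeat the "survey" many times at two sample sizes and compare the spread
# of the resulting estimates.
small = [sample_estimate(15) for _ in range(1000)]
large = [sample_estimate(1500) for _ in range(1000)]

print(f"n=15    spread of estimates (stdev): {statistics.stdev(small):.3f}")
print(f"n=1500  spread of estimates (stdev): {statistics.stdev(large):.3f}")
```

With 15 subjects per group, individual estimates routinely land far from the true 30% rate; at 1,500 they cluster tightly around it. This is ordinary sampling variability, and it is why conclusions drawn from very small samples are so fragile.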
In addition to the above flaws, the facts undercut the gender industry’s deceptive “results.”
Long-term studies show vastly worse outcomes than do short-term studies.
Studies with more objective measures show consistently worse outcomes than those with more subjective measures.
Studies with little or no loss-to-follow-up** show significantly worse outcomes than those with greater loss-to-follow-up.
Far too many of these flawed and fraudulent “studies” exist for us to outline the errors and outright lies of each, but we can provide this spreadsheet that captures some of the excellent long-term studies that refute the gender industry’s spurious claims.
To investigate reported data (on gender or any other topic) yourself, simply evaluate the study in question according to a few principles, provided by this excellent guide from The Logic of Science:
Read the original study yourself.
Don’t rely on titles and abstracts.
Acquire necessary background knowledge.
Make sure that the study is published in a legitimate journal.
Check the authors for relevant expertise and conflicts of interest.
See if the journal’s impact factor matches the paper’s claims.
Make sure that the study was designed, conducted, and analyzed correctly.
See if the paper is consistent with other studies.
Make sure the paper follows the standard conventions of scientific writing.
If all else fails, check Google for a refutation.
As Madeleine L’Engle famously asserted, “Some things have to be believed to be seen,” and nowhere does this truism apply more aptly than to the gender industry’s body of “research.”
Had these so-called studies been undertaken without a pre-determined set of beliefs and intentions, their results would align with the growing body of valid research that contradicts the gender industry’s claims wholesale.
But until the gender industry turns honest and admits that it is interested only in “research” that supports unchecked greed and barbarism, one must look elsewhere for facts and meaningful empirical data.
Erin Brewer & Maria Keffler are partners at Partners for Ethical Care. Contact the authors via support@partnersforethicalcare.com.
** “loss-to-follow-up” refers to patients/subjects who were present at the start of a study but who dropped out and/or lost contact with the researchers before the end of the study.
For Further Information: Download Understanding Transgender Issues: An Analysis of Studies Used to Support Cross-Sex (Family Watch International)
As a statistician and medical researcher, I am continually shocked by the shoddy, poorly designed, and frequently invalid "studies" in this area. Sample sizes are extremely low - 10 per group, 15, whatever. But the more important point is that most studies need to be longitudinal. Here we see the most serious issue: LTFU (loss to follow-up) is just awful. In studies I have conducted, we are concerned if there is a 2-3% loss of subjects. In most trans studies, LTFU can be 50-75%. That means that half or more of the subjects do not complete the study. It's easy to guess why - the subjects who drop out have bad outcomes. So the conclusions from such studies are pretty much…
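The attrition problem the commenter describes can be sketched numerically. The cohort size, outcome scores, and dropout rule below are all invented for illustration (they model the commenter's assumption that subjects with worse outcomes drop out more often, not any actual dataset); the point is only the direction of the bias:

```python
import random
import statistics

random.seed(1)

# Hypothetical cohort: each subject has an outcome score (higher = better).
cohort = [random.gauss(50, 10) for _ in range(10_000)]

def dropout_prob(score: float) -> float:
    # Illustrative assumption: worse outcomes mean a higher chance of
    # dropping out before follow-up, clamped to [0.05, 0.95].
    return min(0.95, max(0.05, (70 - score) / 40))

# Only subjects who do NOT drop out are observed at follow-up.
completers = [s for s in cohort if random.random() > dropout_prob(s)]

ltfu = 1 - len(completers) / len(cohort)
print(f"loss to follow-up:    {ltfu:.0%}")
print(f"true mean outcome:    {statistics.mean(cohort):.1f}")
print(f"completers-only mean: {statistics.mean(completers):.1f}")
```

Because dropout is correlated with bad outcomes in this toy model, the completers-only average is noticeably rosier than the true cohort average. A study that reports only on completers, with heavy loss to follow-up, inherits exactly this kind of upward bias.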