Here I evaluate the accuracy of various sources of opinions and research. Of course, these evaluations are just general guidelines; individual sources can always be exceptions. No source will ever be categorically dismissed (nor dogmatically trusted) on the basis of these judgments; they are more like priors. By default I include rather than exclude sources and look into anything that comes my way (within the bounds of common sense).

It should go without saying that just because a source is flawed or biased doesn't mean that its stance is actually wrong. It just means that it's an open question where we have to look directly at other arguments on both sides. Also, please note that I’m only judging sources' accuracy and usefulness in providing information for careful research and political judgment; I’m not judging whether their behavior is all-things-considered ethical or responsible or harmful or needs to be changed. It is possible for a source to be reliable but socially harmful, or unreliable yet socially valuable. And sources which mislead lay readers can still provide truthful, useful information to discerning and cautious readers.

I divide sources into 'important' and 'unimportant' categories, with the important sources being the ones that commonly merit reading closely with an open mind, and unimportant sources being ones that can usually be ignored. Of course these are loose generalizations and contextual judgment is required, especially when one source has characteristics from both categories.

An imperfect source is better than no source. This is especially important to remember in cases where some kind of probability or quantitative value is required; a rough estimate of a value is more informative and useful than throwing one's hands up and asserting that we have no idea and cannot make any judgment whatsoever.

Important sources

Academia

Academics are arguably the best experts on a given topic – they have spent the most time studying it. Top academic researchers are typically tenured, which gives them the flexibility to speak their minds on almost any topic, and there is no general political bias in the way that universities hire, so substantial reasons must be identified before claiming that consensus views are mistaken. When academics are mistaken, it can be due to random error, or it can be part of a pattern of bias.

However, social scientists were no better than laypeople at predicting the social consequences of the COVID-19 pandemic.

Bad science

Huge quantities of social science papers don’t replicate or are otherwise just bad.

News about paper retractions can be found at RetractionWatch.

Political bias

One source of bias is direct pressure for academics to conform to certain political viewpoints. An underlying factor here is the distribution of academics’ views and their explicit willingness to engage in ideological discrimination. Liberals and conservatives in academia are about equally likely to discriminate against each other’s views – 15-20% are willing to discriminate against papers, symposia, and grants, and 30% are willing to discriminate against hires (see this survey of 600 faculty across disciplines in four California State University campuses; but as a counterexample, note that a survey of 100 intelligence researchers found that only 5% believed in limiting research and publication for the sake of social justice) – but conservatives are only a small minority of researchers (see the aforementioned CSU survey, this analysis of voter registration data for faculty from 51 American liberal arts colleges, and this survey of anthropologists), which implies that things will usually be unjustly harder for conservative viewpoints. Compounding the problem, universities do not take notice of their lack of ideological diversity, and university administrators are even more monolithically left-leaning. “Talking about identity in terms of power, privilege and oppression is no longer the woke insurgency, but rather the cultural establishment.” Diversity statements are used as a basis for promoting or excluding job candidates in some parts of academia, and candidates are apparently disqualified if they take a classically liberal viewpoint or talk about other kinds of diversity rather than leaning into left-wing views on race and gender diversity. Generally speaking, conservatives in America feel most dissuaded from sharing their political opinions (the CATO 2020 national survey showed that 77% of conservatives and strong conservatives self-censor, compared to 64% of moderates, 52% of liberals, and 42% of strong liberals). There is a movement to promote ideological diversity in academia, but it seems to be making little progress, with the very idea being ridiculed by many progressive academics.

Strong social pressure applies against views perceived as racially offensive towards black, Hispanic, and Indigenous people. For example: A journal article defending colonialism was retracted after the journal editor received “serious and credible threats of violence.” An article on cognition among colored South African women was retracted after controversy, despite having normal quality for the field. Anthropologist Napoleon Chagnon was censured in part because of his controversial sociobiological views. One professor was fired for arguing that there are genetic differences in cognitive traits between racial groups. Another professor suffered a ban on funding and a delay in promotion as a result of her controversial research on race and intelligence. Students on one campus demanded that whites join blacks in observing a “day of absence” from campus; when two professors wrote a letter opposing these demands, they were compelled to leave the campus under threat of violence and resign. Two other professors were made to resign after they argued that the university should not forbid people from wearing costumes of other cultures. Amy Wax was pulled from teaching after her views were considered racially offensive. A student was kicked out of a university honors program for posting on Facebook that he was going to practice cultural appropriation and didn’t care if people got offended by it. A professor was physically injured by students while accompanying controversial political scientist Charles Murray at a university. Australian professors were warned against teaching the true origin of the Aborigine people, though most ignored the guidelines. A professor’s comments were condemned by the school dean when she said that America’s immigration policy should favor whites over nonwhites. Author David Epstein found that some though not all experts were unwilling to share information on genetic differences for fear of losing their jobs. A postgraduate fellow was fired from St Edmund’s College for writing papers arguing that race science may not be socially harmful and that stereotypes about immigrants are accurate, and for being associated with race scientists. Physicist Stephen Hsu resigned from a leadership position after sparking outrage over racial issues, including research ideas that were perceived as eugenicist. A math professor was fired for commenting that an activist flier warning about microaggressions was “garbage.” One publisher in the UK retracted a book defending free speech because it was deemed at risk of inciting racial hatred (and therefore legally risky in the UK) despite the author clearly having no such intentions, and because of the legal and reputational risk of discussing certain controversial events. The Journal of Intelligence openly refuses to “publish articles that may lead to or enhance political controversies,” including race-related research, even if they are good quality. An economist suffered some professional consequences as a result of allegedly making fun of Martin Luther King, Jr. in class, and indirectly as a result of criticizing Black Lives Matter. The chair of a Canadian university’s board of governors resigned after outrage over his liking of a tweet which said that Black Lives Matter was a paramilitary wing of the Democratic Party. A historian had to resign after saying “Slavery was not genocide. Otherwise, there wouldn't be so many damn blacks.” A peer-reviewed commentary paper on the state of organic chemistry was retracted by the journal after other chemists complained about its assertion that we should hire people on the basis of merit rather than assigning demographic quotas with preferential treatment for women and minorities. In 1988, a national organization blacklisted the American Guidance Service as punishment for distributing a psychologist’s manuscript which contained an argument that low average test scores among Hispanics had a racial cause (see this paper by the targeted psychologist, page 65). And three far-right organizations are officially banned from speaking on a few UK campuses.

There is pressure to conform to the standard progressive views on gender. For example: 46% of UK universities have censorious transgender policies. A professor was threatened with losing her editing position for criticizing the Stonewall trans program. A professor of sociology was deplatformed from a research seminar for saying that the census should separate sex identity from gender identity. Jordan Peterson’s visiting fellowship was rescinded after student and faculty backlash. A literary editor was fired for saying that, although he loved trans people as “world class human beings,” he found many trans rights activists to be “petulant, small-minded shit-heads.” A physicist was fired at least partly because he argued that the deficiency in female representation in physics was due to statistical differences in ability. A paper suggesting a new theory of gender dysphoria was retracted after outrage. Another paper was retracted because it defended the Greater Male Variability Hypothesis, which could explain discrepancies in gender representation. The president of Harvard was forced to resign partly because he suggested that the greater male variability hypothesis might partially explain gender disparities among faculty. And the aforementioned retraction of the organic chemistry commentary, which opposed preferential treatment for women as well as minorities, belongs in this category too.

Scientists have faced online abuse, even to the point of leaving the field, for researching psychological treatments for Chronic Fatigue Syndrome. A psychology professor was taken off admissions committees and censured for tweeting that obese students lacked enough willpower to be successful.

A famous and accomplished professor was made to resign after stating in internal emails that a 17-year-old girl, who had been “directed” to have sex with an older man in connection with the Epstein abuse scandal, should not be considered a victim of sexual assault or rape. A professor was slandered and his home was vandalized in retaliation for teaching a less negative interpretation of pederasty in ancient Greece.

A professor was fired for snarky comments critical of white people.

A dean of students had to resign after criticizing the American flag for representing racism. A journal partially bowed to controversy after it published a paper questioning whether the dictator Assad had used chemical weapons on civilians. An adjunct professor was fired for joking that Iran should threaten American cultural sites. Three Islamist organizations are banned from speaking on a few campuses.

One professor lost his job offer because of his tweets expressing severe hostility to Israel, including saying that anti-Semitism is honorable and that he hoped West Bank settlers would go missing. One professor was fired for tweeting that he wanted police officers to be murdered; another professor was briefly threatened with investigation for expressing the same views. Another was made to resign from a community college after endorsing antifa, tweeting that he would like to hit President Trump with a bat, and posting a poem saying to kill all evangelical Christians.

A reviewer allegedly opposed a publication because its conclusions might be used to discredit mainstream climate science. A climate scientist was pressured into resigning from a conservative skeptic think tank.

Some of these issues seem to have been less controversial in previous years. FIRE data shows that disinvitation attempts have steadily increased in frequency since 1998, when they were almost nonexistent. Intelligence research has seen a dramatic spike in controversies since 2017, but intelligence researchers are slightly more likely to say that it has actually gotten easier to talk about these issues in the last few years. Academic 'cancellations' (including petitions that don't go anywhere) increased dramatically from 2016 to 2020.

These events are evidence of broader patterns. “For every one of these controversies that goes public, there are vastly more situations where someone self-censors, or is quietly bullied into acquiescing. For every odd example that goes viral, there is no doubt dozens more that occur behind closed doors.” (From a great essay by education expert Freddie DeBoer.) Moreover, even when attacks and investigations don’t lead to concrete workplace punishments, they can nevertheless sap the targeted person’s time and motivation.

Many people defend some of these patterns as mere reflections of certain points of view being wrong, but the intentions and practices in these studies and anecdotes go well beyond the polite academic rejection which would be justified by that defense. Moreover, in a few cases the suppressed point of view actually seems to be correct. Note that the patterns of discrimination seem to apply mainly towards conservative social views rather than towards conservative economic views. Leftward political bias in academia is often defended with a notion of social responsibility, where researchers seek to avoid platforming views that could be harmful, with “harmful” generally being operationalized in a politically and socially contingent manner, drawing largely upon progressive social activism and the prevailing brown scare (fear of the far right) among educated liberals.

Some more evidence here that I have not reviewed yet: https://areomagazine.com/2019/12/10/threats-to-free-speech-at-university-and-how-to-deal-with-them-part-1/

To summarize, academics are more willing to censor socially conservative ideas, and there is external pressure for academics to avoid criticizing progressive social ideas. Of course, some of these topics have such obvious answers that we shouldn’t care about academics’ opinions on them in the first place – for instance, deaths of police officers and West Bank settlers are unequivocally bad things, regardless of what a professor opines. On other topics, things can be different.

Does this bias really lead to bad research? It seems to matter only a mild amount:

"Some of the most highly publicized social science controversies of the last decade happened at the intersection between political activism and low scientific standards: the implicit association test, stereotype threat, racial resentment, etc. I thought these were representative of a wider phenomenon, but in reality they are exceptions. The vast majority of work is done in good faith.

“While blatant activism is rare, there is a more subtle background ideological influence which affects the assumptions scientists make, the types of questions they ask, and how they go about testing them. It's difficult to say how things would be different under the counterfactual of a more politically balanced professoriate, though.” (See this article by a forecaster who went into great detail about the problems in social science research.)

There are also some concrete examples where we can see that institutional bias in academia is really not systematic and severe. First, one of the most dangerous views for the academic system is probably the idea that the economic value of university education consists largely of mere signaling and so the government should not spend so much money subsidizing it; however, the relevant experts don’t seem to be under any sort of institutional pressure to reject this point of view. Second, on a topic with perhaps the greatest level of political censure – general intelligence and the causes of interracial and international measured disparities – surveyed experts are still likely to take very controversial positions, and don’t report a great deal of pressure to conform (see Rindermann et al 2016 and 2020). Third, American plutocrats have made concerted efforts to promote right-wing views in academic institutions, think tanks and elsewhere, yet the majority of academics are still much more left-wing than most of the population. Fourth, despite the significant power of the pro-Israel lobby in US policy, academic views tend to be much less favorable toward Israel. Cases like these show that while academic views may be shifted somewhat by institutional bias, they don’t get flipped entirely the other way.

Finally, note that liberal- and conservative-leaning psychology studies are equally likely to replicate.

Demographic bias

Another possible problem is demographic bias – attitudes of prejudice in academia against other people on the basis of attributes like race and gender. If academics are prejudiced, then their opinions and research may reflect their distorted priorities.

Representation as evidence of discrimination

Some people believe that if academia does not represent the traits of the background population, then there must be discrimination in hiring. But there are many proposed explanations for unrepresentative distributions of employees, which can be at least as plausible as the idea of proximate workplace discrimination. There may be biological or social explanations for gaps in academic representation. The majority of the male-female gap in U.S. English and chemistry faculty is explained by familial commitments and attitudes towards caregiving. Most of the gender pay gap in Denmark is caused by women providing childcare, and this may extend to academic success. And Simpson’s Paradox indicates that gaps don’t conclusively show discrimination even if the applicant groups are equally qualified (see the sketch below). Finally, empirical data shows that female representation in STEM remains constant or arguably even declines as countries gain more gender equality. Therefore, unrepresentative faculty demographics are not meaningful evidence of underlying patterns of discrimination in academic institutions.
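To make the Simpson’s Paradox point concrete, here is a minimal sketch with deliberately hypothetical numbers: every department admits women at a higher rate than men, yet the aggregate statistics appear to favor men, simply because women apply disproportionately to the more selective department.

```python
# Hypothetical admissions data illustrating Simpson's Paradox.
# dept: (men_applied, men_admitted, women_applied, women_admitted)
depts = {
    "less selective": (80, 48, 20, 13),  # men 60%, women 65%
    "more selective": (20, 4, 80, 20),   # men 20%, women 25%
}

for name, (ma, mad, wa, wad) in depts.items():
    print(f"{name}: men {mad/ma:.0%}, women {wad/wa:.0%}")

men_rate = sum(v[1] for v in depts.values()) / sum(v[0] for v in depts.values())
women_rate = sum(v[3] for v in depts.values()) / sum(v[2] for v in depts.values())
print(f"aggregate: men {men_rate:.0%}, women {women_rate:.0%}")
# aggregate: men 52%, women 33% -- despite each department favoring women
```

The aggregate gap here is an artifact of application patterns, not of any admissions-level discrimination.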

Direct evidence of demographic discrimination

There can be more direct empirical evidence of demographic discrimination in academia.

Gender

There have been many studies looking at gender bias in academia. For a broad overview of evidence on gender bias in academic science see Lee Jussim's 2019 review or Ceci et al's 2014 review, which find that gender bias is a fairly rare phenomenon. But here I will highlight studies of social science and health because those are the fields which are directly relevant for political judgments.

Contexts with significant pro-male bias: hiring in psychology, promotion in economics (Ceci et al 2014, page 42: "Economics is an outlier, with a persistent sex gap in promotion that cannot be readily explained by productivity differences."), expectations about group papers in economics, conference questioning in economics, and Canadian Institutes of Health Research grants.

Contexts without significant gender bias: journal review in social science and the humanities, promotion in psychology, journal review in political science, National Institutes of Health research grant awards, and National Institutes of Health research grant evaluations (conflicting studies, see below).

Contexts with significant pro-female bias: journal review in biomedicine and health science, peer review and citations in economics, promotion in sociology, journal review in behavioral psychology, and National Institutes of Health research grant evaluations (conflicting studies, see above).

Note that while peer review comments are equally harsh towards people of different genders, women and non-binary people feel more dissuaded by the harsh comments.

Other aspects of the university environment appear to have a slight pro-female bias: universities tend to give slight preference to women in undergraduate admissions, and a federal district court found that the University of Michigan reached an erroneous outcome in a sexual assault investigation, possibly because it discriminated against men.

Telling committees that women face implicit bias caused them to hire more women, and mandatory diversity statements in many colleges have led to a marked reduction in the admission of white males, but these do not clearly indicate the presence or absence of gender bias.

In summary, the evidence shows no systematic trend of gender bias, with both pro- and anti-female bias showing up in particular contexts.

There may be systematic bias affecting this research. First, in line with the general trends of publication bias seen in psychology research, we can expect null results to be underrepresented in the published record, and we can expect the effect sizes of supposed biases to be overestimated (this mechanism is sketched below). Put simply, there is probably not as much bias as these studies suggest.
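As a minimal sketch of that mechanism, with entirely hypothetical numbers: if journals disproportionately publish statistically significant results, the published record overstates the true effect even when every individual study is honest.

```python
import random
import statistics

# Hypothetical simulation of publication bias: many noisy studies of a
# small true effect, but only "significant" estimates get published.
random.seed(1)
true_effect, noise_sd = 0.05, 0.15
estimates = [random.gauss(true_effect, noise_sd) for _ in range(10_000)]

# Crude significance filter: estimate more than ~2 standard errors from zero.
published = [e for e in estimates if abs(e) > 1.96 * noise_sd]

print(f"true effect:               {true_effect:.2f}")
print(f"mean of all studies:       {statistics.mean(estimates):.2f}")
print(f"mean of published studies: {statistics.mean(published):.2f}")
# The published mean comes out several times larger than the true effect.
```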

Second, researchers might be biased to produce research that fits an ideological worldview. One could allege that universities and researchers are stakeholders in the patriarchal social order and act to reinforce it, which might bias their research towards neglecting problems of anti-female bias. But while universities and researchers certainly have a genealogy stemming from an explicitly patriarchal past, their current incentives and behavior don’t show them to be major stakeholders in patriarchy. Moreover, this idea contradicts a more plausible account of institutional bias in universities.

Specifically, it appears that universities and researchers are systematically biased by progressive ideology to promote stronger accusations of anti-female bias. Self-criticism is not really incentivized, but criticism of competing academic cliques or of the academic system as a whole as being anti-female is well incentivized. I can say this for two reasons. First, as I described previously, there is a serious pattern of academic hostility towards views that contradict the standard progressive views on gender and feminism. Second, as I will describe a bit later, many academics seem to greatly exaggerate the evidence of anti-female bias while neglecting the evidence of pro-female bias. So the academic literature is more likely to be biased to exaggerate anti-female bias.

Race

African-American researchers are 10% less likely to receive NIH funding after controlling for a set of applicant qualifications, with Asian researchers being 4% less likely and Hispanic researchers facing no significant bias. It’s possible that unobserved variables explain these small residual differences. Peer review comments are equally harsh towards people of different races, but minorities feel more dissuaded by the harsh comments. It is theorized, but not adequately substantiated, that members of the physics profession implicitly assume that only whites are capable of objective science and that this biases them against ideas expressed by black (esp. black female) physicists (the argument was made by cosmologist Prescod-Weinstein, but careful readers pointed out that it lacked support). Universities tend to give preferential admissions to African-Americans and to Hispanics, while having a slight preference against Asian-Americans, holding qualifications constant. Asian-American applicants to Harvard suffer discrimination such that 19% more would be admitted under a neutral policy, though a federal judge found that Harvard’s admissions system did not consciously discriminate against Asian-Americans (see this article; the ruling was criticized by many and may well have been incorrect). There is good evidence that Yale admissions intentionally disfavor whites and Asian-Americans.

Reconciling direct evidence with academic opinions

Academics themselves seem to usually believe that women and minorities face systematically greater bias. We should be careful about rejecting what appears to be an academic consensus, but there are many reasons to doubt this one, beyond the research presented above. First, as I described just previously in the section on political bias in academia, there is a clear trend of political pressure in academia to conform to progressive views on race and gender politics. A telling anecdote is that one study showing pro-female hiring bias had to endure eight reviews, whereas the modal number for the journal was only two. Moreover, in the aforementioned cases where political pressure boils into controversy, many academic voices have strongly agreed with the decisions to censure or fire the transgressive researchers, and many academic voices have also supported such actions in similar cases outside academia, such as the termination of James Damore. Hence, many academics seem willing to punish people who doubt assumptions of anti-female bias in hiring. And humans in general react more negatively towards research which posits male-favoring sex differences, which is relevant here because claims of such differences frequently go hand in hand with claims that male-dominated fields are not conducting gender discrimination.

Second, academics seem to have flawed understandings of the evidence. There are direct indications that many academics are operating on a biased and incomplete understanding of the relevant research. And a survey experiment found that white liberals, who are more highly represented in academia than any comparable group, heavily overestimated the amount of discrimination faced by blacks in job applications. Many of the “no gender bias” or “anti-male bias” studies cited above seem to have gotten less publicity than the “anti-female bias” studies. In one case, a study showing pro-male bias has been cited far more than a comparable and methodologically superior study showing pro-female bias. A flawed study alleging gender bias in orchestras was uncritically repeated in the media for almost two decades.

Third, academics’ public statements may express a systematically stronger commitment to prevailing social justice narratives compared to what they privately believe, so a proper anonymous survey would be important to determine the true academic views about discrimination, and I am not aware of one.

Finally, academics aren’t necessarily people who actually study bias in academia; this is more akin to asking consultants whether there is bias in consulting, asking cops whether there is bias in policing, and so on – sure they have some firsthand familiarity, but they are not true subject matter experts. Only a few academics, like the authors of some of the aforementioned studies, have properly focused on studying bias in academia.

Therefore, a direct look at the evidence on discrimination in academia is much more reliable than accepting the general opinions of academics about the prevalence of discrimination.

Implicit bias caused by demographics

Even if there is no discrimination in academia and gaps in representation can be explained by psychological and/or social phenomena outside of academia, there might still be implicit bias as a result of these gaps in representation. If an academic field is nonrepresentative of the broader population affected by a given issue, then this might cause them to implicitly favor worse policies because they overlook the perspectives or interests of a particular group. Male and female anthropologists differ significantly on methodological and ethical questions. Gender also helps predict the scientific views of sociologists and intelligence researchers. In these cases, we should consider updating away from the current combined views of the field and moving closer to what the views would be like if the gender split were 50-50.

It turns out that academia is nearly balanced between men and women, but has more whites, more Asian-Americans, and fewer African-Americans than the national average (compare NCES data about academics with nationwide statistics). Of course, these figures can be different in particular fields, they can differ more starkly from the composition of the global population, and they can always mask issues of intersectional representation. However, I haven’t come across any policy issues where it is apparent that experts are making an error due to this kind of issue. Also, universities are at least self-conscious about issues of demographic diversity, with much institutional attention directed at recognizing and overcoming alleged race and gender bias through a plethora of pro-diversity programs and cultural mandates, though it’s not clear what kind of impact these corrective efforts have.

Nonpolitical tribalism

Tribalism can compromise academics in any group, even on nonpolitical issues.

Summary

The biggest problem with social science is methodologically flawed research which fails to replicate or is otherwise unreliable.

There is no systematic trend of race or gender bias in the behavior of academic networks and institutions, but they are not unbiased; various kinds of bias show up in particular contexts. In addition, the collective views of the research community reflect the kinds of people who enter it, so an academic field which is disproportionately constituted by a particular demographic or social class may produce conclusions which disproportionately reflect that group’s particular (and contentious) frames of thinking about the world.

And academics, like all other humans, can be petty and tribalistic on matters which are not directly political.

All that being said, academic research still seems equal to or better than any comparable source.

Blogs

Blogs can occasionally be high quality and insightful, and have the freedom and flexibility to tackle some relatively subjective issues which are difficult to address properly in rigorous publication.

Scholar’s Stage is a very good blog written by Tanner Greer with emphases on Chinese and American affairs.

Slate Star Codex is written by psychiatrist Scott Alexander. Alexander has high attention to scholarly research and steelmans different views across a variety of political topics. He is open-minded to different ideas and points of view. He has a solid ethical focus with explicit concern for global, future and animal well-being. He has a very good command of psychiatry research and practice. Leftists say that he misunderstands Marxist theory.

Blogs are not censored to the same degree as other sources, although Medium censored posts that it considered harmful misinformation in the context of the COVID-19 pandemic.

Crowd forecasting

The Good Judgment Project and Metaculus both outperformed expert epidemiologists’ short-term predictions of COVID-19 cases.

The Metaculus community was slightly beaten by Vox's Future Perfect journalists Dylan Matthews, Kelsey Piper and Sigal Samuel in forecasting 2020.

Global Guessing showcases forecasts and prediction markets on certain topics. Foresight Exchange is a long-running play-money prediction market (it uses points rather than real money).

Economists

Economics performs best among social science fields at producing replicable research.

Economics majors applying for graduate school are one of the best-performing academic groups at the GRE, scoring 0.50 standard deviations above the mean.

Within economics, RCTs and RDD studies have relatively little p-hacking, whereas difference-in-differences and instrumental variables studies have more (see this study; figure 2 gives an easy comparison).

Economics research systematically excludes animal welfare.

Economists seem to implicitly and incorrectly expect female economists to make smaller contributions to group papers. Economists also ask more, and more demanding, questions of female presenters. However, peer review and citations in economics show a mild overall tendency to favor women.

Economists can display bias by agreeing more or less with the same statement depending on whom it is attributed to.

Matt Yglesias argues that economists underrate the importance of full employment.

Economics faces an unusually high degree of skepticism from outside the mainstream field. Economics provides the most relevant information for political policy, so there is the greatest incentive for political actors to promote denialism and disingenuous critiques of the field; that being said, there is also probably the most chance for economists themselves to be biased by their political views or by outside actors trying to influence universities and think tanks. A better explanation for popular denialism is the prevalence of folk-economic beliefs, but it’s not clear if this has motivated any of the more sophisticated ‘heterodox’ economic work. We should approach the latter with a more open mind.

Economists can be somewhat toxic towards outsiders’ perspectives, so it is plausible that some good heterodox ideas might be incorrectly dismissed. And I haven’t yet personally explored critiques of mainstream economics in detail, so there is potential to have my mind changed.

However, mainstream economics encompasses a variety of different research methods and does not constitute a monolithic ‘school’ or ideology. Therefore, it’s very unlikely that the whole field would have a common methodological flaw. And the fact that heterodox economists are still set apart, or set themselves apart, from this otherwise open-minded field raises red flags about the quality of their work.

Many criticisms of economics are clearly false as they rely on strawmen about the nature of the field, such as David Graeber’s.

One area of heterodox criticism takes the form of backlash against the randomized controlled trials pioneered by Nobel Prize winners Banerjee, Duflo, and Kremer. Most Effective Altruists side with mainstream economists on this topic rather than the critics, and in personally investigating the topic, I have similarly found the critics to generally be wrong. Their errors include strawmanning or misunderstanding their opponents (e.g., falsely claiming that randomistas think macroeconomic issues are unimportant), isolated demands for rigor (e.g., attacking the external validity of RCTs while not considering how other methodologies are similarly limited), and making uncritical moral judgments without demonstrating significant problems (e.g., objecting that certain practices seem “colonial” or unfair, and objecting to the demographics of researchers, without showing that they actually inhibit research accuracy or program outcomes on a significant scale). One of these papers criticizing an RCT turned out to be deeply flawed. And to the extent that some critiques of RCTs seem right, they are acknowledged by mainstream members of the discipline anyway, so it is very hard to justify any continued skepticism towards mainstream views. Finally, many people have an irrational moral aversion to randomization, and this may lead them to produce disproportionate and shoddy critiques of RCTs.

Modern monetary theory is flawed (see this article by Gregory Mankiw and this book review by Alberto Bisin).

The ergodic critique of mainstream economics is flawed (see this reply to the critique).

Other lines of heterodox thought include the Austrian school, Marxist economics, and post-Keynesian economics. The broader economics, policy and Effective Altruist communities usually reject them.

Accepting mainstream economics is validated by my direct examinations of economic policy issues. In the process of directly investigating research on the impacts of things like immigration and taxation, I’ve generally found that mainstream economists’ views are perfectly sensible. Not only that, but the broader EA community has usually judged the same way as well. There are a few cases where I’ve mildly disagreed with economist surveys, but usually only because of new studies that were released after the surveys were performed.

Pseudoerasmus believes "there are brilliant heterodox [economists] who try to critique the mainstream on its own terms; and the pathetic heterodox who just want economics to be literature and critical theory."

Even if mainstream economics is wrong, its opponents tend to have differing ideas that point in disparate directions. For instance, contrast Austrians emphasizing small government with post-Keynesians emphasizing major government spending. And I have not identified good reason to elevate any one heterodox sect much above others.

Finally, it is more difficult to find reliable information on the guidance that heterodox economists would give for contemporary policy issues; critics are apt to allege that mainstream economists are biased about sweeping questions of economic systems and society but rarely have substantial ideas about how that changes the actual answers for specific policy questions such as those investigated here (if it even does). Sometimes heterodox economists have clear policy opinions, but it’s not immediately apparent if these views are actually justified by their economic thought.

One critique of economists of all stripes is that economic judgments depend on value judgments rather than being purely scientific. This obviously depends on the claim being made. For instance, if an economist claims “higher income taxes on the wealthy would increase federal revenues without significantly reducing GDP”, then this is clearly just a descriptive claim. Taxes, revenues and economic growth are objective, i.e. they mean the same thing regardless of what moral beliefs you have. On the other hand, if an economist claims “the government should reduce occupational licensing in healthcare and allow more people to sell health services,” this claim may not be trustworthy if the economist lacks approximately correct values on the relative importance of various goods such as health access, freedom, safety, and equality.

But in those cases where mainstream economists’ recommendations involve normative judgments, they usually match well with the goal of improving total well-being. Mainstream economists generally care about the interests of the broad population, are decently attuned to the common goals of policymakers and voters, and are sensitive to cases where the appropriate judgment depends on normative assumptions that regular people commonly disagree on. The main exception is that economists give little attention to the interests of animals (“nearly 100% of research papers in economics are anthropocentric. Economists only care about humans’ welfare.”). In cases where animal impacts are relevant, I adjust my view accordingly. This is easy since I am accustomed to all manner of sources giving insufficient weight to animals and there is nothing unique about economists’ views. In any case, economic studies are universally defined in descriptive terms, though economist blogs, surveys and literature reviews frequently have a normative framing.

Another criticism of economists is that their recommendations have arguably led America down a path of regulatory capture, because useful ideas and concepts can be coopted by corporations. While this might be relevant for judging the political system in general, it doesn’t apply here, where I am looking more comprehensively at arguments and opinions rather than going by whichever few are being promoted by a corporation, and where my normative views are solidly decided. Also, the concern seems like it would apply equally well to any authoritative source. Corporations, interest groups, and the mass public all have a tendency to selectively elevate and promote the opinions of particular economists, social scientists, historians, journalists, laypeople and anyone else whose views confirm their priors. The historical record of economists seems OK at a glance – I would need to see a comprehensive review and comparison with counterfactuals rather than accepting a narrative based on a few high-profile controversies.

Effective Altruists

Judgments from the Effective Altruist community deserve much higher weight than judgments from other segments of the population for a variety of reasons. First, Effective Altruists have correct ethical values, implying a better general focus and approach to judging topics. Second, Effective Altruists tend to use superior methodological tools such as expected value reasoning, probabilistic credences, epistemic modesty, and deferral to expertise. Third, Effective Altruists have a general open-mindedness and tolerance of differences in opinion on social and scientific judgments, more so than most of academia. Fourth, Effective Altruists have high average levels of education and achievement compared to the general population.

The benefits of some of these features are backed up by published research. People who emphasize evidence and expertise have more accurate beliefs, compared to people who follow their intuitions or believe that truth is politically constructed. Probabilistic reasoning and open-mindedness enable superior political predictions. Moreover, in the process of investigating specific issues for this report, I’ve found that the political opinions of Effective Altruists tend to correlate favorably with reliable evidence and expertise.

Criticisms of the Effective Altruism movement are generally mistaken. In fact, certain forms of complaint about Effective Altruism can indicate that a person lacks clear thinking or benevolent ethics. It is extremely rare for someone outside the EA movement to have a similarly high quality of both ethics and rationality, although there are significant minorities of people who match the EA community in one or the other of those categories.

I use small informal polls of the EA community to get better estimates on very subjective issues. The EA Forum is the best location for serious EA discussions.

Fact-checkers

PolitiFact consistently finds Republicans to be less honest than Democrats, which may indicate bias, or may simply be a reflection of real differences between the two parties.

Here I review recent PolitiFact judgments to see whether they are accurate. Usually I just read the writeups and evaluate the subjective choice of rating. If I’m really unsure or suspicious of something then I look at more context. I’m a bit more textualist than they are.

| Statement – Donald Trump (summer 2020) | PolitiFact rating | Correct rating |
| --- | --- | --- |
| COVID surge in New Zealand | False | False |
| Democrats implemented rolling blackouts | False | False |
| Our COVID-19 numbers are great | False | False |
| Wiener's push to abolish single-family zoning | Half true | Mostly true |
| New York rigged election | Mostly false | Mostly false |
| Democrats don't want to protect people from eviction | False | Pants on fire |
| Kids almost immune to COVID-19 | False | Mostly false |
| Biden will defund the police | False | False |
| Insulin for virtually pennies | Mostly false | Mostly false |
| Absentee voting different from mail-in | False | False |
| Record job numbers | Mostly false | Mostly false |
| Philly murder spike | True | True |
| Low US COVID mortality | False | False |
| Biden wants to end school choice | Mostly false | Mostly false |
| Biden will take your windows | False | False |
| Biden will give welfare to illegal immigrants | False | False |
| Biden wants to abolish prisons and police | Pants on fire | Pants on fire |
| 99% of COVID-19 cases are harmless | False | False |
| COVID-19 mortality is way down | Mostly true | True |
| COVID-19 cases are rising only because of tests | False | Mostly false |
| Biden and Dems want to prosecute churchgoers | False | False |
| "Racist baby" video parody of CNN | Pants on fire | N/A (satire) |
| I made Juneteenth famous | Pants on fire | False |
| Obama admin never tried to reform the police | False | False |
| If we stopped testing, we'd have very few cases | Pants on fire | Half true |

The honesty score from PolitiFact ratings is 1.32; the honesty score from corrected ratings is 1.71. This shows a bias of 0.39 points (less than half of a truth-o-meter step) against Trump. Note however that Trump is a unique cultural target who has feuded with the media, so PolitiFact's bias against him may be unique rather than something that they do for all Republicans.
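For transparency about the arithmetic, here is a minimal sketch of how such a score can be computed. The linear 0–5 mapping of Truth-O-Meter ratings is my assumption for illustration (PolitiFact does not define a numeric scale), the lists below are an illustrative subset rather than the full table, and satire rated “N/A” is excluded.

```python
# Honesty score sketch: mean rating under an assumed 0-5 mapping of
# PolitiFact's Truth-O-Meter categories. "N/A (satire)" entries are skipped.
SCALE = {
    "Pants on fire": 0, "False": 1, "Mostly false": 2,
    "Half true": 3, "Mostly true": 4, "True": 5,
}

def honesty_score(ratings):
    scored = [SCALE[r] for r in ratings if r in SCALE]  # drops "N/A (satire)"
    return sum(scored) / len(scored)

# Illustrative subset of the table above, not the full data.
politifact = ["False", "False", "Half true", "Pants on fire", "Mostly true"]
corrected  = ["False", "Pants on fire", "Mostly true", "Half true", "True"]

bias = honesty_score(politifact) - honesty_score(corrected)
# Negative = PolitiFact rated the speaker as less honest than deserved.
print(f"PolitiFact minus corrected: {bias:+.2f} truth-o-meter steps")
```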

| Statement – Joe Biden (summer 2020) | PolitiFact rating | Correct rating |
| --- | --- | --- |
| $400,000 is more than I’ve ever made | Mostly false | False |
| Black man invented the light bulb | Mostly false | Mostly false |
| Social Security benefits will run out | Mostly false | Mostly false |
| First to call for the Defense Production Act | False | Mostly false (because he acknowledged that he might be mistaken) |
| Crime higher under Trump | Half true | Mostly true |
| COVID-19 is the biggest cop killer | Mostly true | True |
| I am not banning fracking | Mostly true | Mostly true |
| Trump is defunding the police | Mostly true | Mostly true |
| Unprecedented asylum system | Mostly true | Mostly true |
| No game plan for COVID-19 vaccine | Mostly true | Mostly true |
| McDonald's requires noncompetes | False | False |
| Home health workers on welfare | True | Mostly true |
| Trump is the first racist president | False | Pants on fire |
| 10,000 lost businesses | Mostly true | Mostly true |
| I warned about COVID-19 | Mostly true | Mostly true |
| Trump suggested drinking bleach | Mostly false | False |
| Pennsylvanians fighting for the Union | Mostly true | Mostly true |
| Trump derailed CDC recommendations | Mostly true | True |
| Trump losing trade war that he started | Mostly true | Mostly true |
| You couldn't own a cannon in the 1700s | False | False |
| Black-white homeownership gap is higher now | True | True |
| Trump late on payments to Native American tribes | Mostly true | Mostly true |
| NAACP always endorsed me | False | False |
| 40% of bailout loans didn’t go to small businesses | Mostly false | False |
| Trump cut CDC staff in China | Half true | Mostly true |

It looks like PolitiFact is essentially unbiased towards Joe Biden; my corrected ratings differ from theirs in both directions and roughly cancel out.

Bias could still manifest in the surrounding news media and popular culture which affect the selection of statements that PolitiFact finds notable.

For a better methodology, I reviewed the 20 most-liked tweets from Donald Trump and the 20 most-liked tweets from Biden that I could find. I rated the accuracy of factual claims in each tweet, then took a weighted average based on the number of likes; the model is one of the spreadsheets in my Excel model. Overall I found that Biden's claims had an average veracity of 0.89, and Trump's claims had an average veracity of 0.42 (on a 0 to 1 scale). There are major limitations to this calculation (the sample size is small – really more like 15 tweets each, because some tweets contained no factual claims; three of Trump's most-liked tweets were retweets of suspended accounts, so I had to skip them; most tweets were from the period surrounding the 2020 election, so the result may not generalize to other time periods; inferences from tweets may not generalize to other communications such as speeches; and I could not find a proper list of Biden's most-liked tweets, so I had to search through them myself and probably missed some), but it still helps support the idea that PolitiFact was correct to attribute much more honesty to Biden than to Trump.
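The weighting itself is simple; here is a minimal sketch with hypothetical numbers (the real inputs live in the spreadsheet):

```python
# Like-weighted veracity average. Veracity values and like counts here
# are hypothetical placeholders, not the actual rated tweets.
tweets = [
    # (veracity of the tweet's factual claims on a 0-1 scale, likes)
    (1.0, 850_000),
    (0.5, 400_000),
    (0.0, 620_000),
]

total_likes = sum(likes for _, likes in tweets)
weighted_veracity = sum(v * likes for v, likes in tweets) / total_likes
print(f"like-weighted veracity: {weighted_veracity:.2f}")  # ~0.56
```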

And of course, it is entirely plausible that one party might systematically lie or be wrong more often than the other, due to differences in education, backgrounds, personality types, and other relevant things which may correlate with political party.

One prominent case of fact-checking was Joe Rogan's three-hour-long interview of Robert Malone. After listening to the podcast and then looking at fact-checkers, I found that the fact checkers generally made useful and fair corrections to Malone's false claims, but sometimes overstated their case a little bit, and never affirmed claims of his which were actually true. More generally, fact checks are reliable and informative when read responsibly; one just needs to avoid the trap of completely dismissing sources which get called out for (sometimes) spreading misinformation.

FiveThirtyEight

FiveThirtyEight was criticized after the 2016 election for having assigned Trump only a 30% chance of winning; however, this criticism fundamentally misunderstands the nature of probabilistic forecasting, and Trump was in fact unlikely to win given the information available at the time.
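To see why a 30% forecast that “comes true” is not evidence of failure, consider a minimal simulation sketch: a perfectly calibrated forecaster who says 30% will watch that event happen about three times out of ten.

```python
import random

# If a forecast of 30% is well calibrated, the "unlikely" outcome still
# occurs in roughly 30% of comparable situations, so one Trump win tells
# us very little about the model's quality.
random.seed(0)
trials = 100_000
hits = sum(random.random() < 0.30 for _ in range(trials))
print(f"30% events occurred {hits / trials:.1%} of the time")  # ~30.0%
```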

A criticism of FiveThirtyEight in 2020 was that their Democratic primary model displayed sharp jumps in its expectations, and that this supposedly means they were miscalibrated or responding to noise. This criticism is not compelling; information in the real world genuinely comes in sporadic chunks, especially given the bandwagoning nature of the primaries, so a jumpy forecast record can be perfectly rational.

I do think FiveThirtyEight's 2020 presidential election model underrated Joe Biden's chances.

News media

Frequent news media consumption is associated with false beliefs about political opponents. Additionally, American trust in the media is lower now than at any point in polling history. This suggests that the media may be inaccurate or misleading.

The news sources most associated with false beliefs about political opponents among audiences are Breitbart, Drudge Report, Slate, Buzzfeed, Daily Kos, Huffington Post, and other ideological sources. The New York Times, Washington Post, and Fox News were less associated with false beliefs. Newspapers, local news, Christian news, and mainstream broadcasters were associated with the most accurate beliefs. Of course this correlation doesn't prove causation, but it does indicate which outlets cater to audiences who care about the truth.

Regardless, if one is looking at news articles (as opposed to editorials) and reads them and their sources carefully to understand exactly what is being reported, then the vast majority of media sources can be trusted to provide correct information. EAs tend to believe that mainstream media news and investigative reporting is factually accurate at least most of the time (on a survey of 17 EAs, the mean estimate for the chance that this claim is true was 73%; I think it's still an underestimate). Bias generally takes the form of editorialization or omitting inconvenient information, not outright falsification.

American news media is increasingly hostage to leftist outrage (see this essay by Matt Taibbi, which has been endorsed by several journalists and by Noam Chomsky, a leftist expert on American media).

Opinionated sources

An opinionated source is one that espouses a particular point of view – such as left-wing or right-wing.

Opinionation doesn’t necessarily imply unreliability, and “unbiased” sources can mislead to support centrist narratives or narratives that they have unwittingly picked up from biased sources. Nominally unbiased sources can still be vulnerable to phenomena like availability cascades and moral panics. But ideological alignment is still something to keep in mind when looking at a source.

Some empirical evidence for the detrimental effects of opinionation is the fact that people who are moderates or politically disengaged are less likely to have objectively false views about their political opponents when compared to people with extreme political views, who tend to overestimate the degree of extremism of their opponents. The same is true of people who consumed news from ideological outlets (like Breitbart, Drudge Report, Slate, and Huffington Post), compared to people who consumed no news or news from official broadcast networks.

Also, both left- and right-wing extremists demonstrate overconfidence in their political judgments and intolerance of opposing groups and opinions.

Partisans tend to have accurate beliefs about factual economic matters, and supposed bias on this topic is often simply selective reporting that disappears when people are incentivized to be truthful. But many opinionated sources clearly are not incentivized to be truthful.

Polling

As of 2019, American political polling was reasonably accurate (generally speaking). In the 2020 presidential election, the national polling average was off by 4.4 points (the final 538 polling average was Biden +8.4, while the eventual outcome was Biden +4.0), with much larger errors in many states.

When statistical election modeling, probabilistic forecasting or prediction markets are available to predict the results of an election, polls can typically be safely ignored in favor of these more advanced tools. However in some cases polling is the only thing available.

Polls can be useful in other contexts besides forecasting elections, such as making judgments about international affairs (e.g., "how many Iraqis approve/disapprove of the presence of American troops").

Prediction markets

Prediction markets can be very good. Prediction markets outperformed poll-based election models in the 2020 elections.

Prediction markets have a major problem with “long-shot bias” – they often overprice unlikely events, especially ones that resolve far in the future, because it can be unprofitable to bet against them (see the sketch below). Another limitation is that they are less efficient in the immediate aftermath of a new poll, subsequently improving as experienced traders dominate the market again. They can also be manipulated.
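Here is a minimal sketch of why that mispricing can persist. All numbers are hypothetical, and the fee parameter is an assumption loosely modeled on exchanges that take a cut of profits:

```python
# Hypothetical long-shot economics: a contract trades at 8 cents even
# though a bettor thinks the true probability is 5%. Correcting the
# price means buying NO at 92 cents and locking up that capital.
true_prob = 0.05      # bettor's probability of the event occurring
price_yes = 0.08      # market price of YES
fee = 0.10            # assumed exchange fee on profits
years_locked = 0.5    # time until the market resolves

stake = 1 - price_yes  # cost of one NO share
ev_profit = (1 - true_prob) * price_yes * (1 - fee) - true_prob * stake
annualized = (1 + ev_profit / stake) ** (1 / years_locked) - 1
print(f"expected return: {ev_profit / stake:.1%} over the period, "
      f"{annualized:.1%} annualized")
```

An expected return of roughly 2% over six months, before withdrawal fees and platform risk, is often not worth the locked-up capital, so sharp bettors leave the long shot overpriced.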

PredictIt has major issues with fees and an $850 cap on contributions. Betfair is more efficient and also provides more precise probabilities (with its presidential election results being displayed by primary.guide). But Betfair and PredictIt don’t seem to have more than a 2x difference in market size.

See Election Betting Odds for integration of different betting markets on elections.

Global Guessing, mentioned above, showcases forecasts and prediction markets on certain topics.

Randomized controlled trials

When it comes to internal reliability, RCTs are nearly perfect. When it comes to generalizability, they are neither better nor worse than other study designs (see table 6 of this paper).

Like other study designs, RCTs can suffer from p-hacking. Pre-registration alone does not reduce p-hacking in RCTs, but pre-registered studies with complete pre-analysis plans are significantly less p-hacked.

Researchable

Researchable is fairly new and promising, but some of its writeups seem incomplete, and in one case it slightly misrepresented a study's findings (making charter schools seem a bit worse than they actually are).

Regression discontinuity design studies

There is a lot of literature and controversy over RDDs; see:

https://statmodeling.stat.columbia.edu/2020/01/13/how-to-get-out-of-the-credulity-rut-regression-discontinuity-edition-getting-beyond-whack-a-mole/

https://www.tandfonline.com/doi/abs/10.1080/19345747.2019.1634169

https://amstat.tandfonline.com/doi/full/10.1080/07350015.2020.1737081

https://www.nber.org/papers/w27424

https://www.dropbox.com/s/ycql1s4wfc29s50/hartman_rdd_equiv.pdf?dl=0

Rootclaim

I have looked fairly closely at two of Rootclaim's analyses, those of fraud in the 2020 US presidential election and the 2013 chemical attack in Ghouta, Syria. In both cases I thought their concluding odds were off by only a few percentage points. I searched around the Internet for comment board discussions of Rootclaim, and while I found many people dismissing them for superficial reasons, I never found any serious rebuttal of any of their analyses. Rootclaim is not perfect but they are pretty damn impressive and their reports ought to be taken very seriously alongside more traditional academic and think tank sources.

Think tanks

Think tanks vary in quality. Sometimes they are pushed by ideological funding and are closely intertwined with activism. Other times they are on par with good academic research.

Top quality think tanks include RAND, Brookings, and Niskanen Center. Niskanen is particularly notable for having a collection of people who have echoed or associated with Effective Altruist ideas and community in some form or other, namely Samuel Hammond, Matt Yglesias, Jennifer Doleac, and Jeremy Neufeld. CATO is probably overrated, but not too bad.

A CSIS analysis of a possible Russian invasion of Ukraine aged well.

Vox

Vox's Future Perfect journalists Dylan Matthews, Kelsey Piper and Sigal Samuel slightly outperformed the Metaculus community in forecasting 2020. Future Perfect articles are usually pretty good; the rest of the website is more hit-or-miss.

Wikipedia

Wikipedia over-emphasizes editorial journalism, sometimes neglecting both academic research and unpublished sources. Wikipedia can uncritically reflect problems in mainstream media coverage. Wikipedia seems to exclude low-quality right-wing journalism more readily than it excludes low-quality left-wing journalism. Wikipedia editors and admins can be systematically biased.

Wikipedia has been criticized for having a mostly male userbase. However, these male editors are likely to be politically liberal, with feminist sympathies. In addition, there have been dedicated projects to promote women’s issues in the encyclopedia. In my limited experience, articles on gender-related topics on Wikipedia generally seem to be either balanced or slightly biased in favor of feminist viewpoints.

Wikipedia can be toxic or unfriendly to new contributors.

The health of the Wikipedia editing community has been undermined by the Wikimedia Foundation taking the unprecedented step of unilaterally banning an administrator on dubious grounds.

Despite these problems, the end product is often quite useful and unbiased on major policy topics and politician biographies. Polarized crowds with comparable numbers of people on both sides produce good articles, as do honest editors who legitimately care about good content. But Wikipedia can have more flaws on difficult technical topics, and on hot-button sociopolitical issues where most of the editors share a particular point of view. And when mainstream media is flawed, its flaws can easily carry over to the corresponding Wikipedia articles.

When I cite Wikipedia I do it for convenience, in cases where I can tell that the content is well-cited to reliable sources and it would take extra time to copy it into this report. Wikipedia citations here should be considered placeholders for the time-consuming task of going through the sources and writing my own Wikipedia-like summary.

Unimportant sources

Critical studies

Substantial controversy surrounds a set of academic theories and publications on issues such as race, sex, and gender in humanities departments, known as critical studies or sometimes as “grievance studies.” The research frequently comes from post-structuralist or post-positivist assumptions (though some detractors use the term postmodernist), is recognizable by its thick prose and unique vocabulary, and is frequently perceived as promoting left-wing points of view. Critics regard this body of work as being politically motivated, not rigorous, vulnerable to hoaxes, and biased by the pressures of activists. Another criticism is that many of its adherents form a bad scholarly community which does not respond rationally to criticism, which indirectly raises doubts about the quality of their work. The critics are in turn accused of being uninformed, politically motivated, and bigoted. This debate reaches a rare level of polarization and hostility. At a glance, it is plausible though far from proven that many of the criticisms of critical studies literature are correct. The reality could well be somewhere in the middle. I see good grounds to suspect bias in these fields, but at the same time it seems fair to presume that much of the scholarship is of adequate quality.

But despite the political importance placed on the critical studies debate by both the right and the left, I actually haven’t come across any examples of such research that are substantively applicable to answering the questions in this report. It seems that readers (and even authors) of critical studies from both sides of the political divide exaggerate the work’s importance by ignoring the nuanced contextual meanings of key terms (e.g. “oppression”, “patriarchy”, and so on) in favor of value-laden colloquial definitions. Once the contextual meanings are understood carefully, it seems that research in critical studies tends not to provide significant conclusions for the goal of predicting which political actions will have the best impact on global welfare.

Therefore, I haven’t concerned myself with broadly judging this kind of scholarship. If some particular work might be useful, then I’ll just evaluate it on a case-by-case basis. In the process, we may come to a better understanding of this methodological debate.

Discord

The "Combined Defense" server offers fairly informed discussion on military affairs. They are highly informed on specific tactics and technology, but not particularly informed about broader political dimensions. On the positive side, they seem good at cutting through a lot of the bullshit which murkifies some military discussions in more public and mainstream spheres. On the negative side, the server suffers badly from toxicity among the established members. The userbase has a tendency to sharply dismiss particular think tanks, analysts, YouTube channels, etc as being unreliable, but it is often unclear whether these dismissals are reasonable or overreactions to minor points of contention. An especially worrisome issue is that members are ignorantly dismissive of prediction markets and probabilistic forecasting, and the moderators have actually banned discussions of the two topics.

The r/WarCollege server is new but looks promising.

Founding Fathers

One of the common faiths of American politics is that the Founding Fathers really knew what they were doing and we should have the highest level of respect for their judgments.

Empirically, the Founding Fathers did narrowly succeed in creating a stable democracy, and the track record of the United States' political institutions compared to its contemporaries serves as evidence in favor of their ability to design a good political system. However, there were many pitfalls to their judgment which we can avoid today.

First, the founders did not have our opportunity to observe the unfolding of history, progress in voting theory, and other developments of the past 230 years which inform present judgment.

Second, humans in general have become smarter over this period, populations have gotten larger, and learning has become more extensive, providing us with much more raw intellectual power to find solutions to political problems.

Third, the federal constitution and other early American documents were a series of practical compromises between competing parochial interests rather than a principled implementation of a noble philosophical vision. Many founders saw problems with the Constitution but were simply outfought by contemporaries who were arguably less principled. Academics and pundits today could in theory make objective judgments. In practice, people today still have many parochial interests, but those interests tend to be more benign than the ones of the late 1700s (like slavery), and they are at least more pertinent, being grounded in our current political economy rather than the political economy of over 200 years ago.

Fourth, the country itself has changed since the late 1700s in ways which could matter for political institution design, such as a massive increase in territory and population size, changes in demographic composition, changes in culture, urbanization, urban-rural political polarization, and the rise of political parties.

The very philosophical ideals underpinning political judgments have also changed, such as the decline of religious thought, the decline of autarchism, the decline of Western-Enlightenment chauvinism in favor of both cosmopolitanism and realism, and the rise of liberal race and gender norms followed by the rise of woke race and gender essentialism. But it's somewhat ambiguous whether philosophical ideals are overall better or worse now.

It's also worth noting that some common interpretations of the Founding Fathers' judgments are simply wrong. First, as noted above, much of what people today regard as essential characteristics of the American political system were only the judgments of a bare majority of the Founding Fathers rather than a true consensus among them. Additionally, certain views about the politics of early America are just incorrect, like the idea that it was a libertarian country and certain interpretations of the Second Amendment.

History

History can be useful for drawing lessons about the modern world. ‘Applied history’ in particular is more directly relevant and valuable. Counterfactual thinking is important for drawing lessons from history. However, in practice it takes a lot of work to learn history well enough to draw proper conclusions, and even then those conclusions will have dubious applicability to the modern day. Meanwhile, when people draw conclusions from very simplistic historical points (e.g. "Neville Chamberlain tried appeasing Hitler and it didn't work, so we should never appease aggressive countries"), the results are not robust.

It’s worth noting that there are quite a few politicized debates over some of the more moralistic aspects of history, like “were the Founding Fathers racist?” and “is it fair to call Christopher Columbus the discoverer of the Americas?” and so on. While these narratives seem to play a role in many people’s political discourse and reasoning, they do not help determine which policies and politicians will actually do the most to protect and improve society. Even insofar as politicians make statements about history which affirm or deny certain forms of national historical mythos, our political judgment about them should depend on whether such myths are socially advantageous, which is only somewhat related to the question of whether they are the scholarly truthTo be clear, I do believe that scholars should be rigorously committed to the truth and not try to obscure things in order to protect their preferred social order. But politicians' bullshit stories about the meaning of American patriotism or whatever are a different matter..

Old sources

I usually don’t pay attention to old sources, such as 19th-century and early- to mid-20th-century writings. In most subjects they have been superseded by newer work that takes advantage of the great expansions of knowledge and increases in academic output that have occurred since. The simple fact is that not knowing the most recent century of human history is a hugely damning handicap for any social science writing that purports to have relevance for current politics. In cases where old work is still directly relevant to current academic debates – such as certain areas of philosophy – it tends to investigate issues which are simply not relevant for predicting the practical, physical impacts of policies on welfare.

However, old writing can be indirectly useful for presenting alternative ways of thinking which may be out of fashion due to cultural changes.

Research based on old data may fail to generalize to current and future generations. Dual inheritance theory poses this problem for many areas of political science and sociology, in addition to psychology.

Philosophers

Selecting a goal such as maximizing well-being clearly depends on moral philosophy.

For further judgments, most moral philosophy is clearly unhelpful, as it relies on various false moral assumptions drawn from the plethora of alternative theories.

The obvious implication is to look at the views of those philosophers who make consequentialist arguments about what actions and policies will be socially beneficial. However, in my experience this scholarship is typically unreliable: training in philosophy does not translate to rigor in science, economics, history, deterrence theory, or other crucial subjects. This applies to regular utilitarian moral philosophers and also to post-liberal moral philosophy which approaches social issues in explicit terms of expected harms. Examples of the former include McMahan’s utilitarian treatment of the atomic bombings, which makes significant relevant errors about the historical record, and utilitarian writings about animal farming, which have usually been simplistic as well as ignorant of impacts on wild animal suffering. An example of the latter is the argument that interracial sexual preferences are unethical, which is based on supposed psychological burdens that they place on members of another race, but ignores a variety of other considerations, such as the sexual and romantic satisfaction of the agent, the social consequences of interracial relationships and procreation, and the economic benefits of international marriages which can be used for immigration sponsorship; this renders the paper badly inadequate for the purpose of making actual social judgments.

The writings of historical utilitarian philosophers such as Henry Sidgwick, John Stuart Mill, and Jeremy Bentham have an additional inherent limitation, which is their lack of knowledge of the recent 100-200 years of historical events and scientific research that are highly relevant to making policy judgments today.

Philosophy majors applying for graduate school are the best-performing academic group at the GRE, scoring 0.71 standard deviations above the mean.

There is good evidence that philosophers have a general tendency to discriminate against right-leaning views and individuals; far-leftists also feel that philosophy discriminates against them but there isn't good evidence for this.

Philosophers can be somewhat toxic towards outsider perspectives. Academic turf wars and gatekeeping are no less common in philosophy than in other academic fields. There is a pattern of harassment and general hatred by parts of the philosophy field against the LessWrong web forum and by extension much of the Effective Altruist community, which reflects poorly on philosophers' character.

Philosophers in Germany in 1932 were no less likely than members of other fields to support the Nazis.

Psychologists

Psychology is below average among social science fields at producing replicable research. Ideologically polarized psychology research is less likely to replicate. There is systematic publication bias. Psychology also has a significant generalizability crisisSee this paper, and supportive commentary by Andrew Gelman..

Left-wing bias causes psychologists to spend much more time studying the character and evolution of right-wing people than left-wing people. Many anecdotes suggest a trend of “equalitarian” biases in the field. The research paradigms of implicit bias and implicit attitude testing have general flaws. However, liberal- and conservative-leaning psychology studies are equally likely to replicate.

Psychology as a field is not ready to provide policy guidance for the COVID-19 crisisSee this paper, and reflections by one of its authors.. More generally, psychology research rarely provides actionable conclusions for policymakers.

Evolutionary psychology is a major theoretical perspective with practical relevance for crime law and policy, but is limited by difficulty explaining many modern behaviors, our poor understanding of the history of our selection pressures, and difficulty explaining differences among different kinds of humans. Evolutionary psychology replicates no better than other fields of psychology; “the studies I skimmed from evopsych journals were mostly just weak social psychology papers with an infinitesimally thin layer of evolutionary paint on top. Few people seem to take the "evolutionary" aspect really seriously.”

Psychology majors applying for graduate school are an average-performing group at the GRE, scoring 0.06 standard deviations above the mean.

Public health experts

The performance of Western public health experts over the course of the 2020 coronavirus pandemic has been downright embarrassing, comparable to how the Iraq War should reflect on the national security establishment. Errors in their early prevailing views included dismissals of the threat of the pandemic, dismissal or even hostility towards early travel restrictions, and exaggeration of the risks of panic and stigma. Anthony Fauci fudged epidemiological estimates for political reasons. Public health experts continued to neglect or unfairly dismiss various good policy ideas such as variolation, isolation, tracking, human challenge trials, expedited drug and vaccine authorization, and single-dose vaccination. Many public health authorities stuck to a somewhat simplistic and poorly thought out pro-NPINon-pharmaceutical intervention, which loosely corresponds to "lockdown" in layspeak. mindset. Most bizarrely, while many public health experts warned against travel restrictions in early 2020, when the restrictions could have been genuinely useful, the experts (with exceptions) seemed to mostly forget about the issue later on when travel bans really did become pointless xenophobic restrictions amidst an already-worldwide pandemic. Some public health experts also got distracted with petty culture war feudingMost notably, complaints or downright bigotry against "tech bros" and other non-experts who publicly shared opinions about pandemic policy.. Finally, their ethical judgment on vaccine prioritization was deeply flawed.

There was a minority of contrarian health experts during the pandemic. However, they were even worse than the mainstream. They became bizarrely fixated on anti-NPI activism while ignoring a variety of cheap and productive positive steps to control the pandemic, and produced a good deal of bad science, debunked forecasts, and idiotic policy recommendations.

Many of these criticisms point in conflicting directions. Public health experts didn't suffer from one systematically bad ideology; instead they were just confused and unserious about the entire affair.

Empirically, the bureaucratized Western countries mostly performed very badly in the pandemic, whereas numerous developing countries and East Asian countries did well. Standard pandemic preparedness measures designed and advocated by experts provided no apparent benefit for pandemic outcomes.See this article from Palladium Magazine and this article from ThinkGlobalHealth. Some analyses even find that standard measures of pandemic preparedness were anticorrelated with success against COVID-19; that finding is doubtful, but it is at least apparent that such measures do not positively cause success.

We shouldn't ignore the views of Western public health experts, but they should be considered cautiously against the views of groups with similar or better track records. These include public health authorities in countries which did well (Taiwan, Vietnam, South Korea, and New Zealand), GMU economists, Zeynep Tufekci, and members of the LessWrong/Rationalist community.

In terms of simply predicting the course of the pandemic, experts (including epidemiologists) performed better than lay people but were still substantially miscalibrated.

Reddit

Reddit’s official administrators practice moderate amounts of censorship and censure. Specific subreddits can have much more severe censorship from the rather arbitrarily chosen user moderators. Policing actions on most subreddits seem to have a left-wing bias, particularly on social issues. I recommend using Reveddit to view content that moderators have removed.

Reddit is generally the most left-wing of all major social media sites.

Judgments from Reddit can range anywhere from decent to terrible depending on the subreddit. Here I list some subreddits that are sometimes useful. A consensus opinion from such a community can be more reliable than editorials published in the media. Other communities often have censorship of dissenting views and low-quality discussion. All communities are vulnerable to groupthink and moderator abuse. It’s also worth bearing in mind that the Reddit userbase broadly leans left and this significantly influences political discussions nearly everywhere on the site.

r/asksocialscience has a relatively high proportion of subject matter experts and attention to research in the field. They don’t seem to engage in any censorship (aside from deleting inadequate answers).

r/askeconomics has a relatively high proportion of subject matter experts and attention to research in the field. However, moderation can be petty and restrictive.

r/effectivealtruism has some of the epistemic and ethical virtues of the broader Effective Altruist community, but less useful political discussion due to having many new and casual participants.

r/irstudies is a high-quality collection of studies and research, but meaningful discussion is sparse.

r/badeconomics has relatively high proportions of economists, economics students, and other users with an interest in economic policy, and they consistently refer to economic research. They also have a basic level of concern for global poverty. However, the subreddit is very left-wing on some non-economic topics such as social justice, including some banning of dissent, which could create implicit bias against some economic views.

r/neutralpolitics enforces a decent standard of quality and objectivity with openness towards different mainstream political views. They tend to be well informed on the facts of current political news and events.

r/themotte has a roughly even balance of conservatives and liberals, with further openness towards extreme-right and extreme-left views. Rigor and accuracy across a broad range of political topics are higher on average than at any other subreddit – it is not the best on any particular policy area, but it is the best general subreddit. Moderator behavior can be odd and capricious, but still better and much more open than at most political subreddits. Unfortunately, many users have explicitly nationalist, community-focused, and anthropocentric moral beliefs, and this skews their judgments on some political issues.

Political science

Quantitative political science research is greatly underpowered; "only about 1 in 10 tests have at least 80% power to detect the consensus effects reported in the literature".
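
To unpack what that power figure means: power is the probability that a test detects an effect of a given size. Here is a minimal sketch using statsmodels, with an illustrative small effect (Cohen's d = 0.2) and sample size rather than figures from the cited paper.

```python
# Power of a two-sample t-test, and the sample size needed for 80% power.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# With 100 subjects per group and a small true effect (d = 0.2):
power = analysis.power(effect_size=0.2, nobs1=100, alpha=0.05)
print(f"Power: {power:.2f}")  # ~0.29, far below the 80% convention

# Per-group sample size needed to reach 80% power for the same effect:
n_needed = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.8)
print(f"n per group for 80% power: {n_needed:.0f}")  # ~394
```

An underpowered literature doesn't just miss true effects; the effects it does report tend to be inflated, since only overestimates clear the significance bar.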

Sociologists

Sociology performs second-best among social science fields at producing replicable research.

Behavioral scientists were no better than lay people at predicting the social impacts of COVID-19.

Sociologists prefer to avoid working with fundamentalists, evangelicals, NRA members, and Republicans, which casts doubt on the accuracy of some kinds of their research. Sociologists’ ideological presumptions prevent hypotheses about differences between genders from making serious research inroads. 83% of sociologists identify as ‘liberal’ or ‘radical’ whereas only 4% identify as ‘conservative’ or ‘libertarian’, and the majority believe in using the discipline as a moral enterprise to transcend all forms of social oppression.

The construction and popularization of the ‘racial resentment’ metric appears to be a case where sociologists’ ignorance or dismissal of conservative viewpoints caused them to incorrectly classify holders of those viewpoints as being motivated by racist attitudesSee articles from Vanderbilt Political Review and National Review. Will Wilkinson, a moderately left-wing researcher formerly of the Niskanen Center, believes that the racial resentment metric is flawed but does not believe it is any kind of extraordinary misstep that should discredit the field.. Racial resentment does not predict racial discrimination.

Sociology majors applying for graduate school are an average-performing group at the GRE, scoring 0.02 standard deviations above the mean.

Twitter

Twitter has a relatively high level of sitewide censorship and account suspension. Certain statistics, if presented in an offensive context, can lead to a suspension. Another example is censorship over gender issues. A single troll managed to get at least 39 of their critics suspended from the site. Twitter staff can display strong political hostility. Its practices are often accused of being biased against the right wing, prompting partial flights to Gab and Parler.

Twitter’s algorithm does not silence conservatives, but it does favor inflammatory tweets.

The Twitter community leans left. It is dominated by the center-left, has very few center-right users, and has a surprising number of extreme-right users. It seems to disproportionately represent a more politically active section of the population. A popular adage goes "Twitter is not real life."

Twitter is a good place for staying on top of recent research and the opinions of experts. At the end of the day, it is the concentration of smart and thoughtful people (especially when they are identifiable with their real names) which seems to matter most for productive social media, and Twitter does this best.

Sometimes I use very small Twitter polls to get confirmation of certain ideas. These obviously don’t carry much weight, but they are still a little better than relying purely on my own assertions.

YouTube

Compared to other social media platforms, YouTube has very high levels of censure, including content hiding and demonetization, plus occasional outright censorship.

It is frequently accused of being biased against conservatives, and occasionally accused of being biased against minorities, but in reality it is best described as merely being hypersensitive to controversy from mainstream media and advertisers (though of course, this dynamic may cause biased policy in practice). YouTube’s recent algorithm revision benefited corporate media and punished independent creators and conspiracy theories, but did not alter broad left-right balance. YouTube’s recommendation system is complex and nuanced. YouTube’s recommendations generally steer anonymous viewers away from viewing radical and extreme content; their recommendations for logged-in users may be different, but probably show a similar biasSee this study by Mark Ledwich and Anna Zaitsev. There has been much informal criticism of this paper, but it largely appears to be misguided (see criticism and rebuttal, and rebuttal to other criticism) and politically motivated..

YouTube’s community leans very slightly left. Left-wing channels on YouTube account for about twice as many total views as right-wing channels, although right-wing channels are greater in number.

Perun makes very good videos about modern warfare, especially regarding military procurement.

HypoHystericalHistory has good talks about military strategy, although mostly focused on Australia.

LazerPig makes good videos about modern military affairs and does a good job debunking common bullshit narratives, but he seems to be a bit sloppy and makes a few seemingly incorrect statements.

Two channels with good content about urban design are Not Just Bikes and Armchair Urbanist. The latter, however, can be a little unfair when criticizing other people's arguments.

The Royal Society for Asian Affairs has some good content.

Leonard French has a good channel discussing contemporary legal cases. He seems mildly liberal.

"Video essays" about social and political topics tend to be poor.