What About Legacy Mismatches?

Editor’s Note: This article previously appeared in a different format as part of The Atlantic’s Notes section, retired in 2021.

Last month, as part of our long discussion in Notes on affirmative action and its renewed attention under Fisher v. University of Texas, we addressed the mismatch theory: the idea that racial preferences in college admissions could do more harm than good if they place unprepared students in the kind of hyper-competitive “prestigious” schools that lead many students to abandon demanding academic tracks like STEM or law, or even drop out of college altogether, when those students would have otherwise thrived in those fields after graduating from slightly less competitive schools. (Conor also tackled the mismatch debate.) A reader wonders:

I’m curious if anyone has studied “mismatch” and its effects on legacy admissions at Ivy League colleges. For example, could George W. Bush have become President of the United States on the connections he would have established drinking his way through Texas Tech rather than Yale? Would his behavior and performance while in college have been tolerated had he been admitted to Texas Tech on his own merit rather than to Yale as the son and grandson of elites?

To answer the reader’s leading question, yes; Gail Heriot, a law professor and member of the U.S. Commission on Civil Rights, addressed the legacy factor in a long report on mismatch theory published by the Heritage Foundation in August. (Heriot also co-authored the amicus brief that Justice Scalia cited in his controversial comments over Fisher last month.) From the Heritage report:

[Duke economists Peter Arcidiacono and Esteban Aucejo and Duke sociologist Ken Spenner] helped to dispel the common belief that beneficiaries of affirmative action catch up after their freshman years with their better-credentialed fellow students. What happens instead is that many transfer to majors where the academic competition is less intense and where students are graded on a more lenient curve. Their GPAs increase, but their standing relative to other students taking the same courses does not.

Again, the authors show that this effect is by no means confined to beneficiaries of affirmative action. White children and grandchildren of alumni who receive legacy preferences have the same experience, earning lower grades than white non-legacies at the end of their first year. While the gap narrows over time, it is only because legacy students also shift away from the natural sciences, engineering, and economics and toward the humanities and social sciences.

This helped the authors to respond to the argument that underrepresented minority students abandon science and engineering because they have no role models there or because they are somehow made to feel unwelcome. It is exceedingly unlikely that anti-legacy bias, lack of legacy role models on the faculty, or any other argument commonly advanced to explain racial disparities in science explains the legacies’ collective drift toward softer majors. If it is the wrong explanation for legacies, it is overwhelmingly likely to be the wrong explanation for underrepresented minorities as well.

Another reader, Jochem Riesthuis, draws a much different conclusion than Heriot’s:

In response to the argument on mismatch, it seems curious to me that the idea of white (or more generally, majority-ethnic) mismatch is not part of Sander’s research. After all, many white students get into prestigious and selective universities because of legacy policies, or athletic scholarships, or because of their essay-writing skills, great recommendation letters, or extracurricular activities, in spite of lower SAT or LSAT scores. How do these students fare?

One would imagine that they too are less likely to prosper in the demanding circumstances of the intellectually challenging university. I have no data on this, but anecdotally the evidence seems to be on the other side: George W. Bush, for instance, did well enough at Yale, without being a high flyer academically. If that pattern holds for most white legacy students with lower-than-average test scores at selective universities [CB note: That doesn’t seem to be the case, according to Heriot], we could argue that it is not the academic challenge but the cultural challenge that trips up minority students.

We could then argue that “White students who have lower academic scores get by on their social skills.” If this is the case, that severely undermines the argument for Sander’s case. After all, if fewer than 10 percent of students on a campus are African American, their social capital is going to diminish accordingly. There will be fewer people who look like them, or with whom they share a cultural background. In a hostile racial environment—on the assumption that the U.S. is a hostile racial environment—this becomes detrimental to academic performance.

In other words, the presence of other African American students—even if they do not perform well academically—helps the gifted African American student do better. In order to help more minority students succeed at law school, the law school needs to help them acquire the social capital needed to become a lawyer—for instance, by providing first-year internships at black law firms. As your reader argues, it is the job of the elite universities to help bright but socially challenged students.  

Claude Steele at Stanford has done research suggesting that the lack of social capital and the “stereotype threat”—where students fear confirming the stereotype associated with their minority—may indeed harm their scores. See him explain his work here. For a few critiques of Sander’s argument that run along these lines, see this essay in The Journal of Blacks in Higher Education and this review of Sander’s book in the Texas Law Review.

Conversely, if we see a failure in white legacy students to become lawyers similar to what Sander saw in minority students, this would be an argument to stop all selection other than on SAT and LSAT scores: no more recommendations, no more lists of extra-curricular activities, no more essay writing, and certainly no more legacy or scholarships based on anything other than test scores. The societal costs of such a system might be too heavy to bear. What would be the point of donating to a university, or supporting it politically, financially or intellectually, if it gave no benefits to you and yours? It’s strange to single out affirmative action students as the sole beneficiaries of an admissions system that considers more than only test scores.

Your thoughts? Drop me an email and I’ll try to incorporate them into the ongoing thread on affirmative action. Update from our reader Jochem:

I would add a question in response to Heriot’s argument:

Is the lower success rate for legacy students also true of law schools? It would seem to me that the study of law is closer to the humanities and social sciences than to STEM subjects, and it is in law schools that we see many legacy students too.

If it is also true of law schools, I would suggest that even at the cost of having fewer African American lawyers overall, there is societal gain in having affirmative action at prestigious law schools to provide a buffer for the exceptional African American law students. Affirmative action thus leads to more Ivy League-educated African American lawyers—however few in number—who would be eligible to sit on the Supreme Court and lower appellate courts. Gone are the days when people with degrees from other universities got on the bench. If we don’t want to return to an all-white Supreme Court, we therefore need affirmative action.

The same would hold for other fields, since, for instance, the NIH overwhelmingly offers fellowships and research grants to scholars from prestigious universities. If these do not include African Americans and other minorities, their specific health issues are likely to be researched less, leading to higher health costs for all of us—a problem we already face.