
Saturday, February 9, 2013

The System That Dare Not Speak Its Name

I'm not sure I'm going to make it all the way through Chris Hayes's Twilight of the Elites: America After Meritocracy (Crown, 2012).  I know I said I was going to read it, but I got sixty pages into it yesterday and got so dispirited that I didn't want to go on.  I think he's mapped out his assumptions by the point I've reached, and they are what's wrong with the book.  Put simply -- and Hayes is not a complex thinker -- he believes that the United States is a meritocracy: that our rulers are people who showed that they were qualified to run a complex society and its institutions.  He keeps referring to our elites as "the meritocracy," repeatedly begging the question of whether they actually have merit.

This emerges very clearly in the second chapter, "Meritocracy and Its Discontents."  Hayes begins:
Whether we think about it much or not, we all believe in meritocracy. It is embedded in our very language: to call an organization, a business, or an institution “meritocratic” is to pay it a high compliment.  On the portion of its website devoted to recruiting talent, Goldman Sachs tells potential recruits that “Goldman Sachs is a meritocracy.” It’s the first sentence [31].
I presume that by that first sentence Hayes means that "we" all believe a meritocracy is an ideal to be worked toward.  If so, he may be right, as when he wrote a few pages earlier that meritocracy "is an ideal with roots that reach back to the early years of the Republic" (21), that people should achieve wealth and status through disciplined hard work, not have them assigned by accident of birth.  Or, as he also puts it, "In a meritocracy, people are judged not on the color of their skin, but on the content of their character. This crucial distinction between the contingent and essential features – between skin color and intelligence – appeals to some of our most profound moral intuitions about justice and desert" (51).

He goes on to offer what he considers a "compelling" argument, that meritocracy is not "necessarily fair, but rather that it is efficient. By identifying the best and brightest and putting them through resource-intensive schooling and training, we produce a set of elites well equipped to dispatch their duties with aplomb. By conferring the most power on those equipped to wield it, the meritocracy produces a better society for us all. In rewarding effort and smarts, we incentivize both."  He goes on to concede that "meritocracy in practice is something different.  The most fundamental problem with meritocracy is how difficult it is to maintain it in its pure and noble form" (52-3).  Before you can "maintain" a meritocracy, however, you have to achieve it.  Hayes only assumes that it has been achieved (for a few bright shining years in the Sixties, I gather), and this is not something that can be assumed; you need to provide an argument and evidence, which he doesn't do.

Instead he begins with a case study, the Hunter College High School in New York City, of which he is an alumnus.  (Along with many other well-known people, including Florence Howe, who wrote about her experience there in her memoir A Life in Motion.)  Rather flustered, Hayes describes how Justin Hudson, a graduating senior at Hunter, delivered a commencement address in 2010 that challenged many of the school's pretensions.

Hunter, Hayes explains,
embodies the meritocratic ideal as much as any institution in the country.  It is public and open to students from all five boroughs of New York City, but highly selective.  Each year, between 3,000 and 4,000 students citywide score high enough on their fifth-grade standardized tests to even qualify to take Hunter's entrance exam in the sixth grade.  Only 185 are offered admission ...

Hunter is routinely ranked one of the best high schools in the nation.  In 2003, Worth named it the highest-ranking public feeder school, and Newsweek, in a 2008 evaluation of high schools, named it one of eighteen "public elites."  In 2007, the Wall Street Journal identified Hunter as sending a higher percentage of its graduates to the nation's top colleges than all but one of its public peers.  That year, nearly 15 percent of the graduating class received admission to one of eight elite universities that the Journal used for its analysis.  The school boasts an illustrious group of alumni, from famed Civil Rights activist and actress Ruby Dee, to writers Cynthia Ozick and Audre Lorde, to Tony Award winners Robert Lopez (composer and lyricist, The Book of Mormon and Avenue Q) and Lin-Manuel Miranda (composer and lyricist, In the Heights), to West Wing writer and producer Eli Attie, to Supreme Court justice Elena Kagan [32].
Where do I begin?  Well, first and I hope most obviously, Hayes takes standardized tests very seriously: he seems to think that they actually measure merit.  In reality they measure ability to choose correct answers from multiple-choice options under intense time pressure in a competitive environment.  Marking in bubbles on an answer sheet -- neatly, neatly, don't get outside the bubble! -- while the clock ticks is indeed a skill of some kind (I did quite well at it myself in the day), but I'm not sure it is equivalent to "merit."  Hayes is sure it is.  He considers the SAT, for example, "an objective measure of the 'merit' of the applicant" so that "the nebulous subjectivity of the admissions procedure would be eliminated in favor of an equal, accessible, and objective metric" (37).  There is quite a large literature which explains why this isn't so, but Hayes seems not to be aware of it.  Well, nobody's perfect.  (For now I'll refer the reader to two articles by Alfie Kohn, available online, and to his book The Case Against Standardized Testing [Heinemann, 2000].)

Notice that the famous alumni Hayes names are not, in fact, "-crats," however much merit they have, except for Justice Kagan.  Poets, composers, writers: they contribute to our society, no doubt, but they don't rule it.  This is a problem that runs throughout his discussion.  Not only is he confused about what merit is, he consistently ignores the "-cracy" in meritocracy.  He's far from alone in doing so, of course.

Hayes also seems to want to impress the reader by how selective Hunter is.  Thousands apply, but only a few, the best! get in.  The best!  If only a few are let in, they must be really good, right?  And it proves that the school is good, right?*  He's so attached to this idea that he misunderstands Justin Hudson's critique of the Hunter system -- or rather, of the system that produced Hunter, the system Chris Hayes mistakes for meritocracy.  Hudson's entire speech is available online; I'll mainly stick to the portions Hayes quotes.  "I feel guilty," Hudson said,
... because I don't deserve any of this.  And neither do any of you.  We received an outstanding education at no charge based solely on a test we took when we were eleven-year-olds, or four-year-olds.  We received superior teachers and additional resources based on our status as "gifted," while kids who naturally needed those resources much more than us wallowed in the mire of a broken system.  And now, we stand on the precipice of our lives, in control of our lives, based purely and simply on luck and circumstance.  If you truly believe that the demographics of Hunter represent the distribution of intelligence in this city, then you must believe that the Upper West Side, Bayside, and Flushing, are intrinsically more intelligent than the South Bronx, Bedford-Stuyvesant, and Washington Heights, and I refuse to accept that [quoted by Hayes, 33].
Hayes reports that the "parents in the crowd were, not surprisingly, a bit taken aback.  The teachers offered a standing ovation.  Jennifer J. Raab, the president of Hunter College and herself a graduate of Hunter College, stayed seated" (34).  Hayes himself is indignant: "Unlike elite colleges ... entrance to Hunter rests on a single 'objective' measure: one three-hour test.  If you clear the bar, you're in; if not, you're not.  There are no legacy admissions, and there are no strings to pull for the well connected.  If [NYC Mayor] Michael Bloomberg's daughter took the test and didn't pass, she wouldn't get in.  There are only a handful of institutions left in the country about which that can be said" (34).

But Hayes is missing the point, partly I think because Hudson didn't make it clearly enough.  I think he'd have preferred to miss it anyhow.  In one sense, I disagree with Hudson: I think he and his fellow students do deserve the "outstanding education at no charge" that they received at Hunter.  But every other child in New York City, and indeed in the US and around the world, deserves an outstanding education at no charge.  Not the same education, mind you, but an outstanding education that would bring out and nurture the capacities each child has.  Most children in New York don't get anything like that unless their capabilities match those valued by elite universities, which Hayes, begging the question, assumes to be merit.  This is the source of Hudson's survivor's guilt, and Hayes misses it entirely.

Only 185 children are offered admission to Hunter each year.  That seems an arbitrary number, but I suppose it's the capacity of the school.  There is no reason why there couldn't be other good schools in the city, each admitting another 185 of those who've scored high on the entrance exam; even those who take the test in the first place have already been selected from the mass of students by the preliminary test they took in fifth grade.  There are other elite schools in New York, of course, like the Bronx High School of Science attended by Samuel R. Delany.  I imagine there could be even more, if there were the will to spend the money to establish them.

It's good that such schools exist, but there also need to be good schools for all the other children.  There aren't, and that's the root of Justin Hudson's guilt and anguish.  This is especially true because we can't know in advance what any given child is capable of, though the testing industry and many educational "professionals" are built on the premise that we can.  Michael D. Young satirized this belief in the 1950s in his The Rise of the Meritocracy, which Chris Hayes has read but whose satire he can't quite grasp.  Young mocked the pretense of psychometricians to be able to identify children's potential at ever earlier ages.  Though it's well established that there is no measure, "objective" or otherwise, that can really do this, Hayes, like other believers in meritocracy, wants to believe that there is, or could be.  And one of the core features of meritocracy as an ideology is that those who rise in the hierarchy must be willing to accept the values of the rulers and leave behind the lesser, lower orders from which they came.  The values of the rulers must be meritorious, because they are the rulers' values and the rulers wouldn't be rulers if they didn't have merit, right?

"Hunter's approach to education rests on two fundamental premises," Hayes declares in his apologia.
First, kids are not created equal.  Some are much smarter than others.  And second, the hierarchy of brains is entirely distinct from the social hierarchies of race, wealth, and privilege.  That was the idea, anyway.  But over the last decade, as inequality in New York has skyrocketed and competition for elite education has reached a fever pitch, Hunter's single entrance exam has proven to be a flimsy barrier to protect the school's meritocratic garden from the incursions of the world outside its walls [35-6].
Hayes makes several blunders here.  First, different is not the same as unequal.  That people are not equally intelligent does not mean they are or should be unequal in terms of rights; Hayes's allusion to the opening of the Declaration of Independence shows that he either doesn't understand the difference (more likely, I suppose) or that he's deliberately blurring it (in which case he isn't as smart as he likes to think).  Second, there is not really a "hierarchy of brains" distinct from the "social hierarchies of race, wealth, and privilege," especially since (as Hayes uncomfortably admits) the two are intimately connected, even intertwined where Hunter is concerned.  The percentage of black students entering Hunter dropped from 12 percent in 1995 to 3 percent in 2009, he reports on page 36.  Have black New Yorkers gotten 75% dumber in 14 years?  I doubt it very much, and even Hayes can't quite bring himself to say so.  (No doubt because of the "open-minded, self-assured cosmopolitanism that is the guiding ethos of the current American ruling class" that he says he "absorbed" at Hunter [35].)  Hunter isn't responsible for the growing economic inequality, with all its destructive consequences, of the world outside its walls -- or is it?  After all, its graduates are disproportionately represented among the elites who are responsible for the increase in inequality.  It appears likely that something important wasn't covered in that outstanding education they received.

I don't find that surprising.  Hunter isn't an alternative approach to education; it's a portal that admitted a few people who wouldn't have gotten past the old aristocratic sorting methods of the elite schools.  Hunter was founded at a time when the prevailing sorting methods were frankly racist and sexist; basing admission solely on test results was a good start, but it was only a start.  Its aim is to assimilate its students into the values of the ruling elites.  Remember that it was very intelligent, highly-educated products of the system represented by Hunter -- the "best and brightest" -- who invaded Vietnam and devastated Southeast Asia in the name of anti-Communism.  Just as George W. Bush's staff 'fixed the intelligence' to justify the US invasion of Iraq, the Pentagon Papers revealed that the Eisenhower administration fixed the intelligence, despite the absence of supporting evidence, to prove that Ho Chi Minh was a Kremlin agent; Kennedy's men bought this fabrication and ran with it as if it were a game of touch football.  Of course, very few of them died as a result of their brilliance.  (Most of the casualties, American or Vietnamese, didn't merit an elite education.)

Ursula K. Le Guin put it neatly in words she invented for the philosopher Odo in her novel The Dispossessed, originally published in 1974 (reprinted by Perennial, 2003, p. 358):
For we each of us deserve everything, every luxury that was ever piled in the tombs of dead kings, and we each of us deserve nothing, not a mouthful of bread in hunger.  Have we not eaten while another starved?  Will you punish us for that?  Will you reward us for the virtue of starving while others ate?  No man earns punishment, no man earns reward. Free your mind of the idea of deserving, the idea of earning, and you will begin to be able to think.
These words leapt out at me when I first read them decades ago, because I'd recently encountered the same idea in Walter Kaufmann's Without Guilt and Justice, where he attacked "the idea of deserving" at length.  I don't expect Chris Hayes to have read either of these books, of course.  But he needn't have read them to have encountered some enlightening alternative views of ability, education, and social justice.

Hayes alludes a couple of times to the metaphor of the cream rising to the top.  So does the scum, but he prefers not to think about that.  In either case it is a metaphor which has dubious relevance to education or the running of a society.  He eventually claims that at Hunter, "over time the mechanisms of meritocracy have broken down. With the rise of a sophisticated and expensive test-preparation industry, the means of selecting entrants to Hunter has grown less independent of the social and economic hierarchies in New York City at large. The pyramid of merit has come to mirror the pyramid of wealth and cultural capital" (54).  This is all wrong.  There is no pyramid of merit, not even figuratively.  The "mechanisms of meritocracy" were never in place.  Perhaps Hayes believes that Hunter's entrance exam could be revised so as to weed out the kids who've been prepped for it, leaving only the truly, innately excellent -- but there is no reason to think so.  Hunter isn't to blame.  The fault lies with the system Hayes mistakenly buys into, no doubt because he has benefited from it: the hierarchy of wealth and power that justifies itself by declaring itself a meritocracy.  The merit involved can only be instrumental, the ability to carry out efficiently tasks whose value isn't open to question.  Should we invade this or that country?  Don't ask: the only question is whether we can do it efficiently.  I think Hayes is obscurely and uneasily aware of this, but isn't ready to challenge the values of the society that raised him up and gave him a platform to write books like this one.

Speaking of which, it looks like the next chapter will focus largely on professional baseball's doping scandals.  The ability to play baseball well is a merit easier to measure than the ability to govern a country well and wisely, but baseball players don't rule the country either.  I have to remind myself that what concerns Hayes is that our elites have let us down.  Say it ain't so, Joe!  I'm going to have to steel myself to keep reading.

* P.S.  On rereading this it occurred to me that while it was obvious to me that Chris Hayes was wrong to assume that Hunter's extreme selectivity was evidence of its merit, and of its students' merit, it might not be obvious to everyone else.  Sarcastic rhetorical questions, while fun, aren't an argument.

So: suppose that Hunter still only admitted 185 new students each year, but did so purely on the basis of a lottery, a random drawing.  It would still be very selective, but no one would claim that this proved the "merit" of the 185.  Or suppose that admission was based on the ranking of the students' parents among New York City's wealthiest families.  (Or its poorest, if you like.)  There are probably people who'd argue that the richest have demonstrated their merit and their children have demonstrated theirs by choosing good parents to be born to, but I don't think Hayes would be one of them.  Selectivity by itself doesn't prove much; consider the negative associations of the word "exclusive," as in "exclusive country club."

Hayes claims (or assumes -- after all, he passed it, so it must be good!) that Hunter's entrance examination is an "objective measure" of the students' aptitude for the Hunter experience, but I doubt it: could it be any better than IQ tests or the College Board examinations, both of which are seriously flawed as measures of intelligence, intellectual ability, or scholastic aptitude?  If so, Hunter should share it with the world.  But here I'm only really concerned with his evident belief that selectivity is a good in itself.

Wednesday, January 16, 2013

When I Hear the Word "Meritocracy" I Reach for My Critical Thinking

I'm not sure why it seems important to write something about Chris Hayes's recent book Twilight of the Elites: America After Meritocracy (Crown, 2012).  For one thing, Glenn Greenwald recommended it highly, and I have a lot of respect for Greenwald.  (On the other hand, he also thinks more highly of Rachel Maddow than I do.)  For another, Hayes has done some good work on his TV program.  But the most important reason is that I find the whole question of elites and meritocracy provocative.

Before reading Hayes's book, though, I read a classic of sociological satire, The Rise of the Meritocracy 1870-2033, by Michael D. Young, originally published in 1958. Young invented the word meritocracy, and it's remarkable how quickly what he intended as snark was adopted by the very people he was mocking.  Understandably, this annoyed him, and in 2001 he complained in print that the Labour party had borrowed the word for its own purposes, along with the abuses Young had warned about.

"Meritocracy" is a hybrid word: like "homosexuality," it consists of a Latin and a Greek component glued together.  The -ocracy was an obvious choice, given its already widespread use.  Why didn't Young use a Greek prefix?  Maybe because such a compound already existed: aristocracy, which means rule by the best or most excellent.  For several hundred years, though, aristocracy had taken on connotations of rule by the privileged, the "best-born," and one could refer to an aristocrat without meaning that he or she was an excellent person -- rather the opposite.  Even those who mainly supported the social order, like Jane Austen, were aware that birth and breeding didn't guarantee excellence, virtue, or even mediocrity: an aristocrat could be crass, stupid, dishonest, and corrupt.  I don't think I'm the only person for whom aristocracy suggests an inbred class of people, probably related to each other, who assume their excellence, especially when they have none.

Merit has similar ambiguities, though, like any abstraction.  It used to mean an earned reward or punishment; I guess demerit eventually took on the sense of an earned punishment.  Two of Merriam-Webster's definitions, "the qualities or actions that constitute the basis of one's deserts" and "character or conduct deserving reward, honor, or esteem", are probably what Young was thinking of.  They're basically circular, but then Young didn't mean his new word to be a sensible one.  Still, in 2001 he could write, "It is good sense to appoint individual people to jobs on their merit."  The difficulty is working out exactly what "merit" is pertinent to a job.  One common problem that seems to be overlooked, probably because it's intractable: Even after a person has occupied a position for some time, he or she can be excellent in some respects and terrible in others; so what do you do?  Young's fictional narrator could claim that by the late 1900s, testing had been developed to the point where a candidate's aptitude for a given spot could be measured precisely; in the real 2010s we're still far from such discernment.

There appears to be a widespread notion in the US and in the UK that we already live in a meritocracy.  Or sort of one.  Or we'd like to.  Or we should.  Or we used to, but it's turning into an oligarchy.  I did a web search for "we live in a meritocracy," and most of the results were questions rather than declarations.  Not too surprisingly, the pundit David Brooks came up with his own use for the word: "The hunger for recognition is a great motivator for the meritocrat ... Each person responds to signals from those around him, working hard at activities that win praise and abandoning those that don't", as if simply wanting to rise in a hierarchy made a person a meritocrat, or constituted merit.  (The article is a tiny masterpiece of free association.)

Most people who use the word "meritocracy," even critically, seem to take for granted that people in high places actually have proven their merit.  I don't.  Generally their success within the system is assumed to be evidence that they have merit, which is conveniently circular.  In the absence of better evidence, I think Noam Chomsky had a good point:
One might suppose that some mixture of avarice, selfishness, lack of concern for others, aggressiveness, and similar characteristics play a part in getting ahead and "making it" in a competitive society based on capitalist principles. Others may counter with their own prejudices. Whatever the correct collection of attributes may be, we may ask what follows from the fact, if it is a fact, that some partially inherited combination of attributes tends to lead to material success?
People tend to overlook the suffix of "meritocracy," the idea that those with "merit" should rule. The first violinist of the New York Philharmonic may be one of the best violinists in the world, but the main benefit accruing to his or her merit is ... playing in the New York Philharmonic.  Some prestige too, no doubt, but the achievement is an end in itself, as achievement mostly should be.  The same goes for star athletes: they may get paid a lot of money, but no one thinks that their achievement entitles them or qualifies them to do anything but play their sport.  Yet people who make large amounts of money in business or finance often think, and are thought by some, to know how the government should be run, or even just the economy.  Aside from the huge business and fiscal disasters over which such people have presided, their public statements since the re-election of Barack Obama should have disabused most people of any notion that they're even qualified to decide their own salaries: they routinely think they're worth a lot more than they are.

Lately at the library book sale I stumbled on The Reality Club (Lynx Books, 1988), a collection of papers from (according to the blurb) "New York's most vibrant intellectual salon", which offered "every reader a journey to the cutting edge of ideas and knowledge in our culture."  I decided to buy it because one paper was "In Defense of Elitism," by Gerald Feinberg, a Professor of Physics at Columbia.  Strictly speaking, "elitism" isn't the same thing as "meritocracy," but since neither word is well-defined, they work out in practice to be roughly the same thing.  (Feinberg helpfully posts a definition as an epigraph to his article: "the leadership by the choice part or segment" [275], though the definition is neither helpful nor very grammatical: "the leadership"?)  Like "aristocracy," "elitism" has acquired some negative baggage, but there's always somebody game to defend it.

"In what passes for social commentary in present-day America, it is hard to find someone who has anything favorable to say about elitism," Feinberg bravely begins.  Yet "No voices have been raised to condemn the Los Angeles Lakers basketball team or the Chicago Symphony Orchestra, even though those institutions are elitist according to any plausible definition of that term" (275).  Really?  I'm not sure they fit even Feinberg's definition.  What do they "lead"?  Who chose them?  Audiences attend events featuring either "segment" not to be led, but to see what they hope will be excellent performances; but also from local chauvinism, loyalty to the brand, to see and be seen by other fans, and so on.  Or again: who "leads"?  It's not the players but their coaches who lead, and the course of the orchestra will be influenced if not determined by wealthy donors who possess no musical excellence themselves.

Feinberg is quick to distance himself from political elitism:
I am not arguing that elitism is a desirable approach to determining who should govern society.  I agree with the view that was adopted early in our society that participation in the process of government by a large part of society is a more important matter than how effective the governors are [276].
Cute.  And disingenuous; surely Feinberg is aware what a question-begging word "effective" is in this context.  Does he mean, say, making the trains run on time?  He also seems to overestimate the actual "participation in the process of government by a large part of society" in the American system, even in theory.
My defense of the proposition that elitism is a desirable attitude, at least in certain situations, is based on two simple ideas.  One is that some activities are accepted as so worthwhile, both by those that do them and by society at large, that they should be done as well as they can be done.  The other proposition is that some individuals are much better at doing these activities than others, or can become by any methods now known [276].
Even if I grant his ideas, what does either one have to do with leadership?  Why should acceptance by the ignorant canaille ("society at large") carry any particular weight in determining what is "worthwhile"?  True, Americans do invest a large amount of interest, energy, and money in elite sports, but in what way does that make the NBA or the NFL more worthwhile than pro wrestling?  Classical orchestras are having increasing financial trouble, so does that mean that they are less worthwhile than they used to be, and would Feinberg admit the worth of more profitable music with less old-people prestige, such as Lady Gaga or hip-hop?  The idea that only two-hundred-year-old European art music is "worthwhile" has less to do with its intrinsic excellence (which is often real, I'm not disputing that) and more to do with class stratification.

Excellence in sport or musical performance has a certain advantage over other kinds of excellence: it's a lot simpler, and easier to evaluate, than excellence (or effectiveness) in politics or artistic invention or many other areas.  Feinberg glosses over this problem with handwaving, and it soon becomes clear that his article, far from being cutting edge, is an old-fashioned attack on the Trash They're Letting Into College These Days, and the Barbaric Garbage That Is Being Taught/Sold as Art.  He's deliberately vague about the details, but then he could probably count on his intended audiences agreeing.

Ironically enough, Feinberg turns out to be part of the problem he's complaining about, when he dismisses "what passes for social commentary in present-day America."  He may not know much about society, politics, or art, but he knows what he likes.  On top of that he's a particle physicist and a professor at an elite institution of higher learning, so his uninformed and half-thought-out ramblings on social policy and the arts are supposed to count for more than the opinions of someone who has actually studied such subjects and knows what he or she is talking about.  As a physicist Feinberg was elite -- his theoretical work on neutrinos was well regarded -- but expertise in physics doesn't translate into expertise on social issues.  It doesn't even follow that a Nobel Laureate in the sciences is qualified to lead his own academic department; if he is, it's coincidence.

That's the trouble with elitism: it mirrors the attitude behind Feinberg's swipe at popular participation in the process of government -- the notion that one man is as good as another, and a damn sight better too.  Elitism as an ideology encourages people who do well in one area to believe they can automatically do well in others.
----------------------------------
My next post on "meritocracy."

Friday, March 15, 2013

Move 'Em Up, Head 'Em Out

I finally finished Chris Hayes's Twilight of the Elites: America After Meritocracy, which turned out to be even less meritorious than I expected.  Hayes's ongoing misuse of the word "meritocracy" annoys me like a stone in my shoe: "institutions that create meritocratic elites" (64), "the energetic force of meritocratic competition" (75), "American political economy during the meritocratic age" (142), "several decades of failed meritocratic production" (155), "the class of meritocratic overachievers" (201), "the meritocratic era" (203), and the like.  What meritocratic era?  He can't seem to make up his mind whether meritocracy has been an aspiration, or a brief shining moment that has alas faded, or some combination of the two. 

He even quotes someone else's misdefinition:
Adam Michaelson, a former advertising executive who worked in Countrywide [Financial]'s marketing division, recalls the atmosphere of the business this way: "Countrywide fashioned itself a meritocracy, that is, whoever generated the most value or profit for the firm would be granted the greatest rewards, growth and prestige."  As happened in so many places over the past decade, the institutional definition of merit inside Countrywide became thoroughly perverse [97-8].
But the "institutional definition" didn't become "perverse," it was perverse in itself.  Those who generated value and profit for the firm weren't running it (the -cracy part of meritocracy again): the upper ranks of executive officers did that.  Those upper echelons skimmed off immense wealth for themselves from the "value and profit" their underlings generated:
Between 2001 and 2006, [cofounder Angelo] Mozilo managed to arrange for himself a staggering $470 million in total executive compensation.  The most cynical interpretation of these actions, though also the most plausible, is that Mozilo was looting the company he'd built as fast as he could before the markets or regulators caught up with him [97].
Earlier in the book Hayes wrote, "In a meritocracy, people are judged not on the color of their skin, but on the content of their character" (51).  Leaving aside the fact that we have no way of judging the content of people's character directly, before they are granted access to money and power, it seems fair to say that Mozilo showed his character all too well as he squirreled away vast amounts of money.  In the end the Securities and Exchange Commission caught up with him, but he managed to avoid trial or jail by reaching a settlement in which he "agreed to pay $67.5 million in fines and accepted a lifetime ban from serving as an officer or director of any public company ... The terms of the settlement allow Mr. Mozilo to avoid acknowledging any wrongdoing.  In February 2011, the U.S. dropped its criminal investigation into the facts behind that civil settlement."

This isn't, contrary to Hayes's repeated lament, a new crisis of authority and public trust in the US.  What bothers me is that he really thinks that it's important that we believe authorities, and that they used to or even theoretically could deserve unquestioning trust.  He seems to think, for example, that the "mainstream" corporate media used to be reliable, or ought to be treated as if they were.  It "wasn't just the rise of technology that produced the explosion of the blogosphere; it was the perceived failings of the mainstream media" (118).  That word "perceived" bothers me.  It could mean that those failings became manifest and so were perceived; but in context it seems to mean that people only thought they discerned those failings.  If Hayes has ever heard of I. F. Stone, he doesn't let on.  As he concedes,
It turned out that ... someone sitting in a basement in New Jersey, using the Internet, reading from a diverse set of sources about WMD intelligence, could actually get closer to the truth than the beat reporter with the inside sources at Langley [118].
That's essentially what Stone did.  Expelled from the Washington press corps in the late 1940s, he started his own newsletter and worked by close reading of corporate media and government publications.  He didn't even need the Internet.  Access to government insiders, that consummation devoutly to be wished among most American journalists, has more to do with wanting to rub elbows with the powerful than with doing serious journalism.

Critical thinking makes Hayes uneasy, for some reason.  He admits that authority figures have made lethal mistakes, and that professional "consensus" can be and has been completely wrong.  But he still seems to think that somewhere there's a meritocratic elite who will be right and can be trusted, if we can just find them.  That, it seems to me, is the root of the problem.  No one individual can know everything, so we have to delegate learning to other people.  But if we expect to find someone who knows everything and never errs, we're bound to be disappointed when our elites turn out to have feet of clay, and then go looking for the next true authority.  It's not necessary to be cynical about it, just realistic, but that option seems not to have occurred to Hayes.

Hayes assumes that competition selects for merit, and that is probably his biggest mistake.  In some cases it may do, where merit can be narrowly and specifically defined: as the fastest runner or the highest jumper.  But that applies only to a very small part of worthwhile human endeavor.  When merit is defined as making as much money as possible for your hedge fund, you get into trouble.  Or consider this example:
[Economist Sherwin] Rosen argued that certain technological trends had radically expanded the demand for services for those who were the best in their field: in 1950, a top basketball player could only monetize his talent with an endorsement deal that would sell sneakers to Americans; today, LeBron James is featured on billboards from Florida to Turkey to China [142-3].
I think "market" would be more accurate than "demand," but the real fault here is the assumptions that there is one indisputable "best in their field," and that no one else can "monetize his talent."  Many elite athletes have their fans, who'll buy products endorsed by their favorite stars, and those fans disagree about who is the best.  You don't have to be the top basketball player in the world to become rich, or even financially comfortable.  And again, sport is a field where excellence is relatively easy to define and detect, compared to the arts, let alone politics.  Hayes goes on:
The same goes in a whole host of domains: the best opera soprano can, with the advent of MP3s and the Internet, sell to anyone in the world with an iPod, which spells trouble for the fifth best soprano. If you can buy the best, why settle? [143]
This assumes that people seek out the best, or the world's best, singers, and that they agree who is the best.  Neither is true.  A fan may have his or her favorite diva, and argue that she's the best against the fans of rivals, but there is room (and a need) for more than five great sopranos in the opera world, just as there's no reason why New York City can't offer excellent educations to more than 185 new students each year.  Those who aren't the top 185 still have a claim on learning, and something to contribute.  (P.S. I think that Hayes's remarks here could only be made by someone who doesn't much like opera, and knows next to nothing about it.)

I realize that I'm probably not a typical music listener, but I don't think in terms of seeking out the best singer, or songwriter, or guitarist, or pianist, or band in the world.  I look for music or other art that gives me pleasure, or moves me emotionally, and I'm drawn by a range of artists.  Sure, there are fans who root for their band or their singer against all others -- I remember with distaste the divide between Beatles and Stones fans in the 60s, for example -- but it isn't necessary to denigrate all other singers than your personal fave.  The mindset that does so is fed by the competitive ethos of capitalism, but that's what is wrong with it.  Few of even the most fanatical fans have records by only one performer in their collections.  And none of this has much if anything to do with merit, let alone meritocracy.

Hayes structures his book around a few hot-button stories that many writers and pundits can agree are important -- the Roman Catholic abuse scandals, steroids use in baseball, corruption in business and finance and politics -- but they don't have much to do with "meritocracy."  That's not too surprising, since he doesn't have a very clear understanding of what meritocracy is or might be, and Twilight of the Elites is a frustratingly insubstantial discussion of some serious problems.

Friday, April 5, 2019

The Best!

Lately I saw a little surge of talk about meritocracy on Twitter -- a surge in my little neighborhood, anyway.  I've had a lot to say on that subject here before, but this morning, as I was riding my bicycle to the library, I had a thought I don't think I've had before.

I suspect that there's a connection between faith/belief (they're not quite the same thing) in meritocracy and overrating the things or people to which we assign merit.  If you believe, as Chris Hayes for example does, that meritocracy means hiring the best, putting the best in charge of things, then you will probably feel an impulse to overrate the merit of those you nominate.  It may not be a simple cause-and-effect tendency.  You may want to give the person the job, the slot at your elite school, your money for their CD, because you think they're the best, rather than the other way around.  But they may not be the best, and it doesn't entirely matter.

For example, some years ago I saw that Bob Dylan had been ranked high in a Playboy readers' poll as a harmonica player.  Now, I like Dylan -- his early work anyway, up to 1970 or so -- but I never thought he was the best harmonica player around, or the best guitarist, or pianist, or singer.  He was good enough for what he wanted to do, and he violated norms for "good" singers in a good way: you don't have to be trained or have a pretty voice to be an expressive singer, and for some purposes having an ugly voice may be preferable.  But that doesn't mean you're the best singer, nor does it matter.

Now compare what Chris Hayes wrote on this subject in Twilight of the Elites:
The same goes in a whole host of domains: the best opera soprano can, with the advent of MP3s and the Internet, sell to anyone in the world with an iPod, which spells trouble for the fifth best soprano. If you can buy the best, why settle? [143]
As I pointed out before, "best" is not the right word here.  Among seven billion people, there are going to be many thousands of operatic sopranos at such a level of excellence that it's really meaningless to call any of them the best.  The differences between them will be so tiny that most people can't detect them.  (This also applies to world-class athletes: the difference between the fastest runner and the tenth-fastest runner in the world is likely to be some tenths of a second, and some of that will be accidental, due to luck rather than "merit.")  To say that this "spells trouble for the fifth best soprano" is false; it doesn't spell trouble for the five hundredth best soprano.  As the example of Bob Dylan shows, you don't have to be the best singer or guitarist or harmonica player to make music that many people will want to buy -- more, most likely, than will buy the music of the best soprano.  Even in the domain Hayes elected to cite, his point is invalid, laughably so.  We often love things or people who are not the "best," and it would be ridiculous to claim that they are.  But they don't have to be.  We don't love them because they're the best.  We think they're the best because we love them.

This impulse emerges early in life, I think.  My mommy is the best mommy, the most beautiful mommy in the world.  I'm the best, handsomest, smartest little boy in the world.  These are conventions for expressing the intensity and sincerity of our love for someone.  But they're not literally true, and most of the time we know it.  It's believers in meritocracy who mistake metaphors for literal truth.

Is it even necessary to the concept of meritocracy that the best person should occupy a position?  Again, outside of a narrow range of fields, you cannot quantify qualifications for most jobs.  The fastest miler, for example, can be found.  (Next year, or the year after that, he won't be the fastest anymore, which is also important.)  The best CEO, the best accountant, the best IT manager cannot; neither can the best students for admission to elite colleges, or for that matter to community colleges.  One bit of evidence for what I'm suggesting here is the inflation of requirements for many positions: the applicant is expected to detail how and why denying insurance claims of the terminally ill is her passion, the goal on which she has focused, laser-like, since infancy.  Why he is very excited at the prospect of working the drive-through window at McDonald's.  (I've been allergic for decades now to the term "excited" in announcements; bullshit almost always follows that word.  But by now it too is a convention: if you didn't say you were excited to announce that this Friday will, once again, be Casual Friday, many people would feel that something important was missing.)  I've helped numerous friends fill out extremely long and detailed online applications, complete with a hundred personality-assay questions, to wash dishes in chain restaurants.  Something is wrong there, even leaving aside the invasion of privacy involved.

For many positions, what is needed is not the best, but someone who is simply good enough.  Often people grow into jobs; certainly we hope that students will grow into their educations.  All too often, despite the competition, the personality tests, the interviews, the trial-by-ordeal, the winning candidate isn't even good enough.  There are probably many things that need to be done to correct that, if it can be corrected; but one beginning might be to stop pressuring people to prove what can't in most cases be proven -- that they're the best.

Saturday, February 18, 2017

I'll Show You the Life of the Mind

One of the more interesting American phenomena to me is the way many of the same people who pay lip service to equality and other all-American values will, at other times, glumly but complacently declare that confidentially, y'know, everybody isn't equal and we need to learn to abandon sentimentality and live with reality.  Those who take this position range from jingoes like the late Robert A. Heinlein to nice mainstream liberals like Christopher Hayes to intellectual leftists like George Scialabba.  Most recently I came upon a post by the blogger and academic Freddie deBoer, written in a familiar more-in-anguish-than-in-exaltation mode.  DeBoer has taken some good, brave stands, but here he fell on his face.  I'm going to quote from his post at some length because deBoer has taken down his blog, and I don't know if this post in another venue will remain available:
To me, the hard political question is the gap after the gaps — the question of what to do with differences in academic and intellectual potential after we have closed the racial and gender achievement gaps. What do we do with differences in academic achievement after they no longer fall along traditional lines of inequality?


Perhaps it’s easier to say that I have good news and bad news.

The good news is that hoary old bigotries about the inherent intellectual abilities of different groups are wrong, and though much work remains to be done, as a society we are increasingly coming to shared public understanding that this is the case. Black people are not less intelligent than white, women have no less inherent talent for science, Asian people do not have some sort of genetic superiority in math. Those ideas seem to have largely been discredited and discarded by thinking people, and for that I’m glad.

The bad news is that there now appears to me to be overwhelming evidence that there are profound individual differences in academic potential, that different individual human beings have significantly unequal likelihoods of ascending to various tiers of academic performance. Educational philosophy for centuries has assumed great plasticity in the academic potential of any particular student, that given good teachers and hard work, most anyone can reach most any academic pinnacle. And the case that I would someday like to make, that I have been tinkering with making for many years, is that this appears to be substantially untrue. Instead, it appears that in general and on average, human beings are remarkably static in how they are sorted relative to others in all manner of metrics of academic achievement. In education, with remarkable consistency, the high performers stay high, and the low performers stay low. And it seems likely that this reflects some complex construct that we might call academic talent, which whatever its origins (whether genetic, environmental, parental, neonatal, circumstantial, etc) is far less mutable than has traditionally been understood.

There are many, many things that are implied, and crucially not implied, if we imagine a world where different individual students possess profoundly different academic talents.
  • This condition is not rigid, certain, or unalterable; we live in a world of human variability, and individual students will always exist who start out low and go on to excel. There are undoubtedly many exceptions, in either direction. And in fact the degree of plasticity of outcomes itself is likely highly variable. The question, particularly from the standpoint of public policy, lies in trends and averages.
  • This condition may be a matter of strict genetic determinism, but it doesn’t have to be. Simply because a given trait is not genetic in its origins does not mean that it is inherently or permanently mutable. Environmental and parental factors are not genetic but neither are they therefore necessarily mutable.
  • This condition does not imply educational nihilism, a belief that teaching is pointless or learning is impossible. All students can learn, consistently over time, even as relative position remains stable. Indeed, I would argue that is in fact the reality in which we live.
  • This condition does not mean that inequalities in environment are not real, important, or a problem. Individual academic talent is subject to the influence of external forces like any other. Poverty, abuse, affluence, chance — all materially impact outcomes, raising the less talented and restricting the more talented, or amplifying privilege and disadvantage alike. The existence of talent does not imply the irrelevance of external factors, nor does the impact of those factors erase the reality of differences in talent.
  • This condition does not imply that our metrics are the correct ones, that the abilities and knowledge that we select for in our tests and schools are the only real, useful, or valid means of sorting human minds. It does not require us to believe in the wisdom or benevolence of our education system or economy. It only requires us to believe that the socially-designated, contingent abilities we have decided are worth rewarding are not equitably sorted or equally available to all.
  • This condition does not entail some sort of overall difference in the inherent value of different people. There are many more ways to be a good, worthwhile, positive person than simply to fit into our current Procrustean metrics for what makes you a good student.
  • This condition does not imply a conservative, you-get-what-you-deserve attitude towards economics. Indeed, I think it amounts to a powerful argument for socialism
So: good news and bad news, he's been struggling for years with the empirical facts, we have to leave behind romantic illusions and face the cold, hard but still potentially socialist music.

Myself, I don't see any news here at all, bad or good.  DeBoer anticipates that reaction:
This claim is strange in that it prompts both reactions that it is obvious, that “everyone knows” that different individual humans have different academic abilities, and reactions that insist it is offensive, undermining of human dignity, dangerous. What is clear, however, is that in the world of policy, the notion of fundamental differences in the academic potential of different individual students seems bizarrely ignored.
The first thing to notice here is deBoer's focus on "academic abilities."  Though he disclaims it in his bullet-point qualifications, he seems to take for granted that academic abilities are "the socially-designated, contingent abilities we have decided are worth rewarding," and that they are the only abilities that matter.  I don't agree that "we have decided" these abilities are "worth rewarding," except in school, and the rewards they receive there are mostly of the gold-star, teacher's-pet variety.  People who excel in school are generally regarded ambivalently in our society; it's proverbial that book smarts aren't worth as much as common-sense smarts; and they are not necessary for worldly or commercial success, not even to get a job and do it well.  The most that can be said is that the years of schooling required to get many "good" jobs have increased over the past century, but this has little or nothing to do with the skills or knowledge those jobs require.  The same high school diploma that used to be a sign of unusual achievement and intelligence will now, maybe, get you a job at McDonald's.  And even that is less because of the academic abilities deBoer is concerned with than because schooling is intended to inculcate obedience, punctuality (the ability to regulate oneself by the clock), neatness (the ability to color within the lines), willingness to take orders, and tolerance of boredom.

It's certainly true that "'everyone knows' that different individual humans have different academic abilities."  DeBoer says it's mainly "in the world of policy" that this platitude is ignored, but I'm not so sure.  I recently read the biologist Ernst Mayr's One Long Argument: Charles Darwin and the Genesis of Modern Evolutionary Thought (Harvard, 1993), and Mayr discussed the interesting problem that although individual variation is a pillar of Darwin's theory, scientists have often tended to ignore it, preferring to view populations as uniform; which, aside from being empirically false (as any naturalist could and did point out), rejects a crucial part of the theory they are using.  Ignoring individual variation in groups is a common, perhaps normal human trait.  It persists, I think, because it simplifies whatever question you're thinking about, but it also leads to false and often destructive answers.  Additionally, as I said, a remarkable range of people say, in public, things like "kids are not created equal," trying to give the impression that they are bravely saying the Unpopular Things that 99% of those in public life haven't got the guts to say.

While 'everybody knows' that students have differing abilities and potential, 'everybody' also tends to ignore this knowledge.  Attending to students' individuality costs more money, for instance, and those who want to destroy public education are always seeking ways to cut costs, especially for the schooling of Other People's children.  But it also conflicts with a traditional model of schooling, that of rote memorization and drill, part of whose function is to sort students, though much of it is intended to even out individual differences.  Those who stand out in approved ways may get special, individual attention and be permitted to advance to actual education; the rest will not.

The traditional model is also compatible with deBoer's insistence that his "condition does not entail some sort of overall difference in the inherent value of different people."  Traditional, hierarchical, Great-Chain-of-Being models of humanity generally pay lip service to the notion that everybody has his place in God's great creation: red or yellow, black or white, all are precious in His sight.  Know your place, tug your forelock, don't be ambitious, the nail that sticks up gets the hammer.  Let me stress that deBoer isn't pushing such a model; his point appears to be that, given individual variation, he doesn't know what to replace it with.  I suggest that we already know what to replace it with, at least in practice: attention to individuals with full awareness that they have different talents and abilities.  Plenty of teachers and educational writers have addressed this over the years.

That's why deBoer's crucial question is misconceived: "What do we do with differences in academic achievement after they no longer fall along traditional lines of inequality?"  First, in the traditional model of schooling I mentioned, "traditional lines of inequality" often weren't a problem; such schools functioned in 'racially' uniform communities, and their very purpose was to sort students according to academic achievement.  This is true in modern Japan, South Korea, Europe, and other countries with programs to sort students into various vocational tracks or niches; numerous Americans have argued that the US should follow their example.  Sexist discrimination was a factor in Japan, for example, but not racial discrimination.  Class discrimination also was a factor, and I am struck by deBoer's failure to mention it, especially given the role class plays in recent efforts by biological determinists to justify stratification based on IQ and other dubious metrics.  In fact, it seems to me that his crucial question is basically that of The Bell Curve, which, contrary to what you may have heard, was primarily meant to address the same question deBoer asks, using the same assumptions: What will we do when we've eliminated unfair discrimination and every student, every citizen, is evaluated not by sex or skin color but by their innate ability and merit?

DeBoer also has a short post about a study of Head Start, also available at medium.com, which begins by reassuring the reader that he's
in favor of universal Pre-K on social justice grounds and believe that it’s worth it even if there’s no demonstrable educational gains, as parents should have governmental help in rearing children, particularly so that they can go back to work and be more economically secure. And I’m also in favor of broadening our definitions of success as the study’s authors call for.
But in the second paragraph he bemoans "the complete absence of any frank acknowledgment that there is such a thing as natural academic talent," etc. etc., "And until we recognize that there are persistent inequalities in natural talent, we’re not engaging in a productive discussion about real-world problems."  In the longer post he writes:
Yet consider that society: if even a moderate portion of this difference in talent lies outside of the hands of the students themselves, the basic moral architecture of our supposed meritocracy has been undermined. A system that portions out material security and abundance according to the fickle distributions of academic talent, which children do not choose, is a system that has no basis for calling itself fair. Yet if we successfully combated the forces of white supremacy and sexism to the point that we achieved a racially and sexually equal society, many people would content themselves that the work of social justice had been done. But we would continue to live in a world of terrible and punishing inequality. It would simply be distributed on different lines.
The reference to "our supposed meritocracy" may be the giveaway; it shows that deBoer is evidently as confused as most people who talk about meritocracy.  So let me explain.

First: America is not a meritocracy.  Propagandists and apologists for every society will tell you that rewards and punishments are distributed fairly in their green and pleasant land, no matter how unfair the actual distribution may be.  Whatever differences you observe in the distribution are the result of people's merit or lack thereof, though of course there are malcontents who claim otherwise.  They're just jealous of their superiors and want to drag them down to their miserable level.  Just about everybody seems to agree that people should be hired, admitted to university, etc. on the basis of their merit; the trouble is that most people are convinced that merit is connected to class, race, sex, test scores, and other markers that are not, in fact, connected to merit.

Second: DeBoer is confusing inequality and difference, as so many people do, and assuming (probably unconsciously) that you can't have a society of equals as long as there are any differences between individuals.  If you believe that, you do indeed have a problem, but if you jettison that assumption the problem evaporates.  He pays lip service to the contrary position in his bullet points, but abandons it when he sums up the Problem.  In the classroom, for example, equality doesn't mean that every child must earn, let alone be gifted, an A regardless of his or her performance -- even assuming for the sake of argument that performance reliably reflects "potential."  The utility and fairness of grading should not be assumed either; there are good reasons why grading, especially competitive grading, should be abolished.

Third: The same holds when we move from schools to society at large.  Equality doesn't mean that everybody has the same things.  To begin with, everybody doesn't want the same things.  Nor does everybody need the same things.  I've had some revealing debates with people who confused equality of outcome with political equality, and who couldn't or wouldn't grasp that equality doesn't mean that everybody must get open-heart surgery or take insulin or get an abortion; or that everyone must live in a penthouse apartment in a big city, or alternatively in a farmhouse with a white picket fence surrounded by amber waves of grain.  None of these is more meritorious than the others.

It could be said as truly that there is such a thing as natural athletic or musical talent, or other talents that are not equally distributed through the population.  As Noam Chomsky wrote decades ago, this is not really a problem either.
... The question of heritability of IQ might conceivably have some social importance, say, with regard to educational practice. However, even this seems dubious, and one would like to see an argument. It is, incidentally, surprising to me that so many commentators should find it disturbing that IQ might be heritable, perhaps largely so. Would it also be disturbing to discover that relative height or musical talent or rank in running the hundred-yard dash is in part genetically determined? Why should one have preconceptions one way or another about these questions, and how do the answers to them, whatever they may be, relate either to serious scientific issues (in the present state of our knowledge) or to social practice in a decent society? [from For Reasons of State, Pantheon, 1973, p. 361-362]
There lurks in deBoer's declaration of the Problem the assumption that the difference in potential he refers to must be expressed or discovered in competition, whether in the classroom or in the workplace, and that "material security and abundance" should be parceled out to the winners, with the losers getting less or nothing at all.  If you share that assumption, deBoer's objection that such "a system ... has no basis for calling itself fair" collapses: such a system is fair by definition.  If you don't share the assumption that social "rewards" should be parceled out to the winners of the Game of Life, then deBoer's Problem simply disappears.

Ellen Willis put it very well years ago in a review of The Bell Curve: "If I bought the authors' thesis, I would still be allergic to their politics. I don't advocate equality because I think everyone is the same; I believe that difference, real or imagined, is no excuse for subordinating some people to others. Equality is a principle of human relations, not Procrustes' bed" (40).*  Nor is equality a statement about the talents or other endowments of human beings; individuals are different, but political and social equality has nothing to do with such things.  It means that the person who is less gifted has the same right to a fulfilling life (to say nothing of basic human and civil rights) as the person who is more gifted.  (By which I mean a life that fulfills him or her, even if it wouldn't fulfill me or Freddie deBoer.  From each according to his ability, to each according to his need, y'know.)  I find it interesting that deBoer, who makes much of his commitment to socialism and the left, has somehow managed to miss that.  But then, the academic talents of which he makes so much have often served to make excuses for political inequality, by fostering confusion between equality and sameness, inequality and difference.

It may be that by "hard political question" deBoer merely means the difficulty of getting American society to accept the principle of political equality.  That will be very difficult, perhaps impossible, I would agree.  But it's so difficult because people resist it so strongly.  Empirical research can't settle it, because as Ellen Willis said, equality is a principle, not a fact.  That Freddie deBoer is so confused about it is further evidence of people's difficulty in grasping the difference.  So I'm not sure that he is referring to the practical difficulty of getting the idea of equality across; he sees it as a problem because he wants it to be a problem.  And because so many intelligent and educated people agree with him, that's a problem.

* In Don't Think, Smile!: Notes on a Decade of Denial (Beacon, 1999).

Wednesday, December 9, 2009

What He Said

Saith the IOZ (typos repaired, just because I'm feeling anal today):
Were it not so plainly a result of obvious yet resolutely unexamined intellectual prejudices, I would find it curious that our freemarketarian friends are so dutifully committed to the plainly preposterous notion that within enterprises outside the political realm, true merit and virtue are rewarded; the cream rises; talent is recognized; ability is a boon. On a small scale, this is funny because it presumes the existence of enterprise outside the political realm. On a grand scale, it's funny because its most ardent proponents have so obviously never spent much time in a business enterprise, where the perversities of who does and does not rise are, if anything, even more deranged than in the strictly political territory of electoral politics. While there are certainly some very smart, talented, incisive, conversant, articulate, and well-cultured businesscreatures in the world, most of our captains of industry are even more cretinous, subhuman, moronic, and depraved than the average US senator, and that's no low hurdle or short sprint.
P.S. From Noam Chomsky's "Psychology and Ideology" in For Reasons of State (Random House, 1973, reprinted The New Press, 2003) p. 355:
One might speculate, rather plausibly, that wealth and power tend to accrue to those who are ruthless, cunning, avaricious, self-seeking, lacking in sympathy and compassion, subservient to authority and willing to abandon principle for material gain, and so on. Furthermore, these traits might very well be as heritable as IQ, and might outweigh IQ as factors in gaining material reward. Such qualities just might be the valuable ones for a war of all against all. If so, the society that results (applying [Richard] Herrnstein's "syllogism") could hardly be characterized as a "meritocracy." By using the word "meritocracy" Herrnstein begs some interesting questions and reveals implicit assumptions about our society that are hardly self-evident.

Saturday, January 3, 2009

Giving Democracy a Bad Name

Whenever I see a book like James Traub's The Freedom Agenda (Farrar Straus Giroux, 2008), the first thing I do is check the index to see what it has to say about East Timor. When I did that with Samantha Power's Pulitzer Prize-winning "A problem from hell" : America and the age of genocide (Basic Books, 2002), I found these now-notorious words:
In 1975, when its ally, the oil-producing, anti-Communist Indonesia, invaded East Timor, killing between 100,000 and 200,000 civilians, the United States looked away [146-7].
That was all she had to say, and I don't think it's going too far to call it a lie. The United States did not look away; it okayed the invasion in advance, supplied weapons and training to the Indonesian troops, and blocked any UN action against the slaughter, right up until the 1999 referendum in which the Timorese were allowed to vote for independence from Indonesia.

But enough about Power. I checked the index of The Freedom Agenda: Why America Must Spread Democracy (Just Not the Way Bush Did It), which, like Power's book, I happened on at the public library. Traub, a regular contributor to the New York Times Magazine, is just as sparse in his coverage of East Timor as Power. Here are both references listed in the book's index:
In Bosnia and Kosovo, and in East Timor, sovereignty rested with the international forces, as it had not in Haiti; but the effort to create an effective and trusted police force, an independent judiciary, a responsive civil service, and the like proved maddeningly difficult. East Timor, like Haiti, relapsed into violence when the occupying force went home. The arc of democratic development, it seemed, was vastly longer than the arc of international attention [87f].

In mid-February [2003], barely a month before the invasion of Iraq, Rumsfeld gave a speech in New York in which he extolled the “light footprint” in Afghanistan, and criticized the nation-building exercises in Kosovo and East Timor for distorting local economies and creating a culture of dependency [120].
If Traub were writing only about the Bush years, he might be able to justify this treatment, which focuses only on "nation-building" in East Timor after 1999. But The Freedom Agenda is a sort of historical panorama of American democracy-spreading, starting with the Spanish-American War. In that holy conflict against the Popish antichrist, the United States liberated the Philippines from Spanish rule, but then decided we should keep them -- for their own good, of course. They just weren't ready for freedom and democracy. The Filipinos had different ideas, however, having been trying for years to get rid of the Spanish themselves, so they rebelled. Against us, the light of the world, the beacon of freedom! So we squashed them. In a typical application of today's journalistic balance, Traub puts it this way: "More than 220,000 Filipinos are thought to have died in battle, as well as more than 4,000 American troops. … Each side routinely tortured the other."

Right, the destruction was mutual. What Traub doesn't mention is that the Filipino casualties didn't all occur "in battle": American forces moved through the countryside, massacring everybody they encountered. No one knows how many were slaughtered, since no records were kept; estimates range widely, and 220,000 is nearer the low end of them.

Traub goes on to describe U.S. "nation-building" (I'm not quite sure why that term sets my teeth on edge) in the Philippines, which mainly involved winning the hearts and minds of local elites: "neither could we offer [the Filipinos] the American principle of universal suffrage" (page 17).
What’s more, as in Cuba, a strict control over the franchise was to curb the excesses of popular democracy. Only males twenty-three or older who had either served in the Spanish local government, paid taxes, or could demonstrate literacy in English or Spanish would be allowed to vote. This constituted less than 3 percent of the population [22].
In 1900, a strict control over the franchise also obtained in the United States: women and various other lesser groups were not allowed to vote here, and literacy tests, grandfather clauses, and other restrictions were widespread. Curbing the excesses of popular democracy had always been a concern of our rulers, so it's hardly surprising that they extended it to our new possessions. But for Traub, this hardly registers. He's too busy beating the drums for our benevolence to the lowly Filipino, even while reporting some of the milder racism of the Americans who determined US policy, and lamenting that, doggone it, the Filipinos just wouldn't take responsibility for their own democracy:
The idea of inalienable individual rights had come naturally to the yeoman farmer and the tradesman of colonial America, who stood on their own two feet. But in the Philippines, almost everyone depended on the favor of powerful clans. The habits of deference were deeply ingrained. Filipinos considered the relationship of patron and client every bit as rooted in the nature of things as Americans did the bonds of equality among citizens. Politics depended far more on kinship and on personal friendship than it did on abstract principle or belief. Politicians gained office with the support of their actual and metaphorical kin, and then offered recompense with jobs, contracts, and the like. The Americans called this corruption, but for the Filipinos it was simply an extension of the family system....

Some leading members of the elite absorbed the ethos of self-reliance and equality; others learned how to parrot it back to their gullible masters [28-29].
Damn! We Americans, especially our wise leaders, are so trusting! Even in those days we couldn't believe that the dusky races might not be telling us the truth, or that they weren't really constituted for a meritocracy. Given how corrupt the American system was in those days too, it's hard to believe that Traub is deaf to the irony in what he's writing, but he is. Consider this choice bit in which Traub quotes Larry Diamond, a Senior Fellow at the Hoover Institution and "professor by courtesy of political science and sociology" at Stanford, contrasting two styles of democracy.
In a liberal democracy, he wrote, "the military is subordinated, the constitution is supreme, due process is respected, civil society is autonomous and free, citizens are politically equal, women and minorities have access to power, and individuals have real freedom to speak and publish and organize and protest." An electoral democracy just had elections.
It clearly doesn't occur to Traub (or, presumably, to Diamond) that by these criteria the United States has not been a liberal democracy for most of its history, down to at least the 1960s, when the franchise was guaranteed to racial minorities and they began to be elected to political office in significant numbers. It's laughable to say that "the constitution is supreme" in this century, but one may doubt how supreme it was even before the accession of George W. Bush. (Who, remember, was not actually elected to the Presidency in 2000.)

Traub's summary of American democracy promotion is just this selective throughout. To point out all his careful omissions of inconvenient fact would take a lot more space than I feel like devoting to it now, so let's jump ahead to the Bush years. Traub is critical of Dubya, as the title of the book implies, but he's so habituated to whitewashing US history that he can't stop doing it. Here's Traub's take on the aftermath of the September 11 attacks:
Some of the left … argued that the problem lay in American policies toward the Middle East. But whatever truth this claim may have had, Americans were in no mood to hear it. A more popular, and maybe more plausible, suggestion was that the problem lay inside Arab states. V. S. Naipaul, the Nobel Prize-winning author, said that the angry young men of the Arab street really wanted an American green card; furious at the reactionary and paralytic states in which they lived, they lashed out at the world of prosperity and freedom that they desperately envied but could not enjoy. Conservatives agreed, of course, but this was no mere ideological hobbyhorse. In a cover story in the October 15, 2001, issue of Newsweek bluntly titled "Why Do They Hate Us?" Fareed Zakaria located the answer in "the sense of humiliation, decline and despair that sweeps the Arab world." … The U.S. must also "help Islam enter the modern world" [104].
Leave aside the question of whether the correctness of an argument depends on whether Americans (or anyone else) are in the mood to hear it. Certainly Naipaul's "suggestion" was popular in the U.S.; its flaw is that it doesn't really trump the argument of "the left." "The left," as far as I'm aware, would agree that people around the world would like some of that "prosperity and freedom that they desperately envied but could not enjoy." The question immediately arises: Why can't they enjoy these things? Despite all our naïve and gullible democracy promotion, the US consistently supports the most corrupt, reactionary regimes in the Arab world. ($2 billion a year to Mubarak's Egypt, for example.) We have also employed Islamic fundamentalists to further our aims, which, as is well known by now, led to the rise of Al-Qaeda and to the September 11 attacks, as well as much more terrorist violence around the globe, which doesn't count unless it hurts some Americans along with the brown people. (This was true, for example, of the Mumbai attacks in November. I happened to pass a TV set at the student union which was showing a CNN reporter on the scene, coiffed and made-up as if she were standing on the streets of Manhattan. An explosion went off behind her, and she ducked and cringed, as well she might. But would you see a similar reporter's-eye view of US bombing in, say, Iraq? Of course not; it would make America look bad. The point of showing this clip was to demonstrate the viciousness of the Mumbai terrorists, so disdainful of human life that they mussed a CNN reporter's hair. And got debris in it. Animals.)

Traub isn't totally unaware of this, I think. The best parts of The Freedom Agenda are the chapters in which he talks to people in Egypt and Mali. His condescension to the Malians ("It wasn't very much, but it seemed to matter to them" [204]) is annoying, to put it gently, but the presence in the book of ordinary people who are struggling to get some real democracy in their lives is a breath of fresh air after so much of Traub's posturing. For his coverage of the 2005 Egyptian elections Traub even talks to members of the Muslim Brotherhood, though he deplores their unwillingness to accept "Israel's right to exist as a matter of principle" or to oppose "terrorist attacks against it" [182]. Still, as Mubarak reestablishes his stranglehold on even the most rudimentary political organizing, Traub can only shake his head in world-weary dismay. It's very convenient -- the US supports ($2 billion a year!) the suppression of political institutions that might foster democracy, and then if the oppressive regime is finally overthrown, people like Traub can claim that the people aren't ready for democracy because they have no democratic political institutions.

But back to Bush and his "passionate call for democracy" (162) -- unless elections are won by people the US doesn't like, as with Hamas' victory in 2006. No one could have predicted that democracy would lead to the wrong people winning an election! Condoleezza Rice "later admitted to an interviewer that she was so shocked by the news [of Hamas’ victory] that she climbed off her elliptical trainer to call her aides. Told that Hamas had in fact outpolled Fatah, she said to herself, 'Oh my goodness, Hamas won?' This was scarcely what she had imagined in 2002 when she took on both Colin Powell and Dick Cheney to argue for democracy promotion in the Palestinian Territories" [135f]. Granted that Rice was a member of an administration that took great pains to ensure that the Right People won elections in the United States, her surprise may make sense, but still -- someone this stupid was a Professor of Political Science at Stanford? Traub seems to share her surprise, though:
As Dennis Ross says, "[Rice] does have a kind of view that elections are a built-in self-correcting mechanism. You may bring in people you don’t like, but accountability will bring changes over time." That certainly wasn’t true here: the White House was not about to do business with an organization that openly avowed terrorism. The Bush administration, along with its European allies, took the position that it would not recognize the new government unless Hamas renounced the use of violence – which of course it would not do.
Um -- has the US, or Israel, ever renounced the use of violence? Once again Traub shows his tin ear for irony.

The easiest way to deal with such issues, of course, is to ignore inconvenient facts. Traub mentions briefly the famous "purple-dyed index finger [which] was to become, if briefly, the icon of a Middle East liberated from tyranny" [128], but not that Bush never wanted the Iraqi elections and tried to block them; having failed to do so, he simply ignored the results (which would have meant a prompt withdrawal of US forces, for one thing) and installed the government he wanted. That goes a long way toward explaining why, "By the end of 2005, the euphoria of the purple index finger in Iraq had faded to a dim memory. The sovereign Iraqi government barely functioned. Rather than stemming Iraq’s terrifying violence, the election had, if anything, deepened sectarian rivalry by installing the Shias in power" [134]. This was only surprising to American corporate media; those actually paying attention to Iraqi (and American) reality had known better all along.

And so on. Traub garbles everything else he touches -- Haiti, Nicaragua, Chile ("Pinochet was not a crook, like Marcos" [66]; the echo of Nixon eludes him, of course), the breakup of the Soviet Union and its aftermath, you name it -- and always with the same bias: it's not our fault, we did our best, our intentions were good, but we are gullible and blundering and naive and misunderstood. (Oh Lord! please don't let us be misunderstood.) "In each of these countries, of course, we will continue to face hard choices between advancing our strategic and economic interests on the one hand and protesting human rights abuses on the other" (233). We could begin by cutting back on our own support for and perpetration of human rights abuses, but such things do not register on Traub's awareness, which wouldn't matter if he were the only person with such a view of the world. For example:
In a speech delivered in Washington in August 2007, [Obama] observed that United States senators typically see the "desperate faces" of Darfur or Baghdad from the height of a helicopter. He added: "And it makes you stop and wonder: when those faces look up at an American helicopter, do they feel hope or do they feel hate?" [227]
False antithesis. Maybe they feel fear, which would be a totally realistic reaction. But for Obama to admit that would mean accepting that reactions to America might have something to do with what we've done, rather than with what others feel about us for no evident reason. "Hope," meaning they're good, willing to come under the shelter of our helicopters; or "hate," meaning they're bad, blindly irrational.

As Traub puts it: "The issue, in many ways, was the pathology of the Arab world, which expressed itself in unreasoning contempt for the United States, and which was, in turn, compounded by American behavior and policies" (113), which means that the contempt, if that's what it is, isn't unreasoning. He also mentions with dismay that we live in a time "when much of the world is recoiling from what it sees as an American crusade" [230], forgetting that Bush himself called his wars a crusade. Why do they hate us? What have we done to them? That so many Americans can say such things and ask such questions with a straight face, at most willing only to blame everything on George W. Bush, does not inspire hope in me.

I don't think democracy can be "exported," nor do I think it needs to be "promoted", even if the US had a better record in either department than it does. Exporting it implies that democracy is a product, possibly a high-tech gadget, best made in American factories and shipped at "a high price, but we think the price is worth it" to the downtrodden masses of the world. Nor does democracy need to be promoted: I think history shows that democracy is more like cannabis -- a hardy weed that keeps sprouting up in all kinds of soil despite the best efforts of Those Who Know What's Best For You to eliminate it. Raymond Williams wrote in Keywords (Oxford, 1985, p. 94) that "democracy, in the records that we have, was until [the 19th century] a strongly unfavourable term," used in elite discourse as "anarchy" is used today, to imply mob rule and chaos, apocalypse. (You'll learn more from Williams's brief article on democracy in Keywords than from all the authorities Traub quotes so fawningly in The Freedom Agenda.)

I also don't think that anyone knows what sort of conditions are necessary for the growth of democracy. Looking at historical cases as Traub does won't help unless you're willing to look more carefully at the pressures involved, such as the 19th-century American eagerness to sell American products in the vast Asian markets; or take more seriously the racism that underlay the programs even of those who thought the Filipinos (or the Haitians, or the Vietnamese, or the Timorese, or the Iraqis) could be taught democracy, like children, by their betters. To criticize the democracy promoters is not to deny that people of certain other countries are capable of democracy, but to insist that the promoters are not qualified (or even interested) to impose it on them, whether by the carrot or the stick.

Monday, July 25, 2011

Speak Roughly to Your Little Boy

What I wanted to write about, though, was a disturbing article I read this weekend, which reminded me how much many adults hate children. Of course, it was published on the site of Family Circle, "Where Family Comes First."

The article, entitled "Not Every Kid Deserves a Prize," was remarkable for its mean-spiritedness. The author, Karin Fuller, writes about the trophy her daughter was awarded, at the age of eight, for playing soccer:

She didn't deserve it. Nor did she want it. At 8 years old, she hated soccer. She hated the uncomfortable shin guards, the itchy socks, the boring black cleats.
She disliked practice and she passionately, fervently, detested the actual games. Still, for completing the season, the team wanted to give her a trophy. I couldn't understand why. In my mind, if you start something, you finish it. And that gets an "atta girl"—not a trophy.
I wonder if Fuller always finishes everything she starts. I certainly can't see any reason why anyone, adult or child, should stick with something she so passionately hates, for no reason other than having started it. (And whose idea was it to start her playing soccer in the first place, I wonder?) It's one thing to tell a kid that she has to go on taking care of the pet she begged for, because the pet needs care. But why stick with an activity that no one depends on, and that you're not contributing anything to? I doubt that her absence from the team, had she quit, would have hurt anybody. I also can't see why her daughter should have received an "atta girl," any more than a trophy, just for finishing something; by Fuller's standards, even that seems like a sop to her self-esteem.

But Fuller's got more nifty examples.
I recently overheard a waiting-room conversation between two mothers. One complained her son wasn't allowed on a field trip reserved "unfairly" for high-achievers. She admitted her son had made little effort to earn his spot, but in her eyes, it was unreasonable to reward some kids and not others. The second mother was upset because her son had received a failing grade for repeatedly falling asleep in class. "It's not fair," she said. "He turned in most of his homework assignments."

Let me see if I've got this right: Kids who don't try should get the same benefits as those who do? And completing most of the assignments should discount that sleeping-in-class thing? Are we so obsessed with fairness that we raise children to believe everyone should be treated the same, regardless of effort or skill?
Let me see if I've got this right. A kid who repeatedly falls asleep in class is nothing to get concerned about -- I suppose he's just doing it to annoy the teacher (and Karin Fuller), because he knows it teases. And apparently, even if he still does his homework, he isn't trying. As for that field trip, why is it reserved for high achievers? Why is it a reward, something reserved by definition for only a few students, instead of something the whole class participates in?

And it gets better:
My husband, Geoff, was a teen with a younger sister when his father remarried a woman with two boys. Both older boys were accustomed to always coming first, while the younger kids were used to baby-of-the-family privileges. But suddenly the roles—and rules—had changed. Cries of "Not fair!" became commonplace. One day Geoff's father sat down with all four kids and said essentially this: "On any given day, life isn't fair. That's the way it goes. We hope everything evens out in the long run. Live with it."
Rules, you see, aren't made by parents, but by "life," which is unfair. On any given day, anyway. Things will even out by themselves. Parents aren't responsible.

I see things differently. "Life" often isn't fair, but people should be; if they aren't, they're in the wrong. I don't mean to downplay the difficulty of being a parent, nor of building a new household out of two old ones, but Geoff's father was abdicating responsibility for his own actions. (Of course I'm relying here on Karin Fuller's probably self-serving second- or third-hand report, but that's the point: she's spinning it to suit her own prejudices.)

There may be some who disagree, believing there's no harm in giving trophies to all, that it's a harmless way to recognize kids' participation and encourage them to try. What they're failing to see is that by rewarding everyone, the trophy is devalued, or the certificate becomes nothing but a piece of nice paper with a pretty font. In our quest to make everyone equal and everything fair, no one is special. By bolstering self-esteem across the board, we're sending the message that self-esteem is more important than hard work and achievement. But ultimately, high self-esteem doesn't guarantee success. That takes self-discipline, self-reliance and self-control.
I disagree, because there are other ways of looking at this: I think there's harm in giving trophies to anyone, especially young children. Rewards discourage achievement, because they cause people to lose interest in doing whatever they are rewarded for. Alfie Kohn has shown this many times over the years, citing plenty of studies that demonstrate it -- but giving and getting rewards feels so good.

Consider the 2005 documentary Mad Hot Ballroom, about a program which taught ballroom dancing to fifth-graders all over New York City, culminating in a city-wide competition with a humongous trophy as the prize. It got mostly positive reviews, in which the most popular recurring word seemed to be "adorable." A few reviewers, like Slate's David Edelstein, were wary of the competition element:

One reason I hate the fact that my just-7-year-old daughter watches American Idol (long story) is that I don't want her to think about competition yet. I don't want her to see people being judged—and in some cases, ripped apart. Yes, that sounds odd coming from a critic—but these are people who aren't rich and famous and in some cases are getting torn apart with a camera in their face. But up in Washington Heights in Mad Hot Ballroom, competition might be the only way out, and there are fewer illusions to be dashed.
But even Edelstein, in the end, dissolved into a puddle of Awwwww! And I agree, the kids are cute. But when I watched Mad Hot Ballroom, I noticed other things that weren't so cute. Several teachers, administrators, and dance instructors claimed that the program taught children to be "little ladies and gentlemen":

Over shots of the winning students, a teacher says that one of the girls was incorrigible and now, after a year of rehearsals, she has poise and self-control and doesn't get into trouble.
Well, winning helps, I suppose. But how long will that "poise and self-control" last? Will most of the kids even go on dancing after the program is over? Probably not: it's set up to make dancing stressful and coerced, as if the aim were to turn them off, not encourage them to continue. Mad Hot Ballroom isn't a controlled study, so I'm not faulting it for not telling us where the kids were five or ten years later. But I reserve the right to be skeptical.

None of the reviewers I've read noticed the school administrators, especially those at the school that had won the previous year's competition and kept the trophy on display there. (You didn't think the kids would be allowed to take it home, did you?) They talk as though they, not the students, had won the competition, and semi-playfully discuss doing magic so the trophy will stay with them another year. Winning, even vicarious winning through one's students, doesn't seem to improve people's characters.

Losing's another matter. Another reviewer noticed this (in an instructors' meeting, thank goodness, not in front of the kids):
One of the instructors, however, blurts out something along the lines of "Second place is first loser!" Taking that another step, the program's leader reminds the instructors that, indeed, life is tough and competition is healthy.
Well, there you have it: there's only room at the top for one, and everybody else should get their noses ground in the dirt, because they're losers. I couldn't help wondering why, if learning to dance was so good for kids' social skills and corrigibility, a high-stakes competition had to be thrown in as a distraction, producing hundreds of bruised losers when nothing real was at stake except the principals' egotism. Far from being "healthy," competition purely for competition's sake makes no sense at all.

[P.S. Alfie Kohn points out that this kind of competition involves artificial scarcity. There's no inherent reason why there must be only one "winner"; the idea is to distract the competitors from the fact that they're fighting each other over a worthless piece of plastic and metal. In the real world, resources and goods often are genuinely scarce. But suppose that a family is short on food. Do the parents put the food on the floor and tell their kids to fight for it -- or compete with the kids themselves? (Well, maybe Karin Fuller would.) No. In the real world, when goods are scarce, you share them. One function of competition -- and maybe a conscious purpose at times -- is to teach children and adults that sharing is for losers. On the other hand, the good things connected to the teaching of ballroom dancing, such as social skills and the pleasure of dancing in itself, aren't scarce, so there's no reason to make the dancers compete unless you want a war of all against all. Which is a reminder that Hobbes's war of all against all is not, as he thought, the state of human beings in nature; it's a product of "civilization."]

Often enough competitiveness is sheer evil. While I was looking around on the web, trying to put off dealing with Fuller's article, I found another article by Alfie Kohn in which he documented parents who work actively to keep other people's kids out of the advanced classes that are reserved for their high-achieving offspring.

[F]rom Amherst, Massachusetts, where highly educated white parents have fought to preserve a tracking system that keeps virtually every child of color out of advanced classes, to Palo Alto, California, where a similarly elite constituency demands a return to a "skill and drill" math curriculum and fiercely opposes the more conceptual learning outlined in the National Council of Teachers of Mathematics (NCTM) standards; from an affluent suburb of Buffalo, where parents of honors students quashed an attempt to replace letter grades with standards-based progress reports, to San Diego, where a program to provide underachieving students with support that will help them succeed in higher-level courses has run "head on into vigorous opposition from some of the community's more outspoken, influential members -- the predominantly white, middle-class parents of high-achieving students."

... They may be pro-choice and avid recyclers, with nothing good to say about the likes of Pat Robertson and Rush Limbaugh; yet on educational issues they are, perhaps unwittingly, making common cause with, and furthering the agenda of, the Far Right.

... This is essentially what happened in San Diego, where an attempt to give a leg up to lower-tracked students was, as Elizabeth Cohen of Stanford University puts it, "the kind of project that you'd think wouldn't bother upper-status parents at all. Wrong! They said, 'What are you going to do special for my kid?'" This posture, she adds, goes beyond a simple and commendable desire to do everything possible for one's own children. "When parents tell me they're terribly anxious about their kids getting ahead, I'm sympathetic. Everyone wants the best for their kids. But when it extends to sabotaging programs that are designed to help people, I have to draw the line."
Notice what is going on here. It isn't just that these parents are ignoring everyone else's children, focusing their efforts solely on giving their own children the most desirable education. Rather, they are in effect sacrificing other children to their own. It's not about success but victory, not about responding to a competitive environment but creating one. As Harvey Daniels of National Louis University sees it, "The psychology of those parents is that it's not enough for their kids to win: others must lose -- and they must lose conspicuously."
This is the sort of thing that casts doubt on Fuller's contention that "success ... takes self-discipline, self-reliance and self-control." It also helps to have access in the first place to resources that are necessary for advancement in a stratified society.

Incidentally, I recently read A Class Divided: Then and Now (1987) by William Peters, about Jane Elliott's famous "blue eyes - brown eyes" discrimination exercise in which children are divided into groups by the color of their eyes. Those in one group are given privileges and praised arbitrarily, while those in the other group are denied privileges and criticized arbitrarily -- on the first day. On the second day, they switch places. Originally Elliott did this with her third-grade students, though since then she's done it with older ones and, in modified forms, with adults.

I'd been hearing about this exercise for years, but Peters's book taught me something I hadn't heard before. After it was over, many of the students showed a decisive improvement in their school performance, which lasted for at least the rest of the school year; see pages 97-8 and 108-10. Oddly, no one seems to have followed up on this, though Elliott brought it to the attention of some educational psychologists. Her own theory (pp. 109-10):

"On the day they are in the 'superior' group and doing genuinely superior work ... they find out for the first time what their true potential is. They learn by actual experience that they can do much better work than they have been doing. Later, when the exercise is over and they continue to work at a higher level, they are simply responding to what they now know they can do.

"It's no longer news in educational circles that children tend to live up -- or down -- to the expectations of their teachers," she continues. "In this case, it's the children's expectations of themselves that have changed. Their new expectations are based not on hopes or wishes or even on what their teacher told them about their abilities, but on their own knowledge and experience. They don't just think they can do better work, they know they can, because they have."
Fuller concludes:
Ultimately, we aren't simply raising children—we're raising adults. And if we want them to become functioning members of society, they need to learn how to win and how to lose. They need to be able to take criticism, cope with arbitrary decisions and handle setbacks. They need to see that people who work hard to achieve—even if they fail at first—will be rewarded more than those who don't.

That's called real life. And it's fair.
It takes a certain amount of denial to pretend that the US (or any other society that I know of) is a meritocracy, in which those "who work hard to achieve ... will be rewarded more than those who don't." Are the masses of unemployed today simply people who didn't work hard? It might be true that those at the top of our society, who've devastated the economy so effectively, did work hard, but at what? Certainly nothing productive. "Fair"? No, far from it.