“Start a blog!” they all say.

They also tell anyone who elicits even a few chuckles to get into standup; they tell anyone who can play “Hot Cross Buns” to take up the piano… and normally, I resist these kinds of blandishments.

But as it turns out, I DO write a lot of stuff that is of an awkward length for facebook. And MAYBE somebody other than the people who are my facebook friends might want to see things that I write. So, uh, here we are.

Use the categories. I do!

Three categories of things that are going up immediately are:

  • My birthday lectures. I give a lecture on my birthday. Sometimes people hear about them and want to read them. You can find them here.
  • Recreational anachronisms. It’s a game I play where I create plausible(-ish) genealogies for contemporary cultural artifacts as if they existed some time in the past. I create series with rule-sets. Further explication of the concept and links here.
  • Things I get published places. Here.

And I might cross-post various stuff that I would just post to facebook, especially if it’s long and has to do with things other than just my personal life.

2017 update

I now put up little book reviews!



Inspired by Jorge Luis Borges, I like to play around with alternative backstories for cultural artifacts. I also like to set up series of such backstories with rigorous (well, rigorous-ish… rigourish?) rules. I have, as yet, two series.

  • Call the first one something like “Right-wing Talent Goes West.” Here I take various kinds of pop culture — movies and tv shows, mostly — and posit them as the products of major figures from amongst the reactionary literati of the late-nineteenth through mid-twentieth centuries. I also have a rule where I only do one per country. I figure I’ll stop at nine. Here are links to ones I have so far:
  1. Celine in the Sunshine State
  2. Waugh on the Boardwalk
  3. Junger on the Beach
  4. Evola in Fat City
  5. Lovecraft, Screenwriter
  6. Mishima Among the Cornfields

I anticipate perhaps three more. Nine is a good number.

  • “Early Modern Webcomics.” Pretty straightforward: projecting webcomics into an early modern (1455-1789) context. Here are the ones I have so far:
  1. Boccaccio’s Mal di Legno
  2. Durer’s Sprechen Donnereschen
  3. Geoffrey-Jacques’ Matieres Questionable


Once upon a time, I went to an unusual school. This unusual school had a visiting staff member, a Finnish man. He was a good guy. One day it was his birthday and he announced he would give a “birthday lecture.”

We were tickled and intrigued by this concept. It turned out to be about something like peer therapy. Being clever young wisenheimers, we didn’t really get into the spirit of the thing like perhaps we should’ve. But the idea stuck with us. We would jokingly suggest to each other to do “birthday lectures” whenever a birthday came around, etc.

Well, I like an audience, and a birthday is a good occasion to get one. I did several impromptu, improvised birthday lectures, but in 2012 decided I would do a proper one, which I would research beforehand and write out. The only rule is that it has to be on something other than my main academic research. The guidelines are that it should be between twenty-five and forty-five minutes long, and thus far all of them have been about American intellectual history.

People have been surprisingly receptive towards them. I have an introductory speaker every year, a friend who I consider a colleague, if not always in the formal sense of sharing an institution then in the higher sense of being someone with whom I share my intellectual life. Their remarks aren’t always preserved, but I consider them a sufficiently important part of the process to mention them here.

There are currently ten. Here are links:

2012: Henry Adams, Builder of Tombs (Who was Henry Adams? Why did people care? Why don’t they anymore? Why do I? Introductory remarks given by Baz Harrigan.)

2013: Call Me Melville (Melville was beloved enough by mainstream scholars to resurrect his career after decades of obscurity… and beloved enough by a New Left bomber that he rechristened himself after the author. Why? Introductory remarks given by Pete Cajka.)

2014: The Long March Through the Human Resources Department (Why does progressive social justice discourse sound so legalistic, and why do both activist and corporate social regulation practices seem so Calvinist? Introductory remarks given by Aaron Goodier.)

2015: The Individualism of the Hamster Wheel Runner (What happens to individualism when it constitutes itself in “cult” form, as represented by movements like Objectivism and Satanism? What does it mean for individualism — one of the basic intellectual currencies of our time — that it takes these forms? Introductory remarks given by Jarib Rahman.)

2016: Lethality and Merit (“Support the Troops” has increasingly been supplemented with a worship of/identification with personal lethality; hence the worship of snipers and other “operators.” I tie this in with the discourse of meritocracy, which the operator literature both competes with and partakes in. Introductory remarks given by Drew Flanagan.)

2017: COIN of the Realm (Tracing some of the intellectual lineages and mutations of American counterinsurgency doctrine. Introductory remarks given by Mufasa Vallon.)

2018: tradition and Tradition Amongst the CHUDs (What does “tradition” mean when it’s claimed by Bill O’Reilly, Julius Evola, and fascist zoomers all at the same time? Both more and less than you might think! Introductory remarks given by Matt Johnson.)

2019: The Countercultural Vision of History (How did the counterculture look at the American past and why does it matter? Featuring multiple Ishmaels, both Reed and the supposed “Tribe of” from Indiana. Introductory remarks given by Kit Cali.)

2020: Fear and Loathing in Genre New England (A discussion of what New England “means” and how the question has been mooted and reflected in the works of H.P. Lovecraft and Dennis Lehane. Introductory remarks given by Ethan Heilman.)

2021: Alternate History, at the End of History and Beyond (An examination of alternate history fiction at the zenith of its popularity in the “end of history” era, and beyond, and the ways in which our limited ideas of what history is limit our fictions. Introductory remarks given by Ed Golden.)



Every now and again, somebody else publishes something I write.

A Red With An FBI Badge Jacobin, June 2014. This one is about James Ellroy and the peculiar way the personal and the political work together to create the nightmare-world of his best works.

Neal Stephenson’s Ideal Forms Los Angeles Review of Books, August 2015. Roughly like my Ellroy piece, except about scifi writer Neal Stephenson (and, thereby, a whole different set of literary and political commitments, etc.).

Occupation With a Human Face Jacobin, December 2015. On Montgomery McFate and her place in the selling of counterinsurgency to the public.

“Foul, Small-Minded Deities”: on Giorgio de Maria’s “The Twenty Days of Turin” Los Angeles Review of Books, February 2017. On the recently-translated Italian weird fiction classic.

The Internet Wars Come To Print Los Angeles Review of Books, July 2017. On Angela Nagle’s “Kill All Normies” and the way we read the alt-right.

100 Best Dystopian Books The Vulture, August 2017. I contribute to this list of short descriptions of dystopian works.

The Dark Forest and Its Discontents Los Angeles Review of Books, May 2018. On Liu Cixin’s “Death’s End” and the “Remembrance of Earth’s Past” series generally.

It’s Not Just Red States vs Blue States Jacobin, March 2019. A review of Kevin Kruse and Julian Zelizer’s “Fault Lines,” which attempts to write the history of America in the late-20th/early-21st century.

The Far-Right Roots of “Straight Pride” Dissent, June 2019. Revealing “Straight Pride” as the latest rebrand for the East Coast’s fumbling far right.

Why Populism Is Not A Gateway Drug To Fascism Los Angeles Review of Books, July 2020. A review of a book about fascism, populism, and lies.

Review of Hagerman’s “White Kids” San Antonio Review, July 2020. A review of a sociological work about the racial ideas of upper-middle-class white kids.

Age of Illusion DigBoston, August 2020. A review of Andrew Bacevich’s The Age of Illusions, a history of the US in the late twentieth/early twenty-first century.

A Seemingly Endless Recitation of Events DigBoston, September 2020. A review of Rick Perlstein’s Reaganland, a real slog of a book.

Anti-fascism Versus Anti-Extremism Los Angeles Review of Books, October 2020. I review two books on combating the far right, one from a radical antifascist perspective and the other from a liberal anti-extremist one.

The Rise of Border Fascism Dissent, November 2020. A review of Brendan O’Connor’s Blood Red Lines and his concept of “border fascism.”

Review of Black Radical DigBoston, November 2020. A review of Kerri Greenidge’s biography of William Monroe Trotter.

Myths Over History Full Stop, December 2020. Partially a review of Ben Teitelbaum’s War For Eternity, but more my takes on the concept of Traditionalism and what it means now.

Review of What Tech Calls Thinking DigBoston, December 2020. A review of Adrian Daub’s exposition of the “intellectual bedrock” of Silicon Valley.

Review of A Pandemic Nurse’s Diary DigBoston, January 2021. A review of an early source for covid-history.

Review of Ideal Minds San Antonio Review, March 2021. A review of a fascinating work of criticism/intellectual history of the seventies by Michael Trask.

Disaster and Bureaucracy DigBoston, March 2021. A review of Kim Stanley Robinson’s entry into the “climate fiction” sweepstakes.

Berard Reviews the Next World War San Antonio Review, March 2021. The fine folks at SAR let me review “2034,” a real piece of shit of war prognostication.

The Everyday, Between Revolution and Reaction Los Angeles Review of Books, April 2021. A review of Marc Stears’s book on “ordinary life” as a source of inspiration for the center-left.

Beyond Belief Amongst the Millennials Full Stop, June 2021. Wherein I discuss millennial spirituality, uber-creep Josh Hawley, and where our fractious culture goes from here.

What We Talk About When We Talk About Cold War Culture DigBoston, June 2021. A review of Louis Menand’s humdinger doorstop The Free World, and when liberal historiography has its place.

What’s Alternate in Alternate History? DigBoston, August 2021. A discussion of alternate history fiction occasioned by P. Djèlí Clark’s steampunk fantasy Master of Djinn.

The Nazarene, the Backlot Cowboy, and Us DigBoston, October 2021. I have a look at Kristen Kobes Du Mez’s Jesus and John Wayne and the difficulties of evangelical history.



Towards the end of a long and storied career, the best idea of the future that Anton LaVey, founder of the Church of Satan, author of The Satanic Bible, a man about whom all manner of tall tales once circulated, could produce when asked could be summed up in one word: Disneyland. That’s not a snide lefty’s dig, either, but his own words, or rather, the words of his amanuensis, Blanche Barton, spoken in LaVey’s presence to the journalist Lawrence Wright in 1991. “That’s been a real trial balloon for a lot of this,” Barton told Wright of the theme park’s relationship to LaVey’s idea of the future, “the incorporation of androids, a private enclave with a self-contained justice system, its own private police force. It’s a good example of capitalism at its peak.” For Wright, this little tidbit was the last straw on the camel-load of evidence his time with LaVey had provided that this supposed vicar of Satan on Earth was, in fact, a rather sad, lonely old man, about as exciting or dangerous as, well, Disneyland.

Wright didn’t draw much social significance from his portrait of LaVey, and for good reason. The best way to describe Satanism is a term Wright would not have had access to in 1991 – trolling. Indeed, these days Satanists have embraced the concept, and have landed in the news by exploiting “religious expression” loopholes to insist that public buildings that display things like Ten Commandments statues also put up big statues of Baphomet, which I for one find pretty amusing. Try though they might, no one has ever been able to pin any of the supposed crimes inspired by Satanism on actual organized Satanists (a tiny group, it’s worth noting, however far their imagery spreads), and the waves of panic over satanic murder or “ritual abuse” that swept the country periodically from the seventies to the nineties now appear rather quaint. Moreover, in a fashion those of us who have dealt with trolls will recognize, none of the varieties of Satanism are especially internally consistent or lay out a really precise idea of what they believe or what they’re doing. This inconsistency goes right down to the most basic questions, such as: do Satanists actually believe in and/or worship Satan? Is the Church a real church or more of a performance art project/tax dodge for LaVey and his pals? Are they rejecting all morality or just Christian morality? The answers to these questions change depending on which one you talk to at which time, and given that most of them belong to some little grouplet – Satanists have a tendency towards splitting rivaled only by congregational Protestants and by Trotskyites – these answers are usually barbed asides at somebody in a rival cult.
Satanists have produced thousands of pages of literature, and much of it reads like a lot of troll manifestos: wandering turgidly between different lines and means of argument, frequently patching in dubious primary-source quotes, making asides and then not returning to the argument where it left off, leaving the reader with the impression that there’s a meta-troll, the troll of having been gulled into reading this nonsense. When the ideas are as content-light as they are here, it makes sense to angle on the personal, as Wright did, in what, after all, was a profile of a person.

But those of you who know me or, at any rate, are sufficiently indulgent as to read my facebook statuses, know that I’ve taken it as a task to try to read the history of ideas in the leavings of the trolls, con men, enthusiasts, and off-brand pedants of the modern past. No matter how obscure, some sort of choice and selection is present in all of their expressions. Some of these choices can be used as tracks, visible signs of the changes in the societal and intellectual contexts of lived existence in a given time and place. To put it another way, LaVey picked Disneyland for a reason.

Moreover, spend enough time examining different kinds of discourse, and you’ll notice certain problems crop up over and over again. Social problems as we understand them – like managing the relationship between capital and labor – come up throughout the modern era. Political problems – like those attached to the governance of dissimilar bodies of people sharing a single polity – go back farther still. Existential problems – like what the deal is with being able to conceive the infinite but still be doomed to a very finite existence – are arguably older than any. A vast array of disparate groups of people have dealt with these questions. You can treat these problems and many others like tropes in literature, which can be arranged and rearranged to produce a vast and ever-changing array of potential meanings and messages depending on context. Looking at which of these questions are asked or answered, when and by whom, can tell us a lot about history. At least, this is the operating assumption behind much of my historical work, both professionally and here with these lectures. Here’s hoping there’s some merit to it!

The tropes we’re going to deal with here belong to the genre “individualism.” I’m almost entirely uninterested in questions like what constitutes an individual, what the rights or responsibilities of an individual are, what the relationship between individuals or between individuals and society – however conceived – should be, blah blah blah. These questions are boring and almost always posed tendentiously. I suppose if tasked with taking a stand, I would wind up opposed to most individualists in that I do not believe that the individual person is an ontologically independent fact. Basically, I think individuals construct themselves out of culturally available material. Moreover, I think our idea of what the individual can be is a culturally constructed idea. To me, this is common sense, and doesn’t necessarily imply anything about the rights, duties, whatever of the individuals thereby constructed. And if it did, that’s not a conversation I would find immediately interesting. What is interesting, to me at least, is the range of different constructions of individuality that one sees people – of all walks of life and levels of intellectual sophistication – construct. It gets more interesting still when these people get together and create organizations to propagate these ideas, given that the whole point supposedly is the priority of the individual.

It gets really interesting when these organizations could be described as “cults.” “Cult” is a loaded term. Not to get all buzzfeed on you all, but every nineties kid remembers cult panic, cult stories on the news, cult members on talk shows, cults as villains in tv serials, etc. And I think most of the people here get that this panic is over the top. So when I say “cult” I mean primarily “body perceived as a cult by others.” These are typically small, passionate, insular groups dedicated to a set of principles to one degree or another at odds with the mainstream of the culture in which they live.

Several groups that meet this description have proclaimed themselves dedicated to radical individualism, the proposition that the individual is or ought to be the focal point and justification of both morality and practical existence, and that deviations from this focus – towards the social, the metaphysical, the environmental, etc. – are necessary evils at best, the seed of man’s suffering at worst. Satanism – with few exceptions, but certainly including the original Church of Satan and most of its immediate offshoots – is one such movement. Objectivism – the philosophical system propounded by novelist Ayn Rand and her followers – is another. When I call them “cults,” I mean it in the comparatively value-neutral sense I laid out earlier; I may have my opinions on the tenets involved, but this is a birthday lecture, not a Huffington Post piece. It’s worth noting, though, that neither Rand nor LaVey brooked opposition lightly, and Rand in particular allowed a culture of conformity and ostracization to develop in her inner circle. Despite Rand’s intellectual pretenses, both her work and LaVey’s – and much of that of their followers – were much more in the vein of a preacher or a prophet than a scholar or philosopher. To borrow a phrase, they were not content to describe the world, but sought to change it. That said, neither were so controlling as to be able to prevent their groups from splitting, and the ideas of both spread farther than is typical for groups we call “cults.” We’ll have more than enough on the propagation and offshoots of both groups in a bit, though, so just hold your horses on the org-chart objections.

I am going to focus on these two groups in this lecture for a few reasons. While there are some anecdotal reasons – particularly why I chose Objectivism and Satanism and passed over, say, Discordianism – they’re not especially interesting, though I suppose you can pump me for them afterwards over drinks. We can say the anecdotes were the catalyst, but as I got thinking about them, the points of comparison came thick and fast, as did a few important and telling points of contrast. I’m not the first to put the two together. LaVey, after all, cites Ayn Rand as an influence, and parts of The Satanic Bible are, to put it politely, very direct paraphrases from Atlas Shrugged. But I think the confluence runs deeper than jokes about Satanists being little more than “horny Objectivists,” as a meme I’ve seen online goes. Instead, I think both constructed their idea of the individual, the centerpiece of their belief systems and, according to them, of the moral universe, in parallel and telling ways. This is what I want to interrogate.

It’s my belief that the way these cults of individualism constructed the individual subject was both an important indicator of and an influence on the way individualism has been constructed more broadly in our society since the late nineteen-sixties. Proving influence is tricky. Objectivist interventions have played a role in American right-wing politics; this, historians agree upon, and have written a good deal about in the last few years. Satanists have few such interventions – primarily restricted to defending themselves when accused of unlikely crimes as they periodically become subjects of moral panic. I believe that these interventions are important – and have a paper trail, useful for a historian – but that the real action in the story of radical individualism takes place at the demarcation of a small but important intellectual space.

Consider: the way individuality was understood during the years of the Cold War consensus – as having strong ties to a social order whose parameters are largely agreed upon, responsible to this order and with certain claims upon it – is different from how individuality is understood today. Many factors went into this change, some of which were basically ideological, and our two subjects – especially Objectivism – contributed to this. But I think only a small-to-middling portion of the force involved could be attributed to them, at most. I think the real historical significance of Objectivism, Satanism, and other midcentury cults of individualism is in carving out a space in the aftermath of that consensus’s destruction for the construction, modification, and propagation of secular, notionally oppositional, individualisms. To put it bluntly, we’re looking at the right wing of the counterculture. Whatever historical purpose the counterculture can be said to have served, I hold that the ideas propagated by the individualism cults have worked to turn it in a rightward direction. They have worked to direct energy away from projects of collective liberation and towards… damned near anything else.

Having just made the conservative counterculture point, it’s worth acknowledging two points of comparison between Objectivism and Satanism right off the bat: first, both bear the heavy stamp of a founder; second, both founders expressed disdain for hippies and did not see themselves as conservatives. Ironically for writers who inspired many a rock band, both Ayn Rand and Anton LaVey were notably restrained in their tastes in music (preferring sentimental orchestral music; think Lawrence Welk) and generally preferred their hedonism to be indoors and private. Rand was forever negotiating the terms of her alliance with other right-wingers, with mixed results. LaVey stayed out of politics but occasionally chuckled merrily about how his ideas were a better fit for conservatism than was Christianity. Either way, ideas can serve purposes beyond the stated intent of their holders, and I hew to a broader idea of conservatism – as being about the preservation and in some cases restoration of regimes of power, especially in the private sphere – than was common at the time.

As for their interactions with the counterculture, however much they might kvetch about dirty communistic hippies – and kvetch they did, especially the San Francisco-based LaVey – the counterculture, insofar as it stood for anything, stood for the transformation of society through the transformation of individual consciousness. Even at their most communistic – and here it’s worth noting we’re talking “communistic” as in “rural communes,” not as in “Marx and Lenin,” adherents of which never got anywhere with the counterculture – the point of any communal activity was that through them, the individual could become something better, something purer and more whole. Societal change would thereby result.

This is not too dissimilar to the understanding Objectivism and Satanism shared of the relationship between the individual, morality, and social change. Both embraced an individualism that isn’t just radical, but is also – purportedly at least – transgressive. Egoism as rebellion – against a bewilderingly wide variety of supposed oppressors from the government to religion to most of their readers’ families – lies at the center of the belief system of both groups. This is another facet they shared with the counterculture: the idea that what they were doing was a rebellion. Satan, of course, has long been symbolic of rebellion, having rebelled against God. Rand’s novels are less about the actualization of individuals and more about already-actualized individuals destroying a social order that is insufficiently deferential to them. Most observers agree that it is this rebellious posturing – however affected it may or may not be – that has attracted the youth following that both groups acquired. That the posture has held up as long as it has is an important part of the continuing story.

The forms that LaVey and especially Rand used to propagate their respective ideas were as important as their content. Both were first and foremost storytellers. Rand wrote essays – and given her cult following, they are doubtless some of the most-read essays in the land – but is primarily known for her novels. LaVey was a man made of stories, mostly specious ones. Journalists and erstwhile comrades of his have made great sport of knocking over his more spectacular claims, like that he had an affair with Marilyn Monroe or that he served San Francisco as “city organist,” a position that city – or, one suspects, any other city – never actually had. There were substantial aesthetic differences in the stories they told – Rand’s sweeping epics of good and evil set against a high modernist backdrop of skyscrapers and rail lines versus LaVey’s stories of hypocrisy and vice set against the wistful seediness of Depression/WWII-era America – but structurally, they had a lot in common. They were about special people who showed up enemies. The enemies might be crooked, dangerous, or otherwise flawed, but their real crime was imposing restrictions on the hero’s individual flourishing, and worse, justifying these restrictions by reference to a priority higher than the individual – a religion, society, etc.

The political and social messages in Rand’s stories were more explicit than LaVey’s. Indeed, they are more explicit than those of pretty much any writer not paid by a given political movement or regime. By the time she wrote her magnum opus, Atlas Shrugged, she had decided that allegory was the only fit device for a serious writer. And so the characters all embody something, from broad character archetypes (“the bureaucrat,” “the good underling”) to rather specific mid-twentieth century political ideas. LaVey, for his part, employed allegory – typically a mixture of stuff cobbled together from old books about black magic and sordid visual puns – in the various rituals he detailed in his books. He was always vague as to whether these rituals actually accomplished something in and of themselves or were more along the lines of amusing pastimes, the latter being a time-honored use of occult practices for bored rich people. Either way, these allegorical narratives – with symbolism pointed enough to be grasped easily but generic enough to be adapted to a wide array of circumstances – were what propelled the growth of the movements in question more than their arguments.

This is a good opportunity to discuss what some adherents to either creed would insist is a major, indeed irreconcilable, difference. Objectivists are so named because they believe in a universe that not just contains objective fact, but that is made up of objective facts, where everything others would chalk up to opinion or value-judgment is also an objective fact. Thus, they live in a world where morality is as objective as arithmetic. Satanists… don’t. As per usual, LaVey avoided making a definitive proclamation on the subject, and he certainly understood his philosophy as stemming from certain facts of life, most of which are drawn from popular ideas of the ruthlessness of natural existence filtered through social Darwinism. But for LaVey and Satanism, the whole point of these basic facts is that they aren’t moral – and neither should you be.

Fundamental though this philosophical difference may be, it’s surprisingly irrelevant when you look at the results of the discourse in terms of constructing individualism. Either because it’s insufficiently rigorous or all too rigorous, the prevailing socially accepted morality, whatever it is, is wrong. It’s wrong because it limits those individuals of sufficient caliber to transcend it, to live according either to the objective morality of Rand’s universe or the realistic amorality Satanism propounds and thereby reach the heights of human potential. Both wind up doing versions of the usual song and dance dating back to the early days of liberalism about how self-centeredness drives creation and innovation, blah blah etc. etc.

The basic philosophical dissimilarity between Objectivism and Satanism impinges on the lived existence of either belief system at one important point: the basis of negotiation with the actual, existing, inadequate world. Here, their differing ideas of the basis of individuality and morality come into play. Objectivism, based on the idea of an objective reality that defines all moral choices, demands the believer change the political and social structures of the world. Satanism, based on the idea that morality is altogether a hobble on the strong, suggests that the believer elide the political, the social, and indeed the legal. These facets present one set of problems for the everyday believer: how to live according to a difficult code. They present an altogether different problem to the leaders of such groups (and, these groups being small and eventually decentralized, “believer” tends to shade into “leader”): how to lead such a group and get along in the world. For all of the apocalyptic imagery in Rand’s novels and for all of LaVey’s villainous posturing, both wanted to operate freely in the existing world, and not lead a revolution or die in jail.

In the time-honored tradition of moral entrepreneurs from Calvinism on, Objectivists and Satanists fudged a solution out of a combination of doctrine and circumstance. Circumstance came to the aid of Objectivism, in the form of the rising tide of the conservative political movement, primarily in the United States but to a limited extent elsewhere as well. Whatever disagreements orthodox Objectivism might have with some mainstream Republican tenets like the positive value of religion, they were quite capable of working together. This is how Rand has gotten herself into the history books – her ideas found their way into American conservatism and conservatism normalized her work and her followers. Objectivists – a few hardcores aside – never had to abscond from society, because they’ve found a real chance to change it. Satanism, for its part, faced the problem of keeping its followers within the bounds of civilized society. Here, LaVey essentially punted to aesthetics. Crime is grubby, LaVey preached. Alluding to doing terrible things is fun for shocking squares – there’s LaVey as grandfather of the trolls again – but in general, Satanists should seek the sort of dignified existence that’s only workable on the right side of social order. There, Satanists could compete ruthlessly and win exultantly, thereby furthering themselves and, at least by example, the benefits of living free of moralism.

And so we see that Satanism and Objectivism both had flexible, sustainable means for solving the theologico-political problem that organized society presents to the radical individualist. This development is an example of a dynamic I think we can see in a lot of discourses. The limits of a given discourse – its blind spots and logical gaps – can be both liabilities and assets. They’re liabilities to the extent that they run the risk of leaving gaps through which energy and adherents can escape. They’re assets to the extent that they can form circuits, or corrals, or whatever sort of metaphor you like for containing and channeling the force generated by pushing the discourse’s limits. The answers that Objectivism and Satanism provided to their followers as to why they should bother following society’s laws are open to debate. What if you disagree with LaVey’s aesthetics and think murdering strangers is, in fact, aesthetically pleasing? What if you think conservatism or libertarianism are insufficiently pure in their dedication to the truth? Both systems have had adherents argue these things, but the answers provided were sufficiently convincing and flexible to keep adherents in – and, in any event, to generate lively debate as to where exactly the lines should be drawn, useful intellectual fertilizer for the growth of any movement.

Like many movements before them, both Objectivism and Satanism owe much of their shape to schism. Both began their organizational existences as the brainchildren of their founders. At the outset, both had a sort of concentric-circle model of organization. Around the person of the founder there was a tightly bound inner circle of long-time devotees and friends, close both in terms of their relationships and in geographical terms: Ayn Rand’s “Collective” based in New York, Anton LaVey’s church in San Francisco. Around this inner circle was the larger outer circle of readers, admirers, and inner-circle wannabes, which included those interested in the group who lived outside of the geographical center. Both inner circles eventually split, in schisms made more hostile than they might otherwise have been by the intense personal relationships involved. The contrasts here are worth noting. The major schism in Objectivism occurred in 1968, when Ayn Rand booted Nathaniel Branden – her declared intellectual heir, a major Objectivist writer and speaker, and also her secret lover (both were married to other people at the time) – out of Objectivism and declared him a “non-person” when it was revealed he was carrying on an affair with a younger woman, also not his wife. Branden, abandoned by most of his friends save for some younger Objectivists he brought into the fold, upped sticks for sunny California to begin a psychology practice. Satanism began breaking apart in a gradual process in the early 1970s, when LaVey began selling memberships in the Church. LaVey insisted that the Church had always been a money-making scheme, and what were you going to do? Tell the Vicar of the Devil he was being corrupt? This is more or less exactly what Michael Aquino, one of LaVey’s first and most important followers, told the old man when he broke off from the Church of Satan to found the Temple of Set in 1975.

Rand and LaVey had two distinct things to say about their respective friends-turned-nemeses (frenemeses, if you will): first, that they were small-timers, only worth anything due to the reflected light of the founders themselves, and of no account otherwise; second, that by turning against them, their heretics now stood against everything the founders preached. Perhaps, within the inner circle, both of these claims were subjectively true. From the perspective of critical historical understanding, both assertions, as applied both to Branden and to Aquino, are false. I argue that both men are, in fact, important elaborators and propagators of the radical individualist discourse which they and their erstwhile mentors helped create.

Google Michael Aquino and most of what you get on the first page of results are conspiracy sites: being both a prominent Satanist and a career officer in Army intelligence will do that. Read through the lens of someone with a certain familiarity with the history of military intelligence, his work shows him as a near-perfect type of the long-term mid-level intelligence officer: clever but not brilliant, with a distinctly instrumental frame of mind, a bit of a showboat, given to assigning big scary terminology to banal things. Interestingly, given that Satanism is in most respects less ideological than Objectivism, the schism Aquino triggered was more about ideas than was Branden’s quarrel with Rand. Aquino’s account of the break is muddled, as break-up stories often are, but there were two basic issues. Most pressingly, LaVey’s plan to sell positions in the Church of Satan irked Aquino and other Satanists. LaVey’s defense – that Satanism was always a scam, all belief systems are scams, the point is to benefit from them – triggered the second objection: Aquino and his followers believed there were actual supernatural beings that they were worshipping. It gets into he-said-he-said territory when Aquino insists that LaVey, too, once believed that Satan – or, at any rate, a supernatural force at odds with the Christian God – was real and the proper object of worship, but abandoned that belief in favor of crass materialism. Of course, disagreements as to the nature and intent of these beings – which tended to turn into very pedantic arguments, with inaccurate glosses of historical paganism and Gnosticism used to bolster assorted weak positions – led to further splintering of these “theistic” Satanists.
In an echo of the way in which the profound philosophical differences between Objectivism and Satanism did not produce profound differences in their respective discursive practices, across the gap of the existence or non-existence of supernatural forces, LaVeyan Satanists and theistic Satanists exist in dialogue and construct radical individualism in similar ways. Aquino’s break allowed for a broader array of aesthetic and ethical options within Satanist discourse, while leaving its basic shape unchanged.

Nathaniel Branden – who died only recently – played a much larger role, both in shaping radical individualism and in exporting it beyond the small, cult-like confines of movements like Objectivism. The year after his split with Rand he published The Psychology of Self-Esteem, the first book in what came to be called the self-esteem movement in psychology. He may have been a non-person in orthodox Objectivist circles – he might still be, for all I know, even in death – but he quickly became a mover and shaker in the world of pop psychology and the burgeoning self-help market. At the beginning of his career, his psychological work still strongly reflected Rand’s influence. He almost entirely rejects the idea of the unconscious and holds that psychological complexes are the result of irrationality, and irrationality largely the result of “social metaphysics.” That phrase, in fine Randian style, is a sort of portmanteau of lazy thinking, subjective thinking, and simple politeness, and names what allows irrational social ideas to plant complexes in the brain. It is these complexes which therapy – a combination of talk therapy and hypnosis, according to the book – should fix.

The seventies wore many of the rougher Randian qualities off Branden’s ideas and prose. His work began to emphasize self-esteem as a boon to the self, a way of treating the self well. There came to be less emphasis on fixing hurt selves and more on self-improvement. The ethos became much gentler than it ever was for Rand – Rand would never have urged people with psychological issues to treat themselves gently, as Branden eventually came to do. All the same, as with Aquino’s break with the Church of Satan, the changes, seemingly so drastic, cover for fundamental similarities between orthodox Objectivism and Branden’s “West Coast” variant. Ironically, this is evidenced in large part by the interaction between Branden’s brainchild, the self-esteem movement, and one of Rand’s bogeymen, the state.

Branden, of course, was not the first or the most important movement Objectivist to influence state decision-making: that honor goes to former chairman of the Federal Reserve Alan Greenspan. But it’s worth noting that Branden made his mark not by working his way into a technocratic executive office but through influence on the legislative process. To wit, in 1986 Democratic California state assemblyman John Vasconcellos – who represented several seats in his long career, all centered around what’s now called Silicon Valley – introduced a bill to that august body to create a State Task Force to Promote Self-Esteem. Not a joke! It passed. Branden was not on this task force, but Vasconcellos claimed that reading Branden’s work was the inspiration for his interest in self-esteem psychology, and many of Branden’s followers in that burgeoning field were represented.

Vasconcellos comes up for nearly as much abuse as Michael Aquino does if you google his name and the word “self-esteem,” largely from the sort of people who think that the practice of giving participation trophies in youth sports is ruining America and that it’s somehow Obama’s fault, but he knew what he was doing. The California State Assembly – and Vasconcellos’s constituency in Silicon Valley – might be profligate with their money, and might be flighty, but when they are profligate on a flight of fancy, they usually expect results, or, anyway, an explanation from somebody. Vasconcellos and his team of self-esteem psychologists had, if not much data, at least the assertion that self-esteem training in schools and anywhere else the state had a captive audience, like courthouses and prisons, was a cheap form of social remediation. This typically took the form of things like classes in elementary schools where students would write complimentary letters to themselves, or, really farcically, state-issued award certificates and other tchotchkes for minor offenders showing up on time for court dates.

California has never lacked for serious social problems, and 1986 was no exception, being six years away from the Rodney King riots and eight from the nativist explosion that led to Proposition 187. Meeting such massive, structural issues as systemic racialized poverty, exploitation, immigrant assimilation, etc. through any program of therapeutics – let alone through half-assed feel-good nonsense like what the task force proposed – stops being ridiculous and becomes insulting if you think about it much at all. In a sense, in proposing the task force, Vasconcellos recapitulated the way in which all of the discourse here collapses in on itself. To put it another way: it’s impossible to treat self-esteem therapy like it’s a substitute for social policy, but it’s entirely possible to use thinking in terms of therapy as a substitute for thinking in terms of structure. The space for thinking even in terms of conventional welfare-state politics – let alone more radical solutions – was nowhere to be seen, such thinking either effaced or crowded out by a discursive space entirely inimical to it.

Radical individualist discourse – and self-esteem psychology, even state-funded, is definitely that, especially when suggested as a social prophylactic – is as strong and flexible as it is in large part because of its sojourn with the counterculture and the cults. For one thing, without them, political individualism – libertarianism, more or less – would need to invent a culture out of whole cloth. The culture of the rest of conservatism – religious and militaristic, mostly – wouldn’t cut it. More important, though, is the cults’ influence on the forms of radical individualism. The prophetic voice employed by the founders of the cults created discursive worlds where critical thought directed away from the individualistic premises they promoted was impossible without leaving the circle – not for nothing were LaVey a circus barker and Rand a screenwriter before embarking on their final careers.

The schisms within these groups are as important as – perhaps more important than – the groups themselves. What they provided was a blueprint for the replication of individualist worldviews. These worldviews stay within the basic paradigm of individualist discourse but can be modified to suit the user. Ironically, given how little they appreciated dissent, neither Rand nor LaVey would have been able to appreciate how devotedly their followers and those influenced by them adhere to their basic framework, even as differences – superficial but real enough to them – multiply. Aquino or Branden might appreciate it – Aquino is the only one still alive. Perhaps I should shoot him an email. This modified replication process allows a skein of individuality – largely aesthetic – to exist over what is actually a pretty conformist culture. It creates a network of nodes for the development of difference within the pattern of the larger discourse. Or, to be less jargon-y: discussions over WHICH individualist ideas and practices are best make people less likely to have the discussion over whether the whole individualist framework makes sense, in its own terms or anyone else’s. Anything sufficiently enthused over – we’ve seen the examples of schools of psychology and spirituality, but we could also talk about artistic modes, health notions, some political concepts, many more things – can form nodes in the network, items on the menu through which the individual can construct a worldview. Put into contact with each other, these worldviews contain enough difference to engage their holders without demanding contact outside of the paradigm in which they coexist. This outside contains many things – including, but not limited to, most possibilities for action directed towards mass political liberation. And for the most part, those inside stay in.

The history of conservatism (and, to an extent, liberalism) is littered with efforts to turn a broad spectrum of the population – not all or most of it, but a critical mass – into supporters of a given system of order. Typically this is accomplished through a limited and often privatized distribution of a certain kind of property. Margaret Thatcher proposed to make property-holders out of those living in housing estates, to give them “a stake in society.” Ideologues of the slave south called for tax breaks to allow more white southerners to own slaves, thereby inoculating them against abolitionism. Microlending today proposes both to lift third-world masses out of poverty and to tie them to finance capitalism. In a sense, what the schism-generating model of oppositional radical individualism did was to allow every egoist with a few spare opinions to become his own cult leader, even if the cult was just himself (gendered pronoun used advisedly). And of course, with the spread of information technology – and both Objectivists and Satanists were early enthusiasts of the internet, though by this point of the lecture I think we’ve gone beyond them – everyone has a platform to promote the cult of themselves. And, of course, to kibitz on other people’s cults and argue and switch sides and in general do all of the things a public sphere is meant to do. Except in this instance the public sphere is made up of people who either passively ignore or adamantly deny the possibility of the public as something other than a reticule of individuals.

This is why I didn’t write another history of Objectivism or a piece on the counterintuitively conservative politics of Satanism – beyond the fact that neither would be really original, especially Rand-bashing. Whatever effect their actual ideologies had, their stamp on the production of ideologies is, to my mind, much stronger and more interesting, if also harder to nail down. Even if neither belief system ever gains another adherent, their mode of discourse will live with us for the foreseeable future. What should one do about it? I don’t know. But if this lecture was useful at all, then one takeaway should be that understanding popular discourse involves kicking up a lot of strange rocks and taking a good long look at the things that live under them.



Note: this is the only lecture so far that has been captured on video. If you’d prefer that, look here.

In early May this year, there was an interesting internet dustup which, among other notable attributes, featured what has to be the first time one of my alma maters, the New School, was discussed in US Weekly, even if it was only the blog. What happened was this: during a panel discussion held at the school on the subject of black feminism, noted transgender activist Janet Mock expressed her admiration for pop superstar Beyonce. This led prominent feminist scholar and panel co-member bell hooks to denounce Beyonce for her cooperation with the juncture between capitalist media production and sexist and racist beauty norms. Hooks, a past master of using pithy and provocative language to get her points across, capped off by calling the performer “a terrorist,” for the harm which she has allegedly visited upon the psyches of black girls. Sort of puts a new spin on the saw about one man’s terrorist being another man’s freedom fighter.

I’m not going to go into the veracity of hooks’ claim here, both because it strikes me as pointless and because the discussion of the claim quickly became a discussion of the discussion, as internet arguments tend to do. A generational rift quickly became apparent: younger commentators, based in the cornucopia of blogs dedicated to parsing the intersections between politics and pop culture, did not so much defend Beyonce herself, though expressions of enjoyment of her music and respect for her business acumen abounded. Instead, they criticized her critics – like hooks, mostly older commentators on race and culture – for singling her out and for promulgating a joyless, lifeless politics of austere political combat. Nobody, as far as I saw, actually trotted out the old Emma Goldman line about it not being her revolution if there was no dancing, but the spirit was there.

Unlike many internet foofaraws that roil the effervescent waters of social justice discourse, the great Beyonce-terrorist dustup ended quickly and was relatively free of acrimony. Some of the main discussants even came to agree with each other publicly, a welcome and rare sight. The reason it struck me as worth noting has to do with the intergenerational quality I mentioned. In short, it was funny to me that these younger social justice bloggers were attacking hooks for opposing the valorization of a pop culture figure, when hooks herself was a key figure in the rise of culture studies, the field which made pop culture a subject of sustained, sympathetic cultural inquiry. Unlike earlier academic critics of pop, who emphasized the potential for their subject to stultify or even control masses of people otherwise alienated by modernity, the culture studies cohort of the eighties and nineties, hooks prominent among them, made much of the subversive capabilities of people’s interaction with and creation of popular media. Selfhood understood as an ongoing act of creative performance – which, if consciously undertaken, could subvert racial, gender, and other hierarchies – was and is an idea that animates culture studies, and hooks has written numerous works along these lines. Many of the untold number of think-pieces on Beyonce that one can find on the internet, before and after the hooks dustup, use the pop star as an illustration of a black woman in control of her own performance of self, using ideas clearly borrowed from hooks’ wheelhouse, if not citing her work itself. So it was interesting to me to watch the older generation of culture studies visibly lose control of their central idea, and to see younger commentators in the same vein express their incredulity at one of the major figures in their intellectual tradition: a sort of ouroboros moment, the serpent of culture studies eating its own tail.

Indeed, one way in which hooks’ critics paid her homage was in the rhetorical method they used in their attacks. While the Beyonce-terrorist claim is, on a literal level, incorrect enough that critical bloggers typically did take the time to point out its outlandishness, for the most part the critics argued not that hooks and her defenders were wrong per se, but that they were harmful. By making their arguments, hooks and her defenders were oppressing the critics and at least some of the critics’ readership; they were acting to divert attention from more serious problems; they were reenacting sexist rhetorical practices. More than being factually wrong, hooks was made out to be morally, perhaps existentially, wrong. In hooks’ own writings – at least those I’ve sampled; she has had a long and productive career and has written a lot – her criticism follows the same lines. People are not merely wrong or incorrect in her world, even people with whom she is sympathetic; they are party to racism, to sexism, to oppression in general; they are part of the problem which hooks sees it as her job to solve.

Look at the internet and you will see that, despite the presence of a great many nerds and pedants who make the showing-up of factual errors a sort of sport, hooks, no matter how much she may alienate some of her intellectual progeny, is assured for the time being that her legacy will live on. Her practice of treating many, most, perhaps all disagreements as manifestations of deep problems which are at once civilization-wide and profoundly, damnably personal is quite widespread in what we could call, for lack of a better word, the social justice community, which is not restricted to the internet but which makes some of its more spectacular displays there. The arguments that take place in this conceptual space are part of an ongoing effort to define the relationship between individuals, communities, and moral and ethical imperatives in the context of two overwhelming shadows that loom over our time. The first is the persistence of brutal, deadly social inequalities, especially those along racial and gendered lines; the second is the failure of the Left, especially after its high point of prominence in the 1960s, to follow through on its promises of social transformation, and its subsequent long decline. The necessity of hard choices on the part of those committed to social justice in a period of conservative ascendancy, and the choices of those attempting to manage the consequences of social change, together condition much of the social justice discourse we see today, for better and for worse.

A brief note on method. Dealing with the history of ideas forces the historian not to forgo value judgments as such, but to look askance at normative explanations for why ideas are adopted or not. It’s useless to say a given idea took hold because it was right or good; people always think that about the ideas they happen to hold. Historians need to know why and how they came to think that about their ideas. So I will be dealing with many ideas and practices here – good, bad, and indifferent – and while I usually have ideas about their value, that isn’t a major part of the story I’d like to tell. Normative judgments can be a use of history, but normative explanations are just shoddy work. So, you know, hard though it is not to lean on a shining beacon of moral judgment such as myself… don’t.

The collapse of the broad left that emerged through the civil rights and antiwar movements of the ‘60s coincided with the beginning of civil society’s efforts to piece together a response to the imperatives of the Civil Rights Act of 1964. While massive resistance to the law gets more attention – rightly so, probably – efforts to comply with the statute evince a pattern we’ll see repeated a few times in this lecture. The Civil Rights Act was very clear in moral intent: the destruction of formal structures of racial discrimination. But it was quite vague in terms of how it meant to bring its imperative about. Particularly challenging was Title VII, which contained those two fateful words: affirmative action.

The phrase “affirmative action” conjures up for many people images of government diktats from on high, specifying the racial order hither and yon across the land, but this image is patently false. The affirmative action stipulation in the Civil Rights Act says pretty much nothing about what affirmative action actually is – it just says it needs to happen. Any smart lawyer would recognize this as a golden opportunity for interested parties seeking to define a given legal space.

As it happens, the interested parties that came to define what constitutes compliance with the mandate to take affirmative action to end discrimination were professionals in the field of human resources and personnel management. A field originally dedicated largely to preventing or defusing labor organization now found itself in a position to define corporate responses to affirmative action mandates. Logically enough, the process began in the companies most dependent on federal largess, notably major defense contractors. What HR professionals were able to convince these companies – not run by the liberally inclined, by and large – was that it would be much cheaper in the long run to implement programs to monitor and, in the company’s own time, correct racial discrimination in hiring practices than it would be to expose the firm to lawsuits through dilly-dallying. Together, HR people from a number of major corporations defined, piece by piece in a process still ongoing, “best practices” of antidiscrimination policy. These practices in turn were adopted by the courts – in the absence of more specific guidance from the federal government – as standards for compliance industry-wide, which other companies would be expected to meet or face potential legal action.

The record gives us little on who, exactly, these HR professionals were or what their stances were on the social justice issues of the day, though we know that, like many corporate support professionals, they lamented the tardiness of higher managers in coming to see their work as essential. And, as is definitional to any profession, HR people are produced, as it were, institutionally – by programs in colleges and universities – and kept up to speed by professional journals, associations, conventions, and so on. And so the human resources professionals who would continue to negotiate the implications of civil rights law for corporations (and other big institutions, like schools) and translate the results of these negotiations into policy were produced by institutions which were themselves sites of conflict over the legacy of the civil rights movement, its decline, and its offshoots.

Academic history is starting to prod, in its ginger, self-conscious way, into the history of the 1970s and 1980s. The history of ideas is typically a bit easier to find sources for than other kinds of history, due to one of the subfield’s major defects – its focus on people with the time, education, and platform to make their ideas known to the world. And so the history of ideas has been making quicker inroads into the relatively recent past than other subfields – usefully, since that’s the period that concerns us here. The big conversation piece in recent intellectual history these days is a book by Daniel Rodgers at Princeton called “Age of Fracture.” While the book as a whole is well worth reading, a lot of the argument is right there in the title. The period roughly from Watergate to 9/11, Rodgers argues, was a time when broadly shared ideas about society in America fractured, and the fractures of those fractures fractured further, until we were afflicted with a “contagion of metaphors.” Now, I have a number of criticisms of this thesis as an explanatory rubric for American history in this period, but it appears to be an adequate description of the life of the mind in higher education in the seventies and eighties – the time Rodgers started teaching, notably enough. At one point, before the upheavals of the sixties, the big American research university was held up by some as a model not just for higher education but for society: progressive but orderly, hierarchical in a meritocratic way, moving everyone along serenely toward the same better future. But by the time the seventies came around, universities were in an odd and conflicted place, having digested perhaps more of the changes of the times than much of society had – relaxed codes of conduct, ethnic studies departments, enshrinement of the right to protest – while not quite knowing what to do with themselves in the new dispensation. This, combined perhaps with the anxiety of shrinking prestige, gave the academic politics of the time a nervous edginess and a tendency to fracture that it never quite lost.

It’s common for leftists to bemoan some of the fracture points and divisions that arose among leftists and liberals in the seventies, and understandably so, but they came about as a product of real failures of the sixties left. During the sixties, Black groups began to emphasize the necessity of autonomous, self-generated black organizations if Black Power was going to be anything other than a slogan, and other groups followed suit. Discontent with both black and white leftist organizations, which from all accounts were profoundly and often openly sexist, in turn fueled a resurgent women’s movement, which reached similar conclusions about group autonomy. This, in turn, fractured along lines of race, class, and sexuality, along with differences about ideas and strategy. One good thing about the decline of the sixties left, from one perspective, is the plethora of different voices and perspectives that surfaced in no small part in reaction to its failures. It was much like the last Batman movie, in that respect.

For all their differences, these movements and other successors of the sixties moment had a few important structural elements in common. First, terrain: no one, other than in the fever dreams of the far corners of the renascent right, thought that the new movements were really going to overthrow the ruling class tout court, as the New Left – quite grandiosely – thought might, or at any rate should, happen. The movements were somewhat less marginalized in colleges and universities, though it’s common to overstate their influence – especially as far as administrative decision-making is concerned – even there. Still, academia provided a secure perch from which to spread their messages, even if largely to sleepy undergrads. Second, the post-sixties left attempted, and eventually to a great degree succeeded, in turning public attention to politics understood on a smaller scale than the big doings on which the sixties left focused. The politics of domestic life, of small groups, of the local, the psychological, the personal however defined: these weren’t the sole focus of the post-sixties left, but in many respects the attention we pay to them is, in part, their doing. I tend to think there are a few other things at work too, but that’s a lecture for another day. I think it’s fair to say that it was the effervescence of feminist writing especially, in the seventies and eighties, that did much of the work of illuminating this terrain. Given their involvement in earlier movements whose critique of social structures tended at best to underplay and at worst to denigrate women’s issues, and having witnessed the dark side of a revolution in sexual mores which did not place much emphasis on consent, feminists had – and have – a target-rich environment in front of them. And, unlike the sixties New Left movements in which many seventies feminists once toiled, feminists proved quite comfortable using the legal means of the system they critiqued.

This turned out to be especially important for the human resources profession’s ongoing negotiation with social change. The civil rights legislation upon which much of affirmative action is predicated includes sex as a category of discrimination as the result of what amounts to sabotage. A southern congressman who opposed the extension of civil rights protections to Black Americans added sex to the list of characteristics on which discrimination was forbidden, as both a bitter comment on what he saw as the absurdity of rights legislation and a disincentive to vote for the bill. Well, it didn’t work as the latter, so joke or not, the Civil Rights Act of 1964 includes language about sex discrimination. Feminist legal scholars and activists would use that opening to advance a broader understanding of the utility of the law in addressing inequality. While hiring, promotion, and pay discrimination along gender lines were and, sadly, are important issues which feminist activists fought against – this is the same period as the fight for an Equal Rights Amendment – the biggest accomplishment of feminist legal activism in this period is probably redefining the discrimination issue away from measurable discrepancies in pay and position and towards the culture of workplaces, most importantly the defining of sexual harassment as a form of legally actionable discrimination. This was first established by court precedent in 1976 and written into EEOC guidelines in 1980, and with it the concept of the hostile work environment, once a feminist theoretical construct, became a legally actionable concept. All made possible, in some small backwards way, by a bigoted congressman trolling his colleagues.

The battle to get sexual harassment recognized as a form of workplace discrimination and thus a violation of civil rights law was a long and arduous one, and while formal recognition has been in place these last thirty years, nobody serious could consider the issue of harassment settled. As with other areas of civil rights law, the powers that be made a clear normative judgment – that harassment is discrimination – but did not make clear what companies and other bodies potentially liable to discrimination lawsuits had to do in order to be compliant with this new mandate. Once again, the profession of human resources was there to fill the gap. HR professionals, as before, took the initiative with their respective organizations to establish procedures to deal with harassment complaints. These procedures, if the pattern held, would go on to become legally recognized best practices, which if followed would act to shield employers from potential liability.

As with racial discrimination, the first tool HR brought to bear on the problem of sexual harassment was the in-house grievance procedure, which was adapted from procedures meant to defuse labor disputes. By and by, though, it became clear that these tools alone would not be sufficient for an environment where the internal power dynamics – or, at any rate, those dynamics that ran along the legally actionable dimensions of civil rights law – and the behaviors and atmospheres those dynamics create were now the subject of actionable scrutiny. Activists and HR professionals alike – many of whom, after all, were produced by the same universities, which were often facing similar problems – pointed to the same sorts of problems we can probably all see with an internal-grievance-based set of procedures. What sort of chance would a plaintiff have for her complaint to be taken seriously if the same sorts of people – sometimes the same exact people – who produced the hostile work environment were the ones considering her grievance? Especially as the concept of the hostile work environment broadened to include not just sexual but racial and eventually other forms of discrimination, the insufficiencies of a grievance-based approach became ever clearer to HR people and activists alike.

Something like an answer, at least as far as the HR profession was concerned, came due to a case argued before the Supreme Court in 1998, Faragher v. City of Boca Raton. There are a lot of ins and outs to this case and I’m no lawyer, but the upshot of it was that it was determined that employers have a responsibility to proactively prevent the creation of hostile work environments – and that they faced potential discrimination lawsuits if they did not. Once again, the courts created a mandate which the human resources profession was poised to fill. The way to shield companies from lawsuits in a situation where liability could literally be distilled out of the noxious cultural atmospheres in many, probably most, perhaps all workplaces, was to allow human resources professionals to create a less hazardous cultural environment through the training of employees. This process would begin when an employee entered the firm, and would be periodically reinforced by training throughout the employee’s time at the company, both to prevent recidivism and because from an HR perspective, what legally constitutes actionable harassment or discrimination is a moving target. One way of looking at it is that, in essence, human resources departments in companies across the land used open-ended legal decisions about civil rights law to place themselves in positions where they would be charged with producing and periodically reproducing employees whose conduct would be conducive to a safe working environment – or, more accurately, whose possible misconduct would not be a legal liability for their employers. In a clear adaptation of a radical practice, in this instance the consciousness-raising session, a concept popularized by feminists in the sixties and seventies, the preferred tool for this production came to be the harassment workshop, though in a notable deviation from feminist practice these workshops were typically mandatory and led and orchestrated by professionals.

Given what could generously be called the quaint, patchwork quality of the coverage of law and power in this country, not everyone has attended such a workshop; indeed, in the half-dozen-odd jobs I’ve held before grad school, including some for big companies, I attended none, at best signing a form somewhere attesting that I had read a sheet of paper telling me to keep my conduct appropriate. God knows some of those workplaces could’ve used some very thorough policing of how people behaved. More generally, though, I think it’s exactly the patchwork, interstitial, cheesecloth-like quality of organizational life in contemporary America that has made the convergence between post-sixties radical ideas and post-civil-rights management practice both possible and increasingly relevant. Given the lack of viable alternatives – like any really broadly-based leftist organization – grassroots pedagogy now stands at the center of anything even vaguely progressive, to such an extent that there is little else visible there. The old joke about it being easier to dissolve and then elect a new people rather than to do the same to the government has actually become true: the logic of social justice discourse is precisely the logic of producing a population capable of living according to their truths, and undertaking this production in really suboptimal conditions. The task – the management of conduct with reference to a constant but somewhat unstable set of moral/legal imperatives – the means – the pedagogy of small groups, wherever one happens to find them – and the context – spiraling inequality and seemingly no way out – these are things that the social justice community and the contemporary human resources profession share, and I think the two gestated long enough in similar circumstances that it is not always clear, from the perspective of the history of ideas, where the dividing line is.

Perhaps it makes sense to speak of them as a single modality, which is just a fancy word for a bundle of ideas and practices. The central concern of this modality is the management of moral space in a situation where the source of oppression, and consequently harm and evil, is understood as coming both from unaccountable power and from the internalization of this oppression on the part of those below. The activist, in theory, seeks to end this situation by dismantling and/or redistributing power; the manager, in theory, seeks to manage this situation to avoid liability, but it’s an open question how much these differences of intent matter as far as the articulation of this mode of thought and practice is concerned, or even how different they really are. HR managers are people, after all, and some undoubtedly truly believe that what they’re doing advances social justice. While social justice activists typically look down upon capitalism, they often look down just as much on anticapitalists, especially considering that the history of the movement – and the history the movement tells to itself – heavily involves remembrance of racism, sexism, and other perfidies on the part of socialists, communists, and others who would overturn management’s applecart once and for all, whether or not said applecart is run by people invested in all the best anti-oppressive causes.

So, where does this leave us? And more importantly for this long-suffering audience, what on Earth has it got to do with the Beyonce dustup with which I began this lecture?

First, an important late development in the story of the social justice modality is the migration of the locus of its discourse away from universities and small activist groups from marginalized communities – though of course it’s still present there – and towards loosely-knit informal internet communities, or what could just be called “The Internet.” This migration was spurred by the way the internet makes communication easier in both a positive and a negative sense: it enabled the sharing of ideas between social justice proponents, but also brought about an explosion in rampant public displays of racism, sexism, homophobia, transphobia, and just general shittiness that the anonymity and playground ethics prevailing on much of the internet allow and encourage. Especially given the climate of the internet’s early development – the nineties, when basic politeness was treated as “politically correct” malarkey; the reactionary jingoism of the Bush years; the explosion of open racism with the election of Obama – it makes sense that many young people would be repulsed and would search for ideas and practices that promised not only to combat the “isms” but to place that combat at or near the center of moral existence.

This shift in the locus of discourse led to some important changes in the way social justice understands the relationship between pedagogy and its moral imperative. Let me use an ecclesiastical metaphor here. The model of pedagogy that obtains in the formal institutional expressions of antidiscrimination – the harassment policy compliance workshop, for example – could be said to operate under a Presbyterian model. A group of people whose job it is to parse out the implications of a legal/moral imperative – be it that of social justice or that of an angry Scots God – pass on the word to their respective flocks, and the flocks toe the line, or else they’re out of the congregation and/or the job. This functions to keep the community as a whole on the right side of the ineffable workings of grace and/or discrimination law. What obtains on the internet and in other more loosely-regulated moral communities could be called a Congregational model. The community attempts to steer itself into the port of grace, understood in the Christian or social justice sense, and is in the end only accountable to itself and to the demands of a moral standard which makes much of its own intentional difficulty and lack of comfort. Anyone can theoretically lead the congregation, but anyone, leader or not, could also be given the boot by popular consensus if they threaten the delicate effort to build a righteous community. One trait the more freewheeling internet social justice proponents share with their more staid cousins in managerial professions is that both concern themselves with visible signs of potential liability and/or internalized oppressive attitudes. I think this – along with the fact that people just like talking about celebrities – helps explain why moral/political dissections of pop culture artifacts – and further dissections of the dissections, ad nauseam – have taken on the importance they have in social justice circles.
Your attitudes to the pop stars of today can be read as visible signs of your relationship to an overwhelming moral imperative. If this is the case, and if your relationship to the imperative makes up part of the moral space in which we all answer to the imperative, then naturally concerned parties might be less than restrained in attempting to manage your choices and conduct – and will have vastly differing ideas of how this management should go, to boot.

So, what good can come out of thinking about social justice as a modality of management? For me, I tend to think of it in historical terms; it might advance our notions of how the history of ideas works. Pursuant to that, and maybe to throw some bone of judgment to my patient audience, let me say this: we need to think seriously about the implications of using a morality in the place of a politics. Morality and politics don’t exist except with reference to each other. Morality gives politics a purpose – politics is needed to figure out how to apply oneself to this purpose in a world where morality does not always prevail. Social justice discourse implicitly uses as its political theory the techniques developed among activists, academics, and bureaucrats attempting to wrestle with the moral and legal dimensions of social change. When politics becomes solely a question of the possession of moral legitimacy, then when a crisis presents a situation that demands unified action, what you get instead is infinite regression: debate over the debate over the debate over who can say what when and who decides the deciders. Stained as they are by association with oppressors past, most political theories can be, if the interlocutor so chooses, dismissed out of hand in favor of referring once again to the moral principles of antioppression, thereby surrendering many of the potential ways out of this loop while further reassuring the interlocutor of their own moral purity. Given the long relationship between activist and legal/bureaucratic methods of dealing with the politics of diversity, and given their history of sharing metaphors and practices amongst themselves, I think in the vacuum of a conscious political theory, the politics of the management of moral space developed by human resources will continue to act in a political theory’s place.
In the spirit of community self-definition that motivates so much of social justice discourse, I leave to the listeners themselves to decide whether this is a satisfactory state of affairs.



Tonight, I’m going to tell you two stories, linked by a man who looms large in both, despite the fact that he was dead for decades when the stories occurred. That man is Herman Melville, known today as one of the great American writers, largely on the strength of his magnum opus, Moby Dick. The first story I will tell you is how Melville’s work was taken from the obscurity in which it languished at the turn of the twentieth century and made into one of the foundation stones of American culture’s image of itself at the time when American power and prestige were at their height. The other story will be about how the image of Melville, as a man and a cultural symbol, came to be used by people who understood America in ways violently opposed to those who saw themselves as guarding Melville’s legacy. The first story has been told before; it takes place largely within the critical establishment and in academia, and its effects are readily apparent to anyone who has taken an American literature or American civ course and is inclined to mull upon the experience. The second story is unfortunately ill-sourced; the questions I hoped it would yield have not, to my knowledge, been asked; and it takes us out of academia and into a world of bombings, betrayals, and desperate doomed rebellions. The two stories rely on each other, and both rely upon a man long dead, obscure in his own life, and a provider of no easy answers. Within the range of causes Melville has been conscripted to serve in death lie some of the basic quandaries of the American twentieth century.

In his own life, to the extent he was known at all, Herman Melville was known as the creator of picturesque romantic sea stories. Born in 1819 in New York to a family of good social standing that lost all of its money when he was a boy, Melville went away to sea as a young man, sailing to the South Pacific and working on whaling vessels. He came home to New York and wrote two novels based on his experience as a castaway on remote Pacific islands: 1846’s Typee and 1847’s Omoo. These two books were hits, playing to an American audience yearning for tales of noble savages and preindustrial natural splendor. With the proceeds, Melville purchased a farmhouse in Pittsfield, MA for his expanding family, and divided his time between it and the developing New York literary scene. This was the high point of Melville’s career while living.

There are a few explanations for the decline in Melville’s fortunes. There’s little getting around the fact that his third novel, 1849’s Mardi, was a near-unreadable brick of a book, a sprawling picaresque allegorical fantasy sold to readers as another South Pacific yarn. It was a commercial flop and even sympathetic critics treat it gingerly today. Flop or no, Melville continued writing, publishing, and failing. His greatest work, 1851’s Moby Dick, failed to sell its original print run; “Billy Budd,” considered his greatest short work, was written for his desk drawer and only published posthumously. One of Melville’s twentieth century immortalizers, the historian Perry Miller, chalks up Melville’s fall to the rough and tumble literary politics of mid-nineteenth century New York. In his telling, there were three factions in the American literary world before the Civil War, and Melville fit into none of them. He was too democratic and passionate for the gentlemanly Anglophile whigs who ran the New York journals; he proved too philosophical for the Jacksonian literary nationalists in the Young America club to which he once belonged; and he was too earthy, too Jacksonian, and one suspects too New York for the transcendentalist coterie grouped around Emerson and Thoreau up here in New England. Nineteenth century literature, like prison, was a bad place to not belong to a gang. For whatever reasons, Melville was never considered a major literary figure in his lifetime after the failure of Mardi. Politics wasn’t all bad for him, though: patronage got him a position in the New York Customs House, where he made enough to support his family (though not enough to keep the Pittsfield house where he wrote Moby Dick) and while away his time on this earth until he died in 1891.

Interest in Melville’s work began picking up again in the 1920s, thirty years after his death and seventy years after Moby Dick flopped. The inciting incident for the Melville revival was the discovery of “Billy Budd” among Melville’s papers and its publication in 1924. Much of the initial interest in Melville at this time came from Britain; an English publisher first released “Billy Budd,” and D.H. Lawrence praised Melville as a great American writer in his book of essays on American literature in 1923. This interest rapidly found its way across the pond, where Melville found himself posthumously drafted into a later, and higher stakes, version of the battle that destroyed his career; the battle to define American literature.

None of the three literary factions among whom Melville came to grief in his own life existed in any meaningful form seventy years later. Outside of the academy, American literary life lived under the shadow of the critic H.L. Mencken and his coterie, clustered around Mencken’s magazine, The American Mercury. Mencken’s milieu had the social elitism of the Whig critics without their passion for propriety; the urban earthiness of the Jacksonian democrats without their populism; and the interest in foreign philosophy (especially Nietzsche) of the New England transcendentalists with none of their belief in the improvability of man. Their basic m.o. was to sneer at those dumb enough to believe in anything, be it Christ, Marx, American exceptionalism, or the League of Nations, illegal cocktails in hand, and the smarter ones, like Mencken, were sharp enough to parry all comers. More than ideas, Mencken and his set represented an attitude and a style, which many youths with literary pretensions attempted to make their own during the long boom of the 1920s.

You wouldn’t believe it nowadays, but time was people found it distasteful, or perhaps dangerous, to affect idle-rich pseudo-intellectual superiority in the midst of a crushing economic downturn. The 1929 stock market crash and subsequent depression spoiled the party for Jazz Age cutups like Mencken and showed that there were more serious problems in American life than laughing William Jennings Bryan off the national stage. Mencken became increasingly shrill – and decreasingly funny – as FDR dismantled his nineteenth century idea of elitist freedom and became more popular than him in the bargain. The fear and anger created by depression conditions inspired a general societal lurch away from what was then the political center, and anticapitalist sentiment and worries over the rise of Fascism in Europe sent much of that lurch leftwards. Literary culture followed suit. The Communist Party reached the height of its popularity in America (it was never that popular, but it did a bit better with intellectuals and artists than with the population at large), and Popular Front sympathies were widespread. Steinbeck’s depictions of impoverished migrant farm workers displaced the glitz and tragic navel gazing of Fitzgerald on the country’s literary stage.

“Where does the Melville revival enter into all of this?” the long-suffering listener could be forgiven for asking. The truth is Melville was not especially important to either of the tendencies I just described. I’ve found little reference to Melville in Mencken’s voluminous writings, but Mencken definitely pooh-poohed American literary nationalism and the universalist pretenses of writers of Melville’s vintage. The leftist cultural formations of the 1930s had more interest in Melville – he did, after all, write about working whalers who caught real hell off of their bosses – but realism was in vogue among Popular Fronters and Melville could not be called a realist. The group that came to take ownership of Melville’s legacy was defined in large part by their reactions against the literary and broader cultural and political trends of the 1920s and 1930s. For ease of reference, we can call this group the American Studies scholars.

This cohort includes a list of people whose names might be familiar to you if you read American literary criticism from any point from the ‘40s to the ‘70s, or paid close attention to the notes in your American civ course readings: F.O. Matthiessen, Van Wyck Brooks, Alfred Kazin, Leo Marx, Perry Miller, Vernon Parrington, Max Lerner, Henry Nash Smith, the list goes on. These scholars mostly came of age between the ‘20s and the ‘40s. They were all white and many of them were among the first cohort of Jews to attend prestigious American universities. They were mostly men. Most of them dabbled, to one depth or another, in some of the ideological trends of the day, mostly leftist ones. Several of them were Communist Party members at some point in the ‘30s or else were fellow travelers, though their involvements were typically short-lived. Most of them recoiled against the Party, against Communism, and against anything they understood as extremism, which is to say against most of mass politics in the period between the world wars. What they devoted themselves to, in the place of the defined ideologies of the day, was an idea of America largely of their own creation, and to a shocking extent they managed to make their idea of America America’s idea of America.

For a group of literary scholars who preferred close reading of novels and poems over discussing the social contexts in which literature is produced, the early American Studies scholars had some fairly transparent ulterior motives. Most of them agreed with a school of thought emerging in political science at the time, the totalitarianism school. The totalitarianism school held that the mass violence of the twentieth century was generated by political extremism, and that extremism was generated by the alienation of people living in modern mass society, cut off from tradition, unequipped to deal with the pace of change, dulled by numbing routine and anodyne pop culture, and hence easy pickings for any glib-tongued demagogue who came along and gave them someone to blame. The rise to prominence in depression-era America of the Communist Party, populist demagogues like Huey Long, and pseudo-fascists like Charles Coughlin, along with an array of ineffectual but noisy Nazi groups, cults, and quacks, affirmed for the American Studies coterie the idea that mass man, left to his own cultural devices, would destroy himself. Next time, the reasoning went, there might not be an FDR to keep things copacetic.

American Studies sought to combat this problem and their ideological foes less by direct confrontation and more by rhetorical positioning. Instead of the Popular Front’s fixation on contemporary social relevance, the American Studies scholars emphasized the continual cultural relevance of a canon of American literary works. As distinct from the Europhile, right-leaning literary classicist tradition that dominated academic criticism in the early twentieth century, they stressed American exceptionalism. Being made up of Jews and intellectuals, the American Studies writers did not indulge in nativism, and did not dismiss European culture, but the central point was that American culture brought something unique and universally relevant to the table. They disputed amongst themselves as to what that something was exactly, but on aggregate they agreed that American civilization was the working-out of western man’s drive toward freedom, democracy, and equality (in that order), oftentimes in spite of the wants or actions of Americans themselves. They saw this process as working itself out in the canon of American literature that they compiled. For all of their talk of America as a uniquely democratic culture, the first American Studies scholars had little interest in popular or offbeat culture, and their sources were all big-name writers. Most of them fell into two categories, sometimes facetiously called “testaments.” There’s the “Old Testament” of the “American Renaissance” of the mid-nineteenth century – Hawthorne, Poe, Emerson, Whitman, Thoreau, Melville – and the “New Testament” of early twentieth century American modernists: Faulkner, Fitzgerald, Hemingway, Stein, Dos Passos, Steinbeck, etc. There was a sprinkling of writers from the period in between, as well: Mark Twain, Henry James, and our old friend from last year’s lecture, Henry Adams.
Read these, the American Studies scholars told whoever would (or, in the case of generations of college freshmen, were compelled to) listen, and you would understand what America was about. You would see it, and say that it was good.

The American Studies scholars were hardly the first to claim to define American literature, of course, nor were they the last. They were, however, in a unique position of power in the ‘40s and ‘50s. Their interest in promoting a unitary national culture as an alternative to threatening political ideologies was shared by many in the government. As the country geared up for the Cold War, fears of cultural subversion and of a return to the perceived cultural drift and chaos of the prewar years spread widely in Washington circles. This coincided with the massive expansion in higher education that followed the passage of the GI Bill and the beginnings of government support for research of all kinds. We usually think of government assisted research in terms of hard science, and for good reason, but the Cold War national security state also had its hand in the pie of the social sciences and even the humanities. Hell, the CIA bankrolled much of abstract expressionist painting. The CIA also funded some of the early American Studies programs – whole academic departments dedicated to the agenda of the scholars I’ve been telling you about – such as the one at Yale. This yielded the amusing spectacle of CIA men, suborned academics, and rich (often right-wing) donors wrangling with each other over how best to spend secret government funds in order to push one or another rigidly programmatic vision of how free and democratic American society is. This funny irony often had sad results, though. No matter how far they ran from their youthful leftism or how sincerely they taught American exceptionalism, most of the American Studies scholars held to an idea of America that was too gentle, too cosmopolitan, and too intellectualized for right-wing nationalists in the funding foundations and in Congress. 
Moreover, to the right wing, a Commie was a Commie, ex- or no, and a number of ex-Communists were chased out of the field which they had created in no small part as an expression of their repudiation of Communism. This included F.O. Matthiessen, author of the classic American Renaissance, who had the misfortune of being both red-baited (he was considered a fellow traveler) and lavender-baited (he was gay). His career ruined, he killed himself in 1950.

Sad stories like that aside, the American Studies paradigm and the early Cold War state went together like peanut butter and chocolate. From their perch in important academic positions and watered by foundation and government money, the American Studies scholars propagated their idea of American literature far, wide, and deep. Their American canon was widely taught in universities and high schools across the land. The Baby Boom generation was taught what America was from a variety of sources – their relatives, movies, tv – but the official version came from the American Studies paradigm, and that meant exposure to Herman Melville.

What did the American Studies cohort see in Melville that Melville’s own peers did not? Part of what Melville brought to the table could be called highbrow insurance. Melville’s later works, especially Moby Dick and Pierre, could be seen as prefiguring literary modernism, both in style and in content. American critics need not feel ashamed in front of Joyce, Proust, and Sartre, they could tell themselves, when an obscure American was prefiguring all of those guys’ moves seventy years before the fact. Melville was a useful bulwark against charges of American parochialism, too, dealing as he did with universal themes on very broad canvases. These literary qualities were what the American Studies critics focused on – they made much of their strict focus on texts, ironic given the level of attention they paid to their own societal context – but there were political implications in Melville’s work that the American Studies critics liked, as well. After all, what better symbol for totalitarianism than Captain Ahab? Here’s a man who gathered together a disparate mass of men and unified them in pursuit of a mad dream, leading them to willingly participate in their own destruction. This was especially congruent with the understanding of midcentury liberals that totalitarianism was only incidentally a problem of this or that historical circumstance or set of ideas, but was fundamentally a problem of humanity, one that couldn’t be solved but could be managed, if one learned how to live and think the right way. Ahab was a useful figure to point to for those who would make ideology a matter of psychology.

And so Melville was incorporated into the American canon, at least in part due to his merits; the American Studies critics, for all of their ulterior motives, weren’t slouches at reading and appreciating literature. Moby Dick and other weighty works of Melville’s terrorized, fascinated, or bored millions of college and high school students from that day to this, and for at least some of those students, these works entered into their idea of what their country was about.

But as it turned out, for all of their occupation of the commanding heights of American criticism and for all of their official backing, the American Studies scholars were incapable of controlling how Herman Melville, a man they ushered into the national consciousness, would be understood and used. And unlike most disputes over what literary figures mean, this one did not restrict itself to rudely-written journal articles.

Signs that the Melville consensus might not hold started to show early on. For one thing, Melville was, as far as I know, the only writer in the American Studies canon that people went so far as to rename themselves after; I know of two examples. The first was born Jean-Pierre Grumbach, but is better known to the world as Jean-Pierre Melville, director of such classic films as Army of Shadows and The Samurai. Before he became a famous film director, Grumbach was active in the French Resistance, where he took on the code name Melville, after his favorite author. He kept it after the war. His films weren’t really message films (especially given the overheated political context of postwar French cinema), but they do have a darkness and ruthlessness to them – in part inspired by his war experiences – that is not entirely in keeping with the canonical project the American Studies scholars made Melville’s namesake a part of. That said, no one conversant with both Herman and Jean-Pierre Melville can say the name does not fit the younger man, or that Jean-Pierre Emerson or Jean-Pierre Hemingway would make any more sense.

Within the world of literary criticism, serious leftist attempts to claim Melville came about just as the American Studies paradigm was taking hold. Imprisoned on Ellis Island – almost under the feet of the Statue of Liberty – awaiting a deportation hearing, the great Trinidadian radical scholar C.L.R. James wrote his interpretation of Melville, entitled Mariners, Renegades, and Castaways. James agreed with the American Studies scholars that Moby Dick was a prescient study of totalitarianism, but he turned the argument around on its originators. Ahab doesn’t represent a platonic, trans-ideological form of psychologized tyranny; he represents the highest product of advanced industrial society, the “managers, superintendents, executives, administrators” that brought about great advances but who also, by their very nature, sought to bring all under their control. James saw this dynamic at work in the United States, which had just imprisoned him for subversion, as much as in the Soviet Union, whose Communist Party considered him a non-person for questioning the Stalinist party line. Unlike the American Studies scholars, James did not see Ishmael as a symbol of American innocence and the desire for freedom, but rather as an intellectual enabler of Ahab’s tyranny, analogous both to the party-line Communists he was used to fighting and to the piously liberal American scholars who cooperated with McCarthyism. Hope comes from the titular “mariners,” “renegades,” and “castaways” who made up the crew of the Pequod, a self-contained world of hard work, unpretentious reason, and rough-and-ready democratic bonhomie, though one dangerously vulnerable to the Ahabs of the world if not properly wary and organized.

After six months at Ellis Island, James was denied a visa extension and shipped off to London. The American critical establishment did to Mariners, Renegades, and Castaways what it so often did and does to books that seek to force reckonings it’s unprepared for: it ignored the book. It would take the upheavals of the 1960s to partially dislodge the American Studies interpretation of Melville from its status as semi-official dogma. Many of the new generation of critics coming of age at that time took part in the upheavals, notably H. Bruce Franklin, a Maoist Melville scholar involved in numerous building take-overs at Cornell and Stanford. But the most spectacular reappropriation of – or assault on, if you prefer – Melville’s legacy during the ‘60s took place outside of the academy. This is where the second man who rechristened himself “Melville” enters the story.

The man known to history as Sam Melville was born Sam Grossman in a hospital in the Bronx in 1934. He grew up poor around Buffalo, New York, and eventually became an engineer. A restless and by most accounts charismatic man who felt stifled by American society in the ‘50s and early ‘60s, he became involved in leftist political causes and by and by “dropped out” of conventional society and became a full-time movement figure in New York. He participated in anti-war protests and worked in the underground press. Most accounts of Sam Melville depict him as having been impulsive and action-oriented: to those of us who have been to political meetings, he was the guy who’d always say “all this talk is bullshit, let’s go do something, anything.” It was in this spirit that he began taking progressively riskier direct actions, starting with harboring Quebecois militant fugitives and culminating in stealing dynamite and planting bombs in locations thought to be part of the American war machine, mostly banks and draft induction centers.

I would call Sam Melville, from as close to an objective level as I can get, the most serious of the New Left bombers, but that doesn’t mean all that much. Bombing was a desperate, ill-considered tactic to begin with, the product of facile young people deeply invested in proving themselves deeply invested. Few of the New Left bombers, to my knowledge, intended their bombs to kill – they were meant to materially slow the war effort by sabotaging its physical plant – but kill people they did, most often themselves by accident. Bomb-making is no activity for amateurs. With his engineering background, Sam Melville proved a capable bomb-maker and was conscientious (as far as any bomber can be said to be) about collateral damage; none of his bombs killed or seriously injured anyone. But he was impulsive, given to making snap decisions, and was almost entirely uninterested in taking even preliminary security precautions. He talked about his deeds with people he barely knew and endangered himself and his partners, a group that consisted of some of his lovers and various pals almost randomly recruited from the New York activist scene. Judged from the perspective of urban guerrilla strategy, Sam Melville could be seen as a man of useful nerve and technical skill, but in severe need of a disciplined organization. No such organization existed in the ‘60s left in America.

Sam Melville was caught because he made a classic mistake: he trusted a hippie. In this case, though, the hippie, one George Demmerle, was a member of the far-right John Birch Society who had taken up a purportedly freelance agent provocateur gig; by all accounts he enjoyed the opportunities his assumed lifestyle afforded him. Demmerle met Sam Melville at Woodstock, gained his confidence, and steered him into the hands of the police. Melville’s associates went underground (including his ex-lover Jane Alpert, from whose memoirs most of our information on Sam Melville comes), and Sam Melville was sentenced to eighteen years in prison. Less than two years after his arrest, he was killed after taking part in an uprising that briefly took over Attica prison.

I don’t quite remember where I first heard of Sam Melville, but as I became more interested both in Herman Melville’s work and in the legacy of the 1960s, the question of what Herman Melville meant to Sam Melville lingered with me. Direct evidence on this question is scanty. Sam Melville died without writing a memoir, and his published letters tell us little about his name change. Jane Alpert, arguably the person who knew him best, wrote that Sam told her Melville was his mother’s maiden name, but Alpert didn’t buy it, knowing that Moby Dick was Sam’s favorite book. Her claim is backed up by radical attorney Bill Kunstler, one of the last people to see Sam Melville alive. Kunstler was brought in to try to negotiate a settlement with the Attica rebels, a settlement that never came. There, he apparently got to talking with Sam Melville in the occupied prison yard, and inquired after his name. Melville told Kunstler that he had taken the name, inspired by Moby Dick as Alpert claimed. In his interpretation, the white whale, not Ahab, was evil, and the doomed quest to destroy the whale was a noble, if quixotic, one – and the fact that Ishmael, the lone survivor of the Pequod, went back to sea in the end was highly relevant. For Sam Melville, his namesake Herman was a prophet of cyclical, doomed, but existentially necessary war against an evil, elusive, and ultimately unstoppable enemy. The struggle is the point – not the victory.

Like both men who rechristened themselves in his image, Herman Melville left no memoirs and no programmatic statement of what he meant to achieve. I believe part of the attraction towards Melville on the part of literary scholars is this gnomic quality; that, combined with the volume and complexity of his work, furnishes a lot of ground for critical exploration. I can’t tell you whether Sam Melville’s reading of Moby Dick is valid or not, though I will say that the idea that Captain Ahab is a hero strikes me as highly specious. I don’t know Melville’s work as well as I’d like, but there is one quote that I think is relevant here. The context was a letter, written by Melville when he was still a young-ish literary striver, to Evert Duyckinck, his patron back when Melville was still on the right side of New York’s literary Jacksonians. Melville was trying to explain to Duyckinck why he had committed a serious faux-pas: expressing approval of Emerson, a transcendentalist and anathema to Duyckinck and his posse. Melville explained himself thusly: “I love all men who dive.”

Melville, in the end, was truer to his dive into the murk of human existence than was Emerson, and he paid for it in life and received for it the dubious benefit of posthumous acclaim. The American Studies scholars may have used his work to construct a politically-motivated and rather limited canon, but they were also genuine lovers of literature, and they seized on Melville at least in part because of his willingness to give in to the unknown. If nothing else, they needed to prove that an American writer could do as well as a European.

But to dive is to willingly give oneself up – irrevocably, if not completely – to forces you cannot control. In the physical act the diving metaphor is derived from, that force is gravity, the attraction pulling bodies towards bodies of greater mass. To dive is not to surrender entirely to gravity – the diver needs to decide when and where to let go, and what to do as they fall – but there is an acceptance of what may come as a consequence of the decision to seek things that can only be found in this dangerous way.

In sanctifying a man committed to the dive as a part of the American canon, the American Studies scholars were enshrining a way of being that led far outside of the circumscribed orbit of their ideas, and at times outside of the orbit of reason or morality. The same aspects that make the highest accomplishments of culture a salve, an ornament, or a tool of social order can also lead people outside of these concerns, and into very murky waters. Those who would know culture had better be prepared to dive deep themselves.



When I was an undergraduate at Marlboro College, a lot of my historical reading was self-directed. For reasons now opaque to me, I took it upon myself to read many of the historians of the “Consensus school” of American historical writing: Richard Hofstadter, Louis Hartz, Daniel Bell, etc. To put it briefly, the consensus school believed that American political history is defined by a consensus among all responsible parties on basic political issues. Whatever disagreements Americans may have had, the consensus scholars argued with varying degrees of sophistication and smugness, Americans from the beginning basically agreed on liberal politics, democracy (but not too much democracy), free (but not too free) markets, upward mobility, and individualism. Failed attempts to divert the country away from these principles, undertaken by both the right and the left, only strengthened the establishment that adhered to these basic American principles. By the heyday of the liberal establishment in the late 1950s and early 1960s, the comparatively placid politics and mass prosperity of the post-McCarthy, pre-1960s-revolt era seemed to vindicate the consensus scholars’ perspective. Theirs was a history with a happy ending. The 1960s, of course, upset the applecart, and the consensus scholars reacted in a variety of interesting ways, but for a good decade they sat at the top of the American historical profession, and dominated sociology, literary criticism, and other fields as well. Some of them are still worth reading today, if you’re into old, kind of outdated books.

One name that came up a lot as I read these works was that of Henry Adams. He was seldom the focus of much attention, but his name cropped up again and again in these old books, usually attached to one of two things: a pithy and learned observation on 19th century American politics and society, or a grossly antisemitic remark. Before seeing his name all those times, I knew the name Henry Adams from two sources. The first was the top of the Modern Library’s 100 best nonfiction books of the 20th century, where sits The Education of Henry Adams. The second was the bookshelves of grandparents of friends of mine, especially if they were of a certain WASP-ish demographic, where the name Henry Adams was one of a whole roll call of names I would eventually run into in my historical reading: Van Wyck Brooks, Richard Henry Dana, Edmund Wilson, and still more Adamses, such as Charles Francis Adams and Brooks Adams. So it was with a variety of associations that I went in search of answers to the questions: who was this Henry Adams guy? Why did scholars write about him fifty or sixty years ago as though their readership would know and care about him? Why does no one talk about him now?

Here’s a brief rundown on basic Henry Adams facts: born in 1838 in Quincy, MA. Great-grandson of John Adams, second President of the United States. Grandson of John Quincy Adams, 6th President of the United States. Son of Charles Francis Adams, Congressman and ambassador to Britain during the Civil War. Henry served as his father’s secretary while he was overseas and thus spent the entire war in Europe, where he was witness to a great deal of diplomatic maneuvering and back-and-forth as his father strove mightily to keep Britain from recognizing or aiding the Confederacy. After the war, Henry engaged in several pursuits, bouncing from one to the other without being sure what he really wanted, but being insulated by his family position. He pursued journalism and reform politics – then largely the reserve of gentlemanly elites such as himself – in tandem, writing pieces on reform efforts of the day: currency, civil service, tariffs, etc. He also wrote novels related to this subject, politely (but not gently) lampooning the lax ethical and intellectual standards of politicians and expressing the distrust of democracy that later became one of his distinguishing intellectual traits. In the 1870s, he worked as a history professor at Harvard, where he was instrumental in bringing the seminar model and other modern historical techniques from Germany to the United States, and he wrote several important works of American history. He had, by all accounts, a happy marriage with Clover Hooper, and no children. His marriage ended abruptly and horribly when Clover killed herself shortly after her father’s death. Grief-stricken, Adams took up travel and delved into areas of art, history, and religious expression far afield from those with which he had previously engaged.
He wrote about art and medieval history; he doted on nieces, some real, some titular; he was a sought-after figure in elite social circles that fancied themselves cultured, and was widely well-regarded as a sage, though also seen as increasingly eccentric and pessimistic. In 1907, he wrote his autobiography, The Education of Henry Adams, which was released only to people he knew until he died in 1918, after which it was published and hailed as a masterpiece.

The Education is the definitive statement of Henry Adams’s career, and is the logical conclusion, in style as well as in substance, of the last and most important stage of his intellectual development. Stylistically, the book, written in third person past-tense like a historical monograph, manages some impressive feats: giving the impression of encapsulating the experience of a generation while emphasizing its author’s separateness from his peers; of telling the story of a man through the framework of his world and times and telling the story of the world at a given time through the framework of one man’s experience; and of giving the impression of intimacy with the author even as the author admits to self-serving omissions and obfuscations (though not as many as Adams’s biographers would later find). It also included previously unknown revelations about the diplomatic situation with Britain during the Civil War, some digs at personal and political opponents, a theory of history, and periodic descents into crude antisemitism. In short, it is a real piece of work.

In substance, The Education of Henry Adams is the most profound statement that I know of of cultural pessimism in the history of American letters. Read unsympathetically, The Education is the final screed of a deeply privileged man who didn’t like assorted aspects of a world which had passed him by, and who used the talents and education that privilege secured him to fancily dress up the sort of complaint most of us are used to hearing from older relatives or acquaintances about how the world is going to hell. That assessment is basically correct, and Adams will frequently leave contemporary readers unsympathetic or just plain angry. That said, I believe it is worth looking at the shape his complaints took, both to understand better what their context was and because the way in which Adams structured his complaints is important to understanding the significance his later readers granted to his work.

Adams was a great – and, I’ve been told, somewhat inept – borrower of scientific metaphors. As such, the basic metrics in Adams’s understanding of history were borrowed from the physical sciences: energy and valence. Adams held cultural energy to be akin to physical energy, and to be basically finite, and further held that his period was running out of it. How someone could maintain this conclusion during a period that saw the harnessing of electricity, the invention of flight, and the development of untold other scientific and cultural innovations is where order and coherence come in. Energy without order, in Adams’s view, is chaos, and Adams understood the developments of the late 19th and early 20th centuries – damned near all of them, scientific, technological, economic, political, social, artistic – to be detrimental to an orderly, coherent understanding of the world, and thus harbingers of cultural – and, it is alluded, general, society-wide – chaos.

Adams was far from alone in thinking something like this at the time he felt it. Many beneficiaries of the nineteenth century’s changes felt the same way. Historians have advanced several explanations for the phenomenon of widespread elite anti-modernism. Some, most notably Richard Hofstadter, emphasized a social-psychological explanation: those elites least happy with the state of late nineteenth century America were those whose positions relied on something other than money alone – social position, education, etc. As big new money muscled small old money out of power and social prominence, small old money reacted by forming reform movements and/or seeking out other value systems. A somewhat more straightforward social explanation is the idea that elites of the 19th century were simply scared by technological progress, and even more by the specter of social degeneracy that might lead to the elite falling from their position, possibly with the help of insurrection from below. There are other explanations, too, but I’ll spare you. The point is, Adams was an outstanding example of a recognized social type: the late nineteenth century upper crust WASP who grew dissatisfied with his surroundings (regardless of how much privilege they afforded him) and sought meaning outside of western modernity as it was then understood. Some of that type went in for primitivism, others got into Eastern spirituality (particularly Buddhism), and others, like Adams, turned to medievalism, and the (supposedly) serene, spiritually rich, unquestioned and unquestionable hierarchy provided by Catholicism. Adams never converted to Catholicism, but he went to his grave believing that the high point of human civilization could be found in France in the twelfth century, where the cult of the Virgin Mary provided an energy more powerful than the nineteenth century’s dynamos to a civilization vastly more coherent than McKinley’s America.
Adams grew to hate capitalists, workers (especially those with the temerity to organize), and those usual figures of disintegrative modernity, Jews. In none of these ideas was Adams alone. It’s worth noting that part of Adams’s rhetorical strategy was to cite himself as an example of this decline: he neither wanted to, nor was able to, “follow the family go-cart,” as he put it, into responsible political positions and national prestige, and neither he, nor the rest of his elite cohort, could stop the decline that he saw all around him.

So, elite reactionaries reacting against the things – capitalist modernity, to put it simply – that made them elite is not a thing without precedent. If you read material from the time, it seems like everybody (everybody with a little money, that is) was doing it. Why was Adams picked out from amongst the whole gaggle of elite anti-modernists that America in this period produced to be representative? Why did historians and critics care? Why do I?

There is, of course, one obvious answer: because his work was better than that of his peers, and there was a lot of it. There’s some truth to this assertion – The Education is certainly impressive – but I don’t think I need to spend too much time with this audience stressing how hard it is to match quality to posthumous acclaim.

A tentative answer came to me, I feel ambivalent about reporting, on a recent visit to the Isabella Stewart Gardner Museum. Few cities are graced with one place that nearly everyone acknowledges as the most beautiful in the city; Boston is one, and its place is the Gardner. Briefly, for anyone in the audience who doesn’t know of it: the Gardner museum was the home of Jack and Isabella Gardner, very wealthy Bostonians of the turn of the twentieth century. Isabella was a great lover of art, and by and by turned her entire home – walls, floors, and furniture included – into a mosaic composed of great art from the past. The Gardners lived in this monument to art collection, walking on Roman floors and sheltered from the elements by Gothic walls, surrounded by masterpieces of painting, sculpture, and design, until they died, after which the house was made a museum open to the public. I strongly suggest you visit if you haven’t.

Education has made my enjoyment of many things more ambivalent. It did not ruin my visit to the Gardner a few months back, but visiting after having finished Ernest Samuels’s biography of Henry Adams, I had a new and not entirely welcome perspective on the place. I knew, now, that Isabella Gardner was guided in her acquisitions by Bernard Berenson, one of the great curators and art critics of his day (no, I hadn’t heard of him either before I started reading this stuff). Berenson was, in turn, deeply influenced by the aesthetic principles of – who else? – Henry Adams, with whom Berenson struck up an odd but persistent friendship, in spite of Adams’s increasingly vocal antisemitism. Berenson was a Jew, albeit a Christian convert; he was conservative politically and artistically, and had an eye for art that even a critic as stern as Adams could appreciate. Most importantly, he was a high-minded and extremely patient man, and thus able to put up with Adams long enough for the older man to help shape the task Berenson took on of directing their mutual friend Isabella Gardner’s art collecting, and the creation of her sanctuary. When you know this, and know Adams, and know what his aesthetic principles meant to him and many in his circle, the Isabella Stewart Gardner Museum looks rather different. It loses none of its beauty, but the enclosed chambers, each themed after a different idealized portion of the medieval or early modern European past (there is very little in the place from later than the sixteenth century, and nothing that could be called “modern” except the utilities), now look like both triumphs of design and a series of efforts to block out a present made unpleasant and unworthy by democracy, the extension of rights to previously unfree groups, and the rest of modernity’s baggage.
The intricacy of the construction of these spaces speaks both to creative genius and meticulous attention to the details of craft, and to the lengths to which a privileged and unhappy few would go to create a counterworld deep and consistent enough to let them forget about the real one.

By stipulation of Gardner’s will, the permanent exhibitions will never change. The only change that I know of was made by the perpetrators of a heist there in 1990. Unless something drastic happens, the museum is fixed the way Isabella, Berenson, Henry Adams, and the rest of their circle wanted it, forever. This is in many respects a good thing: the museum is beautiful as it is. Combined with what I now knew about some of the animating principles of the place’s creation, this fixed quality put one word in mind the fine June day I last visited: tomb.

None of this should work to the detriment of the museum or even to Isabella Gardner. By all accounts she was a generous, free-spirited, whimsical lady (she once shocked polite Boston society by showing up at the Opera wearing a white headband on which was inscribed “Oh, those Red Sox”), who gave her home and collection to the world freely after her death. But the trip led me to think about the great creations of Henry Adams’s major phase, after the death of his wife, the time during which he became a cultural icon, and it occurred to me that all of those creations could be seen as tombs of one kind or another. In one case, a literal tomb: after his wife’s suicide, Adams commissioned his friend, the great sculptor Augustus Saint-Gaudens, to create what is now called either the Statue of Grief or the Adams Memorial in Rock Creek Cemetery in Washington D.C. Adams was buried by it when he died in 1918. It is still cited as a great and important piece of monument sculpture. It’s worth seeing if you have the chance.

Perhaps informing the idea of Adams as a builder of tombs that came to me at the Gardner museum was the memory of my visit to the Adams House in Quincy, where I went one dull day last summer. Several generations of Adamses lived there, and Henry was born there, though he lived most of his life in D.C. Henry, though he had many ambiguous feelings about the house and his family’s legacy, was, along with his brothers, instrumental in getting the house designated a historical landmark, one of the first places to be so designated and protected in the country. If the Gardner museum preserves in amber a fantasy of antiquarian grace and order, the Adams House does the same for the Adams legacy, which the brothers believed to be in danger of being overrun by a century uninterested in their brand of elite leadership. If you go out there you can see, for free, the Adamses’ preferred built environment, their furniture, their books, and the biological descendants of the gardens they planted. Given the beliefs the Adams brothers held (I just deleted a page and a half about Henry’s brothers, who were interesting men in their own right) about where American society, and history in general, was going, it is hard not to understand this preservationist instinct as an effort to memorialize a better way of life tragically dying, and in this case, the Adamses had the chutzpah to identify that better way with themselves and their ancestors.

The most important tomb Henry Adams built was his last masterpiece, The Education of Henry Adams. Here, he collapsed eulogies into still more general eulogies, like unusually lugubrious Russian dolls. The Education is a eulogy for the political power and social pull of the Adams family, defeated in Massachusetts politics by the Boston merchant and banking families; he eulogized those same interests, along with New England’s political and cultural influence, which in the course of the nineteenth century were supplanted by larger and richer powers clustered around New York and other industrial centers; he mourned for this order, too, soon to be displaced by still larger and more centralized monied powers and threatened from without by foreign competition (he had a prescient fixation on Russia and East Asia) and from within by proletarian uprising, Jews, and other supposed symptoms of decay. Upon rereading the book a few months ago, I was impressed most of all by two things: first, the nerve with which Adams managed to make the decline in political influence (which he drastically overstated, but we’ll leave that aside) of himself, his family, and his friends a metaphor for general civilizational decay AND vice-versa AND got away with it; and second, the basic rigor Adams maintained in his self-absorption. If the whole world was going down the tubes, then I suppose it makes sense to mourn the things that you once opposed along with those you hold dear, because it was all part of the larger whole that Adams believed was in the process of destruction.

Adams was prescient in writing a memorial to himself. He died, and hence gave up his personal influence on American art and literature, in 1918, just as an avant-garde dedicated to new forms of art and literature – whom we could lump, somewhat problematically, into the category “modernist” – was poised to sweep to the commanding heights of American culture. The rumblings of cultural modernism that existed in Adams’s lifetime he either ignored or denounced in an off-handed manner. Artistic experimentation, to put it lightly, was not something he looked upon with favor. The political and social experimentation that many avant-garde figures of the 1920s and 1930s were proponents of would only have compounded his difficulties, had he been alive to see it. A right-winger could do just fine in that environment, as evinced by the careers of T.S. Eliot, Ezra Pound, and H.L. Mencken, but even though Adams’s work was respected by people across the chasms of the cultural scene at that time, he was not the center of attention he once was.

Fast forward a few decades to the 1950s, and you get a different picture. After a world war, a depression, several red scares, disappointments for both the left and the right in America, an outbreak of mass prosperity, and a campaign on the part of the US government, including the CIA, to incorporate cultural modernism into the American side of the Cold War, most of the major American intellectuals of the day, many of whom had dabbled in left-wing politics in the 1930s, decided that the political center was the place to be. It should be noted that their idea of “the center” is well to the left of what centrists hold to today: they were proponents of a welfare state and substantial governmental regulation of the economy. Here we are, back at the consensus scholars of the beginning of the lecture, at a time when they were not only at the peak of their influence, but at the peak of influence for any group of American intellectuals up to that time. This was the time that the field of American Studies was developed, with the explicit intent of creating a unitary national culture to provide national coherence during the Cold War. It was out of this field that the American literary canon was first firmly established, and college students (of whom there was an unprecedented number thanks to the GI Bill) across the land were taught that this canon was American literature, even if some of the writers on it were not especially popular in their own times (Herman Melville is the outstanding example of this). One of these works was The Education of Henry Adams – and none of Henry Adams’s other books received anything like the same treatment.

I believe that the scholars of mid-twentieth-century America picked Adams out of all of his peers, and Adams’s later work to the exclusion of his earlier work, precisely because he was a builder of tombs. These tombs served any number of pedagogical functions, and the consensus scholars were nothing if not public pedagogues. Adams’s tombs could be pointed to for vindication: look upon what happens, reader, to those who resist the current of American modernity. They could be pointed to for pathos: look at how the American elite of a bygone era fell into pessimism and bigotry. They could be pointed to with pride: look here, uppity radical or snotty European, American culture nearly fifty years ago could produce complex literary works with as much irony and tragedy as one could want! Implicit in all of this is a sense of the broad-mindedness of the consensus scholars. Devotees of optimistic liberal progressivism, they crowned (with some ambivalence, it’s true) a deeply pessimistic anti-modern conservative. Theirs was a cohort in which the first generation of Jews allowed into elite American universities in any number could discuss the work of an antisemite without prejudice. One gets the distinct impression that they would not have been as interested in the man if he were less difficult or less of a jerk; after all, many of the consensus scholars liked to play the world-weary sage at times, too. And, of course, Henry Adams was dead, his world was dead, and neither he nor they were coming back, so they were eminently safe to handle.

Who knows what Henry Adams wanted when he wrote The Education? Not his biographers, not me, probably not Adams himself, and not his midcentury interlocutors, though Lord knows they had their ideas. What I think I can speak to is this: if Adams had planned it, he could not have found an audience more ripe for co-optation into his literary scheme – the enshrining of himself and his life as a metaphor for a whole society – than the consensus scholars I read back at Marlboro. I rather suspect the old man honestly convinced himself that he did not want an audience, and that that attitude helped him acquire one. The Education is a monument not just because people took the time to enshrine it, but because of what it meant to those doing the enshrining: in a sense, it is a tomb for those who read it and took it seriously, well after Adams’s death, as evinced by the fact that I need to talk to you for half an hour to get at the beginning of what the whole thing means.

If you’re so inclined, there are numerous reasons to read Henry Adams, not least of which are the work’s inherent qualities. But if there’s one thing you should take away from this lecture, it’s that any literary or intellectual phenomenon is best understood in the context not just of its own times, but of its reception and the uses it is put to by future interlocutors. Left on its own, The Education is just a dressed-up complaint by an old man. Read contextually, it becomes something bigger, weirder, and in certain respects more unsettling: a look at what exactly goes into literary canonization and intellectual recognition, how it comes and how it goes. I got the most traction in thinking about Henry Adams by making odd connections and treating Adams and his interlocutors as people, with social roles, jobs, and social and personal imperatives that they had to answer to, one way or another. In short – and whether this will prove encouraging or discouraging to my audience is a question that interests me – it was another day at the office.



Yukio Mishima got the timing wrong by JUST a smidgen. One more second and his tragicomic 1970 coup attempt would have ended appropriately – with his auto-disemboweled corpse on the floor of a Japanese Self-Defense Force officer’s office, shaming the lapdogs for dishonoring Japan’s imperial past. But alas, in the confusion there was a scuffle, and Mishima was apprehended before he could do the deed. This left the Japanese authorities with the delicate question of what to do with him. Right-wing nut or no, he was a literary treasure, and making a martyr of him could cause more trouble. Some apparatchik, perhaps a man with a dark sense of humor, came up with a solution: send Mishima out as a cultural ambassador. To America.

Specifically, to the Iowa Writers’ Workshop.

Needless to say, Mishima was displeased. A samurai among the cornfields! Oddly enough, though, he became a favorite instructor. He was famous, after all. And the combination of mindless conformism and masochistic careerism that wafts through Iowa City like a miasma made Mishima’s harsh, gnomic teaching methods an instant hit. Stories about him quickly spread, including one of him breaking Norman Mailer’s shoulder at one of his notoriously rough jujitsu classes.

Mishima didn’t truly come to love Iowa, however, until he beheld his first tornado. The kami in motion, sweeping all before them! His appetite for danger whetted, he began pursuing the storms, engaging in increasingly reckless behavior and grudgingly befriending several other local storm chasers. The storms served where seppuku had not. Mishima’s ashes were tossed by some of his buddies into the next big tornado they found. His last literary work, a fictionalization of his adventures seeking the ultimate through the storm, was discovered by his estate. Unsure how to translate the Japanese title, the publishing house that bought the rights gave it a name it thought would grab people: Twister.



H.P. Lovecraft was near the end of his tether in the late ’30s. He had been writing for the pulps for nearly a decade with little to show for it except a small, dedicated fan base — and poverty. His inheritance was nearly spent, he was never in good health, and he faced one of the direst fears a neurotic aesthete could contemplate: getting a job and meeting the public. He made one last desperate toss to avoid this ignominious fate: Hollywood. He imagined a band of gentlemen-scholars of the unknown who, much like himself, had fallen upon hard times and been forced to work for their keep, attempting to contain fell spirits and things that should not be when they intruded upon his beloved Providence – for a nominal consideration, of course. He wrote a treatment and sent it out west. One studio boss, seeing the potential for humor in three bumbling academics chasing ghosts, made some major changes (the scholars would also be tinkerers with all kinds of zany ghost-fighting gadgets, they now had a wise-cracking black employee, and the setting was moved to New York) and turned Lovecraft’s stories into a fairly successful humor serial. Audiences were especially amused by Harold Lloyd’s deadpan delivery of Lovecraftian dialogue, with its references to “Gozer the Gozerian” and “shivs and zuuls.” Lovecraft, however, was not amused by the bowdlerization of his work, even if the films helped his financial situation. Money or no money, his health was not good, and on his deathbed not long after he cursed the name of the creation that would become his most famous: Ghostbusters.

One of the studio boss’s assistants tried to bring Lovecraft’s antisemitism to the boss’s attention. The boss handwaved it away, insisting that the Jewish stereotypes in Ghostbusters were an example of old-neighborhood humor. “Look at the way this schmuck writes,” the boss insisted, “and tell me he’s not a garment-cutter’s kid who got a scholarship to a fancy college.”