What Mr. Rutten is peddling in the LA Times today is an apology for non-unionized workers’ Stockholm syndrome. Most of them have no alternative means of livelihood to the market of private labor contracts that the owning class monopolizes and coordinates to its advantage. The owning class (speaking on behalf of its god, variously known as the market, capital, economic necessity, value – or, if you’re going to get really Old School, Mammon) has told those workers they are going to get less, and being the unorganized lot they are, they have had little choice but to accept. However, having so thoroughly identified with the owning class, these non-unionized workers don’t even begin to think they’ve been screwed by that class and its economic laws of value, growth and capital accumulation. They have been hosed, though, and not only should they not be angry at those who refuse to be screwed, but Rutten shouldn’t be trying to legitimate their misplaced resentment. To do so takes us back to 1930s Germany, when it was popular to point to the well-organized Jews as not just racial but economic scapegoats for the German workers’ own struggle with the global depression. Unions are not to blame for the current economic malaise – a concession Rutten offers as a token of pseudo-objectivity – but their non-capitulation to the forces of global capitalism is the only hope this country has.
One commenter, who at least seems to have read the comment I left on this article (essentially reproduced above), lashes back:
So we should all have contracts/pensions/free healthcare like the public employee unions? And we would all be better off and thrive happily ever after?
Hey, What the heck. It worked well for GM and Greece. Lets give it a try.
I never said anything about happily ever after. This union-busting stuff is part of a struggle to which we may see no clear-cut end in our lives. Nor do I make any arguments about the economic desirability of unions (i.e. from the “bargain” perspective). Yes, though, we should ALL have the kind of livelihoods that public employee unions (fight to) secure for their members. That this is at odds with an economic system rooted in principles of value, growth and capital accumulation is an argument against the latter.
Armando Salvatore wrote this great essay for The Immanent Frame about the pitfalls of Egypt’s revolutionary moment. His way of talking about the State in terms of Zizek’s cartoonish cat – which has stepped over the precipice but fails or refuses (or, to use a Zizekian term, short-circuits) to recognize there is nothing holding it up – should be applied to all talk of the economy. I don’t just mean the official cult known as finance, but the much more pervasive popular following behind value, its production and accumulation, and profit. Zizek’s Tom and Jerry analogy, as worn out as it threatens to become, has to be applied not just to the capitalist nomenklatura, but to the larger population of devoted capitalists – everyday people who operate on a principle of “getting ahead” and affirm that in their capacity as consumers, workers and voters.
Armando contends with Zizek’s tired cat-and-mouse analysis though:
If the mythology of revolution indicates a pure state of popular will, the mysticism of the state—its modern political theology—reposes on a redundancy: a mysterious ritual of self-establishment that literally allows it to float in the air without the need to look down; it does not need awareness since it is itself, in Hegelian parlance, the peak of consciousness, spirit incarnate. Every state, by definition, walks on the edge of—and indeed across—a precipice: not just by demanding that millions of citizens comply with the law by imposing just a modicum of violence in routine times but also, as more people in the world are now becoming aware, by piling up hundreds of billions of “sovereign” debt for decades without anybody really worrying about it.
This might happen with or without corruption—surely, if the “fat cats,” all the way up to the president, took a large part of that pile of cash into their own accounts, the cat’s game of floating in the air becomes a caricature of itself. Yet, in itself it is neither a caricature nor a cartoon, but the very image of what the state is about, the outcome of a collective entrancement that makes a docile subject out of popular multitudes who know how to organize themselves. Indeed, matching this kleptomaniac, steady drainage of resources under the regime of Mubarak, these thirty years witnessed a spectacular rise of social self-organization and solidarity in a variety of sectors (health and education first) that has blurred the boundary, imposed by the modern state and its weak imaginary, between “formal” and “informal” associations and networks, between “religious” and “secular” NGOs.
Yet, within the collective trance staged by the state, the multitudes as “the people” are none other than the state. In the trance routine, they are its very collective body: at best, they can imagine inhabiting a parallel space called “civil society,” which, however, only exists and flourishes in a symbiotic relationship with the state and manages to pump citizens’ energies “voluntarily” in the “non-profit” sector, thus creating social cohesion at low or zero cost.
We should think of David Cameron’s “Big Society,” which has a vaguely populist cant to it while simultaneously affirming the sovereignty of the economy and its “leaders” by cutting taxes on the rich and privatizing or simply auctioning off State services and property. Not all Brits seem to be drinking the Kool-Aid, but the vague sense of being taken hostage looms. Just a few years ago, when the market began to crash in the United States, the ultimatum given to Congress, Wall Street and not a few well-to-do Main Streeters was that if Wall Street (i.e. the institutional face of capitalism) fell, then so would Main Street. Even Zizek wouldn’t or couldn’t admit that Wall Street had stepped off the precipice in 2008 when he repeated in the LRB that
The problem is that there is no way to separate the welfare of Main Street from that of Wall Street. Their relationship is non-transitive: what is good for Wall Street isn’t necessarily good for Main Street, but Main Street can’t thrive if Wall Street isn’t doing well – and this asymmetry gives an a priori advantage to Wall Street.
That ultimatum is not unlike “Mubarak or chaos,” which Zizek got right in the Guardian, saying “The argument for Mubarak – it’s either him or chaos – is an argument against him.” The same stance needs to be taken against the position we hear all the time: that if we do not do X (cut taxes on the rich, work longer hours, take care of our own healthcare/retirement, deregulate this or that industry, get rid of or otherwise compromise our unions, make ourselves more competitive, and the like), then jobs will go away and we’ll starve or else succumb to chaos.
I would like to share the short introduction Ursula K. Le Guin wrote for The Left Hand of Darkness, which I’m beginning to read. It’s a powerful statement about the vocation of the writer, the nature of science fiction, and even of fiction as such. She makes a devastating case that science fiction is not about the future, though it is hard to say this holds in every case. In Star Trek, for example, much effort is expended to build a believable continuity between its fictive future and our actual present; Roddenberry wanted us to believe in Star Trek as a possible future, while also clearly using it as a way of describing the present.
A great example of what Le Guin means by “science fiction isn’t about the future” is the 2008 B-film Outlander.
Outlander is set in eighth-century northern Europe, where a spacecraft crashes carrying a lone surviving hominid (Kainan) and a monster worthy of Grendel’s name. The twist is that the extraterrestrial hominid is actually a human, or rather, humans on Earth are an unwitting seed-colony of that parent species. The monster (the Moorwen) is really a creature from one of the many planets these space-faring humans have brutally attacked to clear space for their own colonies.
When Kainan runs into Norsemen, who are suspicious of him after finding him passing through a village the Moorwen destroyed, he tries to appeal (it seems) to their primitive sensibilities and claims he’s hunting a dragon. Though it’s actually somewhat ambiguous: all he really does, when they ask what he’s hunting that could destroy a village, is point to a dragon ornament and say “that.” Whether he cleverly thinks he’s appealing to their beliefs-qua-mythology or naively thinks similar creatures exist on Earth is unclear. He says this, though, and the Norsemen are incredulous Realpolitikers. They already suspect he was part of a raid from a neighboring community, and laugh at him for appealing to, and maybe trying to trick them with, “children’s stories.”
By correlating these children’s stories with a material support in the form of the Moorwen “dragon,” the film isn’t giving something like a historical materialist account of dragon-stories—or dare I say of dragons. To do so would be to believe in the film, itself a work of fiction made in 2008 and not 709, just as the Norse deride Kainan for believing in “what everyone knows” are children’s stories.
In this way, science fiction (when minding its own business, to borrow Le Guin’s language) cannot explain our past any more than it can predict our future. In the Star Trek: The Original Series episode “Who Mourns for Adonais?”, the ship is captured by a being that claims to be Apollo. The being seems to have unassailable powers, otherwise appears to be who he says he is, and even explains that he and the other Olympian gods were a band of travelers who happened upon Earth 5,000 years ago. The point is very clear: the ancient gods were real, but they weren’t really gods. Kirk at one point, balking at Apollo’s demand for worship, asserts: “Mankind has no need for gods. We find the one quite adequate.”
The key to the episode is destroying a temple-structure on the planetoid that apparently accompanies Apollo. It’s explained to be a power source, but its mechanics are left completely unexplored and unexplained. This is not really a scientific perspective on the nature of this being or its claims to divinity. Destroying the temple with phasers does cause Apollo to lose his powers, but the cause and effect are as magical as any mythological understanding this whole Apollo’s-really-an-alien premise is supposed to dispel. In the end, the mythological fiction, with its “scientific” underpinning, is presented as true, and we feel as if we’ve been 5,000 years in the past rather than 300 in the future.
At any rate, here’s Le Guin:
Science fiction is often described, and even defined, as extrapolative. The science fiction writer is supposed to take a trend or phenomenon of the here-and-now, purify and intensify it for dramatic effect, and extend it into the future. ‘If this goes on, this is what will happen.’ A prediction is made. Method and results much resemble those of a scientist who feeds large doses of a purified and concentrated food additive to mice, in order to predict what may happen to people who eat it in small quantities for a long time. The outcome seems almost inevitably to be cancer. So does the outcome of extrapolation. Strictly extrapolative works of science fiction generally arrive about where the Club of Rome arrives: somewhere between the gradual extinction of human liberty and the total extinction of terrestrial life.
This may explain why people who do not read science fiction describe it as ‘escapist,’ but when questioned further, admit they do not read it because ‘it is so depressing.’
Almost anything carried to its logical extreme becomes depressing, if not carcinogenic.
Fortunately, though extrapolation is an element in science fiction, it isn’t the name of the game by any means. It is far too rationalist and simplistic to satisfy the imaginative mind, whether the writer’s or the reader’s. Variables are the spice of life.
This book is not extrapolative. If you like you can read it, and a lot of other science fiction, as a thought-experiment. Let’s say (says Mary Shelley) that a young doctor creates a human being in his laboratory; let’s say (says Philip K. Dick) that the Allies lost the Second World War; let’s say this or that is such and so, and see what happens . . . . In a story so conceived, the moral complexity proper to the modern novel need not be sacrificed, nor is there any built-in dead end; thought and intuition can move freely within bounds set only by the terms of the experiment, which may be very large indeed.
The purpose of a thought-experiment, as the term was used by Schrodinger and other physicists, is not to predict the future—indeed Schrodinger’s most famous thought-experiment goes to show that the ‘future,’ on the quantum level, cannot be predicted—but to describe reality, the present world.
Science fiction is not predictive; it is descriptive.
Predictions are uttered by prophets (free of charge), by clairvoyants (who usually charge a fee, and are therefore more honored in their day than prophets), and by futurologists (salaried). Prediction is the business of prophets, clairvoyants, and futurologists. It is not the business of novelists. A novelist’s business is lying.
The weather bureau will tell you what next Tuesday will be like, and the Rand Corporation will tell you what the twenty-first century will be like. I don’t recommend that you turn to the writers of fiction for such information. It’s none of their business. All they’re trying to do is tell you what they’re like, and what you’re like—what’s going on—what the weather is like now, today, this moment, the rain, the sunlight, look! Open your eyes; listen, listen. That is what the novelists say. But they don’t tell you what you will see and hear. All they can tell you is what they have seen and heard, in their time in this world, a third of it spent in sleep and dreaming, another third of it spent telling lies.
‘The truth against the world!’—Yes. Certainly. Fiction writers, at least in their braver moments, do desire the truth: to know it, speak it, serve it. But they go about it in a peculiar and devious way, which consists in inventing persons, places, and events which never did and never will exist or occur, and tell about these fictions in detail and at length and with a great deal of emotion, and when they say they are done writing down this pack of lies they say, There! That’s the truth!
They may use all kinds of facts to support their tissue of lies. They may describe the Marshalsea Prison, which was a real place, or the battle of Borodino, which really was fought, or the process of cloning, which really takes place in laboratories, or the deterioration of a personality, which is described in real textbooks of psychology, and so on. This weight of verifiable place-event-phenomenon-behavior makes the reader forget that he is reading a pure invention, a history that never took place anywhere but in that unlocalizable region, the author’s mind. In fact, while we read a novel, we are insane—bonkers. We believe in the existence of people who aren’t there, we hear their voices, we watch the battle of Borodino with them, we may even become Napoleon. Sanity returns (in most cases) when the book is closed.
Is it any wonder that no truly respectable society has ever trusted its artists?
But our society, being troubled and bewildered, seeking guidance, sometimes puts an entirely mistaken trust in its artists, using them as prophets and futurologists.
I do not say that artists cannot be seers, inspired: that the awen cannot come upon them, and the god speak through them. Who would be an artist if they did not believe that happens? If they did not know it happens, because they have felt the god within them use their tongue, their hands? Maybe only once, once in their lives. But once is enough.
Nor would I say that the artist alone is so burdened and so privileged. The scientist is another who prepares, who makes ready, working day and night, sleeping and awake, for inspiration. As Pythagoras knew, the god may speak in the forms of geometry as well as in the shapes of dreams; in the harmony of pure thought as well as in the harmony of sounds; in the number as well as in words.
But it is words that make the trouble and confusion. We are asked now to consider words as useful in only one way: as signs. Our philosophers, some of them, would have us agree that a word (sentence, statement) has value only insofar as it has one single meaning, points to one fact that is comprehensible to the rational intellect, logically sound, and—ideally—quantifiable.
Apollo, the god of light, of reason, of proportion, harmony, number—Apollo blinds those who press too close in worship. Don’t look straight at the sun. Go into a dark bar for a bit and have a beer with Dionysios, every now and then.
I talk about the gods; I am an atheist. But I am an artist too, and therefore a liar. Distrust everything I say. I am telling the truth.
The only truth I can understand or express is, logically defined, a lie. Psychologically defined, a symbol. Aesthetically defined, a metaphor.
Oh, it’s lovely to be invited to participate in Futurological Congresses where Systems Sciences displays its grand apocalyptic graphs, to be asked to tell the newspapers what America will be like in 2001, and all that, but it’s a terrible mistake. I write science fiction, and science fiction isn’t about the future. I don’t know any more about the future than you do, and very likely less.
This book is not about the future. Yes, it begins by announcing that it’s set in the ‘Ekumenical Year 1490-97,’ but surely you don’t believe that?
Yes, the people in it are androgynous, but that doesn’t mean that I’m predicting that in a millennium or so we will all be androgynous, or announcing that I think we damned well ought to be androgynous. I’m merely observing, in the peculiar, devious, and thought-experimental manner proper to science fiction, that if you look at us at certain odd times of day in certain weathers, we already are. I am not predicting, or prescribing. I am describing. I am describing certain aspects of psychological reality in the novelist’s way, which is by inventing elaborate circumstantial lies.
In reading a novel, any novel, we have to know perfectly well that the whole thing is nonsense, and then, while reading, believe every word of it. Finally, when we’re done with it, we may find—if it’s a good novel—that we’re a bit different from what we were before we read it, that we have been changed a little as if by having met a new face, crossed a street we never crossed before. But it’s very hard to say just what we learned, how we were changed.
The artist deals with what cannot be said in words.
The artist whose medium is fiction does this in words. The novelist says in words what cannot be said in words.
Words can be used thus paradoxically because they have, along with a semiotic usage, a symbolic or metaphoric usage. (They also have a sound—a fact the linguistic positivists take no interest in. A sentence or paragraph is like a chord or harmonic sequence in music: its meaning may be more clearly understood by the attentive ear, even though it is read in silence, than by the attentive intellect.)
All fiction is metaphor. Science fiction is metaphor. What sets it apart from older forms of fiction seems to be its use of new metaphors drawn from certain great dominants of our contemporary life—science, all the sciences, and technology, and the relativistic and the historical outlook, among them. Space travel is one of these metaphors; so is an alternative society, an alternative biology; the future is another. The future, in fiction, is a metaphor.
A metaphor for what?
If I could have said it non-metaphorically, I would not have written all these words, this novel; and Genly Ai would never have sat down at my desk and used up my ink and typewriter ribbon in informing me, and you, rather solemnly, that the truth is a matter of the imagination.