Sunday, 27 June 2021

Locked in?


Whence Woke? There are many theories. Though some, like Lindsay and Pluckrose, stress the movement's origins in predominantly French political theory, many point to its striking prevalence in English-speaking countries, a.k.a. the Anglosphere - and even Pluckrose and Lindsay acknowledge the origins of certain key concepts, such as intersectionality, in US academia. This is especially interesting - if that is the word - considering the Anglosphere's prominence in the history of liberalism. How did such an illiberal way of thinking grow out of what looked like such liberal soil?

One possibility is that Wokeism developed for its own reasons, but was then spread around the world (at least in the first, and hopefully last, phase of the pandemic) by the English language and the networks and lifestyles that go with it - rather as world-beating rates of obesity have spread from Austin to Auckland with the diffusion of car-focused suburbs and fast-food joints. Another possibility is that it's something else in 'Anglo-Saxon' culture that has done most of the work, the most common culprit being not the flaxen moustaches but the residual habits of Protestant Christianity, with its predilection for puritanism and witch-hunts. 

The point of this short post is just to add another idea to the mix - probably a bad one, but, well, that's what blogs are for. Intellectual historians - and intellectuals tout court - have a tendency to overstate the influence of highfalutin' philosophical ideas on world history, and I'm well aware that's a danger here. So I offer this as just one more factor that may have played a role in the deep history of this new form of extremism.

The hypothesis is just that the philosophical tradition of empiricism, long strong in English-speaking cultures, may have had a hand here. Philosophers like Locke and especially Hume argued that what we know comes overwhelmingly (even, perhaps, entirely) from our senses. This remained a tendency in English-speaking philosophy up until the time of A. J. Ayer (a disciple of Hume) and Bertrand Russell.

Locke and Hume and Berkeley argued, against continental 'rationalists' like Leibniz, that innate faculties (e.g. reason) played a relatively small part in how we came to understand the world. The debate involved famous puzzles like Molyneux's problem: what would happen to a man born blind who was suddenly given the ability to see? Would he simply take in knowledge of his surroundings like the rest of us, or would he be somehow cognitively unprepared for all the new information coming his way? (The answer, it turned out, was the latter.)

Part of Kant's contribution, of course, was to try to reconcile these two traditions: we understand the world, he suggested, by taking in evidence according to certain in-built schemas. Though empiricism retained a role in Kant's brand of idealism, the radical empiricism of the likes of Hume had clearly been left behind.

The problem for radical empiricists of various stripes since Kant has been our growing knowledge of human development and psychology. Aristotle and Spinoza both intuited that different beings have different inborn tendencies, though neither of them quite understood why. Now we have a much better idea: we act in typically human ways (and even in typically male and female ways) to a large extent because of our genetics. (And the same can be said of cats and bears and flies and jellyfish.)

Some writers still like to warn about the dangers and wrong-headedness of 'essentialism,' but, of course, essentialism isn't always wrong. We expect humans to act in certain ways (not like rocks, say, or gold- or star-fish) because we attribute (consciously or not) a humanness to them. We think they - we - have some mysterious human essence. And we're right. Except that it's steadily becoming less mysterious.

The later Wittgenstein, who can be read as a kind of born-again fundamentalist empiricist, tended to want to dissolve human tendencies and actions into 'forms of life,' even to the extent of seeming to say that internal mental states could be read off outward actions. What more aggressive empiricist invasion of the private sources of innatism could there be?

Psychological behaviourists followed this lead. Chomsky cut his teeth criticizing them, in particular by pointing out that languages seemed to have an innate aspect to them. Children across the world seem to be born with a 'language instinct.'

Since Darwin, Mendel, and the neo-Darwinian synthesis of evolutionary theory and genetics, we've had a pretty good idea of how this works (even if the details have turned out to be far more complicated than we expected). Our genes encode certain inherited information, and this includes tendencies towards certain behaviours. We can even estimate the proportion of the variation in certain traits that is genetic as opposed to environmental (though lay people tend to underestimate the extent of the genetic influence that scientific studies support).
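For what it's worth, the classic rough-and-ready version of such an estimate is the twin-study calculation known as Falconer's formula, which infers heritability from how much more alike identical twins are than fraternal ones:

$$h^2 \approx 2\,(r_{MZ} - r_{DZ})$$

where $r_{MZ}$ and $r_{DZ}$ are the correlations on a trait between identical (monozygotic) and fraternal (dizygotic) twin pairs. If identical twins correlate at 0.8 on some trait and fraternal twins at 0.5, the formula puts its heritability at roughly 0.6.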

One of the most obvious features of the Woke culture on university campuses is the hostility (among many other sorts of hostility) towards ideas about human nature. Most of the current elite are outspoken 'blank-slatists,' preferring to believe that we are born as blank slates for our environments to write on, rather than the largely pre-designed, if highly responsive, robots we more closely resemble. 

The sources of this hostility are multiple and have been written about extensively elsewhere. It's my suggestion here, though, that the Anglo-Saxon tradition of philosophical empiricism may be among the roots of this reactivity. Even if we have plenty of good evidence - overwhelming evidence, at this point - that our behaviours are strongly influenced by our genetic essences, there's a strong tendency among English-speakers to want to treat humans as random streams of sense-perceptions. If this is at all right, it's another way (alongside Puritanism) in which Wokeism emerges, not as a cosmopolitan revolutionary movement, but as a peculiarly reactionary brand of Anglo-Saxon traditionalism.



Saturday, 14 November 2020

How the attention economy is sucking our will to platform

 

People (like the economist Ashley Hodgson) have long been talking about 'the attention economy.' When I first heard about it, it sounded liberatory and utopian: with the advent of the internet, the theory went, people would be paid as a kind of tribute for work they'd chosen to do, rather as passers-by leave coins for a busker.

Nowadays, the attention economy has more dystopian overtones. A friend of mine from college worked for years for a company that built super-computers to calculate the value of bits of space on the internet and bid against other super-computers in instantaneous online auctions. Apparently that kind of thing is going on all the time, humming along in vast rows of air-conditioned calculation.
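For the curious, here's roughly what one of those instantaneous auctions amounts to - a minimal sketch in Python, assuming the sealed-bid, second-price rule that ad exchanges have (at least historically) tended to use; the bidder names and numbers are invented for illustration:

```python
# A minimal sketch of a sealed-bid, second-price auction for a single
# ad impression. Purely illustrative: no real exchange's API is shown.

def run_auction(bids):
    """bids maps bidder name -> bid in dollars.
    The highest bidder wins but pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# Three hypothetical bidding systems valuing one slice of your attention:
bids = {"alpha": 0.42, "beta": 0.38, "gamma": 0.15}
winner, price = run_auction(bids)
print(f"{winner} wins the impression and pays ${price:.2f}")
# -> alpha wins the impression and pays $0.38
```

The real systems differ mainly in scale and speed: thousands of such auctions resolve in the few hundred milliseconds it takes a page to load.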

And more than that: basically everyone in the developed world now, from big tech companies to online sex workers, is trying to get your attention. (Me too, sort of.) OK, they may not be aiming at your attention in particular, but, in general, the more attention the better.

Of course, that was always sort of the case. Vendors in markets from time immemorial have shouted at passers-by to try to get them to buy their goods. Then there was advertising, which obviously didn't start with the internet. Getting someone's attention has always been the first step to getting some of their money.

As with other aspects of life, in some ways what the internet has done is simply speed everything up and expand it to a global scale. But social media especially has also introduced a new currency: likes, followers, and so on. In the world of the bird app, or of the codex of visages, it's the man with many followers who's king.

And online publications like Quillette and Vox have, obviously, gained influence (and income) by gaining re-tweets rather than by selling collections of their articles in the form of glossy magazines. This has led to one of those little features of online life that runs up against the norms of anybody with a liberal education from more than 10 years ago: people refusing to share links to a piece they then condemn.

The reason that seems so weird is that there used to be a strong norm in intellectual life that you didn't hide books or articles or try to stop their circulation. Imagine if I'd said to my fellow students 15 or 20 years ago, 'I've read this book I strongly disagreed with but which has been influential; I therefore won't name it, and in fact I've withdrawn it from the library and hidden it under a bush so that it won't circulate further.' They would have thought I was bonkers, not to mention that I was curtailing open debate and infringing on their own freedom to read what they saw fit.

I do think all these points still hold today for those who refuse to link to pieces they hate; but I can also see where they're coming from. They're right that clicks help websites, in a way that lending someone a book didn't help Penguin or Anthony Kenny. They're anxiously aware of the importance of attention in the new ecosystem we now live in - and they're keen to deny its oxygen to their enemies. 

All of this might, to some extent, help explain the recent vogue for de-platformings - that is, preventing people from hearing someone speak on campus because you don't like them or what they have to say. This is yet another phenomenon that even people as young as this blogger tend to find pretty peculiar - another thing that, I think, most of my fellow undergraduates in the first few years of the millennium would have seen as obviously not the way to behave.

One way of looking at what's going on with de-platformings is to think about what the de-platformers think they're doing. One of the things they think they're doing, I would submit, is akin to not linking to a Quillette article. They see their campuses like their chirrup or mug-scroll feeds, and they don't want them to contribute to funnelling more attention towards Christina Sommers (or whoever). 

I remember hearing one of the bullies who tried to shut down Sommers' talk at Lewis & Clark Law School saying something like 'You already know what she's going to say from YouTube.' Again, she was still wrong to act in the repressive way she did - Sommers still had a right to speak, and the students to hear her - but I think I now understand a bit more about why that student was acting as she was.

Back in the day, Bjorn Lomborg (or whoever) coming to speak was interesting partly because you got something you didn't get from reading his articles or books. Nowadays, you can easily access recordings of public intellectuals online. But the mention of YouTube, I think, also suggests that the student was thinking of Sommers' appearance very much in internet terms. She didn't see the talk as a source of ideas or as an experience - she saw it as a kind of bid in a game whose point is to amass the most attention-chips. She saw it as she might have viewed a fellow student sharing a Sommers YouTube talk on social media. 

Christina Sommers giving a talk, Bruce Gilley publishing an article - back in the last millennium the obviously correct reaction to such things, to most sane individuals, even those who disagreed with them, would have been: not much. Maybe they would have gone along and asked a critical question; maybe they would have written a letter to the student paper. Other people paying attention to such things didn't seem much of a threat.

In the online world, though, especially on social media, life is a high-stakes (OK, low-stakes, but it feels high-stakes) battle for attention. Attention accruing to your ideological rival empowers them and thus seems to threaten your own views and values. This economy of attention has become a kind of vortex, not only sucking previously rather somnolent groups like classicists into it, but also exerting its sucking effect on what's left of the offline world. The online economy of attention is sucking at our universities like a horrific hair-cutting 'solution'; and it's sucking at our will to let people explore ideas.


Friday, 14 August 2020

Harsh but unfair

 


I remember reading once, in a book on Athenian law (perhaps this one), that anthropologists had observed that in societies where criminals were less likely to be apprehended, penalties were harsher. It made sense; after all, modern developed countries, with their highly developed surveillance technology, have (by historical standards) strikingly lenient punishment regimes; pre-modern ones, by contrast, which had zero or only rudimentary policing, had more of a tendency to turn to the gallows, the guillotine - or the gulp of hemlock.

The observation came back to me recently in connection with the current vogue for 'cancelling.' The frequency of this phenomenon has been questioned, but what seems to concern many people isn't necessarily how widespread it is, but how harsh the punishments can be. A disabled grandfather is sacked for sharing a comedy sketch. A researcher loses his job for re-tweeting a study about the effectiveness of peaceful compared with violent protest. And all the while, not-especially-controversial views and tame jokes elicit the kind of fury that used to be reserved for blood feuds. 

Given the many instances of such 'cancellations' that have occurred, it might seem strange that a good few people continue to insist that the whole phenomenon is made up. But there might be a way of explaining both why they think that and why some of these same people engage in such disproportionately harsh punishings of individuals who violate their norms.

The reason they think the free speech crisis isn't really a crisis is partly because they see people saying things they dislike all the time. That's been one of the effects of the explosion of social media: whereas twenty years ago you wouldn't often be exposed to views from outside your thought-world, and you'd have to put in some work to have your views broadcast, now it's easy to post things and even easier to see things others have posted. 

If you have narrow parameters for what ideas are acceptable, it follows that you're likely to see quite a lot of what are, to you, unacceptable ideas. Twitter must be terrifying - all those people saying things you think are terrible! What's more, most of them are getting away scot-free.

The temptation, then, is to make an example of anyone you are in a position to punish, pour décourager les autres. This is what ancient societies were up to as well. It makes sense, especially if you consider the point of view of the potential criminal.

You can look at risk as the combination of how likely a bad thing is to happen and how bad it will be if it does. You may not be that likely to fall off the cliff if you go right up to the edge, but if you slip you'll die, so why risk it? If you're in a society without a functioning police force, the chance you'll be apprehended for doing something bad is pretty low. One way for the state to increase the risk you face (and hence deter you from wrongdoing) is to increase the penalty you risk facing. You think you probably won't get caught, but if you do...
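To put the logic in rough arithmetic (my numbers are purely illustrative): if deterrence is a matter of the expected penalty, a state that rarely catches offenders can compensate by raising the stakes, since

$$\text{expected penalty} = p_{\text{caught}} \times \text{severity}, \qquad 0.9 \times 1 = 0.1 \times 9.$$

A punishment nine times harsher keeps the expected cost of wrongdoing constant even when the odds of being caught fall from nine in ten to one in ten.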

The temptation to make an example of someone might be especially great when there are artificial barriers in the way of punishing other people who are up to the activity you dislike. For example, if a lot of the people saying things you find unacceptable are represented by anonymous Twitter accounts. Or if there's been a state amnesty saying you can't punish any of the members of a tyrannical junta.

That last thing, of course, is what happened in Athens after the murderous regime of the so-called Thirty Tyrants. Once the democracy had been restored, there seems to have been an agreement not to prosecute anyone involved with the Thirty, except for the Thirty themselves (some of whom had already been killed in the process of restoring the democracy). (What exactly the amnesty required is, like most things in ancient history, a little bit controversial.)

In 399 BC, only four years after the Thirty had been toppled, the philosopher Socrates, who had links to some of the Thirty (including Critias, one of the more extreme members), was executed on a vote of a popular jury. Why? It's complicated; there were lots of factors that led to that outcome, including the way he went about defending himself (if that was even what he was up to) in court.

But one possibility is that his prosecutors indicted him on a charge of inventing new gods and corrupting the youth precisely because they couldn't prosecute him for what they were really angry at him for - the actions of the Thirty. And they also couldn't prosecute many of the people who they knew had collaborated with the Thirty. Nor could they prosecute Critias and others who were already dead. But Socrates was there, still going about his business asking irritating questions in public...

Note that the theory, if it's right, explains not only the excessiveness of the punishings but also the way they have of mistaking their object. At least, it looks an awful lot as if all the guilt for something is being loaded onto the back of one unfortunate person who happens to be in the wrong place at the wrong time. That, of course, is another phenomenon that's familiar to anthropologists: scapegoating.

One way to stop this sort of thing, as you might guess, is to get better at apprehending wrong-doers. But it's very questionable in cases like the ones mentioned above (sharing comedy sketches and so on) whether anyone's done anything wrong at all. Another way is to reduce narrow-minded people's exposure to views they find distasteful. 

Doing that by force would be wrong (people should be free to go on social media, of course), but it might be advisable, considering the kinds of moral risks involved, for some people to think twice about the amount of time they spend online. In other words, if you can't deal with different ideas, it might be best just to stay off Twitter. Otherwise you might find yourself with a cup of hemlock in your hand - handing it to an innocent person.