Trafficking in traffic

Ben Smith picked just the right title for his saga of BuzzFeed, Gawker, and The Huffington Post: Traffic (though in the end, he credits the able sensationalist Michael Wolff with the choice). For what Ben chronicles is both the apotheosis and the end of the age of mass media and its obsessive quest for audience attention, for scale, for circulation, ratings, page views, unique users, eyeballs and engagement. 

Most everything I write these days — my upcoming books, The Gutenberg Parenthesis in June and an elegy to the magazine in November, and another that I’m working on about the internet — is in the end about the death of the mass, a passing I celebrate. I write in The Gutenberg Parenthesis:

The mass is the child and creation of media, a descendant of Gutenberg, the ultimate extension of treating the public as object — as audience rather than participant. It was the mechanization and industrialization of print with the steam-powered press and Linotype — exploding the circulation of daily newspapers from an average of 4,000 in the late nineteenth century to hundreds of thousands and millions in the next — that brought scale to media. With broadcast, the mass became all-encompassing. Mass is the defining business model of pre-internet capitalism: making as many identical widgets to sell to as many identical people as possible. Content becomes a commodity to attract the attention of the audience, who themselves are sold as a commodity. In the mass, everything and everyone is commodified.

Ben and the anti-heroes of his tale — BuzzFeed founder Jonah Peretti, Gawker Media founder Nick Denton, HuffPost founder Arianna Huffington, investor Kenny Lerer, and a complete dramatis personae of the early players in pure-play digital media — were really no different from the Hearsts, Pulitzers, Newhouses, Luces, Greeleys, Bennetts, Sarnoffs, Paleys, and, yes, Murdochs, the moguls of mass media’s mechanized, industrialized, and corporate age who built their empires on traffic. The only difference, really, was that the digital moguls had new ways to hunt their prey: social, SEO, clickbait, data, listicles, and snark.

Ben tells the story so very well; he is an admirable writer and reporter. His narrative whizzes by like a local train on the express tracks. And it rings true. I had a seat myself on this ride. I was a friend of Nick Denton’s and a member of the board of his company before Gawker, Moreover; president of the online division of Advance (Condé Nast + Newhouse Newspapers); a board member for another pure-play, Plastic (a mashup of Suck et al); a proto-blogger; a writer for HuffPost; and a media critic who occasionally got invited to Nick’s parties and argued alongside Elizabeth Spiers at his kitchen table that he needed to open up to comments (maybe it’s all our fault). So I quite enjoyed Traffic. Because memories.

Traffic is worthwhile as a historical document of an as-it-turns-out-brief chapter in media history and as Ben’s own memoir of his rise from Politico blogger to BuzzFeed News editor to New York Times media critic to co-founder of Semafor. I find it interesting that Ben does not try to separate out the work of his newsroom from the click-factory next door. Passing reference is made to the prestige he and Jonah wanted news to bring to the brand, but Ben does not shy away from association with the viral side of the house. 

I saw a much greater separation between the two divisions of BuzzFeed — not just reputationally but also in business models. It took me years to understand the foundation of BuzzFeed’s business. My fellow media blatherers would often scold me: “You don’t understand, Jeff,” one said. “BuzzFeed is the first data-driven newsroom.” So what? Every newsroom and every news organization since the 1850s has measured itself by its traffic, whether it was called circulation or reach or MAUs. 

No, what separated BuzzFeed’s business from the rest was that it did not sell space or time or even audience. It sold a skill: We know how to make our stuff viral, they said to advertisers. We can make your stuff viral. As a business, it (like Vice) was an ad agency with a giant proof-of-concept attached.

There were two problems. The first was that BuzzFeed depended on other platforms for four-fifths of its distribution: BuzzFeed’s own audience carried its content to the larger audience where they were, mostly on Facebook but also on YouTube and Twitter. That worked fine until it didn’t — until other, less talented copykittens ruined it for them. The same thing happened years earlier to About.com, where The New York Times Company brought me in to consult after its purchase. About.com had answers to the questions people asked in Google search, so Google sent searchers to About.com, where Google sold the ads. It was a beautiful thing, until crappy content farms like Demand Media came along and ruined it for them. In a major ranking overhaul, Google had to downgrade everything that looked like a content farm, including About. Oh, well. (After learning the skills of SEO and waiting too long, The Times Company finally sold About.com; its remnants labor on in Barry Diller’s content farm, Dotdash, where the last survivors of Time Inc. and Meredith toil, mostly post-print.)

The same phenomenon struck BuzzFeed, as social networks became overwhelmed with viral crap because, to use Silicon Valley argot, there was no barrier to entry to making clickbait. In Traffic, Ben reviews the history of Eli Pariser’s well-intentioned but ultimately corrupting startup Upworthy, which ruined the internet and all of media with its invention, the you-won’t-believe-what-happened-next headline. The experience of being bombarded with manipulative ploys for attention was bad for users and the social networks had to downgrade it. Also, as Ben reports, they discovered that many people were more apt to share screeds filled with hate and lies than cute kittens. Enter Breitbart. 

BuzzFeed’s second problem was that BuzzFeed News had no sustainable business model other than the unsustainable business model of the rest of news. News isn’t, despite the best efforts of headline writers, terribly clickable. In the early days, BuzzFeed didn’t sell banner ads on its own content and even if it had, advertisers don’t much want to be around news because it is not “brand safe.” Therein lies a terrible commentary on marketing and media, but I’ll leave that for another day. 

Ben’s book comes out just as BuzzFeed has killed News. In the announcement, Jonah confessed to “overinvesting” in it, which is an admirably candid admission that news didn’t have a business model. Sooner or later, the company’s real bosses — the owners of its equity — would demand its death. Ben writes: “I’ve come to regret encouraging Jonah to see our news division as a worthy enterprise that shouldn’t be evaluated solely as a business.” Ain’t that the problem with every newsroom? The truth is that BuzzFeed News was a philanthropic gift to the information ecosystem from Jonah and Ben.

Just as Jonah and company believed that Facebook et al had turned on them, they turned on Facebook and Google and Twitter, joining old, incumbent media in arguing that Silicon Valley somehow owed the news industry. For what? For sending them traffic all these years? Ben tells of meeting with the gray eminence of the true evil empire, News Corp., to discuss strategies to squeeze “protection money” (Ben’s words) from technology companies. That, too, is no business model. 

Thus the death of BuzzFeed News says much about the fate of journalism today. In Traffic, Ben tells the tale of the greatest single traffic driver in BuzzFeed’s history: The Dress. You know the one. 

At every journalism conference where I took the stage after that, I would ask the journalists in attendance how many of their news organizations wrote a story about The Dress. Every single hand would go up. And what does that say about the state of journalism today? As we whine and wail about losing reporters and editors at the hands of greedy capitalists, we nonetheless waste tremendous journalistic resources rewriting each other for traffic: everyone had to have their own story to get their own Googlejuice and likes and links and ad impressions and pennies from them. No one added anything of value to BuzzFeed’s own story. The story, certainly BuzzFeed would acknowledge, had no particular social value; it did nothing to inform public discourse. It was fun. It got people talking. It took their attention. It generated traffic.

The virus Ben writes about is one that BuzzFeed — and every news organization on the internet, and the internet as a whole — caught from old, coughing mass media: the insatiable hunger for traffic for its own sake. In the book, Nick Denton plays the role of inscrutable (oh, I can attest to that) philosopher. According to Ben, Nick believed that traffic was the key expression of value: “Traffic, to Nick … was something pure. It was an art, not a science. Traffic meant that what you were doing was working.” Yet Nick also knew where traffic could lead. Ben quotes him telling a journalist in 2014: “It’s not jonah himself I hate, but this stage of internet media for which he is so perfectly optimized. I see an image of his cynical smirk — made you click! — every time a stupid buzzfeed listicle pops on Facebook.”

Nick also believed that transparency was the only ethic that really mattered, for the sake of democracy. Add these two premises, traffic and transparency, together and the sex tape that was the McGuffin that brought down Gawker and Nick at the hands of Peter Thiel was perhaps an inevitability. Ben also credits (or blames?) Nick for his own decision to release the Trump dossier to the public on BuzzFeed. (I still think Ben has a credible argument for doing so: It was being talked about in government and in media and we, the public, had the right to judge for ourselves. Or rather, it’s not our right to decide; it’s a responsibility, which will fall on all of us more and more as our old institutions of trust and authority — editing and publishing — falter in the face of the abundance of talk the net enables.)

The problem in the end is that traffic is a commodity; commodities have no unique value; and commodities in abundance will always decrease in price, toward zero. “Even as the traffic to BuzzFeed, Gawker Media, and other adept digital publishers grew,” Ben writes, “their operators began to feel that they were running on an accelerating treadmill, needing ever more traffic to keep the same dollars flowing in.” Precisely.

Traffic is not where the value of the internet lies. No, as I write in The Gutenberg Parenthesis (/plug), the real value of the internet is that it begins to reverse the impact print and mass media have had on public discourse. The internet devalues the notions of content, audience, and traffic in favor of speech. Only it is going to take a long time for society to relearn the conversational skills it has lost and — as with Gutenberg and the Reformation, Counter-Reformation, and Thirty Years’ War that followed — things will be messy in between. 

BuzzFeed, Gawker, The Huffington Post, etc. were not new media at all. They were the last gasp of old media, trying to keep the old ways alive with new tricks. What comes next — what is actually new — has yet to be invented. That is what I care about. That is why I teach. 

Darrell V. Jarvis, 1926-2023

My father died on Saturday, April 8. He lived 97 years. Until struck with COVID, he had never had to stay in a hospital. In the last three months, he mustered all his strength to overcome the virus’ results: internal bleeding, then post-COVID pneumonia, then the side-effects of medication, and finally one more case of pneumonia. I curse the disease and all it does. He passed on peacefully last night after we — his entire family, the five of us — had the blessing of spending his last day with him, affirming our love.

I am writing this only for myself. I’m not writing it for him; he outlived everyone he knew. Neither am I writing it for you; I don’t expect you to read this, for you did not know him. I find such memorials for loved ones, including pets, in social media understandable but difficult, for I never know how to react. I do not expect you to. I simply want to memorialize my father, to leave a trace of his life connected with mine here. As an old newspaperman, I understand the value of the obituary more than the grave.

Darrell V. Jarvis was born in the tiny house on his grandfather’s rocky, dirt-poor farm up the holler behind the Methodist church in Weston, West Virginia. His parents were not much educated, Buck finishing the seventh grade, Vera not a lot more. They worked hard and moved often, following drilling crews to gas fields in West Virginia, Kentucky, and southern Illinois, my father attending eight schools along the way. His parents insisted that he and his brother — my late Uncle Richard, a church musician — attend college. My father graduated from the University of Illinois after completing a first stint of service in the Navy at the end of World War II. At U of I, he met my mother, Joan Welch, the daughter of a country doctor and a nurse in Lewistown, Illinois. They had two children: me and my sister, Cynthia, a Presbyterian minister. We lost our mother almost six years ago.

Darrell studied engineering but didn’t much like it. He preferred people. So, after serving in the Navy again in the Korean War, as a lieutenant on the destroyer USS Renshaw, he moved up as a “peddler” — his word — in the electronic and electrical industries, rising to be a VP of sales. He was on the road constantly, “with an airplane strapped to my backside,” and we moved constantly, from Illinois to Iowa to New Jersey to New York to Illinois and then — after Cindy graduated and I, having also attended eight schools, left the nest — to Seattle, California, and Illinois, then back to New Jersey, and finally to three or four places in Florida. They bought many window treatments.

Our mother was quite shy. Our father was the opposite. They did attract. The joke in the family was that by the time he reached the box office at the movie theater, Darrell would be lifelong friends with whoever was behind and in front of him in line, sharing their stories. Meanwhile, we hid. If they ever make a Mad Men about nice people, my father would be the model: the handsome and charming guy in the gray suit with hat and briefcase, for years puffing a pipe, tending to our suburban lawns, playing golf at every opportunity, going to church, sipping bourbon and later martinis at cocktail parties, voting Republican until Trump. Darrell was middle America.

At age 62, Darrell became the model or cliché of the retired American. He and Joan moved to a town in Florida where you must be 55 years old to live (I so want to write that sitcom). He could golf constantly until he needed to stay near Joan because of the type-1 diabetes he nursed her through for more than 50 years. And his knees gave out.

My parents were wonderful grandparents to our children, Jake and Julia. Oh, how they love their PopPop and Mimi. And oh, how my parents adored my wife, Tammy.

I learned much from my father. He so wanted to teach me golf so we could bond on the course, but after I once hit him in the shin with a driver and he hit me in the head with a stray ball, we gave up. He tried to teach me the handy skills, but while watching me try to build a case for the amplifier for my eighth-grade science-fair project (electronic bongos), he shook his head and declared: “You’re so clumsy you couldn’t stick your finger up your ass with two hands.” He said it with love and laughter as well as exasperation. We gave up that ambition, too.

What I did learn from my father more than anything else was ethics. I saw how he did business. I watched how he treated the staff who worked for him. I listened to him teach me how to deal with office politics: Never lower yourself to their level. I will always be proud of him that when he found himself in a meeting that turned out to be about price-fixing, he got up, protested, and left, risking his career. I tried to learn charm from him — even at the end, his smile and warmth would win over every nurse and aide. I yet wish I could learn to be the father he was.

We tried for years to get him and my mother — then him alone — to move up to be near my sister and us. In the summer of 2021 — fully vaccinated — he caught COVID for the first time in his retirement community and spent 11 days in the hospital and a month in rehab before moving to assisted living. Finally, he agreed to move. Tammy asked him: Why not before? “I’m just stupid,” he said. He came to an assisted living community five miles from our home and said he was living where he should. Thus the perverse blessing of COVID was that it brought him to us for a magnificent year and a half we otherwise would not have had — until it stole him from us. We saw him every day. Sister Cindy would come up with her Scottie, Phoebe, to sit in his lap. My dear Tammy took away all his worries of finance and life; he heeded her. She insisted that we have him over to our house three nights a week, culminating in thrill rides on a ramp after his knees finally gave out and he resorted to a wheelchair. He asked me one day to find him an electric wheelchair and that was a wonder to behold, him hot-rodding to meals and bingo and happy hour, balancing a martini in one hand, steering the wheelchair in the other, miraculously managing not to hit any old ladies.

We love you, Pa.

Journalism is lossy compression

There has been much praise in human chat — Twitter — about Ted Chiang’s New Yorker piece on machine chat — ChatGPT. Because New Yorker; because Ted Chiang. He makes a clever comparison between lossy compression — how JPEGs or MP3s save a good-enough artifact of a thing, with some pieces missing and fudged to save space — and large-language models, which learn from and spit back but do not record the entire web. “Think of ChatGPT as a blurry JPEG of all the text on the Web,” he instructs. 
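The analogy is easy to try for yourself. Here is a minimal sketch, assuming the Pillow imaging library is installed and that "photo.png" stands in for any image you have on hand: saving at low JPEG quality throws away most of the data yet returns a recognizable, fudged copy.

```python
# A minimal sketch of lossy compression, assuming the Pillow library is
# installed and that "photo.png" is any image file you happen to have.
import os
from PIL import Image

original = Image.open("photo.png").convert("RGB")

# Save a heavily compressed JPEG: most of the information is thrown away,
# yet the result is usually good enough to recognize the picture.
original.save("photo_lossy.jpg", format="JPEG", quality=10)

print("original bytes:", os.path.getsize("photo.png"))
print("lossy bytes:   ", os.path.getsize("photo_lossy.jpg"))

# Reopening the JPEG yields an approximation of the original pixels, not a
# faithful record -- Chiang's point: a plausible reconstruction, details fudged.
approximation = Image.open("photo_lossy.jpg")
```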

What strikes me about the piece is how unselfaware media are when covering technology.

For what is journalism itself but lossy compression of the world? To save space, the journalist cannot and does not save or report everything known about an issue or event, compressing what is learned into so many available inches of type. For that matter, what is a library or a museum or a curriculum but lossy compression — that which fits? What is culture but lossy compression of creativity? As Umberto Eco said, “Now more than ever, we realize that culture is made up of what remains after everything else has been forgotten.”

Chiang analogizes ChatGPT et al to a computational Xerox machine that made an error because it substituted one set of bits for another. Matthew Kirschenbaum quibbles:

Agreed. This reminds me of the sometimes rancorous debate between Elizabeth Eisenstein, credited as the founder of the discipline of book history, and her chief critic, Adrian Johns. Eisenstein valued fixity as a key attribute of print, its authority and thus its culture. “Typographical fixity,” she said, “is a basic prerequisite for the rapid advancement of learning.” Johns dismissed her idea of print culture, arguing that early books were not fixed and authoritative but often sloppy and wrong (which Eisenstein also said). They were both right. Early books were filled with errors and, as Eisenstein pointed out, spread disinformation. “But new forms of scurrilous gossip, erotic fantasy, idle pleasure-seeking, and freethinking were also linked” to printing, she wrote. “Like piety, pornography assumed new forms.” It took time for print to earn its reputation of uniformity, accuracy, and quality and for new institutions — editing and publishing — to imbue the form with authority. 

That is precisely the process we are witnessing now with the new technologies of the day. The problem, often, is that we — especially journalists — make assumptions and set expectations about the new based on the analog and presumptions of the old. 

Media have been making quite the fuss about ChatGPT, declaring in many a headline that Google had better watch out because ChatGPT could replace its search. As we all know by now, Microsoft is adding ChatGPT to its Bing, and Google is said to have stumbled in its announcements about large-language models and search last week. 

But it’s evident that the large-language models we have seen so far are not yet good for search or for factual divination; see the Stochastic Parrots paper that got Timnit Gebru fired from Google; see also her coauthor Emily Bender’s continuing and cautionary writing on the topic. Then read David Weinberger’s Everyday Chaos, an excellent and slightly ahead-of-its-moment explanation of what artificial intelligence, machine learning, and large language models do. They predict. They take their learnings — whether from the web or some other large set of data — and predict what might happen next or what should come next in a sequence of, say, words. (I wrote about his book here.) 

Said Weinberger: “Our new engines of prediction are able to make more accurate predictions and to make predictions in domains that we used to think were impervious to them because this new technology can handle far more data, constrained by fewer human expectations about how that data fits together, with more complex rules, more complex interdependencies, and more sensitivity to starting points.”

To predict the next, best word in a sequence is a different task from finding the correct answer to a math problem or verifying a factual assertion or searching for the best match to a query. This is not to say that these functions cannot be added onto large-language models as rhetorical machines. As Google and Microsoft are about to learn, these functions damned well better be bolted together before LLMs are unleashed on the world with the promise of accuracy. 
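To make the distinction concrete, here is a toy sketch of next-word prediction. It is nothing like a transformer working over subword tokens, which is what the real models use; it only illustrates the nature of the task: pick a plausible continuation from observed patterns, not a verified fact.

```python
# A toy illustration of "predict the next word": a bigram counter over a tiny
# corpus. Real large-language models use transformers over subword tokens,
# but the task is the same in spirit: a plausible continuation, not a fact.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed continuation of `word`."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "."

print(predict_next("the"))  # e.g. "cat" -- plausible, not "true"
print(predict_next("sat"))  # "on"
```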

When media report on these new technologies, they too often ignore the underlying lessons about what those technologies say about us. They too often set high expectations — ChatGPT can replace search! — and then delight in shooting down those expectations — ChatGPT made mistakes!

Chiang wishes ChatGPT to search and calculate and compose, and when it is not good at those tasks, he all but dismisses the utility of LLMs. As a writer, he just might be engaging in wishful thinking. Here I speculate about how ChatGPT might help expand literacy and also devalue the special status of the writer in society. In my upcoming book, The Gutenberg Parenthesis (preorder here /plug), I note that it was not until a century and a half after Gutenberg that major innovation occurred with print: the invention of the essay (Montaigne), the modern novel (Cervantes), and the newspaper. We are early in our progression of learning what we can do with new technologies such as large-language models. It may be too early to use them in certain circumstances (e.g., search) but it is also too early to dismiss them.

It is equally important to recognize the faults in these technologies — and the faults that they expose in us — and understand the source of each. Large-language models such as ChatGPT and Google’s LaMDA are trained on, among other things, the web, which is to say society’s sooty exhaust, carrying all the errors, mistakes, conspiracies, biases, bigotries, presumptions, and stupidities — as well as genius — of humanity online. When we blame an algorithm for exhibiting bias we should start with the realization that it is reflecting our own biases. We must fix both: the data it learns from and the underlying corruption in society’s soul. 

Chiang’s story is lossy in that he quotes and cites none of the many scientists, researchers, and philosophers who are working in the field, making it as difficult as ChatGPT does to track down the source of his logic and conclusions.

The lossiest algorithm of all is the form of story. Said Weinberger:

Why have we so insisted on turning complex histories into simple stories? Marshall McLuhan was right: the medium is the message. We shrank our ideas to fit on pages sewn in a sequence that we then glued between cardboard stops. Books are good at telling stories and bad at guiding us through knowledge that bursts out in every conceivable direction, as all knowledge does when we let it.
But now the medium of our daily experiences — the internet — has the capacity, the connections, and the engine needed to express the richly chaotic nature of the world.

In the end, Chiang prefers the web to an algorithm’s rephrasing of it. Hurrah for the web. 

We are only beginning to learn what the net can and cannot do, what is good and bad from it, what we should or should not make of it, what it reflects in us. The institutions created to grant print fixity and authority — editing and publishing — are proving inadequate to cope with the scale of speech (aka content) online. The current, temporary proprietors of the net, the platforms, are also so far not up to the task. We will need to overhaul or invent new institutions to grapple with issues of credibility and quality, to discover and recommend and nurture talent and authority. As with print, that will take time, more time than journalists have to file their next story.


Original painting by Johannes Vermeer; transformed (pixelated) by acagastya, CC0, via Wikimedia Commons

Writing as exclusion

DALL-E image of a quill, ink pot, and paper with writing on it. (Image: DALL-E)

In The Gutenberg Parenthesis (my upcoming book), I ask whether, “in bringing his inner debates to print, Montaigne raised the stakes for joining the public conversation, requiring that one be a writer to be heard. That is, to share one’s thoughts, even about oneself, necessitated the talent of writing as qualification. How many people today say they are intimidated setting fingers to keys for any written form — letter, email, memo, blog, social-media post, school assignment, story, book, anything — because they claim not to be writers, while all the internet asks them to be is a speaker? What voices were left out of the conversation because they did not believe they were qualified to write? … The greatest means of control of speech might not have been censorship or copyright or publishing but instead the intimidation of writing.”

Thus I am struck by the opportunity presented by generative AI — lately and specifically ChatGPT — to help people better express themselves, to help them write, to act as Cyrano at their ear. Fellow educators everywhere are freaking out, wondering how they can ever teach writing and assign essays without knowing whether they are grading student or machine. I, on the other hand, look for opportunity — to open up the public conversation to more people in more ways, which I will explore here.

Let me first be clear that I do not advocate an end to writing or teaching it — especially as I work in a journalism school. It is said by some that a journalism degree is the new English degree, for we teach the value of research and the skill of clear expression. In our Engagement Journalism program, we teach that rather than always extracting and exploiting others’ stories, we should help people tell their own. Perhaps now we have more tools to aid in the effort.

I have for some time argued that we must expand the boundaries of literacy to include more people and to value more means of expression. Audio in the form of podcasts, video on YouTube or TikTok, visual expression in photography and memes, and the new alphabets of emoji enable people to speak and be understood as they wish, without writing. I have contended to faculty in communications schools (besides just my own) that we must value the languages (by that I mean especially dialects) and skills (including in social media) that our students bring.

Having said all that, let us examine the opportunities presented by generative AI. When some professors were freaking out on Mastodon about ChatGPT, one prof — sorry I can’t recall who — suggested creating different assignments with it: Provide students with the product of AI and ask them to critique it for accuracy, logic, expression — that is, make the students teachers of the machines.

This is also an opportunity to teach students the limitations and biases of AI and large language models, as laid out by Timnit Gebru, Emily Bender, Margaret Mitchell, and Angelina McMillan-Major in their Stochastic Parrots paper. Users must understand when they are listening to a machine that is trained merely to predict the next most sensible word, not to deliver and verify facts; the machine does not understand meaning. They also must realize when the data used to train a language model reflects the biases and exclusions of the web as source — when it reflects society’s existing inequities — or when it has been trained with curated content and rules to present a different worldview. The creators of these models need to be transparent about their makings and users must be made aware of their limitations.

It occurs to me that we will probably soon be teaching the skill of prompt writing: how to get what you want out of a machine. We started exercising this new muscle with DALL-E and other generative image AI — and we learned it’s not easy to guide the machine to draw exactly what we have in mind. At the same time, lots of folks are already using ChatGPT to write code. That is profound, for it means that we can tell the machine how to tell itself how to do what we want it to do. Coders should be more immediately worried about their career prospects than writers. Illustrators should also sweat more than scribblers.
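What might teaching prompt writing look like in practice? A rough sketch follows, assuming the OpenAI Python client as it stood in early 2023 and a model name that may have changed since; the point is not this particular API but the craft of stating exactly what you want built.

```python
# A sketch of prompt writing as a skill: telling the machine what to build.
# Assumptions: the OpenAI Python client circa early 2023 (openai==0.27.x),
# an API key in the OPENAI_API_KEY environment variable, and a model name
# ("gpt-3.5-turbo") that may differ by the time you read this.
import openai

prompt = (
    "Write a Python function that takes a list of article headlines and "
    "returns the five most frequent words, excluding common stop words. "
    "Include a short docstring and an example call."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

# The craft is in the prompt: the clearer and more specific the request,
# the more useful the generated code tends to be.
print(response["choices"][0]["message"]["content"])
```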

In the end, writing a prompt for the machine — being able to exactly and clearly communicate one’s desires for the text, image, or code to be produced — is itself a new way to teach self-expression.

Generative AI also brings the reverse potential: helping to prompt the writer. This morning on Mastodon, I empathized with a writer who lamented that he was in the “I’m at the ‘(BETTER WORDS TK)’ stage” and I suggested that he try ChatGPT just to inspire a break in the logjam. It could act like a super-powered thesaurus. Even now, of course, Google often anticipates where I’m headed with a sentence and offers a suggested next word. That still feels like cheating — I usually try to prove Google wrong by avoiding what I now sense as a cliché — but is it so bad to have a friend who can finish your sentences for you?

For years, AI has been able to take simple, structured data — sports scores, financial results — and turn that into stories for wire services and news organizations. Text, after all, is just another form of data visualization. Long ago, I sat in a small newsroom for an advisory board meeting and when the topic of using such AI came up, I asked the eavesdropping, young sports writer a few desks over whether this worried him. Not at all, he said: He would have the machine write all the damned high-school game stories the paper wanted so he could concentrate on more interesting tales. ChatGPT is also proving to be good at churning out dull but necessary manuals and documentation. One might argue, then, that if the machine takes over the most drudgerous forms of writing, we humans would be left with brainpower to write more creative, thoughtful, interesting work. Maybe the machine could help improve writing overall.
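For readers who have never seen this kind of robot sportswriting, here is a bare-bones sketch of the idea. The box score and the template are hypothetical, and the real systems used by wire services are far more elaborate, but the principle is the same: structured data in, serviceable prose out.

```python
# A bare-bones sketch of structured-data-to-story generation: a box score in,
# a serviceable game recap out. The data and template here are hypothetical.
game = {
    "home": "Central High", "away": "Western High",
    "home_score": 21, "away_score": 14,
    "star": "J. Smith", "star_stat": "three touchdowns",
}

def game_story(g: dict) -> str:
    """Render a one-sentence recap from a structured game record."""
    winner, loser = (("home", "away") if g["home_score"] > g["away_score"]
                     else ("away", "home"))
    return (f"{g[winner]} defeated {g[loser]} "
            f"{g[winner + '_score']}-{g[loser + '_score']} on Friday night, "
            f"led by {g['star']} with {g['star_stat']}.")

print(game_story(game))
# Central High defeated Western High 21-14 on Friday night,
# led by J. Smith with three touchdowns.
```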

A decade ago, I met a professor from INSEAD, Philip Parker, who insisted that contrary to popular belief, there is not too much content in the world; there is too little. After our conversation, I blogged: “Parker’s system has written tens of thousands of books and is even creating fully automated radio shows in many languages…. He used his software to create a directory of tropical plants that didn’t exist. And he has radio beaming out to farmers in poor third-world nations.”

By turning text into radio, Parker’s project, too, redefines literacy, making listening, rather than reading or writing, the necessary skill to become informed. As it happens, in that post from 2011, I started musing about the theory Tom Pettitt had brought to the U.S. from the University of Southern Denmark: the Gutenberg Parenthesis. In my book, which that theory inspired, I explore the idea that we might be returning to an age of orality — and aurality — past the age of text. Could we be leaving the era of the writer?

And that is perhaps the real challenge presented by ChatGPT: Writers are no longer so special. Writing is no longer a privilege. Content is a commodity. Everyone will have more means to express themselves, bringing more voices to public discourse — further threatening those who once held a monopoly on it. What “content creators” — as erstwhile writers and illustrators are now known — must come to realize is that value will reside not only in creation but also in conversation, in the experiences people bring and the conversations they join.

Montaigne’s time, too, was marked by a new abundance of speech, of writing, of content. “Montaigne was acutely aware that printing, far from simplifying knowledge, had multiplied it, creating a flood of increasingly specialized information without furnishing uniform procedures for organizing it,” wrote Barry Lydgate. “Montaigne laments the chaotic proliferation of books in his time and singles out in his jeremiad a new race of ‘escrivains ineptes et inutiles’ (‘inept and useless writers’) on whose indiscriminate scribbling he diagnoses a society in decay…. ‘Scribbling seems to be a sort of symptom of an unruly age.’”

Today, the machine, too, scribbles.


We, the tweeters

The New European commissioned me to write this piece for folks over there, explaining what lies behind Musk’s claims of free-speech absolutism. 

Elon Musk, recent convert to the cult of far-right fascism known as the Republican Party, is currently suffering history’s worst case of buyer’s remorse. He claims to be upholding the principle of free-speech absolutism in offering amnesty to the United States’ insurrectionist-in-chief — Trump — and to countless malign actors whose noxious utterances got them kicked off Twitter.

But no. Musk and the far-right are not free speech absolutists. They veil their racism, misogyny, hate and institutional insurrection behind the cloak of free speech and the First Amendment. They claim that anyone who dares criticise them is cancelling them. They give speech a bad name.

The question now is whether Musk’s Twitter is the apotheosis of the American ethic of open public discourse. As a Guardian headline fretted: “Elon Musk’s Twitter is fast proving that free speech at all costs is a dangerous fantasy.” No. What he is doing does not lay bare faults in the First Amendment.

Note well that the First Amendment protects speech — as well as assembly and dissent — only from government interference. “Congress shall make no law… abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.” The First Amendment offers no protection for bilious blather in private settings.

Let us examine the components of speech. First, everyone has a birthright to speak. The original dream of the internet was that it would empower new voices — but there are no new voices, only those that for too long were muffled by big, old, white, corporate mass media. Twitter enabled Black Twitter to create, as André Brock Jr. wrote in Distributed Blackness, a “satellite counterpublic sphere” in which “certain Black users separate themselves from mainstream, offline, and online publics.” It is their speech that threatened the white right and provoked its resurgent racism. As print begat the Reformation, which begat the Counter-Reformation, so the net enabled #BlackLivesMatter, which in turn provoked the backlash of the January 6 insurrection.

Second, speech includes criticism. The American right believe they have an inalienable right to speak without challenge and so against every complaint they issue cries of “cancel culture!” On Twitter, the Canadian philosopher Regina Rini perceptively categorised what’s happening. On one side are folks who suggest new entries into the glossary of things that should and should not be said in decent, polite society — not calling women girls, for example, or respecting folks’ choice of pronouns. On the other side are those who resent being scolded for what they say; she calls them the Status Quo Warriors. In their interplay is the renegotiation of societal norms. It is ever thus.

Third, speech includes choice. When editors, publishers, producers and, yes, platforms choose what to carry and what not to carry in their own spaces, they, too, are exercising their freedom of expression, a right that requires protection. Compelled speech is not free speech. To insist — as Republican legislators in Congress and at least two states do — that platforms must carry the terrible opinions of terrible people abrogates the rights of the editor, host, or moderator. One cannot imagine someone marching into the office of Kath Viner at the Guardian insisting she publish the putrid pronouncements of Nigel Farage; why, then, should we consider arguments that social networks should be told whom and what they must carry?

Fourth, free speech includes bad speech. For who is to define bad? Recall the wisdom of John Milton in Areopagitica, pondering who should presume to be “made judge to sit upon the birth or death of books.” In the United States, we point with pride to the fact that Nazis were permitted to march in the heavily Jewish village of Skokie, Illinois, in 1977 and Larry Flynt was allowed to publish his horrid Hustler, for to protect speech — to protect it from government interference — is to protect the worst of it, allowing all to exercise their birthright of expression without authoritarian control but subject to the response of critics and the choice of publishers. This, we believe, is the hallmark of democracy.

And that is where we begin to diverge from Europe and the United Kingdom. I am often told that Europeans give other human rights precedence over free speech — privacy, right of public image, protection from hate.

Take, for example, the new UK online safety bill and protracted debate over requiring online platforms to take down content deemed “legal but harmful” — or in the more poetic description of Stanford legal scholar Daphne Keller, “lawful but awful.” Legislators never acknowledged the reverse-tautology of the doctrine, for if government demands that legal content be erased then logically that content becomes, de jure, illegal.

The legal-but-harmful clause was just erased from the bill, but other troubling remnants of moral panic remain: age verification to view pornography, government surveillance of personal communication with a ban on encryption (so much for the preeminence of privacy), and criminalisation of posting falsehoods (who, Milton might ask, shall determine official truth?). Index on Censorship declared that many of the provisions of the bill violate rather than protect human rights. The legislation’s aim is to make the UK “the safest place to be online” — this side of China, Iran, Turkey, Russia, Hungary, and North Korea. But the effect is to criminalise speech.

Internet regulation in the EU has produced a raft of unintended consequences, often granting platforms more, not less, power. Governments find themselves unable to cope with the scale of public speech online and so they deputise often unwilling intermediaries to do their dirty work. This has turned Facebook, Twitter, and Google into private regulators akin to the 17th-century Stationers’ Company and their executives into latter-day L’Estranges.

Germany’s NetzDG hate-speech law has put Facebook in the position of deciding what speech is manifestly illegal, leading to overcautious — that is, overzealous — policing of speech to avoid fines of up to €50 million. The European court decision on the right to be forgotten put Google in the position of deciding what should and should not be remembered — we should note that memory is how speech lives on. Article 17 of the EU Copyright Directive will surely lead to zealous policing of copyright, resulting in takedowns of innocent users’ comment, parody, and fair use. Article 15’s link tax, along with legislation in Australia that similarly forces platforms to pay for news content they link to, is likely what drove Facebook to stop carrying and financially supporting news in any form. I fear that equally ill-conceived legislation in the US, the Journalism Competition and Preservation Act, could have a similar effect on Google’s financial support of journalists.

Add to this welter of regulation the recently enacted EU Digital Services and Digital Markets Acts — which, for example, require hosts of online conversations to explain takedowns and offer appeals, an engraved invitation to trolling and harassment. The result is a crush of compliance work forced on Silicon Valley giants. What’s so wrong with that, you protest? They can afford it. Yes, but new competitors cannot. As Twitter descends into the hellscape that is Elon Musk’s fevered inferiority complex and we wish for alternatives to rise, I fear arduous regulation will scare away new entrants.

Here in the US, we face our own regulatory perils. The right and the left are engaged in a pincer movement against our best protection for online expression: Section 230 of 1996’s Communications Decency Act, described in the title of law professor Jeff Kosseff’s book as The Twenty-Six Words That Created the Internet. Section 230 attempts to protect the quality of public discourse by at once providing hosts, whether platforms or news organisations, a shield from liability for problems with content created by users, as well as a sword to enable them to moderate that content. The left has gone after the platforms for not taking down hate speech and so they decry the shield. The right has protested that the hate speech taken down is often theirs and so they aim at the sword.

If Section 230 is denuded, I fear for the fate of our best alternative to Twitter yet, Mastodon, an open-source network of thousands of independent, volunteer-run servers. No one owns Mastodon, so no Musk can take it over. There are no algorithms there and so far no ads. Moderation is in the hands of volunteers. If they are not protected from liability by Section 230 — and if further legislation in the UK and EU puts more demands on them as hosts of conversation — these volunteers may find themselves at best overloaded with the work of regulatory compliance and at worst sued and fined out of existence. The irony and unintended consequence of regulation aimed at Twitter and Facebook could be that we are stuck with them both in their worsened and weakened states.

In my upcoming book, The Gutenberg Parenthesis, I argue there are lessons to be learned from the age of print, especially now that we seem to be leaving it. I recall the first known call for censorship in 1470 by Niccolò Perotti, a Latin grammarian much offended by a shoddy translation of Pliny. He beseeched the Pope to appoint someone who “would both prescribe to the printers regulations governing the printing of books and would appoint some moderately learned man to examine and emend individual formes before printing…. The task calls for intelligence, singular erudition, incredible zeal, and the highest vigilance.” Note that Perotti was not actually calling for censorship. He wished instead for the establishment of institutions to assure quality. Those institutions, of editing and publishing, would soon follow.

Today, editors and publishers, as well as regulators, cannot cope with a new abundance of speech — an abundance I celebrate, for finally those excluded from mass media might have their say and seat at the table where norms and culture are deliberated. One reaction to this inundation is to cry that there is too much speech, but who shall determine whose speech is too much?

Another reflex, especially in Europe, is to regulate, to control, to play Whac-a-Mole with bad speech, a futile endeavour. What we need instead is new or updated institutions to discover, recommend, support, and improve good speech and speakers, whose information, art, and experiences we may now hear. That is the true fruit of free speech.