Escaping The Web's Dark Forest
Tales From The Dork Web #33
Have you ever seen photos on social media and then felt sad, inadequate, or furious? Ever written a comment, looked back some time later and regretted writing it? Why this happens is complicated, but it's by design. In this issue I’ll look at dark patterns, torture a Nobel Laureate's ideas, and throw in links to things that might help you escape the web’s Dark Forests.
If you’d like Tales From The Dork Web in your inbox, subscribe using the button below. RSS is also supported for those that prefer it.
If you know someone who might like this issue, please share it with them. This issue’s art comes from the brilliant Reza Farazmand’s Poorly Drawn Lines and Gunshow webcomics, and Shooley. Music is brought to you by Apparat. Press play, and read on.
Dark Forests And Cozy Webs
I’ll start with this nugget from my dear friend and font of wisdom Dennis Groves:
Frankly the doors of private communication have been blown off, and so we are hearing from the people who fell behind, something we used to be blind to.
He’s right. For the first time humans have almost unfettered access to global communication platforms. Social media eroded the divide between private and public spaces. Between the Arab Spring and President Trump, bad actors weaponised an optimistic vision of an always-connected public discussion space and turned it against us. The platforms knew of and supported these activities for profit. In the process they became bad actors, leading to today's social media hellscape.
Yancey Strickler’s Dark Forest Internet theory hits the nail on the head. Public online spaces are hyper-optimized for reaction in the name of engagement. To become visible is to become vulnerable, not just now but in some cases forever.
I always wanted to work with the garage door up, and last year I started moving towards it. Events early this year made me rethink. Instead I’ve shrunk my public online footprint. It’s not worth it. I’ve spent more time in private and semi-private spaces. Venkatesh Rao calls this the cozyweb and I can see why. Strickler says:
These are all spaces where depressurized conversation is possible because of their non-indexed, non-optimized, and non-gamified environments.
Non-indexed spaces keep their history, but make digging through it hard. Non-optimized experiences lack optimized constraints. Non-gamified environments lack gamified behaviours.
The Curated Horror
I’ve linked to it before, but Amy Hoy’s brilliant piece on how the blog’s chronological nature changed the web deserves a mention. While highlighting how early blogging platform Movable Type changed the web, she inadvertently found the root of a broader problem with human behaviour:
Here’s the crux of the problem: When something is easy, people will do more of it.
Followed by this:
But once you are given a tool that operates effortlessly — but only in a certain way — every choice that deviates from the standard represents a major cost.
Social media inherited and weaponised the chronological weblog feed. Showing content based on user activity hooked us in for longer. When platforms discovered anger and anxiety boosts screen time, the battle for our minds was lost.
Until this point the fundamental purpose of software was to support the user’s objectives. Somewhere, someone decided the purpose of users is to support the firm’s objectives. This shift permeates the Internet. It’s why basic software is subscription-led. It’s why there’s little functional difference between Windows’ telemetry and spyware. It’s why leaving social media is so hard.
As with chronological timelines, users grew to expect these patterns. Non-commercial platforms adopted them because users expect them. While not as optimized as their commercial counterparts, inherited anti-patterns can lead to inherited behaviours.
Or to paraphrase Amy Hoy:
If being an asshole is easier than not being an asshole, people will be assholes online. If a tool operates effortlessly - but especially if you’re being an asshole - every choice that deviates from being an asshole represents a personal cost.
Content stays up when it holds value for the platform. If it didn’t, they’d charge to keep it up. Platforms fetishizing the new push users to new content regardless of context. Does a cooking channel need new recipe videos every few days? At what point does YouTube have enough Mac and Cheese recipe videos?
The flip side of this is content removal friction. I’m lucky enough to have a right to be forgotten under GDPR. Deleting a lobste.rs account, posts, and comments was automatic. When I asked HN to remove my posts, comments and account due to online harassment, they said they were swamped and nothing happened. Ten months later I told them I’d talk about my experience the following week. As a fallback I asked them to disown my content and delete my account. Four hours and 48 minutes later I was told it was done. It’s still all there, just not linked to my account. Hacker News users can check out, but unless they’re California residents, they can’t really leave.
Scoring systems create user asymmetries. Perceived status from likes, badges, and karma cost platforms nothing. Participation costs users time, effort, and for some platforms, money. Gamifying discourse incentivises gaming behaviours. It reinforces groupthink, biases and self-censorship.
These elements often become best practices associated with success. Rarely is any individual person doing something wrong. Adoption is mostly emergent. Designers, builders and users are just following Hoy’s maxim: if it’s easy, people will do more of it.
Thinking Fast And Slow
In his book Thinking Fast And Slow, Nobel Laureate Daniel Kahneman describes two systems of thought:
System 1 is fast, reactive thinking, often on incomplete information
System 2 is slow, higher-level thinking/reasoning, suited to analysing data
System 1 handles simple learned behaviours, emotional reactions and fast reactions. It helps people drive, stops them from being eaten by lions and is why people throw things when they’re angry. System 2 is more complex and requires focused sustained attention to complete a task. Under Kahneman’s model, System 1 is the default state, calling on System 2 when needed. His book has flaws but is useful as a model for some online behaviours. However I’m not a psychologist and am often wrong.
System 1 is more fallible than System 2, and:
System 1 has biases; systematic errors that it is prone to make in specified circumstances. It sometimes answers easier questions than the one it was asked, and it has little understanding of logic and statistics.
System 1 appears to prioritise speed over accuracy, which makes sense for Lion-scale problems. System 1 even cheats to avoid using System 2. When faced with a difficult question, System 1 can substitute it and answer a simpler one. When someone responds to a point that was never made that could be a System 1 substitution.
Kahneman called this an availability heuristic. This affects perception based on how easily something comes to mind. One day you might read an article on slowness in Ruby. When told about a new Rails app you might expect it to be slow.
You’re probably familiar with the phrase “correlation is not causation”. Kahneman highlighted that System 1 doesn’t distinguish between the two. System 1 doesn’t care about accuracy or amount of information. It only needs to quickly turn it into something vaguely coherent.
Why Is Jack So Angry?
Online, System 1 effects cut several ways, such as when a System 1 substitution leads to a System 1 conclusion. Online “Why is this person disagreeing with me?” easily becomes, “Why is this person attacking me?” The System 1 answer is often “Because they’re a bad person. If they were a good person they wouldn’t have attacked you”.
Innuendo Studios’ series mostly focuses on Gamergate with a fictitious Angry Jack character. You might stay away from Gamergate, but two videos make interesting meta points. It’s worth a watch even just for the first couple of minutes. From the first:
Has someone ever told you they’re vegan when offered food, or at a party that they don’t drink? Have you ever felt bad hearing that, even internally?
This video looks at how practical statements can be misinterpreted as identity judgements. If they’re vegan and I eat meat, what does that say about me as a person? In-person cues and boundaries limit reactions. Those cues and boundaries don’t always exist online. Platforms often optimize for the opposite outcome.
Part 5 looks at some western cultures’ belief in an innate sense of good and bad, and a perceived imperative to maintain moral integrity. A perceived slight can appear as an attack on that person’s sense of moral integrity. This escalates quickly on reaction-optimized platforms.
Online interactions can form the whole of a person’s experience of someone else. In some cases it’s not even interactions. If we’ve never met, you might form your views of who I am based on what you’ve read, or what others might say about me. In forming conclusions our brains rarely consider the incompleteness of our data. Kahneman describes this type of phenomenon as WYSIATI (What You See Is All There Is).
Maya Angelou once said:
I've learned that people will forget what you said, people will forget what you did, but people will never forget how you made them feel.
If your feelings about someone you've never met are fixed and 1-dimensional, they might be rooted in System 1 and WYSIATI. If your interactions with someone are all negative, maybe there’s something else going on outside of WYSIATI.
Being an asshole is usually temporary. Few people live their lives in a state of zen sphinctitude. Even realising your view of someone online might not resemble them as a whole person, System 1 has more traps ahead.
I Don’t Alieve It!
A person can hold a belief so strongly that to them it becomes fact. Beliefs are propositions a believer holds to be true, not the facts themselves.
Just as System 1 sets and reinforces beliefs, it sets and reinforces aliefs. Coined by philosophy professor Tamar Gendler, an alief is a belief-like attitude in tension with explicit beliefs. Someone may believe a cleaned cup is safe to drink from, but alieve it isn’t. Alice might believe Bob is acting in good faith, but may alieve he isn’t.
This can be seen in online vaccination discussions. In general, vaccinations are a good thing, but people might think differently about specific policies or programme elements. For example, a person might alieve some vaccines aren't sufficiently tested for use with some groups.
Labelling them as an anti-vaxxer might feel like shaming them into change. This judges the aliever on the basis of their alief while ignoring their beliefs. The aliever may feel misjudged and rejected. After all, they believe they’re a good person, with beliefs good people share. Someone pushed out from one group is a potential recruit for an opposing one. Is that person really an anti-vaxxer, or is it WYSIATI? As a result, such labels can be self-defeating.
The Map And The Territory
Sometimes online interactions turn sour due to territory. “Territory” means the discussion landscape. Sometimes argument boundaries are clearly defined - Mac and Cheese vs Macaroni Cheese, Windows vs Linux etc. Sometimes they aren’t. When points raised fall within that territory, discussions tend to be positive. When points stray out of that territory, discussions can end up being not even wrong.
Not Even Wrong originates with physicist Wolfgang Pauli, who used the phrase, "Das ist nicht nur nicht richtig, es ist nicht einmal falsch!" ("That is not only not right, it is not even wrong!") to describe an unclear research paper. Online this could be anything derailing a discussion by dragging it out of its territory.
When I published a week with Plan 9 I wanted people to try Plan 9 for themselves. I hoped people would discuss the merits and drawbacks of 9p, ACME or NDB. Instead, a paragraph unrelated to the features I explored completely derailed online discussion. I thought I'd pre-empt questions around why I didn’t pursue the most popular distribution. This was about initial responses to enquiries around content, not the content itself. I thought it wouldn’t form part of the territory. I was wrong.
It turns out that for some people it was the territory. Instead of discussing Plan 9's technical facets they discussed what bad people a user sub-group were. The sub-group weren’t bad people. Most weren’t involved. But System 1 told many people otherwise.
When people feel attacked a source is often easy to find. If the attack is delivered from outside a group, System 1 suggests a person outside of the group must have planned and orchestrated it. Asserting that an event arose through deliberate action or intent is an animistic fallacy. There are many, many cognitive fallacies and biases we routinely fall for. We can try to avoid them but we can’t guarantee success every time.
System 1 & 2 In Online Space Design
“I am (proud / humble / honored) to be (insert title or achievement) with (insert some names of people with higher status, or whose attention you want to attract, or whose reputation you want to be associated with)” - LinkedIn Proverb
It's unsurprising that many commercial public online spaces rely on System 1. The push towards action and reaction form part of letting social media think for you. If users had to consider nuance or reason through complex problems they might put their phone down. Going back to Kahneman’s definitions:
System 1 is fast, reactive thinking
System 2 is slow, higher-level thinking/reasoning
It’s easy to believe technical sites promote System 2 thinking. Whether a site promotes System 2 comes from design, not content. Behaviours change as platforms mature. What encourages System 2 on smaller platforms may reward System 1 interactions on larger ones. When users are pushed to interact regardless of expertise it leads to problems like The Fastest Gun In The West, and Not even wrong.
Capital-based systems breed capital-like effects with capital-like consequences. I know this because there are non-capital based systems deliberately designed without those features. Such spaces aren’t problem-free. They just have different issues.
Secure Scuttlebutt (SSB) is a decentralized platform/protocol for building shared offline/online experiences. Like BitTorrent, it’s peer-to-peer. Like Bitcoin, it uses cryptographically signed append-only logs. Applications built on top provide offline-first, invite-only ways to share messages. Friend-of-a-friend visibility provides a right to speech but not a right to be heard. It's high on user autonomy and each user's experience is different. Its invite-only model creates a semi-private/semi-public space. My experience has been overwhelmingly positive, which is why I haven’t really spoken about it. I’ve had one poor interaction on SSB, and I don’t want that to change.
SSB is definitely not ready for mainstream use. It’s very much a work in progress. There’s no score, although individual posts can be liked. Moderation is left to the user. Getting started can be hard and confusing. SSB is slow-paced and doesn’t try to hook you in. It’s pretty easy to try it and not come back for months.
There are weird echo chamber effects. My SSB is almost entirely anarcho-solarpunk, fringe art, homebrew hardware, radio, and retro computing. I’m sure there’s more out there, but friend-of-a-friend visibility means I don’t see it. To try it, grab a copy of Patchwork or Manyverse, join a pub and dive in.
I also use NNTP servers. These servers aren’t connected to the main Usenet hierarchy. They’re one-off servers (although some are looking into federating). As with SSB, Usenet has an element of friction, structure and user-oriented moderation. Usenet’s scoring and filtering tools are brilliant. Tavis Ormandy’s NNTPit applied this to Reddit with great results.
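For readers who never used Usenet scoring: a scorefile assigns points to articles based on header matches, hiding low scorers and highlighting high ones. A minimal sketch in the style of the slrn newsreader's scorefile format (group names and addresses here are hypothetical):

```
% Hypothetical slrn-style scorefile. Scores are summed per article;
% strongly negative totals hide articles, positives raise their rank.

[comp.os.plan9]
% Boost threads about a tool I care about
Score: 20
Subject: acme

[*]
% Down-rank obvious flame-bait in every group
Score: -100
Subject: [Ff]lamewar

[*]
% Killfile a persistent troll (address is made up)
Score: -9999
From: troll@example\.invalid
```

The point is that filtering lives with the reader, not the platform: each user decides what disappears from their own view, which is the user-oriented moderation mentioned above.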
Meanwhile, For Normal People
Living in smaller spaces there are still times I need to interact on mainstream platforms. I don’t want to be an itinerant asshole, but online cognitive empathy takes effort to develop. Lately I’ve been thinking about printing this and sticking it next to my monitor:
It’s from the excellent Slate Star Codex piece on Varieties of Argumentative Experience. Paul Graham’s essay on how to disagree makes for a good companion piece. This entire issue sits in the meta-debate Sphinx on the right of the image. SSC’s Scott Alexander sums this up well:
Meta-debate is discussion of the debate itself rather than the ideas being debated. … I’ve placed it in a sphinx outside the pyramid to emphasize that it’s not a bad argument for the thing, it’s just an argument about something completely different.
Liam Rosen summarizes and expands on Scott Alexander’s post, describing what an argument should be:
An argument should be a collaboration between two people to find the truth.
I can’t even remember if I had a Twitter experience like that.
I try to climb the left side of the pyramid where I can with mixed success. Sometimes I lack the spoons. Sometimes the black dog says no. Still, I try. Most people don’t know of the pyramid. They just don’t want to be publicly seen to be stupid, bad people, or disagreed with. To avoid fighting on the climb up I try to make my point twice at most then detach. Don’t expect to change someone’s mind on the Internet, but be open to changing yours when they climb the pyramid.
Devine Lu Linvega’s digital garden has a great page on discourse. It opens with Rumi’s three gates of speech:
Before you speak, let your words pass through three gates
At the first gate, ask yourself, “Is it true?”
At the second gate ask, “Is it necessary?”
At the third gate ask, “Is it kind?”
This works well for in-person discussion. Online discourse is structured in opposition to these gates. I came up with these questions, which work for me:
What do you want to gain from this interaction?
Even misinterpreted, will it bring your desired outcome?
How will you view your words a year later?
The questions (naively) promote System 2 reasoning through abstraction. When I think about these three points, most of the time I no longer comment at all. Whether that’s a product of System 2, or System 1 just giving up I don’t know. It works for me and that’s all I need.
I often gain more researching a comment than participating in discussion. System 1 tells me to hunt for things supporting my biases. The pyramid reminds me otherwise. This issue is the result of some of that research.
After my Plan 9 piece, a 9Front user mentioned it used a racial slur. I’d never heard it used as a slur, only within the context of desktop customisation. I could’ve built the world’s worst straw man and taken the System 1 chute to the worst possible interpretation. I searched for the term + racism. My search engine substituted the term and showed no results. “Aha! This guy was just one of those 9Front meanies trying to gotcha me”, said System 1. Thankfully I spotted System 1 in action.
I saw the search engine’s term switch, found the real term’s meaning and changed the content. I was right to assume their concerns were genuine from the start. They were right to assume it wasn’t being used with malicious purpose in-mind. There are platforms where this wouldn’t have worked. There are platforms where for a moment, under other circumstances it didn’t. I became the asshole, something I regret.
Straw men are built when we try to find the worst possible interpretation of someone's point. Trying to find the best interpretation of someone’s point is called steel-manning. Steel-manning keeps to the left side of the pyramid. Philosopher Daniel Dennett coined the term as a debate/argument technique. It shares much in common with the principle of charity.
Leaving The Dark Forest
Ten years ago my life was extremely online. I’ve been the asshole so many times I can’t even count. Was I an asshole? Sure, but the exploitation of mental state in public spaces has a role to play. It’s a strange game. The only way to win is not to play.
Commercial platforms are filled with traps, some inherited, many homegrown. Wrapping it in Zuck’s latest bullshit won’t lead to change. Even without inherited dark patterns, behaviours become ingrained. Platforms designed to avoid these patterns need to consider this if exposed to the Dark Forest.
For everything else it’s becoming easier to just stay away. There are so many private and semi-private spaces far from the madding crowd. You just need to look. I did. I lost followers, but made friends.
Things You May Have Missed
Darren Garrett writes a visual essay about the future we were promised. Speaking of futures that never arrived, the USSR’s Project Sphinx would’ve been incredible if it were ever made. Robin Sloan writes about The Slab and The Permacomputer. Gordon Brander suggests notes are conversations across time.
Adrian Hon discusses what Alternate Reality Games can teach us about QAnon. Mark Pesce shares the joy of disobeying your phone.
Joep Beving is a Dutch composer who does magic with the piano. The song September is taken from his Henosis album. I don’t know where this quote came from, but applied to online interaction, it really speaks to me:
You cannot heal in the environment that made you sick.
If online interaction leaves you feeling bad, try some points from Datagubbe’s list. I’ll be back with more Tales From The Dork Web next month. If you’d like it in your inbox (careful now), you can sign up below.