Adam Winfield

twitter.com/adamwinfield

The Web’s Closed Hell Future and The Hope of A New Decentralized Internet

The net can be thought of as a gigantic battlefield. Today there are a few impenetrable forts (to which everyone is invited so long as they play by the rules and give up certain freedoms), thousands upon thousands of skirmishes in lawless plains, and millions of scattered corpses across a charred wasteland.
Efforts driven by market forces and the hunger for political control are underway in both Silicon Valley and shady government bureaus across the globe to expand the forts until their walls butt up against each other and the entire web and internet infrastructure falls under elite dominion. Speaking out against this plot makes no difference – they don’t care what the people want, only what serves their respective interests.
Whether the net really could be totally locked down by a corporate-government coalition of forts is a question for a more technically inclined writer. My take rests on the assumption that it is possible, simply because of the financial and political rewards at stake. Go here for a more technically adept breakdown of the present day, and here for a dummy’s primer on the closing of the internet.
What I can offer is some food for thought on how the net might evolve over, say, the next decade if the worst case scenario plays out. I expect the walled garden forts to strangle the open internet to within an inch of its life, but believe there’s just enough dynamism in the pro-open space to keep it breathing. This dynamism will grow as more people wake up to the merciless expansion of the forts and the escalating removal of their freedoms.
Once you make peace with the idea that the forts will become increasingly hostile to your very existence – supposing you want to do anything other than mindlessly consume certified ‘safe’ content – it can be amusing to imagine how things will unfold as you watch from the outside. You might scoff at the suggestion that the situation could ever get so bad that you – a more sophisticated web user – would have zero interest in spending time inside the closed forts, so I’m here to argue otherwise.
Imagine a typical web user today, as far as such a person exists. This user mostly accesses the web through a handful of apps, and has by now all but forgotten about ‘www.’. Hyperlinks open in in-app browser windows, meaning their online experience is playing out almost exclusively within these apps. This shift brings the forts a step closer to total control of what that user sees.
The forts’ next move will be to restrict hyperlinks to a selection of authorized sites (other forts, ‘trusted’ news sources, and mainstream commercial and civil services) in the name of security. If someone clicks a friend’s link to an unauthorized site, the app will show an error message. The typical user will still want to search the web for specific information, services or entertainment, of course, and those searches will run through ‘search apps’ that return carefully curated results. The vast majority of users, unconcerned about only being able to view and link to certain sites, will simply adapt to the new reality and carry on unperturbed. Meanwhile, traditional browsers will be quietly dropped from devices.
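To make the mechanism concrete, here is a minimal sketch of the kind of allowlist link filtering an in-app browser could apply. The domain names, error URL and function are invented purely for illustration – no real platform’s code is being described.

```python
# Hypothetical sketch of allowlist-based link filtering inside a fort's app.
# The authorized domains and the error destination are invented examples.
from urllib.parse import urlparse

AUTHORIZED_DOMAINS = {"fort.example", "trustednews.example", "bank.example"}

def resolve_link(url: str) -> str:
    """Return the URL if its domain is authorized, else route to an error page."""
    domain = urlparse(url).hostname or ""
    # Allow exact matches and subdomains of authorized sites.
    if any(domain == d or domain.endswith("." + d) for d in AUTHORIZED_DOMAINS):
        return url
    return "app://error/unauthorized-destination"

resolve_link("https://trustednews.example/story")  # passes through unchanged
resolve_link("https://indieblog.example/post")     # redirected to the error page
```

The point of the sketch is how little machinery this takes: a static set of domains and a dozen lines of glue, applied silently to every tap on a hyperlink.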
Most will in fact welcome this total centralization and standardization, because the closed web will be so simple, safe, efficient and easy to navigate. The forts still face one major problem, however, and that’s controlling what users say, do and think on their platforms. This is where machine learning/AI comes in.
You can be sure the forts are throwing everything at getting auto-censorship and social/psychological engineering right. There’s endless money and political power to be grabbed by creating harmless, heavily-policed ad-friendly sandpits designed purely to encourage senseless consumerism, Huxleyan escapism and Orwellian civil obedience.
Consider how they’re already talking about such things as ‘illegal information’ and ‘algorithmic fairness’. The EU has “ordered [Facebook] to remove…comments…declared to be illegal”, and a document leaked from Google contained the sentence “I propose we make machine learning intentionally human-centered and intervene for fairness.” As the machine learning system improves, it will get better at detecting what governments and big tech classify as ‘hate speech’ and ‘illegal content’, as well as the likes of ‘unconscious bias’ and ‘implicit stereotyping’.
Expect to see phrases such as “aggravating social disharmony” and “disrupting truthful narratives” enter the lexicon as justifications for removing politically-inconvenient and ad-hostile discussion. These terms will be invented and disseminated by Content Health Officers, Heads of Equitable Innovation and the like. Naturally, there will be pesky ‘trolls’ trying to get around the system for a lark, and the forts will continue employing tens of thousands of sweatshop workers to clean up whatever slips through.
Stopping ‘trolls’ will be made easier by removing anonymity – fort users will be required to disclose and verify their identities with official documentation and/or facial recognition. With anonymity no longer an option, there’ll be more witch hunts against ‘problematic’ users and lives will be shattered hourly. Bank/crypto accounts will be locked, jobs will be lost, and child protection services will be called. “It will become a game all unto its own, with high scores for ‘scalps’ that were claimed.” Worse still, once the social credit score is recognized by Western law, fort users had better be on their best online behavior, unless they don’t mind being essentially shunned from society with barely a hint of due process.
Those at the cutting edge of the social justice movement will discuss such ideas as ‘content equity balancing’ – if too much content is being posted by men, for example, their ability to post will be disabled until it balances out. There will also be much talk about ‘community health’, i.e. restricting gatherings of certain kinds – such as large groups of users rallying around ‘extremist’ opinions – enforced by algorithms monitoring traffic flows and user demographic data.
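The imagined ‘content equity balancing’ rule is trivial to implement, which is part of what makes the scenario plausible. Below is a toy sketch; the group labels, tolerance threshold and quota logic are all invented for illustration.

```python
# Toy sketch of a "content equity balancing" posting quota.
# Group labels and the tolerance value are invented examples.
from collections import Counter

def may_post(author_group: str, recent_posts: list[str],
             tolerance: float = 0.55) -> bool:
    """Block a group from posting once it exceeds its share of recent posts."""
    counts = Counter(recent_posts)
    total = sum(counts.values())
    if total == 0:
        return True
    share = counts[author_group] / total
    return share <= tolerance

history = ["group_a"] * 7 + ["group_b"] * 3
may_post("group_a", history)  # blocked: 70% share exceeds the 55% tolerance
may_post("group_b", history)  # allowed: 30% share is under the threshold
```

Note that the rule needs nothing more exotic than a frequency count over recent posts tagged with demographic data – exactly the data the forts already hold.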
If you’re thinking this is all still a bit too messy for the forts, then you’re right. One final evolution is required, and that will come in the form of the pre-posting detection system. As the machine becomes more advanced, it will acquire the ability to catch problematic content before it’s posted. If a piece of content breaks the rules – whether it’s text, video or image – the system will grey out the Post button until it’s fixed. If a user thinks the system has incorrectly deemed content inappropriate, he can lodge a complaint with an image or footage of the post attached, but shouldn’t hold his breath for a response.
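The pre-posting gate described above can be sketched in a few lines. The banned-term list here is a toy stand-in for the machine learning classifier the forts would actually deploy; every name in it is invented for illustration.

```python
# Illustrative sketch of a pre-posting detection gate that greys out the
# Post button. The banned-term list is a toy stand-in for a real classifier.
BANNED_TERMS = {"wrongthink", "forbidden-topic"}

def post_button_enabled(draft: str) -> bool:
    """Return False (button greyed out) if the draft trips the filter."""
    words = draft.lower().split()
    return not any(term in words for term in BANNED_TERMS)

post_button_enabled("hello world")           # True: button stays active
post_button_enabled("some wrongthink here")  # False: button greyed out
```

Swapping the word list for a trained model changes the accuracy, not the architecture: the draft is scored client-side before it ever leaves the compose box, so nothing ‘problematic’ is even submitted.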
There, now things are much more serene. But the forts aren’t done yet, far from it. The next phase is no less than bona fide mind control: the era of neuromarketing, thought surveillance, the brain-computer interface. The masses will be led with medical precision down mapped psychological pathways, tricked into buying even more things they don’t need, worshipping at the feet of ever more degenerate celebrity icons and phony political distractions. We arrive now at the twilight of their humanity, and where their journey goes from here is beyond the limits of my imagination.
So the future inside the forts looks bleak, but what about the slim strip of land outside – the aforementioned soon-to-be-barely-breathing decentralized open internet? This infrastructure – let’s call it the new net – will remain ramshackle by its very nature, patched together like a kind of digital Frankenstein, reminiscent of – or perhaps built on – the Tor project. It’ll be starved of funds and desperate for skilled developers and community managers. A creatively fertile space through necessity, the new net will see the forts steal its best ideas for their own platforms. Meanwhile, the forts’ news services will report insincerely on the heroes of the new net, vilifying them as hateful criminals and painting the new net as a depraved, dangerous place no sane person would want to visit.
Most architects of the new net will strive to remain anonymous, because they won’t want the hassle, and may even keep their day jobs coding for the forts. Those who choose to make themselves public figures will be under constant pressure from the powers that be, deprived of a free citizen’s privileges.
There are other possibilities. Curtis Yarvin, aka Mencius Moldbug, thinks we can build a “new internet on top of the old internet” by replacing client-server computing with a p2p model. That’s what Urbit – a project Yarvin worked on but left in January – could become. This excellent article by Isaac Simpson calls it a “total system overhaul, a potential Copernican Revolution,” because it could strip the forts of their power by bundling web services into “a command center for your ‘personal server.’” Twitter user @bronzejaguar calls Urbit “the only viable solution [to] make computing revolve around humanity.”
If Urbit or something like it succeeds, our dark web digital Frankenstein might not be needed after all. But this story is unlikely to have such a happy ending. Get used to the idea of stepping out of the forts into a strange and scrappy new online world. Chances are the experience will at times be frustrating and difficult, but I promise it will be more fun and rewarding than the forts’ bland and barren hell-web.
Read my free novella Under-Toronto, set in a future with the internet under total corporate-government control.
Twitter: @adamwinfield
Blog: Palimpsest
