For decades technology companies have enjoyed a near-unbroken run of great publicity. Products and services are lauded as shiny and covetable. Adoption is couched as inevitable. Direction goes unquestioned. Engineering genius is assumed. And a generous margin is indefinitely applied to gloss over day-to-day errors (‘oh, just a few bugs!’) — allowing problematic functioning to be normalized and sanctioned in all but a handful of outlier instances.
The worst label these companies have generally had to worry about is being called ‘boring’. Or, at a push, overly addictive.
Tech giants have been given space to trumpet their products as revolutionary! Breakthrough! Cutting-edge agents of mass behavioral change! To, on the one hand, tell us their tools are actively restructuring our societies. Yet also fade into the background of the conversation the moment any negative impact gets raised.
Then they prefer to stay silent.
When forced, they might put out a blog post — claiming their tools are impartial, their platforms neutral, their role mere ‘blameless intermediary’.
The not so subtle subtext is: The responsibility for any problems caused by our products is all yours, dear users.
Or at least that was the playbook, until very recently.
What’s changed is that lately the weight of problems being demonstrably attached to heavily used tech services has acquired such a gravitational and political pull that it’s becoming harder and harder for these businesses to sidestep the concept of wider societal and civic responsibilities.
Whether it’s Facebook and election disinformation. Google’s questionable choices in information ranking and ad monetization. Amazon’s appetite for crushing traditional retail and evading tax. Airbnb diluting local neighborhoods and pushing up rents. Uber being outed as a rule-breaker and a bully — again and again. Or Twitter providing a safe place for Nazis to spread violent hate speech and misogynists to harass women.
Libertarians are unlikely to object to any of this, of course, but it really is long overdue that the rose-tinted glasses came off the liberal view of tech companies.
The warning signs have been there for some years now. Few apparently cottoned on.
The honeymoon is over
Silicon Valley’s creativity may have been seeded in the 1960s by hippy counterculture but the technological powerhouse its community constructed has graduated from hanging around in communes to churning out some of the most fervent capitalists in human history.
Growth is its icon, now. Power the preferred trip. And free love became voyeuristic data capture.
You may champion capitalism and believe, of all available systems, it alone delivers the best and widest societal benefits — albeit trickle-down economics is a desiccating theory still in dire need of a flood… (And that’s before you even start to factor in advancing automation destroying lower-skilled jobs.)
But the messages tech giants have used to sell their services have hardly amounted to an honest summary of their product propositions. That would require their marketing to confess to something more like this: ‘Hi! We’re here to asset strip your personal data and/or public infrastructure to maximize our revenues and profits any way we can — but at least you’re getting to use our convenient service!’
Instead they’ve stood behind grand statements about making the world more open and connected. Organizing information and making it universally accessible. Living like a local. Having a global mission. And so on and on.
They’ve continued to channel hippyish, feel good vibes. Silicon Valley still stuck on claims of utopianism.
This of course is the slippery lie called marketing. But tech’s disingenuous messages have generally been allowed to pass with far less critical scrutiny than gets applied to companies in all sorts of other industries and sectors.
A consequence of what? On some level it seems to be the result of an uncritical awe of gadgetry and ‘techno-newness’ — coupled with a fetishization of the future that’s greased by ‘association attachment’ to sci-fi themes that are in turn psychologically plugged into childhood nostalgia (and/or fueled by big Hollywood marketing budgets).
On the other hand it may well also be a measure of the quantity of VC funding that has been pumped into digital businesses — and made available for polishing marketing messages and accelerating uptake of products through cost subsidization.
Uber rides, for example, are unsustainably cheap because Uber has raised and is burning through billions of VC dollars.
You don’t see — say — big pharma being put on the kind of pedestal that tech giants have enjoyed. And there the products are often literally saving lives.
Meanwhile technologists of the modern era have enjoyed an extended honeymoon in publicity and public perception terms.
Perhaps, though, that’s finally coming to an end.
And if it is, that will be a good thing. Because you can’t have mature, informed debate about the pros and cons of software powered societal change if critical commentary gets shouted down by a bunch of rabid fanboys the moment anyone raises a concern.
Money for monopolizing attention
The long legacy of near zero critical debate around the de-formative societal pressures of tech platforms — whose core priority remains continued growth and market(s) dominance, delivered at a speed and scale that outstrips even the huge upheavals of the industrial revolution — has helped entrench a small group of tech companies as some of the most powerful and wealthiest businesses the world has ever known.
Indeed, the race is on between tech’s big hitters to see who can become the first trillion dollar company. Apple almost managed it earlier this month, after the launch of its latest iPhone. But Alphabet, Facebook, Amazon and Microsoft are all considered contenders in this insane valuation game.
At the same time, these companies have been disrupting all sorts of other structures and enterprises — as a consequence of their dominance and power.
Like the free Internet. Now people who spend time online spend the majority of their time in a series of corporate walled gardens that are ceaselessly sucking up their input signals in order to continuously micro-target content and advertising.
Social media behemoth Facebook also owns Instagram, also owns WhatsApp. It doesn’t own your phone’s OS but Facebook probably pwns your phone’s battery usage because of how much time you’re spending inside its apps.
The commercially owned social web is a far cry from the vision of academically minded knowledge exchange envisaged by the World Wide Web’s inventor. (Tim Berners-Lee’s view now is that the system is failing. “People are being distorted by very finely trained AIs that figure out how to distract them,” he told The Guardian earlier this month.)
It’s also a seismic shift in media terms. Mass media used to mean everyone in a society watching the same television programs. Or reading news in the same handful of national or local newspapers.
Those days are long gone. And media consumption is increasingly shifting online because a few tech platforms have got so expert at dominating the attention economy.
More importantly, media content is increasingly being encountered via algorithmically driven tech platforms — whose AIs apparently can’t distinguish between deliberately skewed disinformation and genuine reportage. Because it’s just not in their business interests to do so.
Engagement is their overriding intent. And the tool they use to keep eyeballs hooked is micro-targeted content at the individual level. So, given our human tendency to be triggered by provocative and sensationalist content, it’s provocative and sensationalist content the algorithms prefer to serve. Even if it’s fake. Even if it’s out-and-out malicious. Even if it’s hateful.
An alternative less sensationalist interpretation or a boring truth just doesn’t get as much airplay. And easily gets buried under all the other more clickable stuff.
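The dynamic described above can be sketched as a toy ranker that scores content purely by predicted engagement, with provocation rewarded and accuracy simply absent from the formula. To be clear, the fields and weights below are illustrative assumptions for the sake of the argument — this is not any platform’s actual algorithm:

```python
# Hypothetical sketch of an engagement-only feed ranker.
# Field names and weights are invented for illustration; no real
# platform's ranking system is being reproduced here.
from dataclasses import dataclass

@dataclass
class Item:
    headline: str
    predicted_clicks: float   # a model's click estimate for this user
    outrage_score: float      # sensationalism proxy, 0..1
    is_accurate: bool         # known to fact-checkers, say

def engagement_score(item: Item) -> float:
    # Note what is *absent*: accuracy plays no part in the score,
    # while provocation is rewarded because it drives clicks.
    return item.predicted_clicks * (1.0 + item.outrage_score)

feed = [
    Item("Dull but true budget analysis", 2.0, 0.1, True),
    Item("Outrageous fabricated scandal", 3.0, 0.9, False),
]
ranked = sorted(feed, key=engagement_score, reverse=True)
# The fabricated story ranks first (score 5.7 vs 2.2) despite being false.
```

The point of the sketch is structural: so long as the objective function contains no term for truth, the ‘boring truth’ loses the ranking fight by construction.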
These algorithms don’t have an editorial or a civic agenda. Their mission is to optimize revenue. They are unburdened by considerations of morality — because they’re not human.
Meanwhile their human masters have spent years shrugging off editorial and civic responsibilities which they see as a risk to their business models — by claiming the platform is just a pipe. No matter if the pipe is pumping sewage and people are consuming it.
Traditional media has its own problems with skewed agendas and bias, of course. But the growing role and power of tech platforms as media distributors suggests the communal consensus represented by the notion of ‘mass media’ is dissolving precisely because algorithmic priorities are so single-minded in their pursuit of engaged eyeballs.
Tech giants have perfected automated, big data fueled content customization and personalization engines that are able to pander to each individual and their peculiar tastes — regardless of the type of content that means they end up pushing.
None of us know what stuff another person eyeing one of these tech platforms is seeing in any given moment. We’re all in the dark as to what’s going on beyond our own feeds.
Many less techie people won’t even realize that what they see isn’t the same as what everyone else sees. Or isn’t just the sum of all the content their friends are sharing, in the case of Facebook’s platform.
The recipes underpinning these individual information hierarchies are only abstractly alluded to. They are certainly not publicly shared. The full gamut of targeting factors is never disclosed. The algorithms are not open sourced. Calls to open up their black boxes have been studiously ignored.
What self-regulation there is tends to be piecemeal. After the outraged fact — of YouTube being shown monetizing extremism, for example, or (a more recent accusation) pandering to pedophiles.
But now some politicians are talking openly about regulating the Internet — apparently emboldened by growing public disquiet. That’s how bad it’s got.
After the love is gone
If we allow social consensus to be edited out by a tiny number of massively dominant content distribution platforms which are algorithmically bent on accelerating a kind of totalitarian individualism, the existential question that raises is how can we hope to maintain social cohesion?
The risk seems to be that social media’s game of micro-targeted fragmentation ends up ripping us apart along our myriad fault lines — by playing to our prejudices and filtering out differences of opinion. Russian agents are just taking what’s there and running with it — via the medium of Facebook ads or Twitter bots.
Were they able to swing a vote or two? Even worse: Were they so successful at amplifying prejudice they’ve been able to drive an uptick in hate crime?
Even if you set aside directly hostile foreign agents using tech tools with the malicious intent of sowing political division and undermining democratic processes, the commercial lure of online disinformation is a potent distorting force in its own right.
This pull spun up a cottage industry of viral-content-generating teens in Macedonia — thousands of miles away from the US presidential election — financially encouraged to pen provocative yet fake political news stories designed to catch the attention of Facebook’s algorithm, go viral and rack up revenue thanks to Google’s undiscriminating ad network.
The incentives on these platforms are the same: It’s about capturing attention — at any cost.
Another example where algorithmic incentives can be seen warping content is the truly awful stuff that’s made for (and uploaded at scale to) YouTube — with the sole and cynical intention of ad display monetization via children’s non-discerning eyeballs. No matter the harm it might cause. The incentives of the medium form content into whatever is necessary to generate the click.
In the past decade we even coined a new word for this phenomenon: ‘Clickbait’. Bait meaning something that looks tasty when glimpsed, yet once you grab it you’re suddenly the thing that’s being consumed.
Where algorithmic platforms have been allowed to dominate media distribution what’s happened is the grand shared narratives that traditionally bring people together in societies have come under concealed yet sustained attack.
Both as a consequence of algorithmic micro-targeting priorities; and, in many cases, by intentional trolling (be that hostile foreign agents, hateful groups or just destructive lolzseekers) — those agents and groups who have got so good at understanding and gamifying tech platforms’ algorithms they’ve been able to “weaponize information” as the UK Prime Minister put it earlier this month — when she publicly accused Russia of using the Internet to try to disrupt Western democracies.
And tech platforms gaining so much power over media distribution seems to have resulted in a splintering of public debate into smaller and angrier factions, with groups swelling in polarized opposition over the dividing lines of multiple divisive issues.
Some of the heated debate has been fake, clearly (seeded on the platforms by Kremlin trolls). But the point is fake opinions can help form real ones. And again it’s the tech pipes channeling and fueling these divisive views which work to fracture social consensus and undo compromise.
Really the result looks to be the opposite of those feel-good social media marketing claims about ‘bringing people closer together’.
A few massively powerful tech platforms controlling so much public debate is not just terrible news for social cohesion and media pluralism, given their algorithms have no interest in sifting fake from real news (au contraire). Nor even in airing alternative minority perspectives (unless they’re divisively clickable).
It’s also bad news if you’re an entrepreneur hoping to build something disruptive of your own.
Unseating a Google or a Facebook is hardly conceived of as a possibility in the startup space these days. Instead many startups are being founded and funded to build a specific feature or technology in the explicit hope of selling it to Google or Facebook or Amazon or Apple as a quick feature bolt-on for their platforms. Or else to dangle relevant talent and encourage an acquihire.
These startups are effectively already working as unpaid outsourcers within tech giants’ product dev departments, bootstrapping or raising a little early funding for their IP and feature idea in the hopes of cashing out with a swift exit and a quick win.
But the real winners are still the tech giants. Their platforms are the rule and the rulers now.
Sure, in the social space Snapchat stood its ground against big acquisition offers. And managed to claw its way to an IPO. Yet Facebook has responded by systematically cloning its rival’s ideas — copy-pasting key features across its own social platforms to amplify its own growth — and successfully boxing in Snap’s momentum.
If Facebook had not been allowed to acquire additional social networks it might be a different story. Instead it’s been able to pay to maintain and extend its category dominance.
Just last month it acquired a social startup, tbh, which had got a little bit popular with teens. And because it already owns or can buy any potentially popular rival network, network effects work to seal its category dominance in place. The exception is China — which has its own massively dominant homegrown social giants as a consequence of actively walling out Western tech giants.
In the West, the only shade darkening the platform giants’ victory parade is the specter of regulators and regulation. Google, for example, was fined a record-breaking $2.73BN this summer by the EU for antitrust violations around how it displays price comparison information in search results. The Commission judged it had both demoted rival search comparison services in organic search results and prominently placed its own.
In Europe, where Google has a circa 90 per cent share of the Internet search market, it has been named a dominant company in that category — putting it under special obligation not to abuse its power to try to harm existing competitors or block new entrants.
This obligation applies both in a market where a company is judged to be dominant and in any other markets it may be seeking to enter — which perhaps raises wider competition questions over, for example, Alphabet/Google’s new push, via its DeepMind division, into the digital health sector.
You could even argue that the overturning of net neutrality in the US has the potential to challenge tech platform power. Except that’s far more likely to end up penalizing smaller players who don’t have the resources to pay for their services to be prioritized by ISPs — while tech giants have deep pockets and can just cough up to continue their ability to dominate the online conversation.
Even the European Commission’s record-breaking antitrust fine against Google Shopping shrinks beside a company whose dominance of online advertising has brought it staggering wealth: Its parent entity, Alphabet, posted annual revenues of more than $90BN in 2016.
That said, the Commission has other antitrust irons in the fire where Google is concerned — including a formal investigation looking at how other Google services are bundled with its dominant Android mobile OS. And it has suggested more fines are on the way.
The EC has also gone after Amazon over e-book pricing and publisher contracts — forcing a change to its practices to settle that antitrust probe.
European regulators’ willingness to question and even attempt to check tech platform power may be inspiring others to take action — earlier this month, for example, the state of Missouri launched an investigation into whether Google has broken its consumer protection and antitrust laws.
Meanwhile Silicon Valley darling, Uber, got a big shock this September when the local transport regulator in its most important European market — London — said it would not be renewing its license to operate in the city, citing concerns about its corporate behavior and its attitude to passenger safety. (A decision that’s since been validated by the news which broke this month that Uber had concealed a massive data breach affecting 57M of its users and drivers for a full year.)
Next year incoming European data protection regulation will bring in a requirement for companies to disclose data breaches within 72 hours — or face large fines of up to 4% of their annual global turnover (or €20M, whichever is greater).
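The penalty cap described above reduces to a simple formula: the greater of €20M and 4% of annual global turnover. A quick sketch makes the asymmetry concrete (the turnover figures below are made up for illustration):

```python
# GDPR maximum fine for the most serious infringements:
# the greater of EUR 20M and 4% of annual global turnover.
def gdpr_max_fine(annual_turnover_eur: float) -> float:
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# A firm with EUR 100M turnover hits the flat EUR 20M floor,
# because 4% of its turnover (EUR 4M) falls below it.
small_cap = gdpr_max_fine(100e6)

# A giant with EUR 90BN turnover faces the 4% cap instead:
# 0.04 * 90e9 = EUR 3.6BN.
giant_cap = gdpr_max_fine(90e9)
```

The ‘whichever is greater’ construction is the teeth of the regime: the flat floor catches small firms, while the percentage scales the exposure of the very largest platforms into the billions.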
These much stiffer penalties for data mishandling are intended, by European lawmakers, to rework how digital businesses view information generated by and around their users.
The change they’re seeking is for digital data to no longer be seen as a limitless resource to be siphoned off, stored and ceaselessly data-mined; but as a potential liability to be collected sparingly, applied carefully and deleted the moment it’s no longer needed — because the financial risk of holding on to it has suddenly, drastically inflated.
The General Data Protection Regulation (GDPR) can really be seen as a reaction to the kinds of behaviors that have enabled and entrenched a few tech giants at the top of the pile.
Another example: Google’s AI division, DeepMind, has had its fingers burnt in the UK this summer after a data-sharing partnership it inked with a London National Health Service Trust was judged to have violated data protection law.
The rules around sharing medical data are already subject to multiple layers of regulation, ethics and information governance. Nonetheless DeepMind was handed access to the non-anonymized medical records of 1.6M patients without their knowledge or consent, and under loosely defined contract terms that failed to firmly lock down what the company might be able to do with highly sensitive medical data.
Was the NHS Trust suffering from an overly shiny perception of technologists’ intentions when it agreed to hand millions of people’s medical records to an ad-targeting giant in exchange for a little free app development assistance and a few years’ unpaid access to the resulting service? You really do have to wonder.
It had been DeepMind’s intention to apply AI to the same data-set. Before it presumably grasped the extent of the legal minefield it was careening into — and apparently backed away from feeding the medical records to its AI models (it has claimed the AI research it had intended to conduct on this data-set “has not taken place”).
In the wake of this controversy, the data-sharing contracts DeepMind has been able to ink with other NHS Trusts (most recently Yeovil) have been considerably tighter than the terms applied to its initial data grab. And none of these data-sharing arrangements enable DeepMind to use its data access to develop AI. (A research partnership with Moorfields Eye Hospital is the sole exception.)
The long and short of all that is that — at least in Europe — regulators are responding to tech giants’ moves into sensitive spaces rather faster than they might have done in years past. (Even if still a little after the fact.) And as the sheen comes off tech’s marketing, it’s arguably getting easier for politicians to challenge and stand up to big tech’s unchecked claims.
More and more politicians are also wondering aloud how society can place checks and balances on tech platform power. And as the next sets of rules get forged, they are being written with eyes opened to the disruptive power of software.
It may be that we’ve reached the end of the road for tech businesses being able to get such an easy and uncritical ride — and that means tech businesses of all stripes, large or small, established or just starting up.
If so, it’ll be because a few tech giants got so good at selling their one-sided stories they were able to crush competition and entrench themselves as the default. For search. For social. For shopping. For media. For capturing and monopolizing attention.
And because no one thought to question what would happen if — for example — a dominant communication platform allows anyone with a few dollars to spend to micro-target any kind of advertising they choose. Anyone — or any government or hostile actor with a divisive agenda to push.
Facebook has defended its failure to anticipate misuse of its tools by saying ‘we just didn’t think of that’. Which suggests the company’s management was either drunk on its own Kool-Aid. Or deliberately chose not to be responsible for anything outside the narrow scope of business growth.
But if they’re not taking responsibility for negative civic and societal impacts, regulators are going to have to make them act with more care.
Nor did anyone apparently think to question what would happen to the quality of information being surfaced and made mainstream if a dominant tech platform’s information sorting algorithms were chained directly to its revenue generating mechanism.
Pulling up the most clickable stuff might be great for business growth but it’s really not so edifying for the eyeballs and minds encountering the base stuff your tech ends up pushing.
A lot of people have spent a lot of time warning about what’s lost when you allow hateful speech to dominate and drown out constructive opinions.
But Twitter ignored them anyway. Because its platform apparently gets more engagement by giving trolls a soapbox. And now abusive behavior accounts for the vast majority of government complaints the company receives.
While people in popular tourist destinations have increasingly been asking what happens to local communities and neighborhoods if you throw open the door to short term rental opportunism without considering the impact on long-term residents and rents.
You still won’t read anything about those downsides in Airbnb’s marketing, though.
And what about the people who fought for better background checks for the self-employed contractors being placed in positions of trust on ride-hailing platforms — checks resisted because skipping fingerprinting lowered the barrier to faster business expansion?
Do those people feel vindicated now that Uber is facing regulatory backlash in London over safety and corporate responsibility issues as well as a class action lawsuit in the US alleging it endangered thousands of female passengers?
Or do they just feel really, really angry?
Whose fault is it that software has been given such a free ride to eat the world no matter the wider human and societal costs?
It’s especially egregious when you consider that just a tiny bit of critical thought could have helped head off so many of these bad outcomes.
A little more critical thinking, for example, and it would have been obvious that requiring tech platforms to disclose who’s paying for digital political ads is a good idea. TV and print ads already do it. Why on earth should digital ads be any different?
Had that rule been in place last year, Russian agents would at least have had to be a bit more creative about their election disruption. As it was they were able to pay in Rubles — and using their known ‘troll farm’ name.
It beggars belief how the power of tech platforms has passed under the political radar for so long. And likely speaks to how tech illiterate so many politicians still are.
But again, they’re waking up. And boy is it a rude awakening.
Critical analysis of technologies to consider their wider impacts — on politics, on relationships, on emotional and mental wellbeing, on local and global environments — is what’s been sorely lacking in the tech narrative for years.
So really it’s hardly surprising that civic and societal considerations have been systematically de-prioritized by tech algorithms and omitted from the one-sided conversations technologists love to have.
But maybe the backlash is finally coming.
And instead of mindless cheerleading accompanying every startup pitch we’ll get a lot more wake-up calls like the fury that greeted Bodega.
Certainly we will if tech entrepreneurs keep making ‘disruption’ a clarion call for destruction and exploitation.
So founders, ask yourselves this: What might break because of what I’m trying to make?
And then: What am I going to do about it?