The Politics of Consumer Technology (2015)

I think this might have been the best chapter of my 2015 book on our personal relationship with technology, had it not been cut by the editors. Maybe it actually didn’t fit. Maybe I should have just written the book about this instead. Anyway, I just found it in my drafts folder, and I figured I might as well publish it on its own.


It’s inescapable. The technology we use — and how we use it — is a reflection and extension of our ideology. It might be painful, but if we care about the impact we have on the world, we have to interrogate what our relationship with technology says about us.

To quickly recap, this book’s definition of technology is quite broad: any application of knowledge to affect our world qualifies. In other chapters, that has meant treating things as seemingly basic as human language as technologies. That’s the level on which I mean that personal technology is a reflection of personal ideology.

The way we use language not only reflects the way we see the world, it also tries to impose that view on the world. It reinforces our own conceptual frameworks for understanding the way things are, and it suggests those same frameworks to anyone in earshot. You probably have to major in this in college to reach the depths at which language and ideology are linked, and I didn’t, although I couldn’t avoid people who did.

We don’t need to dig all the way down right now. It’s still an informative exercise to examine the ideology behind the kinds of consumer technologies to which we’re usually referring when we use the T-word. This will get us to a strong starting or restarting point for establishing a healthy relationship with our personal technology.

Hardware and Platform

One’s tech ideology reveals itself at its most basic in the choice of device. The very act of buying a computer of any size or quality makes a statement that one wants to put that money into digital technology instead of some other priority. It also shows that one accepts the use of the energy and raw materials that went into making and delivering that computer — and the environmental destruction and emission of pollutants that resulted.

Buying a computer signals the intention to spend a significant amount of time using it. One has narrowed down the freedom of how to spend one’s time by exactly the amount of time one spends using the computer. What kinds of computer-based tasks occupy that slice of time is now a reflection of ideology. Just like money, time spent on a task is a statement — implicit or explicit — of the value one places on that task.

Crucially, buying a computer is also what technologists would call a platform decision. By choosing which computer to buy, one throws one’s weight behind a set of companies and individuals making hardware and software supporting that computer. And if one does not make that decision deliberately, instead uncritically accepting the suggestions of friends, experts, or salespeople, one is giving up control over that decision.

The platform decision is the next ideological level up. Each technology platform has a set of values. Let’s consider hypothetical examples based on real companies without naming names.

One technology platform might be designed around satisfying customers with convenient services that require lots of personal data. In order to gather enough data, it offers those services for free on a wide range of expensive and inexpensive devices. Since these services are centralized and served over the internet, they can be used from any computer, even ones that aren’t designed expressly for this ecosystem. The platform company may be able to offer steeply discounted or even free devices just to get people using the services and sharing their data.

This platform may offer exceptional maps and directions, but to serve them accurately, it needs to know its users’ exact locations. It may offer intelligent, personalized search, but to know how to find relevant answers, it needs to know its users’ interests and history of searching, browsing, and buying. In its quest to build more, better, and more convenient services, this company may launch expensive experiments in unproven, futuristic fields — like automated cars or computers mounted on eyeglasses — just to better understand how these technologies affect people.

Buying into this ecosystem supports these values. It privileges widespread technological innovation for its own sake. On the one hand, it makes powerful digital technologies widely available to more people and in more places. On the other hand, it gives technology companies — especially the platform provider but also its partners and participating developers — control over vast amounts of sensitive, personal information. This makes that information, as well as the intelligence gained from it, vulnerable to attack, and it also entrenches a social norm of less privacy. In a world where this platform succeeds, one trades privacy for convenience.

A different technology platform might be designed around satisfying customers with delightful hardware and software that just works in intuitive ways. It privileges a sense of personal control over the computer, meaning that the platform company puts lots of work into making its products seem easy to use. It makes the highest-quality hardware available, and it comes packaged with the company’s own software, which was designed with this hardware in mind. The company may also try to build networked services like maps and messages, so these devices are useful to people right away.

Supporting this platform means handing design decisions over to one company. It offers a flexible but limited range of computers to choose from, and no other hardware will work with the company’s software and services. The software allows people to install applications made by third parties, but it regulates them through an official store with strict rules about what third-party software can and cannot do. The company also tries to minimize the uncertain quality of third-party user experiences by building as many of the basic applications as it can itself, so most people never need more. And since so much of the user’s activity involves exclusive software and services, it’s difficult to leave for another ecosystem.

These two example platform companies are philosophically quite different in some respects, but they also share a feature that’s common to most technology companies. Because the companies need to innovate constantly to stay in business, they push the limits of what consumer technology can do year after year. This relentless improvement of performance and capabilities quickly makes computers technically obsolete, often long before they physically stop working.

To keep up with the times, customers must regularly buy new computers. They can resist up to a point, but every software company occasionally releases mandatory updates, and those can render old hardware impossibly slow and buggy to use. So buying a computer can easily become a never-ending cycle of buying more computers. And while it’s easier than ever to build and customize one’s own computer using free and open-source software, the all-in-one platform providers are making that inconvenient to do, not just personally but socially.

Social Software

The social layer of consumer tech ideology rests on top of hardware and operating system platforms, amplifying all those values and adding new ones. One’s platform choices also make social statements. By owning certain devices and not others, or using or not using certain services, people signal to each other what kinds of platforms they value, not to mention how much money they think is worth spending on a computer. But across platforms, social technologies and the interactions they enable are the most abstract and sophisticated reflections of consumer tech ideology.

The most basic level of ideology in online social networks is the level of identity. Online social networks usually require people to present their identities in ways that are visible both to other people on the network and to the operators of the network itself. Even if it’s unintentional, one’s choices of social software support that software’s politically sensitive stances on personal identity. And by the very act of using this software to communicate, one imposes its identity values on others.

Most online social networks require some kind of user account, but the requirements of these accounts vary widely. Some only require pseudonyms, so people can participate in those social networks without providing much personal information. But others, particularly the major ad-supported social networks used by millions of people, have policies requiring their users to provide their real names, and they strongly encourage them to also provide photos, contact information, location, employment information, and even to identify their friends.

To people with enough socio-economic privilege to feel secure about it, these disclosures may seem perfectly sensible. But for marginalized people, it could be emotionally or even physically dangerous to disclose such information online. If anyone is trying to find you, providing identifying information to online services makes it easier for them. It could be a bully, a rival, an estranged family member, a political opponent, or a stalker. It could be a political or religious group looking to persecute people with certain identities. It could be governments or state-friendly agents looking to suppress dissent. These kinds of vulnerabilities make it impossible for some people to freely use mainstream communication tools because of those tools’ stances on personal identity, stances most users never even notice.

Furthermore, since online social networks need to fit the complexity of human lives and relationships into neat categories so they can exploit them for advertising purposes, they often impose their own values on users through the available boxes in their user profiles. For example, a social network with a gender field that only allows for “male” and “female,” especially if one is required to make a choice, excludes people of many gender identities and imposes an ideology of binary gender on its users. Fortunately, many major social networks have begun to change this practice, with some even allowing users to type in their own responses. But even just by requiring a real name, social network profiles are still imposing values on their users, and their users are supporting those values.
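To make that design point concrete, here’s a minimal sketch in TypeScript. All of the type and field names are hypothetical, not any real network’s schema; the point is the contrast between a profile that imposes binary gender and one that leaves self-description up to the user.

    // A schema that imposes a required, binary gender field excludes
    // anyone whose identity doesn't fit the two boxes.
    interface BinaryProfile {
      realName: string;          // required: a real-name policy, baked in
      gender: "male" | "female"; // required: exactly two permitted values
    }

    // A schema that treats gender as optional free text leaves
    // self-description up to the person, not the service.
    interface OpenProfile {
      displayName: string; // whatever name the user goes by
      gender?: string;     // optional, free-form
      pronouns?: string;   // optional, also user-supplied
    }

    // Valid under the open schema, impossible under the binary one:
    const profile: OpenProfile = { displayName: "Sam", gender: "nonbinary" };

Every required field and every closed list of options in a schema like this is a value judgment about what a person is allowed to be.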

Beyond identity, when it comes to actually using a social network, the next ideological choice is between permanence and ephemerality of posts, messages, photos, and other shared content. At the dawn of the age of online social networks, it wasn’t clear to users, many of whom were brand new to the internet, that the things they shared there would last forever. Something regrettable someone did in college might be discovered by a prospective employer ten years later with a quick search for their name. As this began to happen to people, a series of public outcries from users started forcing social networking companies to build better privacy controls, but that was too little too late for people who had already been burned by the permanence of their posts.

There are good reasons for web services to maintain permanent links. If someone wants to return to a friend’s great photo years after it was posted, even if it was only shared privately between friends, they’d really hope the link they saved still works. But clearly some personal communications work better as temporary messages that disappear after some amount of time. The second wave of social applications — also driven by the mobile and camera capabilities of smartphones — has risen on the promise that their messages will disappear, or can be set to do so. The data seem to indicate that young people are flocking to these new, ephemeral services and away from the first generation of public, permanent ones.
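The mechanics behind that promise are simple to picture. Here’s a minimal sketch, again in TypeScript with hypothetical names: the difference between a permanent service and an ephemeral one can come down to a single expiry field and the discipline to honor it.

    interface Message {
      id: string;
      body: string;
      postedAt: Date;
      expiresAt?: Date; // absent on a permanent network; set on an ephemeral one
    }

    // A permanent service keeps everything forever. An ephemeral one
    // filters out (and, one hopes, actually deletes) expired messages.
    function visibleMessages(messages: Message[], now: Date = new Date()): Message[] {
      return messages.filter(
        (m) => m.expiresAt === undefined || m.expiresAt.getTime() > now.getTime()
      );
    }

    // Example: a message set to disappear 24 hours after posting.
    const post: Message = {
      id: "abc123",
      body: "here for a day, then gone",
      postedAt: new Date(),
      expiresAt: new Date(Date.now() + 24 * 60 * 60 * 1000),
    };

Whether that expiry field exists at all, who gets to set it, and whether the company truly deletes the data behind the scenes are ideological choices dressed up as engineering ones.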

The ultimate ideological concern with an online social network is the question of privacy versus transparency. That concern encompasses both the identity and communication aspects of social networking. How much information about us, our activities, and our relationships should be accessible online, and how much should be private, anonymized, or both? The ability for anyone in the world to post publicly to the web is one of its great reasons to exist; it’s the radical conclusion of the notion of freedom of speech. But the public stage provided by the web and social apps must be balanced with the need for private communication and sharing. One’s ideal mix represents an ideology about freedom of information.

By using a social network that collects personal information and messages and uses them for its own purposes, one is supporting that kind of company and endorsing the world it wants to create. By using one that values security and privacy, one supports that alternative to the free, ad-driven services. There’s certainly room for both types of services, depending on the kinds of sharing and communication for which they’re being used. But the world needs the right balance. One should weigh these decisions the same way one decides whether to buy cheap, generic groceries or local, organic ones.

Crucially, the values of a social network also set the stage for how its users treat each other. Many of the great web forums — not to mention the comments sections of major publications — have learned that public posting, pseudonymous accounts, and easy sign-up can lead to a disastrous lack of community standards. But major social networks have also seen that real-name policies can lead to harassment or persecution, even if only the name and photo are made public, and especially if the user’s location is. A balance must be struck. The best way to get tech companies to strike that balance is to use online social networks only in ways that live up to one’s standards for community.

How To Make Technological Change

At the level of hardware and platform, it’s much easier for users to vote for their ideologies with their wallets, because the companies building these technologies depend on much more money per user than software companies do. It’s the way ethical capitalism is always supposed to work: buy the thing you support, don’t buy the thing you don’t support. This, of course, only applies to users with money to spend on computers. Those who don’t can still be served, but only by the companies that can make money off of free or heavily subsidized services by selling user data and/or advertisements.

When it comes to the web and social software, it’s harder to make an impact, since these services are usually free or very cheap, so one user’s vote is drowned out by advertising dollars. But with the right mix of hardware and software choices and social activism, it’s still possible to advocate for one’s values on the social web.

One way — the hard way — is to support the development and adoption of alternatives to the ad-supported and venture-backed companies. For just about every category of software product dominated by an ad-supported company that doesn’t value privacy, there is at least one start-up offering an alternative. Many of these are paid services. But in exchange, they offer guarantees such as privacy, security, and better design that isn’t compromised by advertising.

The challenge in supporting these alternatives is convincing the other people you know to use them instead of the incumbents you don’t support. Network effects are hard to overcome in social groups. If all your friends use the free social network that millions of other people use, it’s going to be difficult to convince them all to leave for a new one (unless or until some kind of privacy disaster or terrible redesign forces them to flee). You may win over a few, but if you don’t get them all, you’re going to be stuck in two places at once, which can feel like a waste of time and attention.

The other option is to use the dominant social networks moderately, creatively, and effectively. The easiest thing to do is control the amount of information you share about yourself. Don’t add any profile information unless you completely trust the way the software and its parent company are going to use it. Don’t share things over the service unless you’re sure about the privacy and security implications.

The harder but equally important thing to do is control the information you share about others. It’s astonishing how much personal information people share about each other, if you think about it. Just by being connected to someone online, you’re giving the network and anyone else who can see its data a glimpse of who they are. Large social applications are sometimes very aggressive about getting users to connect with friends of friends, so even just “friending” or “following” someone you know can potentially put their face and name in front of many strangers.

But there are many more ways that intentional sharing can implicate other people. If you post a message that mentions someone, the social network can associate both of you with the content of that message. It might contain a photo or a location that reveals something about that person without their consent. If you post a photo of that person’s face, you don’t even need to explicitly tag them. Major Internet companies have facial recognition software that’s reliable enough to figure out who that is without you telling it.
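To get a feel for how much association a single post can create, consider a rough sketch (hypothetical types and names again) of everyone a network can link to one shared item:

    interface Post {
      authorId: string;
      mentions: string[];        // user IDs named in the text
      recognizedFaces: string[]; // user IDs matched by facial recognition
      location?: string;         // a place attached to the post
    }

    // Every person a post implicates, whether or not they consented.
    function implicatedUsers(post: Post): Set<string> {
      return new Set<string>([
        post.authorId,
        ...post.mentions,
        ...post.recognizedFaces,
      ]);
    }

If the post also carries a location, the network can associate everyone in that set with a particular place and time. Only the author chose to be in the set at all.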

So if you value privacy, or at least value other people’s right to privacy, it’s important to use social applications in a mindful way. The safest bet is not to tag or share photos of others, but that can limit much of the fun of using online social networks. So a more balanced practice is to ask for permission to share photos and information about people before you do it. If you’d like for them to extend you the same courtesy, this is an opportunity to advocate for your privacy values.

Remember, location information is especially sensitive. That’s why it’s so valuable, and why mobile social apps want you to share it so badly. But be careful. If you share the fact that someone else is physically with you, you’re also sharing the information that they are not somewhere else — at home, for example. So someone interested in robbing their house could potentially take advantage of that. It may sound like an extreme example, but is it worth the risk to do something as totally unnecessary as sharing someone’s location without their permission?

Once you start to care about this, it can become frustrating when other people in your network don’t live up to your standards in sharing information about you. But if you take that as an opportunity to teach them how you’d prefer to be addressed online, that conversation could be a step forward for tech ideology in your community. If you set a good example in your network, hopefully others will emulate you.