So we already knew that tools for “maximizing engagement” can shape the political sphere. In 2014, Wadhwa concluded, “Whether it wants this responsibility or not, Facebook has now become an integral part of the democratic process globally.”
We also know that technology can harm our democracy. Privacy invasions and algorithmic manipulation, for example, can limit people's ability to research issues and formulate opinions, and in turn affect how they express their views, even in how they vote. When companies adopt practices that are good for targeted advertising but bad for individuals' democratic engagement (such as the use of "dark posts" on Facebook, tied to the creation of psychological profiles of hundreds of millions of U.S. Facebook users), the benefits-versus-harms balance tilts sharply toward harm.
Who minds that balance?
You often hear the adage that law can't keep up with technology. What about ethics? Ethics, too, is deliberative, and new norms take time to develop; but an initial ethical analysis of a new development or practice can happen fairly quickly. Many technologists, however, are not encouraged to conduct that analysis, even a cursory one. They are not even taught to spot an ethical issue, and some (though certainly not all) seem surprised when their creations provoke a backlash. (See, for example, the critical coverage of the now-defunct Google Buzz, or the more recent reaction to, say, "Hello Barbie.")
A growing chorus has argued that we need a code of ethics for technologists. That's a start, but we need more. If technology can mold us, and technologists are the ones who shape that technology, we should demand some level of ethics training for technologists. And that training should not be limited to the university context; an ethics component should also be part of the curriculum at any developer "bootcamp" and, perhaps, of the onboarding process when tech companies hire new employees.
Such training would not inoculate technologists against making unethical decisions—nothing can do that, and in some situations we may well reach no consensus on what the ethical action is. Such training, however, would prepare them to make more thoughtful decisions when confronted, say, with ethical dilemmas that involve conflicts between competing goods. It would help them make choices that better reflect their own values.
Sometimes we need consumers and regulators to push back against Big Tech. But in his talk "Build a Better Monster: Morality, Machine Learning, and Mass Surveillance," Maciej Ceglowski argues that "[t]he one effective lever we have against tech companies is employee pressure. Software engineers are difficult to hire, expensive to train, and take a long time to replace." If he is right, then tech employees might have more power than many realize, or at least an additional kind of power they can wield. All the more reason to demand that technologists receive at least some ethics training and recognize their role in defending democracy.
I work in an applied ethics center, and we do believe that technology can help democracy (we offer a free ethical-decision-making app, for example, and even a MOOC, a free online course, on ethical campaigning!). For technology to play that role, though, we need people who are ready to tackle the ethical questions, both inside and outside of tech companies.
This article is part of a collaboration with the Markkula Center for Applied Ethics at Santa Clara University.