Given recent concerns over the imminently expanding robotic workforce, a simple solution to ease this apprehension is to employ robots for jobs humans would gladly surrender. One job in particular not only fits this description, but is also ideally suited for robots (or, in many cases, disembodied algorithms), and has real consequences for saving money as well as human lives: whistle-blowing.
“Whistle-blower” does not currently constitute an actual job for which one can apply with a resume, but given recent crises at General Motors, the National Security Agency, and banks like BNP Paribas, one could argue it should. Who would want such a job though?
Whistle-blowers in these cases reported wrongdoing out of personal responsibility rather than professional duty, but such brave souls are rare—and for good reason: whistle-blowing tends to come at an extreme personal cost. One study examined 230 cases of corporate fraud and found that 82 percent of these cases ended poorly for named whistle-blowers: they were fired, they quit under duress, or they returned to significantly altered job duties. Another report suggests a rise in physical violence in response to workplace whistle-blowing, and disdain for whistle-blowers is also reflected in familiar, derogatory labels such as rat, snitch, and tattletale.
In 2007, when a worker at a California Toyota plant that produced the Tacoma pickup truck accused managers of downgrading and ignoring defects, her bosses demoted her and questioned her mental health. Just last week, Toyota recalled 790,000 Tacoma pickups because of defects.
Another recent court case illustrates the whistle-blower’s plight. Joshua Harman, a former guardrail installer for Trinity Industries, accused Trinity of cutting costs by intentionally installing cheaper guardrails that mangled the cars they were supposed to cushion.* A recent study of these guardrails suggested that the new design was 1.36 times more likely to produce an injury and 2.82 times more likely to produce a fatality than the previous design. Trinity responded by accusing Harman of seeking revenge over a previous patent-infringement skirmish between the two, and has sued him twice for defamation. Meanwhile, the judge in the present case has declared a mistrial over speculation that Trinity intimidated a second would-be whistle-blower out of appearing as a witness.
A similarly bleak narrative followed GM’s massive car recall over a deadly ignition defect, as details emerged that GM had silenced employees like Courtland Kelley, who raised safety concerns as far back as 2002. GM first ignored Kelley’s concerns, then bullied him and banished him to an undesirable role with no real responsibility. When Kelley filed a whistle-blower’s suit against GM, he said he “felt morally responsible” to voice potentially lethal safety concerns, but when GM’s lawyer asked him specifically whether raising such concerns was “part of your job description,” Kelley could only respond, “No.” As the Trinity and GM cases illustrate, even when human lives are at stake, whistle-blowers often face resistance and retaliation.
Kelley’s case in particular raises the question: what if whistle-blowing were part of one’s job description? What if every organization, particularly those in highly regulated industries, explicitly created a whistle-blower position? The job seems essential, yet applicants might be scarce, and coworkers might view whoever filled it the way schoolchildren view the classmate who reminds the teacher about the homework assignment.
The position’s social, reputational, and emotional risks thus make whistle-blower the perfect job for a robot. Robots—and algorithms—largely lack the “hot” social and emotional attributes that commonly (and, often, unfairly) litter portrayals of many whistle-blowers—self-interest, revenge, spite, disloyalty, betrayal, and resentment. At the same time, robots are proficient at “cold” skills necessary for diligent evaluation and inspection of organizational errors—calculation, routinization, automation, and consistency.
When human colleagues raise questions about improper safety precautions, fraudulent financial behavior, or governmental abuse of resources, we ponder their motives, which then color our interpretation of the issues they raise. Computers, however, cannot have motives. They cannot be self-interested, disgruntled, or disloyal (unless we program them to be), and therefore they offer a more objective eye for potential organizational violations. We trust Microsoft Word’s spellcheck to instruct us on word hyphenation whereas we might view a human editor’s same instruction as meddling or conceited. We abide by our car’s beeping seatbelt alarms to buckle up whereas we might view the same suggestion from a family member as overbearing. It is not far-fetched to imagine humans would also respond more favorably to a computer specifically designed to identify organizational failures than to a human identifying the same issues.
Robotic whistle-blowers could identify the types of safety defects that plague the car industry (much safety inspection is already automated) whereas others could be deployed to the banking industry to identify financial irregularities such as mortgage defects. For automobiles, already machines can effectively examine braking, steering, and suspension. The robotic whistle-blower would simply have the added feature of automatically reporting to an authority if particular thresholds for safety are not met (rather than simply displaying information for any human to interpret at will). Similar technologies and computer programs could be employed in the financial industry to scan mortgage applications for quirks (such as a mortgage given to a dishwasher claiming a $500,000 yearly income) and automatically alert executives when such defects are detected.
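To make the mortgage example concrete, here is a minimal sketch of what such an automated irregularity scanner might look like. Everything here is an illustrative assumption—the occupation income ceilings, the data fields, and the alerting function are invented for demonstration, not drawn from any real underwriting system.

```python
# Hypothetical sketch: a rule-based "robotic whistle-blower" that scans
# mortgage applications and automatically reports ones whose stated income
# is implausible for the stated occupation. All names and thresholds are
# illustrative assumptions, not a real system.

from dataclasses import dataclass

# Illustrative income ceilings (USD/year) per occupation; a real system
# would derive these from labor statistics rather than hard-coding them.
PLAUSIBLE_INCOME_CEILING = {
    "dishwasher": 45_000,
    "teacher": 110_000,
    "software engineer": 400_000,
}

@dataclass
class MortgageApplication:
    applicant_id: str
    occupation: str
    stated_income: int  # USD per year

def find_irregularities(applications):
    """Return applications whose stated income exceeds the plausible
    ceiling for the applicant's occupation."""
    flagged = []
    for app in applications:
        ceiling = PLAUSIBLE_INCOME_CEILING.get(app.occupation)
        if ceiling is not None and app.stated_income > ceiling:
            flagged.append(app)
    return flagged

def alert_executives(flagged):
    """Stand-in for the automatic report: a real deployment might email
    executives, the board, or a regulator instead of printing."""
    for app in flagged:
        print(f"ALERT: applicant {app.applicant_id} claims "
              f"${app.stated_income:,} per year as a {app.occupation}")

apps = [
    MortgageApplication("A-001", "dishwasher", 500_000),  # the article's example
    MortgageApplication("A-002", "teacher", 62_000),
]
suspicious = find_irregularities(apps)
alert_executives(suspicious)
```

The key design point, as described above, is that the reporting step is wired directly to the detection step: once a threshold is crossed, the alert fires without waiting for a human to decide whether the finding is worth raising.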
Rather than encounter backlash, software-based whistle-blowers might bring relief to human employees, from subordinates who lack job security to CEOs required to sign off on quarterly financial statements. Outsourcing whistle-blowing to automatons could alleviate the burden of admitting wrongdoing, speaking truth to power, or navigating thorny reputational dilemmas within one’s organization. Of course, one could always “power off” the software, but that procedure too could be automated to send an email to the company’s board members (or a tweet to the public) announcing that someone has pulled the plug. More likely, however, organizations would embrace robotic whistle-blowers as colleagues that both contribute to human consumers’ well-being and protect human employees from the ethical entanglements associated with reporting wrongdoing.
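The “pulled the plug” safeguard could be as simple as a watchdog running elsewhere that notices when the whistle-blower process stops checking in. The sketch below assumes a heartbeat interval and a generic notify callback—both invented here for illustration; a real deployment would substitute an email or public post.

```python
# Hypothetical sketch of the shutdown announcement: a watchdog, run on a
# separate host, that alerts when the whistle-blower process goes silent.
# The timeout value and notify callback are illustrative assumptions.
import time

HEARTBEAT_TIMEOUT = 5.0  # seconds of silence before announcing a shutdown

class Watchdog:
    def __init__(self, notify):
        self.notify = notify  # e.g. email the board or post publicly
        self.last_beat = time.monotonic()

    def heartbeat(self):
        """Called by the whistle-blower process while it is running."""
        self.last_beat = time.monotonic()

    def check(self):
        """Called periodically by the watchdog host; returns True if it alerted."""
        if time.monotonic() - self.last_beat > HEARTBEAT_TIMEOUT:
            self.notify("Whistle-blower process has gone silent.")
            return True
        return False

alerts = []
dog = Watchdog(alerts.append)
dog.heartbeat()
dog.check()           # fresh heartbeat: no alert
dog.last_beat -= 10   # simulate someone pulling the plug
dog.check()           # now the watchdog announces the shutdown
```

Because the watchdog lives outside the monitored process, disabling the whistle-blower is itself the event that triggers disclosure—which is exactly the incentive the paragraph above describes.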
* This post originally stated that the first name of the whistleblower in the Trinity Industries case is Dan. We regret the error.