After a recent Uber ride, I hesitated between offering the four-star rating that captured my adequate ride and the five-star rating that I knew the driver expected. Eventually I tapped five stars and closed out of the app, relieved to be done with this tiny moral quandary. Later, the phone buzzed in my pocket with a text asking me to rate my experience getting an oil change. The next day, I politely declined to stay on the line “for just four to six minutes” to complete another customer-satisfaction survey. Sorry, but I have feedback fatigue.
Companies promise that “your feedback is important to us,” but providing it does not necessarily yield discernible change. Instead, the endless requests for feedback often feel dehumanizing. Being pestered for thumbs-ups and “likes” makes me feel like just another cog in the machine.
There’s a reason for that: The current mania for feedback can be traced to the machine that kick-started the Industrial Revolution, the steam governor. Revisiting that machine, and understanding feedback’s lost, mechanical origins, can help people better use, and refuse, its constant demands.
Traceable to antiquity, the idea of feedback roared to prominence in the 18th century when the Scottish engineer James Watt figured out how to harness the mighty but irregular power of steam. Watt’s steam governor solved the problem of wasted fuel by feeding the machine’s speed back into the apparatus to control it. When the machine ran too fast, the governor reduced the amount of steam fed to the engine. And when it slowed down, the governor could increase the flow of steam to keep the machine’s speed steady. The steam governor drove the Industrial Revolution by making steam power newly efficient and much more potent. Because it could maintain a relatively stable speed, Watt’s steam engine used up to one-third less energy than previous steam-powered engines.
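The governor's trick can be sketched as a toy control loop. This is a hypothetical simulation with made-up numbers, not a model of Watt's actual flyball mechanism: the controller compares the engine's speed against a set point and nudges the steam valve against the error, opening it when the engine runs slow and closing it when the engine runs fast.

```python
def run_governor(target=100.0, gain=0.02, steps=200):
    """Toy governor: feed the engine's speed back to the steam valve."""
    speed = 0.0   # engine starts from rest (arbitrary units)
    steam = 5.0   # steam valve opening (arbitrary units)
    for _ in range(steps):
        error = target - speed   # past output, fed back into the loop
        steam += gain * error    # open the valve when slow, close it when fast
        # The engine lags toward the speed the valve setting implies.
        speed += 0.2 * (10.0 * steam - speed)
    return speed

print(run_governor())  # settles near the 100-unit set point
```

Left alone, the toy engine would drift with its load; the feedback term is what holds it steady, which is the efficiency Watt was after.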
Few of today’s machines are steam-powered, but many use feedback. Governors control the speed of aircraft propellers while in flight. They prevent ceiling-fan lights from overheating and limit how fast cars can go. Long before Nest controlled home temperatures with fancy digital sensors, analog thermostats used feedback to maintain comfort.
So how did feedback shift from a means of regulating engine behavior to a kind of customer service? In 1948, Norbert Wiener coined cybernetics, his term for a science of automatic control systems. Wiener took Watt’s steam governor as the model for the modern feedback loop. He even named cybernetics after kybernetes, the Greek word for steersman and the root of the English word governor.
Wiener broadened the definition of feedback, seeing it as a generic “method of controlling a system” by using past results to affect future performance. Any loop that connects past failures and successes to the present performance promises an improved future. But instead of energy, Wiener thought of feedback in terms of information. No matter the machine, Wiener hypothesized, it took in “information from the outer world” and, “through the internal transforming powers of the apparatus,” made information useful. Water flow, engine speed, temperature—all become information.
Cybernetics promised a utopia of systemic self-regulation. Wiener imagined the feedback loop as a structure that could explain almost any system: not just engines or thermostats, but also racial identity, the free-market economy, or the Holy Roman Empire.
Even people were seen as feedback-driven structures: Wiener saw them as “a special sort of machine.” Human beings, like machines, can change their behavior by learning from past successes or failures. But far from characterizing a soulless automaton, the feedback loop was meant to testify to the human power to adapt. For Wiener, feedback became the highest “human use” of power in the age of machines.
Cybernetics’ popularity faded in the 1970s, but its insights live on. Starting in the 1950s, management seized on the idea of feedback as an integral practice of modern business. The founder of management cybernetics, Stafford Beer, claimed, “If cybernetics is the science of control, management is the profession of control.” Beer’s emphasis on control, rather than improvement, echoes Watt’s insight into steam regulation. One of Beer’s earliest, most compelling examples of management cybernetics standardized a complex system to halve energy costs for steel production.
Approaches like Watt’s and Beer’s, which keep a system operating within tight parameters, demonstrate negative feedback. That’s not pessimistic or bad feedback, but feedback that prompts the system to maintain control. In traditional, cybernetic terms, negative feedback isn’t a one-star rating, but any information that helps the system regulate itself. Negative feedback is actually good feedback because it yields greater efficiency and performance, as in Watt’s steam governor.
Positive feedback, by contrast, amplifies the system’s own output, unchecked. In control terms it is the malfunction, not the compliment: a thermostat that registers the room as too warm and responds by cranking up the furnace is running on positive feedback, and it’s generally something to be avoided.
But today’s understanding of feedback has reversed those terms. Positive ratings are a kind of holy grail on sites like Yelp and TripAdvisor, and negative reviews can sink a burgeoning small business or mom-and-pop restaurant. That shift has created a misunderstanding about how feedback works. The original structure of the loop’s information regulation has been lost.
Think about it: The proliferation of ratings systems doesn’t necessarily produce a better restaurant or hotel experience. Instead, it homogenizes the offerings, as people all go to the same top-rated establishments. Those places garner ever more reviews, pushing them even further up the list of results. Rather than a quality check, feedback here becomes a means to bland sameness.
Unharnessed from its cybernetic meaning, positive feedback becomes an evaluation of services rendered rather than a measure of the system’s performance. Untethered from the system that they’re meant to evaluate, these measurements of quality have no loop to go back into. They float out in the world, stars and number ratings and comment cards generated in response to the sucking need for more feedback, not in the service of improved outcomes.
Chasing ever more ratings abandons the original lesson of mechanical feedback: Specific, critical information can make a system perform well. The thoughts, opinions, experiences, and advice that consumers are asked to share all seem to have equal significance—and organizations seek ever more quantities of that feedback. An app called DropThought, for instance, promises to “capture feedback anywhere” from users who can reply “easily with one click using their smartphones.” Any thought, any response is worth capturing.
DropThought’s rotating tagline suggests that all feedback is interchangeable, promising that “Instant Feedback Equals Happy Customers/Clients/Students.” The only measure of quality is how quickly the reviews roll in. Feedback may matter to the corporations that solicit it, but the nature of the feedback itself—the people who provide it, the relevance of their opinions, and the quality of the information—seems not to matter at all. What if people want different things? What if they are mistaken in their desires?
All feedback isn’t created equal. Watt’s steam governor used a form of negative feedback, engine speed, specific to that system. Other factors, such as how much a bystander liked the look of the engine, were not relevant to its internal operation. Wiener cautioned that good feedback isn’t simply a stream of numerical data. Learning happens when the input is suited to the system’s “pattern of performance”—that is, when the feedback is perfectly calibrated to the system. The call to “close the loop” works only when the information the system receives is attuned to the environment.
The love affair with feedback for its own sake has inadvertently abandoned the mechanical insights of the steam governor. Indiscriminately valuing feedback of any kind from any source reduces its ability to regulate the system. That isn’t to say that opinions, stars, and reviews aren’t helpful. I’ve scoured book reviews on Amazon and Yelped my way to good ramen. But that kind of feedback—variable, messy, unchecked—doesn’t easily translate to systemic improvement. It is too attached to human users’ feelings and passions. Perhaps the problem isn’t that feedback loops are dehumanizing, but that they aren’t dehumanizing enough.
If thumbs-ups or ratings on a five-point scale are not automatically useful, what kind of feedback would be? Finely tuned feedback that targets the system it’s meant to regulate will always surpass a barrage of angry or ecstatic reviews. Rather than trumpeting the desirability of all feedback, apps and review sites should pursue only the information that is crucial for making the system work better.
That approach also reveals some of the ethical shortcomings of feedback as it is used today. In the wake of many scandals, the ride-sharing company Uber recently introduced a new, faster way to give feedback: Rate the ride before it’s even over. Uber frames this offer as a sign of the company’s humanity: “We never want to miss an opportunity to listen and improve.” But giving feedback is not the same thing as being heard. Encouraging users to fire off reviews, especially reviews with real consequences for things like a driver’s livelihood, turns opinions into information. That information gets fed back into the system regardless of its quality, and gig-economy workers and small-business owners suffer the consequences.
Mechanization can seem like a bad thing, but its lessons can be humanizing. Don’t confuse feedback with listening. Clicking a thumbs-up is not the same thing as “making your voice heard.” It’s just introducing noise into the system. Collecting less information of more relevance and higher quality could produce better results. And it might mean an end to feedback fatigue in the process, which would be a five-star improvement for sure.
This post appears courtesy of Object Lessons.