1
The Rise and Fall of Pseudo-Productivity
In the summer of 1995, Leslie Moonves, the newly appointed head of entertainment for CBS, was wandering the halls of the network's vast Television City headquarters. He was not happy with what he saw: it was 3:30 p.m. on a Friday, and the office was three quarters empty. As the media journalist Bill Carter reports in Desperate Networks, his 2006 book about the television industry during this period, a frustrated Moonves sent a heated memo about the empty office to his employees. "Unless anybody hasn't noticed, we're in third place [in the ratings]," he wrote. "My guess is that at ABC and NBC they're still working at 3:30 on Friday. This will no longer be tolerated."
On first encounter, this vignette provides a stereotypical case study about the various ways the knowledge sector came to think about productivity during the twentieth century: “Work” is a vague thing that employees do in an office. More work creates better results than less. It’s a manager’s job to ensure enough work is getting done, because without this pressure, lazy employees will attempt to get away with the bare minimum. The most successful companies have the hardest workers.
But how did we develop these beliefs? We've heard them enough times to convince ourselves that they're probably true, but a closer look reveals a more complicated story. It doesn't take much probing to discover that in the knowledge work environment, when it comes to the basic goal of getting things done, we actually know much less than we're letting on . . .
What Does "Productivity" Mean?
As the full extent of our culture's growing weariness with "productivity" became increasingly apparent in recent years, I decided to survey my readers about the topic. My goal was to nuance my understanding of what was driving this shift. Ultimately, close to seven hundred people, almost all knowledge workers, participated in my informal study. My first substantive question was meant to be easy; a warm-up of sorts: "In your particular professional field, how would most people define 'productivity' or 'being productive'?" The responses I received to this initial query, however, surprised me. The issue was less what they said than what they didn't. By far the most common style of answer simply listed the types of things the respondent did in their job.
"Producing content and services for the benefit of our member organizations," replied an executive named Michael. "The ability to produce [sermons] while simultaneously caring for your flock via personal visits," said a pastor named Jason. A researcher named Marianna pointed to "attending meetings . . . running lab experiments . . . and producing peer-reviewed articles." An engineering director named George defined productivity to be "doing what you said you would do."
None of these answers included specific goals to meet, or performance measures that could differentiate between doing a job well versus badly. When quantity was mentioned, it tended to be in the general sense that more is always better. (Productivity is "working all the time," explained an exhausted postdoc named Soph.) As I read through more of my surveys, an unsettling revelation began to emerge: for all of our complaining about the term, knowledge workers have no agreed-upon definition of what "productivity" even means.
This vagueness extends beyond the self-reflection of individuals; it's also reflected in academic treatments of this topic. In 1999, the management theorist Peter Drucker published an influential paper titled "Knowledge-Worker Productivity: The Biggest Challenge." Early in the article, Drucker admits that "work on the productivity of the knowledge worker has barely begun." In an attempt to rectify this reality, he goes on to list six "major factors" that influence productivity in the knowledge sector, including clarity about tasks and a commitment to continuous learning and innovation. As in my survey responses, all of this is just him talking around the issue: identifying things that might support productive work in a general sense, not providing specific properties to measure, or processes to improve. A few years ago, I interviewed a distinguished Babson College management professor named Tom Davenport for an article. I was interested in Davenport because, earlier in his career, he was one of the few academics I could find who seriously attempted to study productivity in the knowledge sector, culminating in his 2005 book, Thinking for a Living: How to Get Better Performance and Results from Knowledge Workers. Davenport ultimately became frustrated with the difficulty of making meaningful progress on this topic and moved on to more rewarding areas. "In most cases, people don't measure the productivity of knowledge workers," he explained. "And when we do, we do it in really silly ways, like how many papers do academics produce, regardless of quality. We are still in the quite early stages." Davenport has written or edited twenty-five books. He told me that Thinking for a Living was the worst selling of them all.
It’s hard to overemphasize how unusual it is that an economic sector as large as knowledge work lacks useful standard definitions of productivity. In almost every other area of our economy, not only is productivity a well-defined concept, but it’s often central to how work unfolds. Indeed, much of the astonishing economic growth fueling modernity can be attributed to a more systematic treatment of this fundamental idea. Early uses of the term can be traced back to agriculture, where its meaning is straightforward. For a farmer, the productivity of a given parcel of land can be measured by the amount of food the land produces. This ratio of output to input provides a compass of sorts that allows farmers to navigate the possible ways to cultivate their crops: systems that work better will produce measurably more bushels per acre. This use of a clear productivity metric to help improve clearly defined processes might sound obvious, but the introduction of this approach enabled explosive leaps forward in efficiency. In the eighteenth century, for example, it was exactly this type of metric-driven experimentation that led to the Norfolk four-course system of planting, which eliminated the need to leave fields fallow. This in turn made many farmers suddenly much more productive, helping to spur the British agricultural revolution.
As the Industrial Revolution began to emanate outward from Britain in the eighteenth century, early capitalists adapted similar notions of productivity from farm fields to their mills and factories. As with growing crops, the key idea was to measure the amount of output produced for a given amount of input and then experiment with different processes for improving this value. Farmers care about bushels per acre, while factory owners care about automobiles produced per paid hour of labor. Farmers might improve their metric by using a smarter crop rotation system, while factory owners might improve their metric by shifting production to a continuous-motion assembly line. In these examples, different types of things are being produced, but the force driving changes in methods is the same: productivity.
There was, of course, a well-known human cost to this emphasis on measurable improvement. Working on an assembly line is repetitive and boring, and the push for individuals to be more efficient in their every action creates conditions that promote injury and exhaustion. But the ability of productivity to generate astonishing economic growth in these sectors swept aside most such concerns. Assembly lines are dreary for workers, but when Henry Ford switched his factory in Highland Park, Michigan, to this method in 1913, the labor-hours required to produce a Model T dropped from 12.5 to around 1.5, a staggering improvement. By the end of the decade, half of the cars in the United States had been produced by the Ford Motor Company. These rewards were too powerful to resist. The story of economic growth in the modern Western world is in many ways a story about the triumph of productivity thinking.
But then the knowledge sector emerged as a major force in the mid-twentieth century, and this profitable dependence on crisp, quantitative, formal notions of productivity all but vanished. There was, as it turns out, a good reason for this abandonment: the old notions of productivity that worked so well in farming and manufacturing didn't seem to apply to this new style of cognitive work. One problem is the variability of effort. When the infamous efficiency consultant Frederick Winslow Taylor was hired to improve productivity at Bethlehem Steel in the early twentieth century, he could assume that each worker at the foundry was responsible for a single, clear task, like shoveling slag iron. This made it possible for him to precisely measure their output per unit of time and seek ways to improve this metric. In this particular example, Taylor ended up designing a better shovel for the foundry workers that carefully balanced the desire to move more iron per scoop while also avoiding unproductive overexertion. (In case you're wondering, he determined the optimal shovel load was twenty-one pounds.)
In knowledge work, by contrast, individuals are often wrangling complicated and constantly shifting workloads. You might be working on a client report at the same time that you're gathering testimonials for the company website and organizing an office party, all the while updating a conflict of interest statement that human resources just emailed you about. In this setting, there's no clear single output to track. And even if you do wade through this swamp of activity to identify the work that matters most (recall Davenport's example of counting a professor's academic publications), there's no easy way to control for the impact of unrelated obligations on each individual's ability to produce. I might have published more academic papers than you last year, but this might have been, in part, due to a time-consuming but important committee that you chaired. In this scenario, am I really a more productive employee?
A Henry Ford-style approach of improving systems instead of individuals also struggled to take hold in the knowledge work context. Manufacturing processes are precisely defined. At every stage of his development of the assembly line, Ford could detail exactly how Model Ts were produced in his factory. In the knowledge sector, by contrast, decisions about organizing and executing work are largely left up to individuals to figure out on their own. Companies might standardize the software that their employees use, but systems for assigning, managing, organizing, collaborating on, and ultimately executing tasks are typically left up to each individual. "The knowledge worker cannot be supervised closely or in detail," argued Peter Drucker in his influential 1967 book, The Effective Executive. "He can only be helped. But he must direct himself."
Knowledge work organizations took this recommendation seriously. The carefully engineered systems of factories were replaced with the "personal productivity" of offices, in which individuals deploy their own ad hoc and often ill-defined collection of tools and hacks to make sense of their jobs, with no one really knowing how anyone else is managing their work. In such a haphazard setting, there's no system to easily improve, no knowledge equivalent of the ten times productivity boost attributed to the assembly line. Drucker himself eventually grew to recognize the difficulties of pursuing productivity amid so much autonomy. "I think he did believe it was hard to improve . . . we let the inmates run the asylum, let them do the work as they wish," Tom Davenport told me, recalling conversations he had with Drucker in the 1990s.
These realities created a real problem for the emergent knowledge sector. Without concrete productivity metrics to measure and well-defined processes to improve, companies weren't clear how they should manage their employees. And as freelancers and small entrepreneurs in the sector became more prevalent, these individuals, responsible only for themselves, weren't sure how they should manage themselves. It was from this uncertainty that a simple alternative emerged: using visible activity as a crude proxy for actual productivity. If you can see me in my office (or, if I'm remote, see my email replies and chat messages arriving regularly), then, at the very least, you know I'm doing something. The more activity you see, the more you can assume that I'm contributing to the organization's bottom line. Similarly, the busier I am as a freelancer or entrepreneur, the more I can be assured I'm doing all I can to get after it.
As the twentieth century progressed, this visible-activity heuristic became the dominant way we began thinking about productivity in knowledge work. It's why we gather in office buildings using the same forty-hour workweeks originally developed for limiting the physical fatigue of factory labor, and why we feel guilty about ignoring our inboxes, or experience internalized pressure to volunteer or "perform busyness" when we see the boss is nearby. In the absence of more sophisticated measures of effectiveness, we also gravitate away from deeper efforts toward shallower, more concrete tasks that can be more easily checked off a to-do list. Long work sessions that don't immediately produce obvious contrails of effort become a source of anxiety: it's safer to chime in on email threads and "jump on" calls than to put your head down and create a bold new strategy. In her response to my reader survey, a social worker who identified herself only as N described the necessity of "not taking breaks, rushing, and hurrying all day," while a project manager named Doug explained that doing his job well reduced to "churning out lots of artifacts," whether they really mattered or not.
This switch from concrete productivity to this looser proxy heuristic is so important for our discussion to follow that we should give it a formal name and definition:
Pseudo-Productivity
The use of visible activity as the primary means of approximating actual productive effort.
It's the vagueness of this philosophy that gave my readers so much trouble when I asked them to define "productivity." It's not a formal system that can be easily explained; it's more like a mood: a generic atmosphere of meaningful activity maintained through frenetic motion. Its flaws are also more subtle. For early knowledge workers, there were clear advantages to pseudo-productivity when compared with the concrete systems that organized industrial labor. Many people would rather pretend to be busy in an air-conditioned office than stamp sheet metal all day on a hot factory floor. As we'll see next, it really wasn't until the last couple of decades that an approach to work centered on pseudo-productivity derailed. But once it did, the damage was significant.
Why Are We So Exhausted?
The opening vignette about CBS is a classic demonstration of pseudo-productivity. Les Moonves needed better performance, so he turned the obvious knob: demanding his employees work longer hours. Another reason why I chose this specific story, however, was its timing. In the mid-1990s, when Moonves sent out his frustrated memo, the sustainability of pseudo-productivity as a means for organizing knowledge work had begun, seemingly all at once, to quietly but rapidly degrade.
Copyright © 2024 by Cal Newport. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.