Stop Working So Hard

In response to a thoughtful reply from John Carmack, I share some thoughts on why we all need to stop working so damn hard.

Recently, I saw a tweet pointing to a comment John Carmack had posted to a thread on Hacker News about working hours. As this post propagated a good many bad ideas about working hours, particularly in the software industry, I of course had to reply. After some further back-and-forth on Twitter, Carmack followed up.

First off, thanks to Mr. Carmack for writing such a thorough reply in good faith. I suppose internet arguments have made me a bit cynical in that I didn’t expect that. I still definitely don’t agree, but I think there’s a legitimate analysis of the available evidence there now, at least.

When trying to post this reply to HN, I was told that the comment was too long, and I suppose it is a bit long for a comment. So, without further ado, here are my further thoughts on working hours.

... if only the workers in Greece would ease up a bit, they would get the productivity of Germany. Would you make that statement?

Not as such, no. This is a hugely complex situation, mixing together finance, culture, management, international politics, monetary policy, and a bunch of other things. That study, like most of the others I linked to, is interesting in that it confirms the general model of ability-to-work (i.e. “concentration” or “willpower”) as a finite resource that you exhaust throughout the day; not in that “reduction in working hours” is a panacea. Average productivity per hour worked would definitely go up, though.

However, I do believe (and now we are firmly off into interpretation-of-results territory; I have nothing empirical to offer you here) that if the average Greek worker were only as stressed as the average German one, accounting for both overwork and the constant presence of a catastrophic financial crisis in the news, then yes: they’d achieve equivalent productivity.

Total net productivity per worker, discounting for any increases in errors and negative side effects, continues increasing well past 40 hours per week. ... Only when you are so broken down that even when you come back the following day your productivity per hour is significantly impaired, do you open up the possibility of actually reducing your net output.

The trouble here is that you really cannot discount for errors and negative side effects, especially in the long term.

First of all, the effects of overwork (and attendant problems, like sleep deprivation) are cumulative. While total output for a given week may keep increasing past 40 hours, if you continue to work more, your productivity will continue to degrade. So the case where “you come back the following day ... impaired” is pretty common... eventually.

Since none of this epidemiological work tracks individual performance longitudinally, there are few conclusive demonstrations of this fact, but lots of compelling indications; in the past, I’ve collected quantitative data on myself (and my reports, back when I used to be a manager) that strongly corroborates this hypothesis. So encouraging someone to work one sixty-hour week might be a completely reasonable trade-off to address a deadline; but a culture where people are asked to work nights and weekends as a matter of course is inherently destructive. Once you get into the area where people are losing sleep (and for people with other responsibilities, it’s not hard to get to that point), overwork starts impacting things like the ability to form long-term memories, which means that not only do you do less work, you also consistently improve less.
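To make the shape of that trade-off concrete, here is a purely illustrative toy model; the decay and recovery constants are invented for this sketch and are not derived from any of the studies I linked:

```python
# A toy model of cumulative overwork: each hour past 40 in a week
# adds fatigue, and fatigue reduces effectiveness in later weeks.
# All constants here are made up purely for illustration.

def total_output(hours_per_week: int, weeks: int) -> float:
    """Sum of effective hours produced over a span of weeks."""
    fatigue = 0.0
    total = 0.0
    for _ in range(weeks):
        effectiveness = max(0.0, 1.0 - fatigue)
        total += hours_per_week * effectiveness
        overwork = max(0, hours_per_week - 40)
        # Overtime accumulates fatigue; a normal week recovers a bit.
        fatigue = max(0.0, fatigue + 0.01 * overwork - 0.05)
    return total

for hours in (40, 60):
    print(f"{hours} h/week for  1 week : {total_output(hours, 1):6.1f}")
    print(f"{hours} h/week for 12 weeks: {total_output(hours, 12):6.1f}")
```

In this (deliberately exaggerated) model, one sixty-hour week out-produces one forty-hour week, but sustaining sixty hours for a quarter yields less than half the output of a steady forty; the point is the shape of the curve, not the specific numbers.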

Furthermore, errors and negative side effects can have a disproportionate impact.

Let me narrow the field here to two professions that I know a bit about and that are germane to this discussion: one, health care, which the original article here starts off by referencing, and two, software development, with which we are both familiar (since you already raised The Mythical Man-Month).

In medicine, you can do a lot of valuable life-saving work in a continuous 100-hour shift. And in fact residents are often required to do so as a sort of professional hazing ritual. However, you can also make catastrophic mistakes that would cost a person their life; this happens routinely. Study after study confirms this, and minor reforms happen, but residents are still routinely abused and made to work in inhumane conditions that have catastrophic outcomes for their patients.

In software, defects can be extremely expensive to fix. Not only are they hard to fix, they can also be hard to detect. The phenomenon of the Net Negative Producing Programmer also indicates that not only can productivity drop to zero, it can drop below zero. On the anecdotal side, anyone who has had the unfortunate experience of cleaning up after a burnt-out co-worker can attest to this.

There are a great many tasks where inefficiency grows significantly with additional workers involved; the Mythical Man Month problem is real. In cases like these, you are better off with a smaller team of harder working people, even if their productivity-per-hour is somewhat lower.

The specific observation from The Mythical Man-Month was that the number of communication links in a fully connected graph of employees increases quadratically (n workers require n(n−1)/2 links), whereas the productivity those additional workers contribute increases only linearly. If you have a well-designed organization, you can add people without requiring that your communication graph be fully connected.
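To spell out that arithmetic, here is a quick sketch (the function name is mine, not Brooks's):

```python
# Brooks's observation: in a fully connected team, communication
# links grow quadratically while added labor grows only linearly.

def communication_links(workers: int) -> int:
    """Number of edges in a fully connected graph of workers."""
    return workers * (workers - 1) // 2

for n in (2, 5, 10, 20, 50):
    print(f"{n:3} workers -> {communication_links(n):5} links")

# 2 workers share 1 link; 50 workers share 1225. Doubling the team
# roughly quadruples the coordination overhead.
```

The linear gains from extra hands can survive only if the quadratic coordination cost is contained, which is exactly what a well-designed org chart does.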

But of course, you can’t always do that. And, specifically, you can’t do that when a project is already late, because by then you have already figured out how the work is going to be divided. Brooks’ Law is formulated as: “Adding manpower to a late software project makes it later.” This is indubitable. But one of the other famous quotes from this book is “The bearing of a child takes nine months, no matter how many women are assigned.”

The bearing of a child also takes nine months no matter how many hours a day the woman is assigned to work on it. So “in cases like these” my contention is that you are not “better off with ... harder working people”: you’re just screwed. Some projects are impossible and you are better off acknowledging the fact that you made unrealistic estimates and you are going to fail.

You called my post “so wrong, and so potentially destructive”, which leads me to believe that you hold an ideological position that the world would be better if people didn’t work as long. I don’t actually have a particularly strong position there; my point is purely about the effective output of an individual.

I do, in fact, hold such an ideological position, but I’d like to think that said position is strongly justified by the data available to me.

But, I suppose calling it “so potentially destructive” might have seemed glib, if you are really just looking at the microcosm of what one individual might do on one given week at work, and not at the broader cultural implications of this commentary. After all, as this discussion shows, if you are really restricting your commentary to a single person on a single work-week, the case is substantially more ambiguous. So let me explain why I believe it’s harmful, as opposed to merely being incorrect.

First of all, the problem is that you can’t actually ignore the broader cultural implications. This is Hacker News, and you are John Carmack; you are practically a cultural institution yourself, and by using this site you are speaking directly into the broader culture of the software industry.

Software development, especially in the USA, suffers from a long-standing culture of chronic overwork. Startup developers in their metaphorical (and sometimes literal) garages are lionized and then eventually mythologized for spending so many hours on their programs. Anywhere that it is celebrated, this mythology rapidly metastasizes into a severe problem: the Death March.

Note that although the term “death march” is technically general to any project management, it applies “originally and especially in software development”, because this problem is worse in the software industry (although it has been improving in recent years) than almost anywhere else.

So when John Carmack says on Hacker News that “the effective output of an individual” will tend to increase with hours worked, that sends a message to many young and impressionable software developers. This is the exact same phenomenon that makes pop-sci writing terrible: your statement may be, in some limited context, and under some tight constraints, empirically correct, but it doesn’t matter because when you expand the parameters to the full spectrum of these people’s careers, it’s both totally false and also a reinforcement of an existing cognitive bias and cultural trope.

I can’t remember the name of this cognitive bias (and my Google-fu is failing me), but I know it exists. Let me call it the “I’m fine” bias. I know it exists because I have a friend who had the opportunity to go on a flight with NASA (on the Vomit Comet), and one of the more memorable parts of the experience he related to me was the hypoxia test. The test involved basic math and spatial reasoning skills, but those questions weren’t the point: the real test was to notice that the oxygen levels were dropping and report it to the proctor. Concentrating on the questions, many people failed the first few times, because the “I’m fine” bias makes it very hard to notice that you are impaired.

This is true of people who are drunk, or people who are sleep deprived, too. Their abilities are quantifiably impaired, but they have to reach a pretty severe level of impairment before they notice.

So people who are overworked might feel generally bad but they don’t notice their productivity dropping until they’re way over the red line.

Combine this with the fact that most people, especially those already employed as developers, are actually quite hard-working and earnest (laziness is much more common as a rhetorical device than as an actual personality flaw), and you end up in a scenario where a good software development manager is responsible much more for telling people to slow down, to take breaks, and to be more realistic in their estimates than for telling them to speed up, work harder, and put in more hours.

The trouble is that this goes against the manager’s instincts as well. When you’re a manager, you tend to think of things in terms of resources: hours worked, money to hire people, and so on. So there’s a constant nagging temptation to encourage people to work more hours in a day, so you can get more output (hours worked) out of your input (hiring budget). The problem here is that while all hours are nominally equal, some hours are more equal than others. Managers have to fight against their own sense that a few more worked hours will be fine, against their employees’ tendency to overwork because they’re not noticing their own burnout, and against upper management’s tendency to demand more.

It is into this roiling stew of the relentless impulse to “work, work, work” that we are throwing our commentary about whether it’s a good idea or not to work more hours in the week. The scales are already weighted very heavily on one side (which happens to be the wrong side in the first place), and while we’ve come back from the unethical and illegal brink we were at as an industry in the days of ea_spouse, software developers still generally work far too much.

If we were fighting an existential threat, say an asteroid that would hit the earth in a year, would you really tell everyone involved in the project that they should go home after 35 hours a week, because they are harming the project if they work longer?

Going back to my earlier explanation in this post about the cumulative impact of stress and sleep deprivation: if we were really fighting an existential threat, the equation changes somewhat. Specifically, the part of the equation where people can have meaningful downtime.

In such a situation, I would still want to make sure that people are as well-rested and as reasonably able to focus as they possibly can be. As you’ve already acknowledged, there are “increases in errors” when people are working too much, and we REALLY don’t want the asteroid-targeting program that is going to blow apart the asteroid that will wipe out all life on earth to have “increased errors”.

But there’s also the problem that, faced with such an existential crisis, nobody is really going to be able to go home, enjoy a fine craft beer, spend some time playing with their kids, and come back refreshed at 100% the next morning. They’re going to be freaking out constantly about the asteroid, and they’re going to be losing sleep over it whether they’re working or not. So, in such a situation, people should have the option to go home and relax if they’re psychologically capable of doing so; but if the way of spending their time that makes them feel the most sane is working constantly and sleeping under their desk, well, that’s the best one can do in that situation.

This metaphor is itself misleading and out of place, though. There is a strong cultural trend in software, especially in the startup ecosystem, to over-inflate the importance of what the company is doing (it is not “changing the world” to create a website for people to order room service for their dogs) and thereby to catastrophize any threat to that goal. The vast majority of the time, it is inappropriate either to sacrifice, or to ask someone else to sacrifice, health and well-being for short-term gains. Remember, given the cumulative effects of overwork, short-term gains are all you can even get. This sacrifice also carries a huge opportunity cost, because you can’t focus on more important things that might come along later.

In other words, while the overtime situation is complex and delicate in the case of an impending asteroid impact, there’s also the question of whether, at the beginning of Project Blow Up The Asteroid, I want everyone to be burnt out and overworked from their pet-hotel startup website. And in that case, I can say, unequivocally, no. I want them bright-eyed and bushy-tailed for what is sure to be a grueling project that absolutely needs to happen, no matter what the overtime policy is. I want to make sure they didn’t waste their youth and health on somebody else’s stock valuation.

Panopticon Rift

Why exactly is it that Oculus Rift fans hate Facebook so much?

Greg Price expresses a common opinion here:

I myself had a similar reaction; despite not being particularly invested in VR specifically (I was not an Oculus backer) I felt pretty negatively about the fact that Facebook was the acquirer. I wasn’t quite sure why, at first, and after some reflection I’d like to share my thoughts with you on both why this is a bad thing and also why gamers, in particular, were disturbed by it.

The Oculus Rift really captured the imagination of the gaming community’s intelligentsia. John Carmack’s imprimatur alone was enough to get people interested, but the real power of the Oculus was to finally deliver on the promise of all those unbelievably clunky virtual reality headsets that we’ve played around with at one time or another.

(Image: Nintendo’s Virtual Boy, one such clunky headset.)

The promise of Virtual Reality is, of course, to transport us so completely to another place and time that we cease to even be aware of the real world. It is that experience, that complete and overwhelming sense of being in an imagined place, that many gamers are looking for when they sit down in front of an existing game. It is the aspiration to that experience that makes “immersive” such a high compliment in game criticism.

Personally, immersion in a virtual world was the beginning of my real interest in computers. I’ve never been the kind of gamer who felt the need to be intensely competitive. I didn’t really care about rules and mechanics that much, at least not for their own sake. In fact, the term “gamer” is a bit of a misnomer - I’m more a connoisseur of interactive experiences. The very first “game” I remember really enjoying was Zork. Although Zork is a goal-directed game you can “win”, that didn’t interest me. Instead, I enjoyed wandering around the environments it provided, and experimenting with different actions to see what the computer would do.

Computer games are not really just one thing. The same term, “games”, encompasses wildly divergent experiences including computerized Solitaire, Silent Hill, Dyad, and Myst. Nevertheless, I think that pursuit of immersion – of really, fully, presently paying attention to an interactive experience – is a primary reason that “gamers” feel the need to distinguish themselves (ourselves?) from the casual computing public. Obviously, the fans of Oculus are among those most focused on immersive experiences.

Gamers feel the need to set themselves apart because computing today is practically defined by a torrential cascade of things that we’re only barely paying attention to. What makes computing “today” different than the computing of yesteryear is that computing used to be about thinking, and now it’s about communication.

The advent of social media has left us not only focused on communication, but focused on constant, ephemeral, shallow communication. This is not an accident. In our present economy there is, after all, no such thing as a “social media business” (or, indeed, a “search engine business”); there are only ad agencies.

The purveyors of social media need you to be engaged enough with your friends that you won’t leave their sites, so there is some level of entertainment or interest they must bring to their transactions with you, of course; but they don’t want you to be so engaged that a distracting advertisement would seem obviously crass and inappropriate. Lighthearted banter, pictures of shiba inus, and shallow gossip are fantastic fodder for this. Less so is long, soul-searching writing that explains and examines your feelings.

An ad for cat food might seem fine if you’re chuckling at a picture of a dog in a hoodie saying “wow, very meme, such meta”. It’s less likely to drive you through to the terminus of that purchase conversion funnel if you’re intently focused on supporting a friend who is explaining to you how a cancer scare drove home how they’re doing nothing with their life, or how your friend’s sibling has run away from home and your friend doesn’t know if they’re safe.

Even if you’re a highly social extrovert, the dominant emotion that this torrent of ephemeral communication produces is, at best, sarcastic amusement. More likely, it produces constant anxiety. We do not experience our better selves when we’re not really paying focused attention to anything. As Community Season 5 Episode 8 recently put it, somewhat more bluntly: “Mark Zuckerberg is Fidel Castro in flip-flops.”

I think that despite all the other reasons we’re annoyed (the feelings of betrayal around the Kickstarter, protestations of Facebook’s creepiness, and so on), the root of the anger around the Facebook acquisition is the sense that this technology, with so much potential to reverse the Balkanization of our attention, has now been put directly into the hands of those who created the problem in the first place.

So instead of looking forward to a technology that will allow us to visit a world of pure imagination, we now eagerly await something that will shove distracting notifications and annoying advertisements literally an inch away from our eyeballs. Probably after charging us several hundred dollars for the privilege.

It seems to me that’s a pretty clear justification for a few hundred negative reddit comments.

