Okay, I’m A Centrist I Guess

Market-simulator video game mechanics reveal the core of the human soul.

Today I saw a short YouTube video about “cozy games” and started writing a comment, then realized that this was somehow prompting me to write the most succinct summary of my own personal views on politics and economics that I have ever managed. So, here goes.

Apparently all I needed to trim down 50,000 words on my annoyance at how the term “capitalism” is frustratingly both a nexus for useful critique and also for reductive, thought-terminating clichés was to realize that Animal Crossing: New Horizons is closer to my views on political economy than anything Adam Smith or Karl Marx ever wrote.


Cozy games illustrate that, in a laboratory environment, the core mechanics of capitalism are fun and motivating. It’s fun to gather resources, to improve one’s skills, to engage in mutually beneficial exchanges, to collect things, to decorate. It’s tremendously motivating. Even merely pretending to do those things can captivate huge amounts of our time and attention.

In real life, people need to be motivated to do stuff. Not because of some moral deficiency, but because in a large, complex civilization it’s hard to tell what needs doing. By the time an unmet need is widely visible to a population-level democratic consensus of non-experts — for example, trash piling up on the street everywhere, indicating a need for garbage collection — that doesn’t mean “time to pick up some trash”, it means “the sanitation system has collapsed, you’re probably going to get cholera”. We need a system that can identify utility signals more granularly and quickly, towards the edges of the social graph: one that allows person A to earn “value credits” of some kind for doing work that others find valuable, then trade those credits to person B for labor that person A finds valuable, even if it is not obvious to anyone else why person A wants that thing. Hence: money.

So, a market can provide an incentive structure that productively steers people towards needs, by aggregating small price signals in a distributed way, via the communication technology of “money”. Authoritarian communist states are famously bad at this, overproducing “necessary” goods in ways that can hold their own with the worst excesses of capitalists, while under-producing “luxury” goods that are politically seen as frivolous.

This is the kernel of truth around which the hardcore capitalist bootstrap grindset ideologues build their fabulist cinematic universe of cruelty. Markets are motivating, they reason, therefore we must worship the market as a god and obey its every whim. Markets can optimize some targets, therefore we must allow markets to optimize every target. Markets efficiently allocate resources, and people need resources to live, therefore anyone unable to secure resources in a market is undeserving of life. Thus we begin at “market economies provide some beneficial efficiencies” and after just a bit of hand-waving over some inconvenient details, we get to “thus, we must make the poor into a blood-sacrifice to Moloch, otherwise nobody will ever work, and we will all die, drowning in our own laziness”. “The cruelty is the point” is a convenient phrase, but among those with this worldview, the prosperity is the point; they just think the cruelty is the only engine that can possibly drive it.

Cozy games are therefore a centrist1 critique of capitalism. They present a world with the prosperity, but without the cruelty. More importantly though, by virtue of the fact that people actually play them in large numbers, they demonstrate that the cruelty is actually unnecessary.

You don’t need to play a cozy game. Tom Nook is not going to evict you from your real-life house if you don’t give him enough bells when it’s time to make rent. In fact, quite the opposite: you have to take time away from your real-life responsibilities and work, in order to make time for such a game. That is how motivating it is to engage with a market system in the abstract, with almost exclusively positive reinforcement.

What cozy games are showing us is that a world with tons of “free stuff” — universal basic income, universal health care, free education, free housing — will not result in a breakdown of our society because “no one wants to work”. People love to work.

If we can turn the market into a cozy game, with low stakes and a generous safety net, more people will engage with it, not fewer. People are not lazy; laziness does not exist. The motivation that people need from a market economy is not a constant looming threat of homelessness, starvation and death for themselves and their children, but a fun opportunity to get a five-star island rating.

Acknowledgments

Thank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support my work as a sponsor!


  1. Okay, I guess “far left” on the current US political compass, but in a just world socdems would be centrists. 

Safer, Not Later

How “Move Fast and Break Things” ruined the world by escaping the context that it was intended for.

Facebook — and by extension, most of Silicon Valley — rightly gets a lot of shit for its old motto, “Move Fast and Break Things”.

As a general principle for living your life, it is obviously terrible advice, and it leads to a lot of the horrific outcomes of Facebook’s business.

I don’t want to be an apologist for Facebook. I also do not want to excuse the worldview that leads to those kinds of outcomes. However, I do want to try to help laypeople understand what software engineers—particularly those situated at the point in history where this motto became popular—actually meant by it. I would like more people in the general public to understand why, to engineers, it was supposed to mean roughly the same thing as Facebook’s newer, goofier-sounding “Move fast with stable infrastructure”.

Move Slow

In the bad old days, circa 2005, two worlds within the software industry were colliding.

The old world was the world of integrated hardware/software companies, like IBM and Apple, and shrink-wrapped software companies like Microsoft and WordPerfect. The new world was software-as-a-service companies like Google, and, yes, Facebook.

In the old world, you delivered software in a physical, shrink-wrapped box, on a yearly release cycle. If you were really aggressive you might ship updates as often as quarterly, but faster than that and your physical shipping infrastructure would not be able to keep pace with new versions. As such, development could proceed in long phases based on those schedules.

In practice what this meant was that in the old world, when development began on a new version, programmers would go absolutely wild adding incredibly buggy, experimental code to see what sorts of things might be possible in a new version, then slowly transition to less coding and more testing, eventually settling into a testing and bug-fixing mode in the last few months before the release.

This is where the idea of “alpha” (development testing) and “beta” (user testing) versions came from. Software in that initial surge of unstable development was extremely likely to malfunction or even crash. Everyone understood that. How could it be otherwise? In an alpha test, the engineers hadn’t even started bug-fixing yet!

In the new world, the idea of a 6-month-long “beta test” was incoherent. If your software was a website, you shipped it to users every time they hit “refresh”. The software was running 24/7, on hardware that you controlled. You could be adding features at every minute of every day. And, now that this was possible, you needed to be adding those features, or your users would get bored and leave for your competitors, who would do it.

But this came along with a new attitude towards quality and reliability. If you needed to ship a feature within 24 hours, you couldn’t write a buggy version that crashed all the time, see how your carefully-selected group of users used it, collect crash reports, fix all the bugs, have a feature-freeze and do nothing but fix bugs for a few months. You needed to be able to ship a stable version of your software on Monday and then have another stable version on Tuesday.

To support this novel sort of development workflow, the industry developed new technologies. I am tempted to tell you about them all. Unit testing, continuous integration servers, error telemetry, system monitoring dashboards, feature flags... this is where a lot of my personal expertise lies. I was very much on the front lines of the “new world” in this conflict, trying to move companies to shorter and shorter development cycles, and move away from the legacy worldview of Big Release Day engineering.
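To make the last of those concrete, here is a minimal sketch of a feature flag, in Python. Everything in it (the rollout table, the checkout functions) is a hypothetical illustration rather than any real API; production systems use dedicated libraries and services for this, but the core idea really is just a guarded branch:

    import hashlib

    # Hypothetical rollout table: feature name -> percentage of users enabled.
    ROLLOUT_PERCENTAGES = {"new_checkout_flow": 5}  # ship to 5% of users first

    def flag_enabled(flag_name: str, user_id: int) -> bool:
        """Deterministically bucket a user so their experience stays stable."""
        digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).digest()
        return digest[0] % 100 < ROLLOUT_PERCENTAGES.get(flag_name, 0)

    def legacy_checkout(user_id: int) -> str:
        return f"legacy checkout for user {user_id}"

    def new_checkout(user_id: int) -> str:
        return f"new checkout for user {user_id}"

    def checkout(user_id: int) -> str:
        # The risky new code path ships dark, guarded by the flag; rolling
        # it back is a config change, not an emergency deploy.
        if flag_enabled("new_checkout_flow", user_id):
            return new_checkout(user_id)
        return legacy_checkout(user_id)

The point of this machinery is that deploying code and exposing users to it become two separate, independently reversible decisions.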

Old habits die hard, though. Most engineers at this point had been trained in a world where they had months of continuous quality-assurance processes after writing their first rough draft. Such engineers understandably felt nervous about being required to ship their probably-buggy code to paying customers every day. So they would try to slow things down.

Of course, when one is deploying all the time, all other things being equal, it’s easy to ship a show-stopping bug to customers. Organizations would do this, and they’d get burned. And when they’d get burned, they would introduce Processes to slow things down. Some of these would look like:

  1. Let’s keep a special version of our code set aside for testing, and then we’ll test that for a few weeks before sending it to users.
  2. The heads of every department need to sign-off on every deployed version, so everyone needs to spend a day writing up an explanation of their changes.
  3. QA should sign off too, so let’s have an extensive sign-off process where each individual tester fills out a sign-off form.

Then there’s my favorite version of this pattern, where management decides that deploys are inherently dangerous, and everyone should probably just stop doing them. It typically proceeds in stages:

  1. Let’s have a deploy freeze, and not deploy on Fridays; don’t want to mess up the weekend debugging an outage.
  2. Actually, let’s extend that freeze for all of December, we don’t want to mess up the holiday shopping season.
  3. Actually why not have the freeze extend into the end of November? Don’t want to mess with Thanksgiving and the Black Friday weekend.
  4. Some of our customers are in India, and Diwali’s also a big deal. Why not extend the freeze from the end of October?
  5. But, come to think of it, we do a fair amount of seasonal sales for Halloween too. How about no deployments from October 10 onward?
  6. You know what, sometimes people like to use our shop for Valentine’s day too. Let’s just never deploy again.

This same anti-pattern can repeat itself with an endlessly proliferating list of “environments”, whose main role ends up being to ensure that no code ever makes it to actual users.

… and break things anyway

As you may have begun to suspect, there are a few problems with this style of software development.

Even back in the bad old days of the 90s when you had to ship disks in boxes, this methodology contained within itself the seeds of its own destruction. As Joel Spolsky memorably put it, Microsoft discovered that this idea that you could introduce a ton of bugs and then just fix them later came along with some massive disadvantages:

The very first version of Microsoft Word for Windows was considered a “death march” project. It took forever. It kept slipping. The whole team was working ridiculous hours, the project was delayed again, and again, and again, and the stress was incredible. [...] The story goes that one programmer, who had to write the code to calculate the height of a line of text, simply wrote “return 12;” and waited for the bug report to come in [...]. The schedule was merely a checklist of features waiting to be turned into bugs. In the post-mortem, this was referred to as “infinite defects methodology”.

Which led them to what is perhaps the most ironclad law of software engineering:

In general, the longer you wait before fixing a bug, the costlier (in time and money) it is to fix.

A corollary to this is that the longer you wait to discover a bug, the costlier it is to fix.

Some bugs can be found by code review. So you should do code review. Some bugs can be found by automated tests. So you should do automated testing. Some bugs will be found by monitoring dashboards, so you should have monitoring dashboards.
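To pick on the line-height anecdote from earlier, here is a minimal, pytest-style sketch of the kind of automated test that would have caught a hard-coded “return 12;”. The function and its expected behavior are hypothetical illustrations, not Word’s actual layout logic:

    def line_height(font_size: int, leading: int = 2) -> int:
        """Compute the height of a line of text, in points."""
        return font_size + leading  # a real implementation, not a stub

    def test_line_height_tracks_font_size():
        # A hard-coded "return 12" would pass for 10-point text...
        assert line_height(10) == 12
        # ...but fail the moment any other size is exercised.
        assert line_height(24) == 26

The cheaper it is to run checks like this, the sooner the “bug report” comes in, and the cheaper the fix.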

So why not move fast?

But here is where Facebook’s old motto comes into play. All of those principles above are true, but here are two more things that are true:

  1. No matter how much code review, automated testing, and monitoring you have, some bugs can only be found by users interacting with your software.
  2. No bugs can be found merely by slowing down and putting the deploy off another day.

Once you have made the process of releasing software to users sufficiently safe that the potential damage of any given deployment can be reliably limited, it is always best to release your changes to users as quickly as possible.

More importantly, as an engineer, you will naturally have an inherent fear of breaking things. If you make no changes, you cannot be blamed for whatever goes wrong. Particularly if you grew up in the Old World, there is an ever-present temptation to slow down, to avoid shipping, to hold back your changes, just in case.

You will want to move slow, to avoid breaking things. Better to do nothing, to be useless, than to do harm.

For all its faults as an organization, Facebook did, and does, have some excellent infrastructure to avoid breaking their software systems in response to features being deployed to production. In that sense, they’d already done the work to avoid the “harm” of an individual engineer’s changes. If future work needed to be performed to increase safety, then that work should be done by the infrastructure team to make things safer, not by every other engineer slowing down.

The problem is that slowing down is not actually value neutral. To quote myself here:

If you can’t ship a feature, you can’t fix a bug.

When you slow down just for the sake of slowing down, you create more problems.

The first problem that you create is smashing together far too many changes at once.

You’ve got a development team. Every engineer on that team is adding features at some rate. You want them to be doing that work. Necessarily, they’re all integrating them into the codebase to be deployed whenever the next deployment happens.

If a problem occurs with one of those changes, and you want to quickly know which change caused that problem, ideally you want to compare two versions of the software with the smallest number of changes possible between them. Ideally, every individual change would be released on its own, so you can see differences in behavior between versions which contain one change each, not a gigantic avalanche of changes where any one of a hundred different features might be the culprit.

If you slow down for the sake of slowing down, you also create a process that cannot respond to failures of the existing code.

I’ve been writing thus far as if a system in a steady state is inherently fine, and each change carries the possibility of benefit but also the risk of failure. This is not always true. Changes don’t just occur in your software. They can happen in the world as well, and your software needs to be able to respond to them.

Back to that holiday shopping season example from earlier: if your deploy freeze prevents all deployments during the holiday season to prevent breakages, what happens when your small but growing e-commerce site encounters a catastrophic bug that has always been there, but only occurs when you have more than 10,000 concurrent users? The breakage is coming from new, never-before-seen levels of traffic. The breakage is coming from your success, not your code. You’d better be able to ship a fix for that bug real fast, because your only alternative to a fast turn-around bug-fix is shutting down the site entirely.

And if you see this failure for the first time on Black Friday, that is not the moment where you want to suddenly develop a new process for deploying on Friday. The only way to ensure that shipping that fix is easy is to ensure that shipping any fix is easy. That it’s a thing your whole team does quickly, all the time.

The motto “Move Fast And Break Things” caught on with a lot of the rest of Silicon Valley because we are all familiar with this toxic, paralyzing fear.

After we have the safety mechanisms in place to make changes as safe as they can be, we just need to push through it, and accept that things might break, but that’s OK.

Some Important Words are Missing

The motto has an implicit preamble: “Once you have done the work to make breaking things safe enough, then you should move fast and break things”.

When you are in a conflict about whether to “go fast” or “go slow”, the motto is not supposed to be telling you that the answer is an unqualified “GOTTA GO FAST”. Rather, it is an exhortation to take a beat and to go through a process of interrogating your motivation for slowing down. There are three possible things that a person saying “slow down” could mean about making a change:

  1. It is broken in a way you already understand. If this is the problem, then you should not make the change, because you know it’s not ready. If you already know it’s broken, then the change simply isn’t done. Finish the work, and ship it to users when it’s finished.
  2. It is risky in a way that you don’t have a way to defend against. As far as you know, the change works, but there’s a risk embedded in it that you don’t have any safety tools to deal with. If this is the issue, then what you should do is pause working on this change, and build the safety first.
  3. It is making you nervous in a way you can’t articulate. If you can’t describe a known defect as in point 1, and you can’t outline an improved safety control as in point 2, then this is the time to let go, accept that you might break something, and move fast.

The implied context for “move fast and break things” is only that third condition. If you’ve already built all the infrastructure that you can think of to build, and you’ve already fixed all the bugs in the change that you need to fix, then any further delay will not serve you; don’t delay any further.

Unfortunately, as you probably already know, the motto did not stay within that context.

This motto did a lot of good in its appropriate context, at its appropriate time. It’s still a useful heuristic for engineers, if the appropriate context is generally understood within the conversation where it is used.

However, it has clearly been taken to mean a lot of significantly more damaging things.

Purely from an engineering perspective, it has been reasonably successful. It’s less and less common to see people in the industry pushing back against tight deployment cycles. It’s also less common to see the basic safety mechanisms (version control, continuous integration, unit testing) get ignored. And many ex-Facebook engineers have clearly used this motto with the understanding I’ve described here.

Even in the narrow domain of software engineering, it is misused. I’ve seen it used to argue that a project didn’t need tests, that a deploy could be forced through a safety process, and that users did not need to be informed of a change that could potentially impact them personally.

Outside that domain, it’s far worse. It’s generally understood to mean that no safety mechanisms are required at all, that any change a software company wants to make is inherently justified because it’s OK to “move fast”. You can see this interpretation in the way that it has leaked out of Facebook’s engineering culture and suffused its entire management strategy, blundering through market after market and issue after issue, making catastrophic mistakes, making a perfunctory apology and moving on to the next massive harm.

In the decade since it has been retired as Facebook’s official motto, it has been used to defend some truly horrific abuses within the tech industry. You only need to visit the orange website to see it still being used this way.

Even at its best, “move fast and break things” is an engineering heuristic, not an ethical principle. Even within the context I’ve described, it’s only okay to move fast and break things. It is never okay to move fast and harm people.

So, while I do think that it is broadly misunderstood by the public, it’s still not a thing I’d ever say again. Instead, I propose this:

Make it safer, don’t make it later.

Acknowledgments

Thank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support my work as a sponsor! I am also available for consulting work if you think your organization could benefit from expertise on topics like “how do I make changes to my codebase, but, like, good ones”.

Telemetry Is Not Your Enemy

Not all data collection is the same, and not all of it is bad.

Part 1: A Tale of Two Metaphors

In software development, “telemetry” is data collected from users of the software, almost always delivered to the authors of the software via the Internet.

In recent years, there has been a great deal of angry public discourse about telemetry. In particular, there is a lot of concern that every software vendor and network service operator collecting any data at all is spying on its users, surveilling every aspect of our lives. The media narrative has been that any tech company collecting data for any purpose is acting creepy as hell.

I am quite sympathetic to this view. In general, some concern about privacy is warranted whenever a new data-collection scheme is proposed. However, it seems to me that the default response is no longer “concern and skepticism”, but rather “panic and fury”. All telemetry is seen as snooping, and all snooping is seen as evil.

There’s a sense in which software telemetry is like surveillance. However, it is only like surveillance. Surveillance is a metaphor, not a description. It is far from a perfect metaphor.

In the discourse around user privacy, I feel like we have lost a lot of nuance about the specific details of telemetry when some people dismiss all telemetry as snooping, spying, or surveillance.

Here are some ways in which software telemetry is not like “snooping”:

  1. The data may be aggregated. The people consuming the results of telemetry are rarely looking at individual records, and individual records may not even exist in some cases. There are tools, like Prio, designed to do this aggregation in as privacy-sensitive a way as possible.
  2. The data is rarely looked at by human beings. In the cases (such as ad-targeting) where the data is highly individuated, both the input (your activity) and the output (your recommendations) are mainly consumed by you, in your experience of a product, by way of algorithms acting upon the data, not by an employee of the company you’re interacting with.1
  3. The data is highly specific. “Here’s a record with your account ID and the number of times you clicked the Add To Cart button without checking out” is not remotely the same class of information as “Here’s several hours of video and audio, attached to your full name, recorded without your knowledge or consent”. Emotional appeals calling any data “surveillance” tend to suggest that all collected data is the latter, when in reality most of it is much closer to the former, as the sketch after this list illustrates.
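To make that third point concrete, here is a minimal Python sketch of the difference between an individuated record and the aggregate that analysts typically see. The event name and fields are hypothetical illustrations, not any vendor’s actual schema:

    from collections import Counter

    # A narrowly-scoped telemetry event: a counter tied to one interaction,
    # not a recording of someone's life.
    events = [
        {"event": "add_to_cart_no_checkout", "account_id": 1234},
        {"event": "add_to_cart_no_checkout", "account_id": 1234},
        {"event": "add_to_cart_no_checkout", "account_id": 5678},
    ]

    # Before anyone looks at the data, it can be reduced to totals; the
    # dashboard says "this happened N times", not who did it.
    aggregated = Counter(e["event"] for e in events)
    print(aggregated)  # Counter({'add_to_cart_no_checkout': 3})

Tools like the aforementioned Prio push this further, computing the aggregate cryptographically so that individual records need never exist on the server at all.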

There are other metaphors which can be used to understand software telemetry. For example, there is also a sense in which it is like voting.

I emphasize that voting is also a metaphor here, not a description. I will also freely admit that it is in many ways a worse metaphor for telemetry than “surveillance”. But it can illuminate other aspects of telemetry, the ones that the surveillance metaphor leaves out.

Data collection is like voting because the data can represent your interests to a party that has some power over you. Your software vendor has the power to change your software, and you probably don’t: you likely don’t have access to the source code, and even if it’s open source, you almost certainly don’t have the resources to take over its maintenance.

For example, let’s consider this paragraph from some Microsoft documentation about telemetry:

We also use the insights to drive improvements and intelligence into some of our management and monitoring solutions. This improvement helps customers diagnose quality issues and save money by making fewer support calls to Microsoft.

“Examples of how Microsoft uses the telemetry data” from the Azure SDK documentation

What Microsoft is saying here is that they’re collecting the data for your own benefit. They’re not attempting to justify it on the basis that defenders of law-enforcement wiretap schemes might. Those who want literal mass surveillance tend to justify it by conceding that it might hurt individuals a little bit to be spied upon, but if we spy on everyone surely we can find the bad people and stop them from doing bad things. That’s best for society.

But Microsoft isn’t saying that.2 What Microsoft is saying here is that if you’re experiencing a problem, they want to know about it so they can fix it and make the experience better for you.

I think that is at least partially true.

Part 2: I Qualify My Claims Extensively So You Jackals Don’t Lose Your Damn Minds On The Orange Website

I was inspired to write this post by the recent discussions in the Go community about how to collect telemetry, which provoked a lot of vitriol from people viscerally reacting to any telemetry as invasive surveillance. I will therefore heavily qualify what I’ve said above, to try to address some of that emotional reaction in advance.

I am not suggesting that we must take Microsoft (or indeed, the Golang team) fully at their word here. Trillion-dollar corporations will always deserve skepticism. I will concede in advance that it’s possible the data is put to other uses as well, possibly to maximize profits at the expense of users. But it seems reasonable to assume that their stated motive is at least partially real; it’s not like Microsoft wants Azure to be bad.

I can speak from personal experience: I’ve been in professional conversations around telemetry, and in those conversations my and my teams’ motivations were overwhelmingly focused on straightforwardly making the user experience good. We wanted it to be good so that users would like our products and buy more of them.

It’s hard enough to do that without nefarious ulterior motives. Most of the people who develop your software just don’t have the resources it takes to be evil about this.

Part 3: They Can’t Help You If They Can’t See You

With those qualifications out of the way, I will proceed with these axioms:

  1. The developers of software will make changes to it.
  2. These changes will benefit some users.
  3. Which changes the developers select will be derived, at least in part, from the information that they have.
  4. At least part of the information that the developers have is derived from the telemetry they collect.

If we can agree that those axioms are reasonable, then let us imagine two user populations:

  • Population A is privacy-sensitive and therefore sees telemetry as bad, and opts out of everything they possibly can.
  • Population B doesn’t care about privacy, and therefore ignores any telemetry and blithely clicks through any opt-in.

When the developer goes to make changes, they will have more information about Population B. Even if they’re vaguely aware that some users are opting out (or refusing to opt in), the developer will know far less about Population A. This means that any changes the developer makes will not serve the needs of their privacy-conscious users, which means fewer features that respect privacy as time goes on.

Part 4: Free as in Fact-Free Guesses

In the world of open source software, this problem is even worse. We often have fewer resources with which to collect and analyze telemetry in the first place, and when we do attempt to collect it, a vocal minority among those users are openly hostile, with feedback that borders on harassment. So we often have no telemetry at all, and are making changes based on guesses.

Meanwhile, in proprietary software, the user population is far larger and less engaged. Developers are not exposed directly to users, and therefore cannot be harassed or intimidated into dropping their telemetry. This gives proprietary software a huge advantage: its developers can know what most of their users want, make changes to accommodate that, and therefore build a better product than one based on the uninformed guesses of the open source competition.

Proprietary software generally starts out with a panoply of advantages already — most of which boil down to “money” — but our collective knee-jerk reaction to any attempt to collect telemetry is a massive and continuing own-goal on the part of the FLOSS community. There’s no inherent reason why free software’s design cannot be based on good data, but our community’s history and self-selection biases make us less willing to consider it.

That does not mean we need to accept invasive data collection that is more like surveillance. We do not need to allow for stockpiled personally-identifiable information about individual users that lives forever. The abuses of indiscriminate tech data collection are real, and I am not suggesting that we forget about them.

The process for collecting telemetry must be open and transparent, and the data collected needs to be continuously vetted for safety. Clear data-retention policies should always be in place, to avoid future unanticipated misuses of data that is thought to be safe today but may be de-anonymized or otherwise abused in the future.
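As a minimal sketch of what an explicit, mechanically-enforced retention policy might look like (the categories and lifespans here are hypothetical illustrations, not recommendations):

    from datetime import datetime, timedelta, timezone

    # Each category of collected data gets a written-down lifespan that can
    # be reviewed, criticized, and enforced by machinery.
    RETENTION_POLICY = {
        "crash_reports": timedelta(days=90),
        "feature_usage_counters": timedelta(days=365),
        # Anything that could identify an individual gets the shortest leash.
        "support_session_ids": timedelta(days=7),
    }

    def is_expired(category: str, recorded_at: datetime) -> bool:
        """True if a record has outlived its policy and must be deleted."""
        return datetime.now(timezone.utc) - recorded_at > RETENTION_POLICY[category]

The specific data structure doesn’t matter; what matters is that retention is stated up front and checked automatically, rather than left to whoever happens to remember that the database exists.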

I want the collaborative feedback process of open source development to result in this kind of telemetry: thoughtful, respectful of user privacy, and designed with the principle of least privilege in mind. If we have this kind of process, then we could hold it up as an example for proprietary developers to follow, and possibly improve the industry at large.

But in order to be able to produce that example, we must produce criticism of telemetry efforts that is specific, grounded in actual risks and harms to users, rather than a series of emotional appeals to slippery-slope arguments that do not correspond to the actual data being collected. We must arrive at a consensus that there are benefits to users in allowing software engineers to have enough information to do their jobs, and telemetry is not uniformly bad. We cannot allow a few users who are complaining to stop these efforts for everyone.

After all, when those proprietary developers look at the hard data that they have about what their users want and need, it’s clear that those who are complaining don’t even exist.


  1. Please note that I’m not saying that this automatically makes such collection ethical. Attempting to modify user behavior or conduct un-reviewed psychological experiments on your customers is also wrong. But it’s wrong in a way that is somewhat different than simply spying on them. 

  2. I am not suggesting that data collected for the purposes of improving the users’ experience could not be used against their interest, whether by law enforcement or by cybercriminals or by Microsoft itself. Only that that’s not what the goal is here. 

No More Stories

Journalists need to stop writing “stories” and start monitoring empirical consensus.

This is a bit of a rant, and it’s about a topic that I’m not an expert on, but one I feel strongly about. So, despite the forceful language, please read this knowing that there’s still a fair amount of epistemic humility behind what I’m saying, and I’m definitely open to updating my opinion if an expert on journalism or public policy can offer a compelling Chesterton’s-fence justification for the current structure of journalistic institutions. Comments sections are the devil’s playground, so I don’t have one, but feel free to reach out, and if we have a fruitful discussion I’m happy to publish it here.

One of the things that COVID has taught me is that the concept of a “story” in the news media is a relic that needs to be completely re-thought. It is not suited to the challenges of media communication today.

Specifically, there are challenging and complex public-policy questions which require robust engagement from an informed electorate1. These questions are open-ended and their answers are unclear. What’s an appropriate strategy for public safety, for example? Should policing be part of it? I have my preferred snappy slogans in these areas but if we want to step away from propaganda for a moment and focus on governance, this is actually a really difficult question that hinges on a ton of difficult-to-source data.

For most of history, facts were scarce. It was the journalist’s job to find facts, to write them down, and to circulate them to as many people as possible, so that the public discourse could at least be fact-based; to have some basis in objective reality.

In the era of the Internet, though, we are drowning in facts. We don't just have facts, we have data. We don't just have data, we have metadata; we have databases and data warehouses and data lakes and all manner of data containers in between. These data do not coalesce into information on their own, however. They need to be collected, collated, synthesized, and interpreted.

Thus was born the concept of Data Journalism. No longer is it the function of the journalist simply to report the facts; in order for the discussion to be usefully grounded, they must also aggregate the facts, and present their aggregation in a way that can be comprehended.

Data journalism is definitely a step up, and there are many excellent data-journalism projects that have been done. But the problem with these projects is that they are often individual data-journalism stories that give a temporal snapshot of one journalist's interpretation of an issue. Just a tidy little pile of motivated reasoning with a few cherry-picked citations, and then we move on to the next story.

And that's when we even get data journalism. Most journalism is still just isolated stories, presented as prose. But this sort of story-after-story presentation in most publications provides a misleading picture of the world. Beyond even the sample bias of what kinds of stories get clicks and can move ad inventory, this sequential chain of disconnected facts is extremely prone to cherry-picking by bad-faith propagandists, and even much less malicious problems like recency bias and the availability heuristic.

Trying to develop a robust understanding of complex public policy issues by looking at individual news stories is like trying to map a continent's coastline by examining individual grains of sand one at a time.

What we need from journalism in the 21st century is a curated set of ongoing collections of consensus. The best strategy to combat COVID might change over time. Do mask mandates work? You can’t possibly answer that question by scrounging around on PubMed by yourself, or worse yet, by reading a jumbled stream of op-ed thinkpieces in the New York Times and the Washington Post.

During COVID, some major press institutions started caving to the fairly desperate need for this sort of structure by setting up “trackers” for COVID vaccinations, case counts, and so on. But these trackers still fit awkwardly within the “story” narrative. This one from the Washington Post is a “story” from 2020, but has data from December 16th, 2021.

These trackers monitor only a few stats though, and don’t provide much in the way of meta-commentary on pressing questions: do masks work? Do lockdowns work? How much do we know about the efficacy of various ventilation improvements?

Each journalistic institution should maintain a “tracker” for every issue of public concern, and ideally they’d be in conversation with each other, constantly curating their list of sources in real time, updating conclusions as new data arrives, and recording an ongoing tally of what we can really be certain about and what is still a legitimate controversy.2


Make Time For Hope

Pandora hastened to replace the lid! but, alas! the whole contents of the jar had escaped, one thing only excepted, which lay at the bottom, and that was HOPE. So we see at this day, whatever evils are abroad, hope never entirely leaves us; and while we have THAT, no amount of other ills can make us completely wretched.

Bulfinch’s Mythology

It’s been a rough couple of weeks, and it seems likely to continue to be so for quite some time. There are many real and terrible consequences of the mistake that America made in November, and ignoring them will not make them go away. We’ll all need to find a way to do our part.

It’s not just you — it’s legit hard to focus on work right now. This is especially true if, as many people in my community are, you are trying to motivate yourself to work on extracurricular, after-work projects that you used to find exciting, and instead find it hard to get out of bed in the morning.

I have no particular position of authority to advise you what to do about this situation, but I need to give a little pep talk to myself to get out of bed in the morning these days, so I figure I’d share my strategy with you. This is as much in the hope that I’ll follow it more closely myself as it is that it will be of use to you.

With that, here are some ideas.

It’s not over.

The feeling that nothing else is important any more, that everything must now be a life-or-death political struggle, is exhausting. Again, I don’t want to minimize the very real problems that are coming or the need to do something about them, but, life will go on. Remind yourself of that. If you were doing something important before, it’s still important. The rest of the world isn’t going away.

Make as much time for self-care as you need.

You’re not going to be of much use to anyone if you’re just a sobbing wreck all the time. Do whatever you can do to take care of yourself and don’t feel guilty about it. We’ll all do what we can, when we can.1

You need to put on your own oxygen mask first.

Make time, every day, for hope.

“You can stand anything for 10 seconds. Then you just start on a new 10 seconds.”

Every day, set aside some time — maybe 10 minutes, maybe an hour, maybe half the day, however much you can manage — where you’re going to just pretend everything is going to be OK.2

Once you’ve managed to securely fasten this self-deception in place, take the time to do the things you think are important. Of course, for my audience, “work on your cool open source code” is a safe bet for something you might want to do, but don’t make the mistake of always grimly setting your jaw and nose to the extracurricular grindstone; that would just be trading one set of world-weariness for another.

After convincing yourself that everything’s fine, spend time with your friends and family, make art, or heck, just enjoy a good movie. Don’t let the flavor of life turn to ash on your tongue.

Good night and good luck.

Thanks for reading. It’s going to be a long four years3; I wish you the best of luck living your life in the meanwhile.


  1. I should note that self-care includes just doing your work to financially support yourself. If you have a job that you don’t feel is meaningful but you need the wages to survive, that’s meaningful. It’s OK. Let yourself do it. Do a good job. Don’t get fired. 

  2. I know that there are people who are in desperate situations who can’t do this; if you’re an immigrant in illegal ICE or CBP detention, I’m (hopefully obviously) not talking to you. But, luckily, this is not yet the majority of the population. Most of us can, at least some of the time, afford to ignore the ongoing disaster. 

  3. Realistically, probably more like 20 months, once the Rs in congress realize that he’s completely destroyed their party’s credibility and get around to impeaching him for one of his numerous crimes.