Deciphering Glyphhttps://blog.glyph.im/2024-02-05T14:36:00-08:00Let Me Tell You A Secret2024-02-05T14:36:00-08:002024-02-05T14:36:00-08:00Glyphtag:blog.glyph.im,2024-02-05:/2024/02/let-me-tell-you-a-secret.html<p>In which I provide you with hundreds of dollars worth of software
consulting, for free, in a single blog post.</p><body><p>I do consulting<sup id=fnref:1:let-me-tell-you-a-secret-2024-2><a class=footnote-ref href=#fn:1:let-me-tell-you-a-secret-2024-2 id=fnref:1>1</a></sup> on software architecture, network protocol development,
Python software infrastructure, streamlined cloud deployment, and open source
strategy, among other nerdy things. I enjoy solving challenging, complex
technical problems or contributing to the open source commons. On the best
jobs, I get to do both.</p>
<p>Today I would like to share with you a secret of the software technology
consulting trade.</p>
<p>I should note that this secret is not specific to me. I have several colleagues
who have also done software consulting and have reflected versions of this
experience back to me.<sup id=fnref:2:let-me-tell-you-a-secret-2024-2><a class=footnote-ref href=#fn:2:let-me-tell-you-a-secret-2024-2 id=fnref:2>2</a></sup></p>
<p>We’ll get to the secret itself in a moment, but first, some background.</p>
<hr>
<p>Companies do not go looking for consulting when things are going great. This
is particularly true when looking for high-level consulting on things like
system architecture or strategy. Almost by definition, there’s a problem that
I have been brought in to solve. Ideally, that problem is a technical
challenge.</p>
<p>In the software industry, your team probably already has some software
professionals with a variety of technical skills, and thus they know what to do
with technical challenges. Which means that, as often as not, the problem is
to do with <em>people</em> rather than technology, even if it appears otherwise.</p>
<p>When you hire a staff-level professional like myself to address your software
team’s general problems, that consultant will need to gather some information.
If I am that consultant and I start to suspect that the purported technology
problem that you’ve got is in fact a people problem, here is the <em>secret
technique</em> that I am going to use:</p>
<p>I am going to go get a pen and a pad of paper, then schedule a 90-minute
meeting with the most senior IC<sup id=fnref:3:let-me-tell-you-a-secret-2024-2><a class=footnote-ref href=#fn:3:let-me-tell-you-a-secret-2024-2 id=fnref:3>3</a></sup> engineer that you have on your team. I will
bring that pen and paper to the meeting. I will then ask one question:</p>
<blockquote>
<p>What is fucked up about this place?</p>
</blockquote>
<p>I will then write down their response in as much detail as I can manage. If I
have begun to suspect that this meeting is necessary, 90 minutes is typically
not enough time, and I will struggle to keep up. Even so, I will usually
manage to capture the highlights.</p>
<p>One week later, I will schedule a meeting with executive leadership, and during
that meeting, I will read back a <em>very</em> lightly edited<sup id=fnref:4:let-me-tell-you-a-secret-2024-2><a class=footnote-ref href=#fn:4:let-me-tell-you-a-secret-2024-2 id=fnref:4>4</a></sup> version of the
transcript of the previous meeting. This is then routinely praised as a keen
strategic insight.</p>
<hr>
<p>I should pause here to explicitly note that — obviously, I hope — this is not
an oblique reference to any current or even recent client; if I’d had this
meeting recently it would be pretty awkward to answer that “so, I read your
blog…” email.<sup id=fnref:5:let-me-tell-you-a-secret-2024-2><a class=footnote-ref href=#fn:5:let-me-tell-you-a-secret-2024-2 id=fnref:5>5</a></sup> But talking about clients in this way, no matter how
obfuscated and vague the description, is always a bit professionally risky. So
why risk it?</p>
<p>The thing is, I’m not a people manager. While I <em>can</em> do this kind of work,
and I do not begrudge doing it if it is the thing that needs doing, I find it
stressful and unfulfilling. I am a technology guy, not a people person. This
is generally true of people who elect to go into technology consulting; we know
where the management track is, and we didn’t pick it.</p>
<p>If you are going to hire me for my highly specialized technical expertise, I
want you to get the maximum value out of it. I know my value; my rates are not
low, and I do not want clients to come away with the sense that I only had a
couple of “obvious” meetings.</p>
<p>So the intended audience for this piece is potential clients, leaders of teams
(or organizations, or companies) who have a general technology problem and are
wondering if they need a consultant with my skill-set to help them fix it.
Before you decide that your issue is the need to implement a complex
distributed system consensus algorithm, check if that is really what’s at
issue. Talk to your ICs, and — taking care to make sure they understand that
you want honest feedback and that they are safe to offer it — ask them what
problems your organization has.</p>
<p>During this meeting it is important to <em>only listen</em>. Especially if you’re at
a small company and you are regularly involved in the day-to-day operations,
you might feel immediately defensive. Sit with that feeling, and process it
later. Don’t unload your emotional state on an employee you have power
over.<sup id=fnref:6:let-me-tell-you-a-secret-2024-2><a class=footnote-ref href=#fn:6:let-me-tell-you-a-secret-2024-2 id=fnref:6>6</a></sup></p>
<p>“Only listening” doesn’t exclusively mean “don’t push back”. You also
shouldn’t be committing to fixing anything. While the information you are
gathering in these meetings is extremely valuable, and you should probably act
on more of it than you will initially want to, your ICs won’t have the full
picture. They really may not understand why certain priorities are set the way
they are. You’ll need to take <em>that</em> as feedback for improving internal comms
rather than “fixing” the perceived problem, and you certainly don’t want to
make empty promises.</p>
<p>If you have these conversations directly, you can get something from it that no
consultant can offer you: credibility. If you can actively listen, the
conversation alone can improve morale. People like having their concerns
heard. If, better still, you manage to make meaningful changes to <em>address</em>
the concerns you’ve heard about, you can inspire true respect.</p>
<p>As a consultant, I’m going to be seen as some random guy wasting their time
with a meeting. Even if you make the changes I recommend, it won’t resonate
the same way as someone remembering that they <em>personally</em> told you what was
wrong, and you took it seriously and fixed it.</p>
<p>Once you know what the problems are with your organization, and you’ve got
solid technical understanding that you really <em>do</em> need that event-driven
distributed systems consensus algorithm implemented using Twisted, I’m
absolutely your guy. Feel free to <a href=mailto:consulting@glyph.im>get in touch</a>.</p>
<div class=footnote>
<hr>
<ol>
<li id=fn:1:let-me-tell-you-a-secret-2024-2>
<p id=fn:1>While I immensely value <a href="/pages/patrons.html">my
patrons’ support</a> and am eternally grateful
for it, at — as of this writing — less than $100 per month it
doesn’t exactly pay the SF bay area cost-of-living bill. <a class=footnote-backref href=#fnref:1:let-me-tell-you-a-secret-2024-2 title="Jump back to footnote 1 in the text">↩</a></p>
</li>
<li id=fn:2:let-me-tell-you-a-secret-2024-2>
<p id=fn:2>When I reached out for feedback on a draft of this essay, every other
consultant I showed it to said that something similar had happened to them
<em>within the last month</em>, all with different clients in different sectors of
the industry. I really cannot stress how common it is. <a class=footnote-backref href=#fnref:2:let-me-tell-you-a-secret-2024-2 title="Jump back to footnote 2 in the text">↩</a></p>
</li>
<li id=fn:3:let-me-tell-you-a-secret-2024-2>
<p id=fn:3>“individual contributor”, if this bit of jargon isn’t universal in your
corner of the world; i.e.: “not a manager”. <a class=footnote-backref href=#fnref:3:let-me-tell-you-a-secret-2024-2 title="Jump back to footnote 3 in the text">↩</a></p>
</li>
<li id=fn:4:let-me-tell-you-a-secret-2024-2>
<p id=fn:4>Mostly, I need to remove a bunch of profanity, but sometimes I will also
need to have another interview, usually with a more junior person on the
team to confirm that I’m not relaying only a single person’s
perspective. It is pretty rare that the top-of-mind problems are specific
to one individual, though. <a class=footnote-backref href=#fnref:4:let-me-tell-you-a-secret-2024-2 title="Jump back to footnote 4 in the text">↩</a></p>
</li>
<li id=fn:5:let-me-tell-you-a-secret-2024-2>
<p id=fn:5>To the extent that this is about anything saliently recent, I am perhaps
grumbling about how
<a href="https://techcrunch.com/2023/03/26/tech-company-layoffs-2023-morale/">tech</a>
<a href="https://www.cnbc.com/2024/01/03/2023-layoffs-will-continue-to-affect-workers-in-2024-report-says.html">CEOs</a>
<a href="https://www.nytimes.com/2023/04/12/technology/meta-layoffs-employees-management.html">aren’t</a>
<a href="https://www.businessinsider.com/snap-layoffs-staff-expecting-more-next-week-2024-2">taking</a>
morale problems generated by the <a href="https://www.bloomberg.com/opinion/articles/2024-02-05/mass-layoffs-have-become-alarmingly-routine">constant
drumbeat</a>
of layoffs <a href="https://blog.pragmaticengineer.com/layoffs-push-down-scores-on-glassdoor/">seriously
enough</a>. <a class=footnote-backref href=#fnref:5:let-me-tell-you-a-secret-2024-2 title="Jump back to footnote 5 in the text">↩</a></p>
</li>
<li id=fn:6:let-me-tell-you-a-secret-2024-2>
<p id=fn:6>I am not always in the role of a consultant. At various points in my
career, I have <em>also</em> been a leader needing to sit in this particular
chair, and believe me, I know it sucks. This would not be a common problem
if there weren’t a common reason that leaders tend to avoid this kind of
meeting. <a class=footnote-backref href=#fnref:6:let-me-tell-you-a-secret-2024-2 title="Jump back to footnote 6 in the text">↩</a></p>
</li>
</ol>
</div></body>The Macintosh2024-01-24T22:31:00-08:002024-01-24T22:31:00-08:00Glyphtag:blog.glyph.im,2024-01-24:/2024/01/the-macintosh.html<p>Today is its 40th anniversary, but what <em>is</em> the Macintosh?</p><body><p><img alt="A 4k ultrawide classic MacOS desktop screenshot featuring various Finder windows and MPW Workshop" src="https://blog.glyph.im/images/just-a-regular-classic-mac.png"></p>
<p>Today is the 40th anniversary of the announcement of the Macintosh. Others have
articulated <a href="https://mastodon.social/deck/@danilo@hachyderm.io/111811994313558262">compelling emotional
narratives</a>
that easily eclipse my own similar childhood memories of the Macintosh family
of computers. So instead, I will ask a question:</p>
<p>What <em>is</em> the Macintosh?</p>
<p>As this is the anniversary of the beginning, that is where I will begin. The
original Macintosh, the classic MacOS, the original “System Software” are a
shining example of “fake it till you make it”. The original Mac operating
system was fake.</p>
<p>Don’t get me wrong, it was an <em>impressive</em> technical achievement to fake
something like this, but what Steve Jobs did was to see a demo of a
<a href="https://en.wikipedia.org/wiki/Smalltalk">Smalltalk</a>-76 system, an
object-oriented programming environment with 1-to-1 correspondences between
graphical objects on screen and runtime-introspectable data structures, a
self-hosting high level programming language, memory safety, message passing,
garbage collection, and many other advanced facilities that would not be
popularized for decades, and make a fake version of it which ran on hardware
that consumers could actually afford, by throwing out most of what made the
programming environment interesting and replacing it with a much more
memory-efficient illusion implemented in 68000 assembler and Pascal.</p>
<p>The machine’s RAM didn’t have room for a kernel. Whatever application was
running was in control of the whole system. No protected memory, no preemptive
multitasking. It was a house of cards that was destined to collapse. And
collapse it did, both in the short term and the long. In the short term, the
system was buggy and unstable, and application crashes resulted in system halts
and reboots.</p>
<p>In the longer term, the company based on the Macintosh effectively went out of
business and was reverse-acquired by NeXT, but they kept the better-known
branding of the older company. The old operating system was gradually disposed
of, quickly replaced at its core with a significantly more mature generation of
operating system technology based on BSD UNIX and Mach. With the removal of
Carbon compatibility 4 years ago, the last vestigial traces of it disappeared.
But even as early as 2004 the Mac was no longer <em>really</em> the Macintosh.</p>
<p>What NeXT had built was much closer to the Smalltalk system that Jobs was
originally attempting to emulate. Its programming language, “Objective C”
explicitly called back to Smalltalk’s message-passing, right down to the
syntax. Objects on the screen now <em>did</em> correspond to “objects” you could send
messages to. The development environment understood this too; that was a major
selling point.</p>
<p>The NeXTSTEP operating system and Objective C runtime did not have garbage
collection, but they offered a similar developer experience by providing
reference-counting throughout its object model. The original vision was
finally achieved, for real, and that’s what we have on our desks and in our
backpacks today (and in our pockets, in the form of the iPhone, which is in
some sense a tiny next-generation NeXT computer itself).</p>
<hr>
<p>The one detail I will relate from my own childhood is this: my first computer
was not a Mac. My <em>first</em> computer, as a child, was an
<a href="https://en.wikipedia.org/wiki/Amiga">Amiga</a>. When I was 5, I had a computer
with 4096 colors, real multitasking, 3D graphics, and a paint program that
could draw hard real-time animations with palette tricks. Then the writing was
on the wall for Commodore and I got a computer which had 256 colors, a bunch of
old software that was still black and white, an operating system that would
freeze if you held down the mouse button on the menu bar, and that couldn’t
even play animations smoothly. Many will relay their first encounter with the Mac as a
kind of magic, but mine was a feeling of loss and disappointment. Unlike
almost everyone at the time, I knew what a computer <em>really</em> could be, and
despite many pleasant and formative experiences with the Macintosh in the
meanwhile, it would be a decade before I saw a real one again.</p>
<p>But this is not to deride the faking. The faking was <em>necessary</em>. Xerox was not
going to put an Alto running Smalltalk on anyone’s desk. People have always
grumbled that Apple products are expensive, but in 2024 dollars, one of these
Xerox computers cost roughly $55,000.</p>
<p>The Amiga was, in its own way, a similar sort of fake. It managed its own
miracles by putting performance-critical functions into dedicated hardware,
which rapidly became obsolete as software technology evolved much faster.</p>
<p>Jobs is celebrated as a genius of product design, and he certainly wasn’t bad
at it, but I had the rare privilege of seeing the homework he was cribbing from
in that subject, and in my estimation he was a B student at best. Where he got
an A was bringing a vision to life by <em>creating an organization</em>, both inside
and outside of his companies.</p>
<p>If you want a culture-defining technological artifact, <em>everybody in the
culture</em> has to be able to get their hands on one. This doesn’t just mean that
the builder has to be able to build it. The buyer also has to be able to
afford it, obviously. Developers have to be able to develop for it. The buyer
has to <em>actually want</em> it; the much-derided “marketing” is a necessary part of
the process of making a product what it is. Everyone needs to be able to move
together in the direction of the same technological future.</p>
<p>This is why it was so fitting that Tim Cook was made Jobs’s successor. The
supply chain <em>was</em> the hard part.</p>
<p>The crowning, final achievement of Jobs’s career was the fact that not only did
he fake it — the fakes were flying fast and thick at that time in history, even
if they mostly weren’t as good — it was that he faked it and then he built the
real version and then he <em>bridged the transitions</em> to get to the real thing.</p>
<p>I began here by saying that the Mac isn’t really the Mac, and speaking in terms
of a point in time analysis that is true. Its technology today has practically
nothing in common with its technology in 1984. This is not merely an artifact
of the length of time here: the technology at the core of various UNIXes in
1984 bears a lot of resemblance to UNIX-like operating systems today<sup id=fnref:1:the-macintosh-2024-1><a class=footnote-ref href=#fn:1:the-macintosh-2024-1 id=fnref:1>1</a></sup>. But
looking across its whole history from 1984 to 2024, there is undeniably a
<em>continuity</em> to the conceptual “Macintosh”.</p>
<p>Not just as a user, but as a developer moving <em>through</em> time rather than
looking at just a few points: the “Macintosh”, such as it is, has transitioned
from the Motorola 68000 to the PowerPC to Intel 32-bit to Intel 64-bit to ARM.
From obscurely proprietary to enthusiastically embracing open source and then,
sadly, much of the way back again. It moved from black and white to color,
from desktop to laptop, from Carbon to Cocoa, from Display PostScript to
Display PDF, all the while preserving instantly recognizable iconic features
like the apple menu and the cursor pointer, while providing developers
documentation and SDKs and training sessions that helped them transition their
apps through multiple near-complete rewrites as a result of all of these
changes.</p>
<hr>
<p>To paraphrase Abigail Thorne’s <a href="https://www.youtube.com/watch?v=cYVFep0xFYs">first video about
Identity</a>, <em>identity </em><strong>is</strong><em> what
survives</em>. The Macintosh is an interesting case study in the survival of the
<em>idea</em> of a platform, as distinct from the platform itself. It is the Computer
of Theseus, a thought experiment successfully brought to life and sustained
over time.</p>
<p>If there is a personal lesson to be learned here, I’d say it’s that one’s own
efforts need not be perfect. In fact, a significantly flawed vision that you
can achieve <em>right now</em> is often much, much better than a perfect version that
might take just a little bit longer, if you don’t have the resources to
actually sustain going that much longer<sup id=fnref:2:the-macintosh-2024-1><a class=footnote-ref href=#fn:2:the-macintosh-2024-1 id=fnref:2>2</a></sup>. You have to be bad at things
before you can be good at them. Real artists, as Jobs famously put it, <em>ship</em>.</p>
<p>So my contribution to the 40th anniversary reflections is to say: the Macintosh
is dead. Long live the Mac.</p>
<hr>
<h2 id=acknowledgments>Acknowledgments</h2>
<p class=update-note>Thank you to <a href="/pages/patrons.html">my patrons</a> who are
supporting my writing on this blog. If you like what you’ve read here and
you’d like to read more of it, or you’d like to support my <a href="https://github.com/glyph/">various open-source
endeavors</a>, you can <a href="/pages/patrons.html">support my work as a sponsor</a>!</p>
<div class=footnote>
<hr>
<ol>
<li id=fn:1:the-macintosh-2024-1>
<p id=fn:1>including, ironically, the modern macOS. <a class=footnote-backref href=#fnref:1:the-macintosh-2024-1 title="Jump back to footnote 1 in the text">↩</a></p>
</li>
<li id=fn:2:the-macintosh-2024-1>
<p id=fn:2>And that is why I am posting this right now, rather than proofreading it
further. <a class=footnote-backref href=#fnref:2:the-macintosh-2024-1 title="Jump back to footnote 2 in the text">↩</a></p>
</li>
</ol>
</div></body>Unsigned Commits2024-01-24T16:29:00-08:002024-01-24T16:29:00-08:00Glyphtag:blog.glyph.im,2024-01-24:/2024/01/unsigned-commits.html<p>I’m not going to cryptographically sign my git commits, and you
shouldn’t either.</p><body><p>I am going to tell you why I don’t think you should sign your Git commits, even
though doing so with SSH keys is now easier than ever. But first, to
contextualize my objection, I have a brief hypothetical for you, and then a bit
of history from the evolution of security on the web.</p>
<hr>
<p><img alt="paper reading “Sign Here:” with a pen poised over it" src="https://blog.glyph.im/images/just-sign-here.jpeg"></p>
<hr>
<p>It seems like these days, everybody’s signing all different kinds of papers.</p>
<p>Bank forms, permission slips, power of attorney; it seems like if you want to
securely validate a document, you’ve gotta sign it.</p>
<p>So I have invented a machine that automatically signs every document on your
desk, just in case it needs your signature. Signing is good for security, so
you should probably get one, and turn it on, just in case something needs your
signature on it.</p>
<p>We also want to make sure that verifying your signature is easy, so we will
have them all notarized and duplicates stored permanently and publicly for
future reference.</p>
<p>No? Not interested?</p>
<hr>
<p>Hopefully, that sounded like a silly idea to you.</p>
<p>Most adults in modern civilization have learned that signing your name to a
document has an <em>effect</em>. It is not merely decorative; the words in the
document being signed have some specific meaning and can be enforced against
you.</p>
<p>In some ways the metaphor of “signing” in cryptography is bad. One does not
“sign” things with “keys” in real life. But here, it is spot on: a
cryptographic signature can have an effect.</p>
<p>It should be an <em>input</em> to some software, one that is acted upon. Software
does a thing differently depending on the presence or absence of a signature.
If it doesn’t, the signature probably shouldn’t be there.</p>
<hr>
<p>Consider the most venerable example of encryption and signing that we all deal
with every day: HTTPS. Many years ago, browsers would happily display
unencrypted web pages. The browser would also encrypt the connection, if the
server operator had paid for an expensive certificate and correctly configured
their server. If that operator messed up the encryption, it would pop up a
helpful dialog box that would tell the user “This website did something wrong
that you cannot possibly understand. Would you like to ignore this and keep
working?” with buttons that said “Yes” and “No”.</p>
<p>Of course, these are not the precise words that were written. The words, as
written, said things about “information you exchange” and “security
certificate” and “certifying authorities” but “Yes” and “No” were the words
that most users <em>read</em>. Predictably, most users just clicked “Yes”.</p>
<p>In the usual case, where users ignored these warnings, it meant that no user
ever got meaningful security from HTTPS. It was a component of the web stack
that did nothing but funnel money into the pockets of certificate authorities
and occasionally present annoying interruptions to users.</p>
<p>In the case where the user carefully read and honored these warnings in the
spirit they were intended, adding any sort of transport security to your
website was a potential liability. If you got everything perfectly correct,
nothing happened except the browser would display a picture of a <a href="https://www.wired.com/2016/11/googles-chrome-hackers-flip-webs-security-model/">small green
purse</a>. If
you made any small mistake, it would scare users off and thereby directly harm
your business. You would only want to do it if you were doing something that
put a big enough target on your site that you became unusually interesting to
attackers, or were required to do so by some contractual obligation like credit
card companies.</p>
<p>Keep in mind that the second case here is the <em>best</em> case.</p>
<p>In 2016, the browser makers noticed this problem and started taking some
<a href="https://security.googleblog.com/2016/09/moving-towards-more-secure-web.html">pretty aggressive
steps</a>
towards actually enforcing the security that HTTPS was supposed to provide, by
fixing the user interface to do the right thing. If your site didn’t have
security, it would be shown as “Not Secure”, a subtle warning that would
gradually escalate in intensity as time went on, correctly incentivizing site
operators to adopt transport security certificates. On the user interface
side, certificate errors would be significantly harder to disregard, making it
so that users who didn’t understand what they were seeing would actually be
stopped from doing the dangerous thing.</p>
<p>Nothing fundamental<sup id=fnref:1:unsigned-commits-2024-1><a class=footnote-ref href=#fn:1:unsigned-commits-2024-1 id=fnref:1>1</a></sup> changed about the technical aspects of the
cryptographic primitives or constructions being used by HTTPS in this time
period, but <em>socially</em>, the meaning of an HTTP server signing and encrypting
its requests changed a lot.</p>
<hr>
<p>Now, let’s consider signing Git commits.</p>
<p>You may have heard that in some abstract sense you “should” be signing your
commits. GitHub puts a little green “verified” badge next to commits that are
signed, which is neat, I guess. They provide “security”. 1Password provides a
<a href="https://developer.1password.com/docs/ssh/git-commit-signing/">nice UI</a> for
setting it up. If you’re not a 1Password user, GitHub itself recommends you
<a href="https://docs.github.com/en/authentication/managing-commit-signature-verification/telling-git-about-your-signing-key#telling-git-about-your-ssh-key">put in just a few lines of
configuration</a>
to do it with either a GPG, SSH, or even an S/MIME key.</p>
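<p>For concreteness, the SSH flavor of that configuration is only a few
commands (a sketch; the key path here is an example, not something from
GitHub’s docs):</p>

```shell
# Tell Git to produce SSH-format signatures instead of OpenPGP ones
git config --global gpg.format ssh

# Point Git at the public key to sign with (example path; use your own)
git config --global user.signingkey ~/.ssh/id_ed25519.pub

# Sign every commit by default, with no need to pass -S each time
git config --global commit.gpgsign true
```

<p>Which is to say: three commands, and every commit you make from then on is
signed, whether or not you have decided what that signature means.</p>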
<p>But while GitHub’s documentation quite lucidly tells you <em>how</em> to sign your
commits, its explanation of
<a href="https://docs.github.com/en/authentication/managing-commit-signature-verification/about-commit-signature-verification">why</a>
is somewhat less clear. Their purse is the word “Verified”; it’s still green.
If you enable “<a href="https://docs.github.com/en/authentication/managing-commit-signature-verification/displaying-verification-statuses-for-all-of-your-commits">vigilant
mode</a>”,
you can make the blank “no verification status” option say “Unverified”, but
not much else changes.</p>
<p>This is like the old-style HTTPS verification “Yes”/“No” dialog, except that
there is not even an interruption to your workflow. They might put the
“Unverified” status on there, but they’ve gone ahead and clicked “Yes” for you.</p>
<p>It is tempting to think that the “HTTPS” metaphor will map neatly onto Git
commit signatures. It was bad when the web wasn’t using HTTPS, and the next
step in that process was for <a href="https://en.wikipedia.org/wiki/Let%27s_Encrypt">Let’s
Encrypt</a> to come along and for
the browsers to fix their implementations. Getting your certificates properly
set up in the meanwhile and becoming familiar with the tools for properly doing
HTTPS was unambiguously a <em>good</em> thing for an engineer to do. I did, and I’m
quite glad I did so!</p>
<p>However, there is a significant difference: signing and encrypting an HTTPS
request is ephemeral; signing a Git commit is functionally permanent.</p>
<p>This ephemeral nature meant that errors in the early HTTPS landscape were
easily fixable. Earlier I mentioned that there was a time where you might
<em>not</em> want to set up HTTPS on your production web servers, because any small
screw-up would break your site and thereby your business. But if you were
really skilled and you could see the future coming, you could set up
monitoring, avoid these mistakes, and rapidly recover. These mistakes didn’t
need to badly break your site.</p>
<p>We <em>can</em> extend the analogy to HTTPS, but we have to take a detour into one of
the more unpleasant mistakes in HTTPS’s history: <a href="https://en.wikipedia.org/wiki/HTTP_Public_Key_Pinning">HTTP Public Key
Pinning</a>, or “HPKP”.
The idea with HPKP was that you could publish a record in an HTTP header where
your site commits<sup id=fnref:2:unsigned-commits-2024-1><a class=footnote-ref href=#fn:2:unsigned-commits-2024-1 id=fnref:2>2</a></sup> to using certain certificate authorities for a period of
time, where that period of time could be “forever”. Attackers gonna attack,
and <a href="https://scotthelme.co.uk/using-security-features-to-do-bad-things/#usinghpkpforevil">attack they
did</a>.
Even without getting attacked, a site could easily commit “HPKP Suicide” where
they would pin the wrong certificate authority with a long timeline, and their
site was effectively gone for every browser that had ever seen those pins. As
a result, after a few years, HPKP was <a href="https://scotthelme.co.uk/hpkp-is-no-more/">completely removed from all
browsers</a>.</p>
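<p>For reference, the header at the center of this mistake looked something
like this (the pin values are abbreviated to placeholders here):</p>

```http
Public-Key-Pins: pin-sha256="AAAA…="; pin-sha256="BBBB…="; max-age=31536000; includeSubDomains
</p>
```

<p>A <code>max-age</code> of a year, pinned to keys you later lost or had to
rotate away from, was all it took: every browser that had seen the header
would refuse your site until the pins expired.</p>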
<p>Git commit signing is even worse. With HPKP, you could easily make terrible
mistakes with permanent consequences even though you knew the <em>exact</em> meaning
of the data you were putting into the system at the time you were doing it.
With signed commits, you are saying something permanently, but you don’t really
know <em>what</em> it is that you’re saying.</p>
<hr>
<p><em>Today</em>, what is the benefit of signing a Git commit? GitHub might present it
as “Verified”. It’s worth noting that only <em>GitHub</em> will do this, since they
are the root of trust for this signing scheme. So, by signing commits and
registering your keys with GitHub, you are, at best, helping to lock in GitHub
as a permanent piece of infrastructure that is even harder to dislodge because
they are not only where your code is stored, but also the arbiters of whether
or not it is trustworthy.</p>
<p><em>In the future</em>, what is the possible security benefit? If we all collectively
decide we want Git to be more secure, then we will need to meaningfully treat
signed commits differently from unsigned ones.</p>
<p>There’s a long tail of unsigned commits several billion entries long. And
those are in the permanent record as much as the signed ones are, so future
tooling will have to be able to deal with them. If, as stewards of Git, we
wish to move towards a more secure Git, as the stewards of the web moved
towards a more secure web, we do <em>not</em> have the option that the web did. In
the browser, the meaning of a plain-text HTTP or incorrectly-signed HTTPS site
changed, in order to encourage the site’s operator to <em>change</em> the site to be
HTTPS.</p>
<p>In contrast, the meaning of an unsigned commit cannot change, because there are
zillions of unsigned commits lying around in critical infrastructure and we
need them to remain there. Commits cannot meaningfully be changed to become
signed retroactively. Unlike an online website, they are part of a historical
record, not an operating program. So we cannot establish the difference in
treatment by changing how unsigned commits are treated.</p>
<p>That means that tooling maintainers will need to provide some difference in
behavior that provides some incentive. With HTTPS, the binary choice was
clear: don’t present sites with incorrect, potentially compromised
configurations to users. The question was just <em>how</em> to achieve that. With
Git commits, the difference in treatment of a “trusted” commit is far less
clear.</p>
<p>If you will forgive me a slight straw-man here, one possible naive
interpretation is that a “trusted” signed commit is one that is OK to run in CI.
Conveniently, it’s not simply “trusted” in a general sense. If you signed it,
it’s trusted to be <em>from you</em>, specifically. Surely it’s fine if we bill the
CI costs for validating the PR that includes that signed commit to your GitHub
account?</p>
<p>Now, someone can piggy-back off a 1-line typo fix that you made on top of an
unsigned commit to some large repo, making you implicitly responsible for
transitively signing all unsigned parent commits, even though you haven’t
looked at any of the code.</p>
<p>Remember, also, that the only central authority that is practically trustable
at this point is your GitHub account. That means that if you are using a
third-party CI system, even if you’re using a third-party Git host, you can
only run “trusted” code if GitHub is online and responding to requests for its
“get me the trusted signing keys for this user” API. This also adds a lot of
value to a GitHub credential breach, strongly motivating attackers to sneakily
attach their own keys to your account so that their commits in unrelated repos
can be “Verified” by you.</p>
<p>Let’s review the pros and cons of turning on commit signing <em>now</em>, before you
know what it is going to be used for:</p>
<table>
<thead>
<tr>
<th align=left>Pro</th>
<th align=left>Con</th>
</tr>
</thead>
<tbody>
<tr>
<td align=left>Green “Verified” badge</td>
<td align=left><strong>Unknown, possibly unlimited future liability</strong> for the consequences of running code in a commit you signed</td>
</tr>
<tr>
<td align=left><div style="width:290px"></div></td>
<td align=left>Further implicitly cementing GitHub as a centralized trust authority in the open source world</td>
</tr>
<tr>
<td align=left></td>
<td align=left>Introducing unknown reliability problems into infrastructure that relies on commit signatures</td>
</tr>
<tr>
<td align=left></td>
<td align=left>Temporary breach of your GitHub credentials now leads to potentially permanent consequences if someone can smuggle a new trusted key in there</td>
</tr>
<tr>
<td align=left></td>
<td align=left>New kinds of ongoing process overhead as commit-signing keys become new permanent load-bearing infrastructure, like “what do I do with expired keys”, “how often should I rotate these”, and so on</td>
</tr>
</tbody>
</table>
<p>I feel like the “Con” column is coming out ahead.</p>
<hr>
<p>That probably seemed like increasingly unhinged hyperbole, and it was.</p>
<p>In reality, the consequences are unlikely to be nearly so dramatic. The status
quo has a very high amount of inertia, and probably the “Verified” badge will
remain the only visible difference, except for a few repo-specific esoteric
workflows, like pushing trust verification into offline or sandboxed build
systems. I do still think that there is <em>some</em> potential for nefariousness
around the “unknown and unlimited” dimension of any future plans that might
rely on verifying signed commits, but any flaws are likely to be subtle attack
chains and not anything flashy and obvious.</p>
<p>But I think that one of the biggest problems in information security is a lack
of <a href="https://owasp.org/www-community/Threat_Modeling">threat modeling</a>. We
encrypt things, we sign things, we institute <a href="https://cryptosmith.com/password-sanity/exp-harmful/">rotation
policies</a> and
<a href="https://neal.fun/password-game/">elaborate</a> useless
<a href="https://xkcd.com/936/">rules</a> for passwords, because we are looking for a
“best practice” that is going to save us from having to think about what our
actual security problems are.</p>
<p>I think the actual harm of signing git commits is to perpetuate an engineering
culture of unquestioningly cargo-culting sophisticated and complex tools like
cryptographic signatures into new contexts where they have no use.</p>
<p>Just from a baseline utilitarian philosophical perspective, for a given action
A, all else being equal, it’s always better <em>not</em> to do A, because taking an
action always has <em>some</em> non-zero opportunity cost even if it is just the time
taken to do it. Epsilon cost and zero benefit is still a net harm. This is
even more true in the context of a complex system. Any action taken in
response to a rule in a system is going to interact with all the other rules in
that system. You have to pay complexity-rent on every new rule. So an
apparently-useless embellishment like signing commits can have potentially
far-reaching consequences in the future.</p>
<p>Git commit signing itself is not particularly consequential. I have probably
spent more time writing this blog post than the sum total of all the time
wasted by all programmers configuring their git clients to add useless
signatures; even the relatively modest readership of this blog will likely
transfer more data reading this post than all those signatures will take to
transmit to the various git clients that will read them. If I just convince
you not to sign your commits, I don’t think I’m coming out ahead in the
<a href="https://en.wikipedia.org/wiki/Felicific_calculus">felicific calculus</a> here.</p>
<p>What I am actually trying to point out here is that it is useful to <em>carefully
consider how to avoid adding junk complexity to your systems</em>. One area where
junk tends to leak into designs and cultures particularly easily is in
intimidating subjects like trust and safety, where it is easy to get anxious
and convince ourselves that piling on more <em>stuff</em> is safer than leaving things
simple.</p>
<p>If I can help you avoid adding even a little bit of unnecessary complexity, I
think it will have been well worth the cost of the writing, and the reading.</p>
<h2 id=acknowledgments>Acknowledgments</h2>
<p class=update-note>Thank you to <a href="/pages/patrons.html">my patrons</a> who are
supporting my writing on this blog. If you like what you’ve read here and
you’d like to read more of it, or you’d like to support my <a href="https://github.com/glyph/">various open-source
endeavors</a>, you can <a href="/pages/patrons.html">support my work as a sponsor</a>! I am also <a href=mailto:consulting@glyph.im>available for
consulting work</a> if you think your organization
could benefit from expertise on topics such as “What <em>else</em> should I <em>not</em> apply a cryptographic signature to?”.</p>
<div class=footnote>
<hr>
<ol>
<li id=fn:1:unsigned-commits-2024-1>
<p id=fn:1>Yes yes I know about <a href="https://heartbleed.com">heartbleed</a> and
<a href="https://en.wikipedia.org/wiki/Daniel_Bleichenbacher#RSA_Attacks">Bleichenbacher
attacks</a>
and adoption of <a href="https://en.wikipedia.org/wiki/Forward_secrecy">forward-secret
ciphers</a> and
<a href="https://en.wikipedia.org/wiki/CRIME">CRIME</a> and
<a href="https://breachattack.com">BREACH</a> and none of that is relevant here, okay?
Jeez. <a class=footnote-backref href=#fnref:1:unsigned-commits-2024-1 title="Jump back to footnote 1 in the text">↩</a></p>
</li>
<li id=fn:2:unsigned-commits-2024-1>
<p id=fn:2>Do you see what I did there. <a class=footnote-backref href=#fnref:2:unsigned-commits-2024-1 title="Jump back to footnote 2 in the text">↩</a></p>
</li>
</ol>
</div></body>Your Text Editor (Probably) Isn’t Malware Any More2024-01-22T18:05:00-08:002024-01-22T18:05:00-08:00Glyphtag:blog.glyph.im,2024-01-22:/2024/01/no-more-editor-malware.html<p>Updating a post from 2015, I briefly discuss the modern editor-module
threat landscape.</p><body><p>In 2015, I wrote one of my more popular blog posts, “<a href="https://blog.glyph.im/2015/11/editor-malware.html">Your Text Editor Is
Malware</a>”, about the sorry state of security in
text editors in general, but particularly in Emacs and Vim.</p>
<p>It’s nearly been a decade now, so I thought I’d take a moment to survey the
world of editor plugins and see where we are today. Mostly, this is to allay
fears, since (in today’s landscape) that post is unreasonably alarmist and
inaccurate, but people are still reading it.</p>
<table>
<thead>
<tr>
<th>Problem</th>
<th>Is It Fixed?</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>vim.org</code> is not available via <code>https</code></td>
<td>Yep! <code>http://www.vim.org/</code> redirects to <code>https://www.vim.org/</code> now.</td>
</tr>
<tr>
<td>Emacs's HTTP client doesn't verify certificates by default</td>
<td>Mostly! The documentation is incorrect and there are some UI problems<sup id=fnref:1:no-more-editor-malware-2024-1><a class=footnote-ref href=#fn:1:no-more-editor-malware-2024-1 id=fnref:1>1</a></sup>, but it doesn’t blindly connect insecurely.</td>
</tr>
<tr>
<td>ELPA and MELPA supply plaintext-HTTP package sources</td>
<td>Kinda. MELPA correctly responds to HTTP only with redirects to HTTPS, and ELPA at least offers HTTPS and uses HTTPS URLs exclusively in the default configuration.</td>
</tr>
<tr>
<td>You have to ship your own trust roots for Emacs.</td>
<td>Fixed! The default installation of Emacs on every platform I tried (including Windows) seems to be providing trust roots.</td>
</tr>
<tr>
<td>MELPA offers to install code off of a wiki.</td>
<td>Yes. Wiki packages were <a href="https://github.com/melpa/melpa/issues/2342#issuecomment-360607665">disabled entirely in 2018</a>.</td>
</tr>
</tbody>
</table>
<p>The big takeaway here is that <em>the main issue of there being <strong><em>no security
whatsoever</em></strong> on Emacs and Vim package installation and update has been
<strong><em>fully corrected</em></strong></em>.</p>
<h1 id=where-to-go-next>Where To Go Next?</h1>
<p>Since I believe that post was fairly influential, in particular in getting
MELPA to tighten up its security, let me take another big swing at a call to
action here.</p>
<p>More modern editors have made greater strides towards security. VSCode, for
example, has <a href="https://code.visualstudio.com/blogs/2022/11/28/vscode-sandbox">enabled the Chromium sandbox and added some level of process
separation</a>.
Emacs has not done much here yet, but over the years it has consistently
surprised me with its ability to catch up to its more modern competitors, so I
hope it will surprise me here as well.</p>
<p>Even for VSCode, though, this sandbox still seems pretty permissive — plugins
still seem to execute with the full trust of the editor itself — but it's a big
step in the right direction. This is a much bigger task than just turning on
HTTPS, but I really hope that editors start taking the threat of rogue editor
packages seriously before attackers do, and finding ways to sandbox and limit
the potential damage from third-party plugins, maybe taking a cue from <a href="https://jmmv.dev/2019/11/macos-sandbox-exec.html">other
tools</a>.</p>
<h2 id=acknowledgments>Acknowledgments</h2>
<p class=update-note>Thank you to <a href="/pages/patrons.html">my patrons</a> who are
supporting my writing on this blog. If you like what you’ve read here and
you’d like to read more of it, or you’d like to support my <a href="https://github.com/glyph/">various open-source
endeavors</a>, you can <a href="/pages/patrons.html">support my work as a sponsor</a>!</p>
<div class=footnote>
<hr>
<ol>
<li id=fn:1:no-more-editor-malware-2024-1>
<p id=fn:1>the <em>documentation</em> still says <code>gnutls-verify-error</code> defaults to <code>nil</code> and
that means no certificate verification, and maybe it does do that if you
are using raw TLS connections, but in practice,
<code>url-retrieve-synchronously</code> does appear to present an interactive warning
before proceeding if the certificate is invalid or expired. It still has
yet to catch up with web browsers from 2016, in that it just asks you “do
you want to do this horribly dangerous thing? y/n” but that is a million
times better than proceeding without user interaction. <a class=footnote-backref href=#fnref:1:no-more-editor-malware-2024-1 title="Jump back to footnote 1 in the text">↩</a></p>
</li>
</ol>
</div></body>Okay, I’m A Centrist I Guess2024-01-22T09:41:00-08:002024-01-22T09:41:00-08:00Glyphtag:blog.glyph.im,2024-01-22:/2024/01/a-centrist-i-guess.html<p>Market simulator video game mechanics reveal the core of human soul.</p><body><p>Today I saw a <a href="https://youtu.be/8O0WszQZnmo">short YouTube video about “cozy
games”</a> and started writing a comment, then
realized that this was somehow prompting me to write the most succinct summary
of my own personal views on politics and economics that I have ever managed.
So, here goes.</p>
<p>Apparently all I needed to trim down 50,000 words on my annoyance at how the
term “capitalism” is frustratingly both a nexus for useful critique and also
reductive thought-terminating clichés was to realize that <a href="https://www.nintendo.com/us/store/products/animal-crossing-new-horizons-switch/">Animal Crossing: New
Horizons</a>
is closer to my views on political economy than anything Adam Smith or Karl
Marx ever wrote.</p>
<hr>
<p>Cozy games illustrate that the core <em>mechanics</em> of capitalism are fun and
motivating, in a laboratory environment. It’s fun to gather resources, to
improve one’s skills, to engage in mutually beneficial exchanges, to collect
things, to decorate. It’s tremendously motivating. Even merely pretending to
do those things can captivate huge amounts of our time and attention.</p>
<p>In real life, people need to be motivated to do stuff. Not because of some
moral deficiency, but because in a large complex civilization it’s hard to tell
what needs doing. By the time it’s widely visible to a population-level
democratic consensus of non-experts that there is an unmet need — for example,
trash piling up on the street everywhere indicating a need for garbage
collection — that doesn’t mean “time to pick up some trash”, it means “the
sanitation system has collapsed, you’re probably going to get cholera”. We need
a system that can identify utility signals more granularly and quickly, towards
the edges of the social graph. To allow person A to earn “value credits” of
some kind for doing work that others find valuable, then trade those in to
person B for labor which <em>they</em> find valuable, even if it is not
obvious to anyone else why person A wants that thing. Hence: money.</p>
<p>So, a market can provide an incentive structure that productively steers people
towards needs, by aggregating small price signals in a distributed way, via the
communication technology of “money”. Authoritarian communist states are
<a href="https://www.theatlantic.com/magazine/archive/1990/06/inside-the-collapsing-soviet-economy/303870/">famously bad at
this</a>,
overproducing “necessary” goods in ways that can hold their own with the <a href="https://foreignpolicy.com/2020/07/27/chinese-communist-party-environment-co2/">worst
excesses</a>
of capitalists, while under-producing “luxury” goods that are politically seen
as frivolous.</p>
<p>This is the kernel of truth around which the hardcore capitalist bootstrap
grindset ideologues build their fabulist cinematic universe of cruelty.
Markets are motivating, they reason, therefore we must worship the market as a
god and obey its every whim. Markets can optimize some targets, therefore we
<em>must</em> allow markets to optimize <em>every</em> target. Markets efficiently allocate
resources, and people need resources to live, therefore anyone unable to secure
resources in a market is undeserving of life. Thus we begin at “market
economies provide some beneficial efficiencies” and after just a bit of
hand-waving over some inconvenient details, we get to “thus, we must make the
poor into a blood-sacrifice to Moloch, otherwise nobody will ever work, and we
will all die, drowning in our own laziness”. “The cruelty is the point” is a
convenient phrase, but among those with this worldview, the <em>prosperity</em> is the
point; they just think the cruelty is the only engine that can possibly drive
it.</p>
<p>Cozy games are therefore a <em>centrist</em><sup id=fnref:1:a-centrist-i-guess-2024-1><a class=footnote-ref href=#fn:1:a-centrist-i-guess-2024-1 id=fnref:1>1</a></sup> critique of capitalism. They present
a world <em>with</em> the prosperity, but <em>without</em> the cruelty. More importantly
though, by virtue of the fact that people actually play them in large numbers,
they <em>demonstrate</em> that the cruelty is <em>actually</em> unnecessary.</p>
<p>You don’t <em>need</em> to play a cozy game. Tom Nook is not going to evict you from
your real-life house if you don’t give him enough bells when it’s time to make
rent. In fact, quite the opposite: you have to take time <em>away</em> from your
real-life responsibilities and work, in order to make time for such a game.
That is how motivating it is to engage with a market system in the abstract,
with almost exclusively positive reinforcement.</p>
<p>What cozy games are showing us is that a world with tons of “free stuff” —
universal basic income, universal health care, free education, free housing —
will <em>not</em> result in a breakdown of our society because “no one wants to work”.
People love to work.</p>
<p>If we can turn the market into a cozy game, with low stakes and a generous
safety net, <em>more</em> people will engage with it, not fewer. People are not lazy;
<a href="https://www.goodreads.com/book/show/54304124-laziness-does-not-exist">laziness does not
exist</a>.
The motivation that people need from a market economy is not a constant looming
threat of homelessness, starvation and death for themselves and their children,
but a fun opportunity to get a <a href="https://www.nintendolife.com/guides/animal-crossing-new-horizons-how-to-get-a-5-star-island-rating-and-grow-lily-of-the-valley">five-star island
rating</a>.</p>
<h2 id=acknowledgments>Acknowledgments</h2>
<p class=update-note>Thank you to <a href="/pages/patrons.html">my patrons</a> who are
supporting my writing on this blog. If you like what you’ve read here and
you’d like to read more of it, or you’d like to support my <a href="https://github.com/glyph/">various open-source
endeavors</a>, you can <a href="/pages/patrons.html">support my work as a sponsor</a>!</p>
<div class=footnote>
<hr>
<ol>
<li id=fn:1:a-centrist-i-guess-2024-1>
<p id=fn:1>Okay, I guess “far left” on the current US political compass, but in a
just world socdems would be centrists. <a class=footnote-backref href=#fnref:1:a-centrist-i-guess-2024-1 title="Jump back to footnote 1 in the text">↩</a></p>
</li>
</ol>
</div></body>Annotated At Runtime2023-12-07T17:56:00-08:002023-12-07T17:56:00-08:00Glyphtag:blog.glyph.im,2023-12-07:/2023/12/annotated-at-runtime.html<p>PEP 593 is a bit vague on how you’re supposed to actually consume
arguments to <code>Annotated</code>; here is my proposal.</p><body><p><a href="https://peps.python.org/pep-0593/">PEP 0593</a> added the ability to add
arbitrary user-defined metadata to type annotations in Python.</p>
<p>At type-check time, such annotations are… inert. They don’t do anything.
<a href="https://docs.python.org/3.12/library/typing.html#typing.Annotated"><code>Annotated[int,
X]</code></a> just
means <code>int</code> to the type-checker, regardless of the value of <code>X</code>. So the entire
purpose of <code>Annotated</code> is to provide a run-time API to consume metadata, which
integrates with the type checker syntactically, but does not otherwise disturb
it.</p>
<p>Yet, the documentation for this central purpose seems, while not exactly
absent, oddly incomplete.</p>
<p>The PEP itself simply says:</p>
<blockquote>
<p>A tool or library encountering an Annotated type can scan through the
annotations to determine if they are of interest (e.g., using
<code>isinstance()</code>).</p>
</blockquote>
<p>But it’s not clear where “the annotations” <em>are</em>, given that <a href="https://peps.python.org/pep-0593/#consuming-annotations">the PEP’s entire
“consuming
annotations”</a> section
does not even mention the <code>__metadata__</code> attribute where the annotation’s
arguments <em>go</em>, which was <a href="https://github.com/python/cpython/issues/97797">only belatedly added to CPython’s
documentation</a>. Its list of
examples just shows the <code>repr()</code> of the relevant type.</p>
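<p>The attribute does exist at runtime, though; a quick check (in any Python with <code>typing.Annotated</code>, i.e. 3.9+) shows where the arguments land:</p>

```python
from typing import Annotated

# The metadata arguments are collected, in order, into a tuple on
# __metadata__; the underlying type is kept separately on __origin__.
alias = Annotated[int, "hello", "world"]
print(alias.__metadata__)  # ('hello', 'world')
```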
<p>There’s also a bit of an open question of what, exactly, we are supposed to
<code>isinstance()</code>-ing here. If we want to find arguments to <code>Annotated</code>,
presumably we need to be able to detect if an annotation <em>is</em> an <code>Annotated</code>.
But <code>isinstance(Annotated[int, "hello"], Annotated)</code> is both <code>False</code> at
runtime, and also a type-checking error, that looks like this:</p>
<div class=highlight><table class=highlighttable><tbody><tr><td class=linenos><div class=linenodiv><pre><span class=normal>1</span></pre></div></td><td class=code><div><pre><span></span><code>Argument 2 to "isinstance" has incompatible type "<typing special form>"; expected "_ClassInfo"
</code></pre></div></td></tr></tbody></table></div>
<p>The actual type of these objects, <code>typing._AnnotatedAlias</code>, does not seem to
have a publicly available or documented alias, so that seems like the wrong
route too.</p>
<p>Now, it certainly <em>works</em> to escape-hatch your way out of all of this with an
<code>Any</code>, build some <a href="https://github.com/pydantic/pydantic/blob/d7ab7d6e1f9bc694d47c82d786cf30515e58a36d/pydantic/_internal/_typing_extra.py#L99">version-specific special-case
hacks</a>
to dig around in the relevant namespaces, access <code>__metadata__</code> and call it a
day. But this solution is … unsatisfying.</p>
<h2 id=what-are-you-looking-for>What are you looking for?</h2>
<p>Upon encountering these quirks, it is understandable to want to simply ask the
question “is this annotation that I’m looking at an <code>Annotated</code>?” and to be
frustrated that it seems so obscure to straightforwardly get an answer to that
question without disabling all type-checking in your meta-programming code.</p>
<p>However, I think that this is a slight misframing of the problem. Code that
is inspecting parameters for an annotation is going to <em>do</em> something with that
annotation, which means that it must necessarily be looking for a <em>specific</em>
set of annotations. Therefore the thing we want to pass to <code>isinstance</code> is not
some obscure part of the annotations’ internals, but the actual interesting
annotation type from your framework or application.</p>
<p>When consuming an <code>Annotated</code> parameter, there are 3 things you probably want to know:</p>
<ol>
<li>What was the parameter itself? (type: The type you passed in.)</li>
<li>What was the name of the annotated object (i.e.: the parameter name, the
attribute name) being passed the parameter? (type: <code>str</code>)</li>
<li>What was the actual type being annotated? (type: <code>type</code>)</li>
</ol>
<p>And the things that we have are the type of the <code>Annotated</code> we’re querying for,
and the object with annotations we are interrogating. So that gives us this
function signature:</p>
<div class=highlight><table class=highlighttable><tbody><tr><td class=linenos><div class=linenodiv><pre><span class=normal>1</span>
<span class=normal>2</span>
<span class=normal>3</span>
<span class=normal>4</span>
<span class=normal>5</span></pre></div></td><td class=code><div><pre><span></span><code><span class=k>def</span> <span class=nf>annotated_by</span><span class=p>(</span>
<span class=n>annotated</span><span class=p>:</span> <span class=nb>object</span><span class=p>,</span>
<span class=n>kind</span><span class=p>:</span> <span class=nb>type</span><span class=p>[</span><span class=n>T</span><span class=p>],</span>
<span class=p>)</span> <span class=o>-></span> <span class=n>Iterable</span><span class=p>[</span><span class=nb>tuple</span><span class=p>[</span><span class=nb>str</span><span class=p>,</span> <span class=n>T</span><span class=p>,</span> <span class=nb>type</span><span class=p>]]:</span>
<span class=o>...</span>
</code></pre></div></td></tr></tbody></table></div>
<p>To extract this information, all we need are
<a href="https://docs.python.org/3.12/library/typing.html#typing.get_args"><code>get_args</code></a>
and
<a href="https://docs.python.org/3.12/library/typing.html#typing.get_type_hints"><code>get_type_hints</code></a>;
no need for
<a href="https://docs.python.org/3.12/library/typing.html#typing.Annotated"><code>__metadata__</code></a>
or
<a href="https://docs.python.org/3.12/library/typing.html#typing.get_origin"><code>get_origin</code></a>
or any other metaprogramming. Here’s a recipe:</p>
<div class=highlight><table class=highlighttable><tbody><tr><td class=linenos><div class=linenodiv><pre><span class=normal> 1</span>
<span class=normal> 2</span>
<span class=normal> 3</span>
<span class=normal> 4</span>
<span class=normal> 5</span>
<span class=normal> 6</span>
<span class=normal> 7</span>
<span class=normal> 8</span>
<span class=normal> 9</span>
<span class=normal>10</span>
<span class=normal>11</span>
<span class=normal>12</span></pre></div></td><td class=code><div><pre><span></span><code><span class=k>def</span> <span class=nf>annotated_by</span><span class=p>(</span>
<span class=n>annotated</span><span class=p>:</span> <span class=nb>object</span><span class=p>,</span>
<span class=n>kind</span><span class=p>:</span> <span class=nb>type</span><span class=p>[</span><span class=n>T</span><span class=p>],</span>
<span class=p>)</span> <span class=o>-></span> <span class=n>Iterable</span><span class=p>[</span><span class=nb>tuple</span><span class=p>[</span><span class=nb>str</span><span class=p>,</span> <span class=n>T</span><span class=p>,</span> <span class=nb>type</span><span class=p>]]:</span>
<span class=k>for</span> <span class=n>k</span><span class=p>,</span> <span class=n>v</span> <span class=ow>in</span> <span class=n>get_type_hints</span><span class=p>(</span><span class=n>annotated</span><span class=p>,</span> <span class=n>include_extras</span><span class=o>=</span><span class=kc>True</span><span class=p>)</span><span class=o>.</span><span class=n>items</span><span class=p>():</span>
<span class=n>all_args</span> <span class=o>=</span> <span class=n>get_args</span><span class=p>(</span><span class=n>v</span><span class=p>)</span>
<span class=k>if</span> <span class=ow>not</span> <span class=n>all_args</span><span class=p>:</span>
<span class=k>continue</span>
<span class=n>actual</span><span class=p>,</span> <span class=o>*</span><span class=n>rest</span> <span class=o>=</span> <span class=n>all_args</span>
<span class=k>for</span> <span class=n>arg</span> <span class=ow>in</span> <span class=n>rest</span><span class=p>:</span>
<span class=k>if</span> <span class=nb>isinstance</span><span class=p>(</span><span class=n>arg</span><span class=p>,</span> <span class=n>kind</span><span class=p>):</span>
<span class=k>yield</span> <span class=n>k</span><span class=p>,</span> <span class=n>arg</span><span class=p>,</span> <span class=n>actual</span>
</code></pre></div></td></tr></tbody></table></div>
<p>It might seem a little odd to be blindly assuming that <code>get_args(...)[0]</code> will
always be the relevant type, when that is not true of unions or generics.
Note, however, that we are only yielding results when we have found <em>the
instance type</em> in the argument list; our arbitrary user-defined instance isn’t
valid as a type annotation argument in any other context. It can’t be part of
a <code>Union</code> or a <code>Generic</code>, so we can rely on it to be an <code>Annotated</code> argument,
and from there, we can make that assumption about the format of <code>get_args(...)</code>.</p>
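<p>Concretely, <code>get_args</code> on an <code>Annotated</code> alias returns the annotated type first, followed by each metadata argument in order; a quick sketch:</p>

```python
from typing import Annotated, get_args

# The underlying type always comes first; metadata follows in order.
print(get_args(Annotated[int, "hello", "world"]))
# (<class 'int'>, 'hello', 'world')
```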
<p>This can give us back the annotations that we’re looking for in a handy format
that’s easy to consume. Here’s a quick example of how you might use it:</p>
<div class=highlight><table class=highlighttable><tbody><tr><td class=linenos><div class=linenodiv><pre><span class=normal> 1</span>
<span class=normal> 2</span>
<span class=normal> 3</span>
<span class=normal> 4</span>
<span class=normal> 5</span>
<span class=normal> 6</span>
<span class=normal> 7</span>
<span class=normal> 8</span>
<span class=normal> 9</span>
<span class=normal>10</span>
<span class=normal>11</span>
<span class=normal>12</span>
<span class=normal>13</span>
<span class=normal>14</span>
<span class=normal>15</span></pre></div></td><td class=code><div><pre><span></span><code><span class=nd>@dataclass</span>
<span class=k>class</span> <span class=nc>AnAnnotation</span><span class=p>:</span>
<span class=n>name</span><span class=p>:</span> <span class=nb>str</span>
<span class=k>def</span> <span class=nf>a_function</span><span class=p>(</span>
<span class=n>a</span><span class=p>:</span> <span class=nb>str</span><span class=p>,</span>
<span class=n>b</span><span class=p>:</span> <span class=n>Annotated</span><span class=p>[</span><span class=nb>int</span><span class=p>,</span> <span class=n>AnAnnotation</span><span class=p>(</span><span class=s2>"b"</span><span class=p>)],</span>
<span class=n>c</span><span class=p>:</span> <span class=n>Annotated</span><span class=p>[</span><span class=nb>float</span><span class=p>,</span> <span class=n>AnAnnotation</span><span class=p>(</span><span class=s2>"c"</span><span class=p>)],</span>
<span class=p>)</span> <span class=o>-></span> <span class=kc>None</span><span class=p>:</span>
<span class=o>...</span>
<span class=nb>print</span><span class=p>(</span><span class=nb>list</span><span class=p>(</span><span class=n>annotated_by</span><span class=p>(</span><span class=n>a_function</span><span class=p>,</span> <span class=n>AnAnnotation</span><span class=p>)))</span>
<span class=c1># [('b', AnAnnotation(name='b'), <class 'int'>),</span>
<span class=c1># ('c', AnAnnotation(name='c'), <class 'float'>)]</span>
</code></pre></div></td></tr></tbody></table></div>
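<p>Because <code>get_type_hints</code> also works on classes, the same recipe applies unchanged to annotated attributes. A self-contained sketch (the <code>Limit</code> and <code>Config</code> names here are illustrative, not part of the recipe):</p>

```python
from dataclasses import dataclass
from typing import Annotated, Iterable, TypeVar, get_args, get_type_hints

T = TypeVar("T")

def annotated_by(
    annotated: object,
    kind: type[T],
) -> Iterable[tuple[str, T, type]]:
    # Same recipe as above, restated so this example runs on its own.
    for k, v in get_type_hints(annotated, include_extras=True).items():
        all_args = get_args(v)
        if not all_args:
            continue
        actual, *rest = all_args
        for arg in rest:
            if isinstance(arg, kind):
                yield k, arg, actual

@dataclass
class Limit:
    maximum: int

@dataclass
class Config:
    name: str
    retries: Annotated[int, Limit(10)]

print(list(annotated_by(Config, Limit)))
# [('retries', Limit(maximum=10), <class 'int'>)]
```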
<h2 id=acknowledgments>Acknowledgments</h2>
<p class=update-note>Thank you to <a href="/pages/patrons.html">my patrons</a> who are
supporting my writing on this blog. If you like what you’ve read here and
you’d like to read more of it, or you’d like to support my <a href="https://github.com/glyph/">various open-source
endeavors</a>, you can <a href="/pages/patrons.html">support my work as a sponsor</a>! I am also <a href=mailto:consulting@glyph.im>available for
consulting work</a> if you think your organization
could benefit from expertise on topics like “how <em>do</em> I do Python
metaprogramming, but, like, not super janky”.</p></body>Safer, Not Later2023-12-06T12:01:00-08:002023-12-06T12:01:00-08:00Glyphtag:blog.glyph.im,2023-12-06:/2023/12/safer-not-later.html<p>How “Move Fast and Break Things” ruined the world by escaping the
context that it was intended for.</p><body><p>Facebook — and by extension, most of Silicon Valley — rightly <a href="https://www.goodreads.com/book/show/31420725-move-fast-and-break-things">gets a lot of
shit</a>
for its
<a href="https://www.businessinsider.com/mark-zuckerberg-on-facebooks-new-motto-2014-5">old</a>
motto, “Move Fast and Break Things”.</p>
<p>As a general principle for living your life, it is obviously <em>terrible</em> advice,
and it leads to a lot of the <a href="https://www.amnesty.org/en/latest/news/2022/09/myanmar-facebooks-systems-promoted-violence-against-rohingya-meta-owes-reparations-new-report/">horrific
outcomes</a>
of Facebook’s business.</p>
<p>I don’t want to be an apologist for Facebook. I also do not want to excuse the
worldview that leads to those kinds of outcomes. However, I <em>do</em> want to try to
help laypeople understand what <em>software engineers</em>—particularly those situated
at the point in history where this motto became popular—actually <em>meant</em> by it.
I would like more people in the general public to understand why, to engineers,
it was supposed to mean roughly the <em>same</em> thing as Facebook’s newer,
goofier-sounding “Move fast with stable infrastructure”.</p>
<h2 id=move-slow>Move Slow</h2>
<p>In the bad old days, circa 2005, two worlds within the software industry were
colliding.</p>
<p>The old world was the world of integrated hardware/software companies, like IBM
and Apple, and shrink-wrapped software companies like Microsoft and
WordPerfect. The new world was software-as-a-service companies like Google,
and, yes, Facebook.</p>
<p>In the old world, you delivered software in a physical, shrink-wrapped box, on
a yearly release cycle. If you were really aggressive you might ship updates as
often as quarterly, but faster than that and your physical shipping
infrastructure would not be able to keep pace with new versions. As such,
development could proceed in long phases based on those schedules.</p>
<p>In practice what this meant was that in the old world, when development began
on a new version, programmers would go absolutely wild adding incredibly buggy,
experimental code to see what sorts of things might be possible in a new
version, then slowly transition to less coding and more testing, eventually
settling into a testing and bug-fixing mode in the last few months before the
release.</p>
<p>This is where the idea of “alpha” (development testing) and “beta” (user
testing) versions came from. Software in that initial surge of unstable
development was extremely likely to malfunction or even crash. Everyone
understood that. How could it be otherwise? In an alpha test, the engineers
hadn’t even <em>started</em> bug-fixing yet!</p>
<p>In the new world, the idea of a 6-month-long “beta test” was incoherent. If
your software was a website, you shipped it to users every time they hit
“refresh”. The software was running 24/7, on hardware that you controlled. You
could be adding features at every minute of every day. And, now that this was
<em>possible</em>, you <em>needed</em> to be adding those features, or your users would get
bored and leave for your competitors, who would do it.</p>
<p>But this came along with a new attitude towards quality and reliability. If
you needed to ship a feature within 24 hours, you couldn’t write a buggy
version that crashed all the time, see how your carefully-selected group of
users used it, collect crash reports, fix all the bugs, have a feature-freeze
and do nothing but fix bugs for a few months. You needed to be able to ship a
stable version of your software on Monday and then have another stable version
on Tuesday.</p>
<p>To support this novel sort of development workflow, the industry developed new
technologies. I am tempted to tell you about them all. Unit testing, continuous
integration servers, error telemetry, system monitoring dashboards, feature
flags... this is where a lot of my personal expertise lies. I was very much on
the front lines of the “new world” in this conflict, trying to move companies
to shorter and shorter development cycles, and move away from the legacy
worldview of Big Release Day engineering.</p>
<p>Old habits die hard, though. Most engineers at this point were trained in a
world where they had months of continuous quality assurance processes after
writing their first rough draft. Such engineers feel understandably nervous
about being required to ship their probably-buggy code to paying customers
every day. So they would try to slow things down.</p>
<p>Of course, when one is deploying all the time, all other things being equal,
it’s easy to ship a show-stopping bug to customers. Organizations would do
this, and they’d get burned. And when they’d get burned, they would introduce
Processes to slow things down. Some of these would look like:</p>
<ol>
<li>Let’s keep a special version of our code set aside for testing, and then
we’ll test that for a few weeks before sending it to users.</li>
<li>The heads of every department need to sign off on every deployed version, so
everyone needs to spend a day writing up an explanation of their changes.</li>
<li>QA should sign off too, so let’s have an extensive sign-off process where
each individual tester fills out a sign-off form.</li>
</ol>
<p>Then there’s my favorite version of this pattern, where management decides that
deploys are inherently dangerous, and everyone should probably just stop doing
them. It typically proceeds in stages:</p>
<ol>
<li>Let’s have a deploy freeze, and not <a href="https://baselime.io/blog/you-should-deploy-on-fridays">deploy on
Fridays</a>; don’t want
to mess up the weekend debugging an outage.</li>
<li>Actually, let’s extend that freeze for all of December, we don’t want to
mess up the holiday shopping season.</li>
<li>Actually why not have the freeze extend into the end of November? Don’t want
to mess with Thanksgiving and the Black Friday weekend.</li>
<li>Some of our customers are in India, and Diwali’s also a big deal. Why not
extend the freeze from the end of October?</li>
<li>But, come to think of it, we do a fair amount of seasonal sales for
Halloween too. How about no deployments from October 10 onward?</li>
<li>You know what, sometimes people like to use our shop for Valentine’s day
too. Let’s just never deploy again.</li>
</ol>
<p>This same anti-pattern can repeat itself with <a href="https://alexgaynor.net/2016/jan/19/dont-have-environments/">an endlessly proliferating list
of “environments”</a>,
whose main role ends up being to ensure that no code ever makes it to actual
users.</p>
<h2 id=and-break-things-anyway>… and break things anyway</h2>
<p>As you may have begun to suspect, there are a few problems with this style of
software development.</p>
<p>Even back in the bad old days of the 90s when you had to ship disks in boxes,
this methodology contained within itself the seeds of its own destruction. As
<a href="https://www.joelonsoftware.com/2000/08/09/the-joel-test-12-steps-to-better-code/">Joel Spolsky memorably put
it</a>,
Microsoft discovered that this idea that you could introduce a ton of bugs and
then just fix them later came along with some massive disadvantages:</p>
<blockquote>
<p>The very first version of Microsoft Word for Windows was considered a “death
march” project. It took forever. It kept slipping. The whole team was working
ridiculous hours, the project was delayed again, and again, and again, and
the stress was incredible. [...] The story goes that one programmer, who had
to write the code to calculate the height of a line of text, simply wrote
“return 12;” and waited for the bug report to come in [...]. The schedule was
merely a checklist of features waiting to be turned into bugs. In the
post-mortem, this was referred to as “infinite defects methodology”.</p>
</blockquote>
<p>Which led them to what is perhaps the most ironclad law of software
engineering:</p>
<blockquote>
<p>In general, the longer you wait before fixing a bug, the costlier (in time
and money) it is to fix.</p>
</blockquote>
<p>A corollary to this is that the longer you wait to <em>discover</em> a bug, the
costlier it is to fix.</p>
<p>Some bugs can be found by code review. So you should do code review. Some
bugs can be found by automated tests. So you should do automated testing.
Some bugs will be found by monitoring dashboards, so you should have monitoring
dashboards.</p>
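<p>To make the testing point concrete, here is a deliberately tiny illustration (the function and numbers are invented for the example): the kind of bug from the “return 12;” anecdote above is exactly what a one-line automated test catches long before a user, or a dashboard, ever would.</p>

```python
# A deliberately tiny illustration: the sort of bug an automated test
# catches before any user sees it. The 120% line-height rule here is a
# made-up example, not a real typesetting spec.
def line_height(point_size: int) -> int:
    """Compute line height as 120% of the font's point size."""
    return round(point_size * 1.2)

def test_line_height_scales_with_point_size():
    # A hard-coded "return 12" stub would pass the first check
    # and fail the second, which is the whole point of having both.
    assert line_height(10) == 12
    assert line_height(20) == 24

test_line_height_scales_with_point_size()
print("tests passed")
```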
<h2 id=so-why-not-move-fast>So why not move fast?</h2>
<p>But here is where Facebook’s old motto comes into play. All of those
principles above are true, but here are two more things that are true:</p>
<ol>
<li>No matter how much code review, automated testing, and monitoring you have
<em>some bugs can only be found by users interacting with your software</em>.</li>
<li>No bugs can be found merely by slowing down and putting the deploy off
another day.</li>
</ol>
<p><strong><em>Once you have made the process of releasing software to users sufficiently
safe</em></strong> that the potential damage of any given deployment can be reliably
limited, it is always best to release your changes to users as quickly as
possible.</p>
<p>More importantly, as an engineer, <em>you will naturally have an inherent fear of
breaking things</em>. If you make no changes, you cannot be blamed for whatever
goes wrong. Particularly if you grew up in the Old World, there is an
ever-present temptation to slow down, to avoid shipping, to hold back your
changes, <em>just in case</em>.</p>
<p>You will want to <strong>move slow</strong>, to <strong>avoid breaking things</strong>. Better to do
nothing, to be useless, than to do harm.</p>
<p>For all its faults as an organization, Facebook did, and does, have some
excellent infrastructure to <em>avoid breaking their software systems</em> in response
to features being deployed to production. In that sense, they’d <em>already done
the work</em> to avoid the “harm” of an individual engineer’s changes. If future
work needed to be performed to increase safety, then that work should be done
<em>by the infrastructure team</em> to make things safer, not by every <em>other</em>
engineer slowing down.</p>
<p>The problem is that slowing down is not actually value neutral. To quote
<em>myself</em> here:</p>
<blockquote>
<p>If you can’t ship a feature, you can’t fix a bug.</p>
</blockquote>
<p>When you slow down just for the sake of slowing down, you create more problems.</p>
<p>The first problem that you create is smashing together far too many changes at
once.</p>
<p>You’ve got a development team. Every engineer on that team is adding features
at some rate. You <em>want</em> them to be doing that work. Necessarily, they’re all
integrating them into the codebase to be deployed whenever the next deployment
happens.</p>
<p>If a problem occurs with one of those changes, and you want to quickly know
<em>which</em> change caused that problem, ideally you want to compare two versions of
the software with the <em>smallest number of changes possible</em> between them.
Ideally, every individual change would be released on its own, so you can see
differences in behavior between versions which contain one change each, not a
gigantic avalanche of changes where any one of a hundred different features might
be the culprit.</p>
<p>If you slow down for the sake of slowing down, you also create a process that
cannot respond to failures of the <em>existing</em> code.</p>
<p>I’ve been writing thus far as if a system in a steady state is inherently fine,
and each change carries the possibility of benefit but also the risk of
failure. This is not always true. Changes don’t just occur in your software.
They can happen in the world as well, and your software needs to be able to
respond to them.</p>
<p>Back to that holiday shopping season example from earlier: if your deploy
freeze prevents all deployments during the holiday season to prevent breakages,
what happens when your small but growing e-commerce site encounters a
catastrophic bug that has always been there, but only occurs when you have more
than 10,000 concurrent users? The breakage is coming from new, never before
seen levels of traffic. The breakage is coming from your <em>success</em>, not your
code. You’d better be able to ship a fix for that bug <em>real fast</em>, because
your only alternative to a fast turn-around bug-fix is shutting down the site
entirely.</p>
<p>And if you see this failure for the first time on Black Friday, that is not the
moment where you want to suddenly develop a new process for deploying on
Friday. The only way to ensure that shipping that fix is easy is to ensure
that shipping <em>any</em> fix is easy. That it’s a thing your whole team does
quickly, all the time.</p>
<p>The motto “Move Fast And Break Things” caught on with a lot of the rest of
Silicon Valley because we are all familiar with this toxic, paralyzing fear.</p>
<p>After we have the safety mechanisms in place to make changes as safe as they
can be, <em>we just need to push through it</em>, and accept that things might break,
but that’s OK.</p>
<h2 id=some-important-words-are-missing>Some Important Words are Missing</h2>
<p>The motto has an implicit preamble, “<strong>Once you have done the work to make
broken things safe enough, then you should</strong> move fast and break things”.</p>
<p>When you are in a conflict about whether to “go fast” or “go slow”, the motto
is not <em>supposed</em> to be telling you that the answer is an unqualified “GOTTA GO
FAST”. Rather, it is an exhortation to take a beat and to go through a process
of interrogating your motivation for slowing down. There are three possible
things that a person saying “slow down” could mean about making a change:</p>
<ol>
<li>It is <em>broken in a way you already understand</em>. If this is the problem, then
you should not make the change, because you know it’s not ready. If you
already know it’s broken, then <strong>the change simply isn’t done</strong>. Finish the
work, and ship it to users when it’s finished.</li>
<li>It is <em>risky in a way that you don’t have a way to defend against</em>. As far
as you know, the change works, but there’s a risk embedded in it that you
don’t have any safety tools to deal with. If this is the issue, then what
you should do is pause working on this change, and <strong>build the safety
first</strong>.</li>
<li>It is <em>making you nervous in a way you can’t articulate</em>. If you can’t
describe a known defect as in point 1, and you can’t outline an improved
safety control as in point 2, then this is the time to let go, accept that
you might break something, and <strong>move fast</strong>.</li>
</ol>
<p>The <em>implied context</em> for “move fast and break things” is only in that third
condition. If you’ve already built all the infrastructure that you can think
of to build, and you’ve already fixed all the bugs in the change that you need
to fix, any further delay will not serve you; <em>do not delay any further</em>.</p>
<h2 id=unfortunately-as-you-probably-already-know>Unfortunately, as you probably already know,</h2>
<p>This motto did a lot of good in its appropriate context, at its appropriate
time. It’s still a useful heuristic for engineers, if the appropriate context
is generally understood within the conversation where it is used.</p>
<p>However, it has clearly been taken to mean a lot of significantly more damaging
things.</p>
<p>Purely from an <em>engineering</em> perspective, it has been reasonably successful.
It’s less and less common to see people in the industry pushing back against
tight deployment cycles. It’s also less common to see the <em>basic</em> safety
mechanisms (version control, continuous integration, unit testing) get ignored.
And many ex-Facebook engineers have used this motto very clearly under the
understanding I’ve described here.</p>
<p>Even in the narrow domain of software engineering it is misused. I’ve seen it
used to argue that a project didn’t need tests; that a deploy could be forced
through a safety process; that users did not need to be informed of a change
that could potentially impact them personally.</p>
<p>Outside that domain, it’s far worse. It’s generally understood to mean that no
safety mechanisms are required at all, that any change a software company wants
to make is inherently justified because it’s OK to “move fast”. You can see
this interpretation in the way that it has leaked out of Facebook’s engineering
culture and suffused its entire management strategy, blundering through market
after market and issue after issue, making catastrophic mistakes, making a
perfunctory apology and moving on to the next massive harm.</p>
<p>In the decade since it was retired as Facebook’s official motto, it has
been used to defend some truly horrific abuses within the tech industry. You
only need to visit the orange website to see it still being used this way.</p>
<p>Even at its best, “move fast and break things” is an <em>engineering heuristic</em>,
it is not an ethical principle. Even within the context I’ve described, it’s
only okay to move fast and <em>break things</em>. It is never okay to move fast and
<em>harm people</em>.</p>
<p>So, while I do think that it is broadly misunderstood by the public, it’s still
not a thing I’d ever say again. Instead, I propose this:</p>
<blockquote>
<p>Make it safer, don’t make it later.</p>
</blockquote>
<h2 id=acknowledgments>Acknowledgments</h2>
<p class=update-note>Thank you to <a href="/pages/patrons.html">my patrons</a> who are
supporting my writing on this blog. If you like what you’ve read here and
you’d like to read more of it, or you’d like to support my <a href="https://github.com/glyph/">various open-source
endeavors</a>, you can <a href="/pages/patrons.html">support my work as a sponsor</a>! I am also <a href=mailto:consulting@glyph.im>available for
consulting work</a> if you think your organization
could benefit from expertise on topics like “how do I make changes to my
codebase, but, like, good ones”.</p></body>iOS Mail To Omnifocus Task2023-11-08T21:48:00-08:002023-11-08T21:48:00-08:00Glyphtag:blog.glyph.im,2023-11-08:/2023/11/ios-mail-to-omnifocus-task.html<p>Convert messages in the Mail app built in to iOS into tasks in
OmniFocus.</p><body><p>One of my longest-running frustrations with iOS is that the default mail app
does not have a “share” action, making it impossible to do the <em>one thing</em> that
a mail client needs to be able to do for me, which is to <a href="https://blog.glyph.im/2016/04/email-isnt-the-problem.html">selectively <em>turn
messages into tasks</em></a>. This deficiency
has multiple components that make it difficult to work around:</p>
<ol>
<li>There is no UI to “share” a message from within the mail app, so you can’t
share it to OmniFocus or a shortcut.</li>
<li>There is no way to query for a mail message in Shortcuts and then get some
kind of “message” object, so there’s no way you can start in Shortcuts and
manually invoke something.</li>
<li>There’s no such thing as
<a href="https://developer.apple.com/documentation/mailkit">MailKit</a> on iOS, so
third-party apps can’t query your mail database either.</li>
</ol>
<p>To work around this, I’ve long subscribed to the “AirMail” app, which has a
“message to omnifocus” action but is otherwise kind of a buggy mess.</p>
<p>But today, I read that you can set up an iPad in a split-screen view and drag
messages from the built-in Mail app’s message list view into the OmniFocus
inbox, and I extrapolated, and discovered how to make
Mail-message-to-OmniFocus-task conversion work on an iPhone.</p>
<p>I’m thrilled that this functionality exists, but it <em>is</em> a bit of a masterclass
in how to get a terrible UX out of a series of decisions that were probably
locally reasonable. So, without any further ado, here’s how you do it:</p>
<ol>
<li>Open up mail.app, and find the message you want to share in the message
list. Here, you have two choices:<ol>
<li>With one finger, press and hold on a message until you feel a haptic.
When you feel the haptic, <em>immediately</em> move your finger a little bit,
<em>before</em> you see the preview come up. This lets you operate directly from
the message list, but is very fiddly.</li>
<li>Tap the message to open the detail view, then press and hold on the sent
date in the top right.</li>
</ol>
</li>
<li>Continue holding down with your first finger. With a <em>second</em> finger, swipe
up from the bottom to enter the multitasking view, or to go back to your
home screen. While holding your first finger in place, either switch to or
launch OmniFocus.</li>
<li>With your second finger, navigate to the Inbox, drag your <em>first</em> finger to
the bottom of the list, and release it. Voila! You should have a task with
a brief summary and a link back to the message.</li>
<li>Swipe up from the bottom to switch back to Mail, then archive the message.</li>
</ol></body>Get Your Mac Python From Python.org2023-08-29T13:17:00-07:002023-08-29T13:17:00-07:00Glyphtag:blog.glyph.im,2023-08-29:/2023/08/get-your-mac-python-from-python-dot-org.html<p>There are many ways to get Python installed on macOS, but for most
people the version that you download from Python.org is best.</p><body><p>One of the most unfortunate things about learning Python is that there are so
many different ways to get it installed, and you need to choose one before you
even begin. The differences can also be subtle and require technical depth to
truly understand, which you don’t have yet.<sup id=fnref:1:get-your-mac-python-from-python-dot-org-2023-8><a class=footnote-ref href=#fn:1:get-your-mac-python-from-python-dot-org-2023-8 id=fnref:1>1</a></sup> Even experts can be missing
information about which one to use and why.</p>
<p>There are perhaps more of these on macOS than on any other platform, and that’s
the platform I primarily use these days. If you’re using macOS, I’d like to
make it simple for you.</p>
<h2 id=the-one-you-probably-want-pythonorg>The One You Probably Want: <a href="https://python.org/">Python.org</a></h2>
<p>My recommendation is to use <a href="https://www.python.org/downloads/">an official build from
python.org</a>.</p>
<p>I recommend the official installer for most uses, and if you were just looking
for a choice about which one to use, you can stop reading now. Thanks for your
time, and have fun with Python.</p>
<p>If you want to get into the nerdy nuances, read on.</p>
<p>For starters, the official builds are compiled in such a way that they will run
on a wide range of macs, both new and old. They are <code>universal2</code> binaries,
unlike some other builds, which means you can distribute them as part of a mac
application.</p>
<p>The main advantage that the Python.org build has, though, is very subtle, and
not any concrete technical detail. It’s a social, structural issue: the
Python.org builds are produced <em>by the people who make CPython</em>, who are more
likely to know about the nuances of what options it can be built with, and who
are more likely to adopt their own improvements as they are released. Third
party builders who are focused on a more niche use-case may not realize that
there are build options or environment requirements that could make their
Pythons better.</p>
<p>I’m being a bit vague deliberately here, because at any <em>particular</em> moment in
time, this may not be an advantage at all. Third party integrators generally
catch up to changes, and eventually achieve parity. But for a specific
upcoming example, <a href="https://peps.python.org/pep-0703/">PEP 703</a> will have
extensive build-time implications, and I would trust the python.org team to be
keeping pace with all those subtle details immediately as releases happen.</p>
<h3 id=and-auto-update-it>(And Auto-Update It)</h3>
<p>The one downside of the official build is that you have to return to the
website to check for security updates. Unlike other options described below,
there’s no built-in auto-updater for security patches. If you follow the
normal process, you still have to click around in a GUI installer to update it
once you’ve clicked around on the website to get the file.</p>
<p>I have written a micro-tool to address this and you can <a href="https://github.com/glyph/mopup#mopup"><code>pip install
mopup</code></a> and then periodically run <code>mopup</code>
and it will install any security updates for your current version of Python,
with no interaction besides entering your admin password.</p>
<h3 id=and-always-use-virtual-environments>(And Always Use Virtual Environments)</h3>
<p>Once you have installed Python from python.org, never <code>pip install</code> anything
globally into that Python, even using the <code>--user</code> flag. Always, always use a
<a href="https://packaging.python.org/en/latest/tutorials/installing-packages/#creating-virtual-environments">virtual
environment</a>
of some kind. In fact, I recommend <a href="https://mastodon.social/@janeadams@vis.social/110777478431420994">configuring it so that it is not even
possible to do
so</a>, by
putting this in your <code>~/.pip/pip.conf</code>:</p>
<div class=highlight><table class=highlighttable><tbody><tr><td class=linenos><div class=linenodiv><pre><span class=normal>1</span>
<span class=normal>2</span></pre></div></td><td class=code><div><pre><span></span><code><span class=k>[global]</span><span class=w></span>
<span class=na>require-virtualenv</span><span class=w> </span><span class=o>=</span><span class=w> </span><span class=s>true</span><span class=w></span>
</code></pre></div></td></tr></tbody></table></div>
<p>This will avoid damaging your Python installation by polluting it with
libraries that you install and then forget about. Any time you need to do
something new, you should make a fresh virtual environment, and then you don’t
have to worry about library conflicts between different projects that you may
work on.</p>
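<p>The per-project isolation described above can be sketched with nothing but the standard library’s <code>venv</code> module. In this sketch, <code>scratch-env</code> is just a placeholder name, and the <code>bin/</code> path assumes the macOS/Linux venv layout:</p>

```python
# A minimal sketch of one-venv-per-project isolation, using only the
# standard library. "scratch-env" is a placeholder name; the bin/ path
# is the macOS/Linux layout.
import subprocess
import tempfile
import venv
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    env_dir = Path(tmp) / "scratch-env"
    # with_pip=True bootstraps pip inside the new environment.
    venv.EnvBuilder(with_pip=True).create(env_dir)
    env_python = env_dir / "bin" / "python"
    # Anything installed with this interpreter lands inside env_dir,
    # leaving the python.org installation itself untouched.
    result = subprocess.run(
        [str(env_python), "-c", "import sys; print(sys.prefix)"],
        capture_output=True, text=True, check=True,
    )
print(result.stdout.strip())
```

<p>The printed prefix points inside the venv rather than at the global installation, which is precisely the property that keeps your projects from damaging each other.</p>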
<p>If you need to install <em>tools</em> written in Python, don’t manage those
environments directly, install the tools with
<a href="https://pypa.github.io/pipx/"><code>pipx</code></a>. By using <code>pipx</code>, you allow each tool
to maintain its own set of dependencies, which means you don’t need to worry about
whether two tools you use have conflicting version requirements, or whether the
tools conflict with your own code.<sup id=fnref:2:get-your-mac-python-from-python-dot-org-2023-8><a class=footnote-ref href=#fn:2:get-your-mac-python-from-python-dot-org-2023-8 id=fnref:2>2</a></sup></p>
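<p>The idea behind <code>pipx</code> can be sketched roughly like this (a simplification of the concept, not pipx’s actual implementation, and the tool names are invented): every tool gets a virtual environment of its own, so two tools can depend on conflicting versions of the same library without ever meeting.</p>

```python
# Rough sketch of the idea behind pipx, not its real implementation:
# one isolated environment per tool means their dependency sets never
# interact, so version conflicts between tools are impossible.
import tempfile
import venv
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    tools_root = Path(tmp)
    for tool in ("tool-a", "tool-b"):  # hypothetical tool names
        # Each tool gets its own venv; pipx then exposes only the
        # tool's console script on your PATH, not its libraries.
        venv.EnvBuilder(with_pip=False).create(tools_root / tool)
    created = sorted(p.name for p in tools_root.iterdir())
print(created)
```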
<h1 id=the-others>The Others</h1>
<p>There are, of course, several other ways to install Python, which you
<em>probably</em> don’t want to use.</p>
<h2 id=the-one-for-running-other-peoples-code-not-yours-homebrew>The One For Running Other People’s Code, Not Yours: <a href="https://brew.sh">Homebrew</a></h2>
<p>In general, <a href="https://justinmayer.com/posts/homebrew-python-is-not-for-you/">Homebrew Python is not <em>for</em>
you</a>.</p>
<p>The purpose of Homebrew’s python is to support applications packaged <em>within</em>
Homebrew, which have all been tested against the versions of python libraries
<em>also</em> packaged within Homebrew. It may upgrade without warning on just about
any <code>brew</code> operation, and you can’t <em>downgrade</em> it without breaking other parts
of your install.</p>
<p>Specifically for creating redistributable binaries, Homebrew python is
typically compiled only for <em>your specific architecture</em>, and thus will not
create binaries that can be used on Intel macs if you have an Apple Silicon
machine, or will run slower on Apple Silicon machines if you have an Intel mac.
Also, if there are prebuilt wheels which don’t yet exist for Apple Silicon, you
cannot easily <code>arch -x86_64 python ...</code> and just install them; you have to
install a whole second copy of Homebrew in a different location, which is a
headache.</p>
<p>In other words, Homebrew is an alternative to <code>pipx</code>, not to Python. For that
purpose, it’s fine.</p>
<h2 id=the-one-for-when-you-need-20-different-pythons-for-debugging-pyenv>The One For When You Need 20 Different Pythons For Debugging: <a href="https://github.com/pyenv/pyenv">pyenv</a></h2>
<p>Like Homebrew, pyenv will default to building a single-architecture binary.
Even worse, it will not build a <a href="https://stackoverflow.com/questions/31486236/what-is-a-framework-build-of-python">Framework
build</a>
of Python, which means several things related to being a mac app just won’t
work properly. Remember those build-time esoterica that the core team is on
top of but third parties may not be? “Should I use a Framework build” is an
enduring piece of said esoterica.</p>
<p>The purpose of pyenv is to provide a large matrix of different, precise legacy
versions of python for library authors to test compatibility against those
older Pythons. If you need to do that, particularly if you work on different
projects where you may need to install some random super-old version of Python
that you would not normally use to test something on, then <code>pyenv</code> is great.
But if you only need one version of Python, it’s not a great way to get it.</p>
<h2 id=the-other-one-thats-exactly-like-pyenv-asdf-python>The Other One That’s Exactly Like pyenv: <a href="https://github.com/asdf-community/asdf-python">asdf-python</a></h2>
<p>The issues are exactly the same as with pyenv, as the tool is a straightforward
alternative for the exact same purpose. It’s a bit less focused on Python than
pyenv, which has pros and cons; it has broader community support, but it’s less
specifically tuned for Python. But a comparative exploration of their
differences is beyond the scope of this post.</p>
<h2 id=the-built-in-one-that-isnt-really-built-in-usrbinpython3>The Built-In One That Isn’t Really Built-In: <code>/usr/bin/python3</code></h2>
<p>There is a binary in <code>/usr/bin/python3</code> which might seem like an appealing
option — it comes from Apple, after all! — but it is provided as a developer
tool, for running things like build scripts. It isn’t for building
applications with.</p>
<p>That binary is <em>not</em> a “system python”; the thing in the operating system
itself is only a shim, which will determine if you have development tools, and
shell out to a tool that will download the development tools for you if you
don’t. There is unfortunately a lot of folk wisdom among older Python
programmers who remember a time when Apple <em>did</em> actually package an
antediluvian version of the interpreter that seemed to be supported forever,
and might suggest it for things intended to be self-contained or have minimal
bundled dependencies, but this is exactly the reason that Apple <em>stopped</em>
shipping that.</p>
<p>If you use this option, it means that your Python <em>might</em> come from the Xcode
Command Line Tools, or the Xcode application, depending on the state of
<code>xcode-select</code> in your current environment and the order in which you installed
them.</p>
<p>Upgrading Xcode via the app store or a developer.apple.com manual download — or
its command-line tools, which are installed separately, and updated via the
“settings” application in a completely different workflow — therefore also
upgrades your version of Python without an easy way to downgrade, unless you
manage multiple Xcode installs. Which, at 12G per install, is probably not an
appealing option.<sup id=fnref:3:get-your-mac-python-from-python-dot-org-2023-8><a class=footnote-ref href=#fn:3:get-your-mac-python-from-python-dot-org-2023-8 id=fnref:3>3</a></sup></p>
<h2 id=the-one-with-the-data-and-the-science-conda>The One With The Data And The Science: <a href="https://www.anaconda.com">Conda</a></h2>
<p>As someone with a limited understanding of data science and scientific
computing, I’m not really qualified to go into the detailed pros and cons here,
but luckily, <a href="https://pythonspeed.com/articles/conda-vs-pip/">Itamar Turner-Trauring is, and he
did</a>.</p>
<p>My one coda to his detailed exploration here is that while there are good
reasons to want to use Anaconda — particularly if you are managing a
data-science workload across multiple platforms and you want a consistent,
holistic development experience across a large team supporting heterogeneous
platforms — some people will tell you that you need Conda to get you your
libraries if you want to do data science or numerical work with Python <em>at
all</em>, because Conda is how you install those libraries, and otherwise things
just won’t work.</p>
<p>This is a historical artifact that is no longer true. Over the last decade,
<a href="https://pythonwheels.com">Python Wheels</a> have been <em>comprehensively</em> adopted
across the Python community, and almost every popular library with an extension
module ships pre-built binaries to multiple platforms. There may be some
libraries that only have prebuilt binaries for conda, but they are sufficiently
specialized that I don’t know what they are.</p>
<h2 id=the-one-for-being-consistent-with-your-cloud-hosting>The One for Being Consistent With Your Cloud Hosting</h2>
<p>Another way to run Python on macOS is to not run it on macOS, but to get
another computer inside your computer that <em>isn’t</em> running macOS, and instead
run Python inside that, usually using <a href="https://www.docker.com">Docker</a>.<sup id=fnref:4:get-your-mac-python-from-python-dot-org-2023-8><a class=footnote-ref href=#fn:4:get-your-mac-python-from-python-dot-org-2023-8 id=fnref:4>4</a></sup></p>
<p>There are good reasons to want to use a containerized configuration for
development, but they start to drift away from the point of this post and into
more complicated stuff about how to get your Python into the cloud.</p>
<p>So rather than saying “use Python.org native Python instead of Docker”, I am
specifically <em>not</em> covering Docker as a replacement for a native mac Python
here because in a lot of cases, it can’t be one. Many tools require native mac
facilities like displaying GUIs or <a href="https://appscript.sourceforge.io/py-appscript/doc/appscript-manual/02_aboutappscripting.html">scripting
applications</a>,
or want to be able to take a path name to a file without <a href="https://docs.docker.com/storage/volumes/">elaborate
pre-work</a> to allow the program to
access it.</p>
<h2 id=summary>Summary</h2>
<p>If you didn’t want to read all of that, here’s the summary.</p>
<p>If you use a mac:</p>
<ol>
<li>Get your Python interpreter from <a href="https://www.python.org/">python.org</a>.</li>
<li>Update it with <a href="https://github.com/glyph/mopup#mopup"><code>mopup</code></a> so you don’t
fall behind on security updates.</li>
<li>Always use venvs for specific projects, never <code>pip install</code> anything
directly.</li>
<li>Use <code>pipx</code> to manage your Python applications so you don’t have to worry
about dependency conflicts.</li>
<li>Don’t worry if Homebrew <em>also</em> installs a <code>python</code> executable, but don’t use
it for your own stuff.</li>
<li>You might need a different Python interpreter if you have any specialized
requirements, but you’ll probably know if you do.</li>
</ol>
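<p>The per-project discipline in steps 3 and 4 can be sketched with nothing but
the standard library. This is a minimal illustration, not a prescribed tool: it
uses a throwaway temp directory where a real project directory would go, and
assumes the POSIX <code>bin/</code> layout that a python.org install on macOS
uses.</p>

```python
import subprocess
import tempfile
import venv
from pathlib import Path

# A temp dir stands in for a real project directory (e.g. ~/src/my-project).
env_dir = Path(tempfile.mkdtemp()) / ".venv"

# Create an isolated environment with its own pip.
venv.create(env_dir, with_pip=True)

# Every install targets the venv's interpreter, never the global one.
venv_python = env_dir / "bin" / "python"
result = subprocess.run(
    [venv_python, "-m", "pip", "--version"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # pip's version, resolved from inside the venv
```

<p>The point of the printout is that <code>pip</code> reports its location
inside the project's <code>.venv</code>, which is exactly the isolation that
keeps your interpreter clean.</p>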
<hr>
<h2 id=acknowledgements>Acknowledgements</h2>
<p class=update-note>Thank you to <a href="/pages/patrons.html">my patrons</a> who are
supporting my writing on this blog. If you like what you’ve read here and
you’d like to read more of it, or you’d like to support my <a href="https://github.com/glyph/">various open-source
endeavors</a>, you can <a href="/pages/patrons.html">support my work as a sponsor</a>! I am also <a href=mailto:consulting@glyph.im>available for
consulting work</a> if you think your organization
could benefit from expertise on topics like “which Python is the really good
one”.</p>
<div class=footnote>
<hr>
<ol>
<li id=fn:1:get-your-mac-python-from-python-dot-org-2023-8>
<p id=fn:1>If somebody sent you this article because you’re trying to get into
Python and you got stuck on this point, let me first reassure you that all
the information about this really <em>is</em> highly complex and confusing; if
you’re feeling overwhelmed, that’s normal. But the good news is that you
can really ignore most of it. Just read the next little bit. <a class=footnote-backref href=#fnref:1:get-your-mac-python-from-python-dot-org-2023-8 title="Jump back to footnote 1 in the text">↩</a></p>
</li>
<li id=fn:2:get-your-mac-python-from-python-dot-org-2023-8>
<p id=fn:2><em>Some</em> tools need to be installed in the same environment as the code
they’re operating on, so you may want to have multiple installs of, for
example, <a href="https://www.mypy-lang.org">Mypy</a>,
<a href="https://pypi.org/project/pudb/">PuDB</a>, or
<a href="https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html">sphinx</a>.
But for things that just do something useful but don’t need to load your
code — such as this small selection of examples from my own collection:
<a href="https://pypi.org/project/certbot/">certbot</a>,
<a href="https://pypi.org/project/pgcli/">pgcli</a>,
<a href="https://pypi.org/project/asciinema/">asciinema</a>,
<a href="https://pypi.org/project/gister/">gister</a>,
<a href="https://pypi.org/project/speedtest-cli">speedtest-cli</a> — <code>pipx</code> means
you won’t have to debug wonky dependency interactions. <a class=footnote-backref href=#fnref:2:get-your-mac-python-from-python-dot-org-2023-8 title="Jump back to footnote 2 in the text">↩</a></p>
</li>
<li id=fn:3:get-your-mac-python-from-python-dot-org-2023-8>
<p id=fn:3>The command-line tools are a lot smaller, but cannot have multiple
versions installed at once, and are updated through a different
mechanism. There are odd little details like the fact that the default
bundle identifier for the framework differs, being either
<code>org.python.python</code> or <code>com.apple.python3</code>. They’re generally different in
a bunch of small subtle ways that don’t really matter in 95% of cases until
they suddenly matter a lot in that last 5%. <a class=footnote-backref href=#fnref:3:get-your-mac-python-from-python-dot-org-2023-8 title="Jump back to footnote 3 in the text">↩</a></p>
</li>
<li id=fn:4:get-your-mac-python-from-python-dot-org-2023-8>
<p id=fn:4>Or minikube, or podman, or colima or <em>whatever</em> I guess, there’s way too
many of these containerization Pokémon running around for me to keep track
of them all these days. <a class=footnote-backref href=#fnref:4:get-your-mac-python-from-python-dot-org-2023-8 title="Jump back to footnote 4 in the text">↩</a></p>
</li>
</ol>
</div></body>Bilithification2023-07-20T13:59:00-07:002023-07-20T13:59:00-07:00Glyphtag:blog.glyph.im,2023-07-20:/2023/07/bilithification.html<p>Not sure how to do microservices? Split your monolith in half.</p><body><p>Several years ago at O’Reilly’s Software Architecture conference, within a
comprehensive talk on refactoring, “Technical Debt: A Masterclass”, r0ml<sup id=fnref:1:bilithification-2023-7><a class=footnote-ref href=#fn:1:bilithification-2023-7 id=fnref:1>1</a></sup>
presented a concept that I think should be highlighted.</p>
<p>If you have access to O’Reilly Safari, I think the video is available there, or
you can get the slides
<a href="https://conferences.oreilly.com/software-architecture/sa-ny-2018/cdn.oreillystatic.com/en/assets/1/event/281/Technical%20debt_%20A%20master%20class%20Presentation.pdf">here</a>.
It’s well worth watching in its own right. The talk contains a lot of hard-won
wisdom from a decades-long career, but in slides 75-87, he articulates a
concept that I believe resolves the perennial pendulum-swing between
microservices and monoliths that we see in the Software as a Service world.</p>
<p>I will refer to this concept as “the bilithification strategy”.</p>
<h2 id=background>Background</h2>
<p>Personally, I have long been a microservice skeptic. I would generally
articulate this skepticism in terms of
“<a href="https://en.wikipedia.org/wiki/You_aren%27t_gonna_need_it">YAGNI</a>”.</p>
<p>Here’s the way I would advise people asking about microservices before
encountering this concept:</p>
<blockquote>
<p>Microservices are often adopted by small teams due to their advertised
benefits. Advocates from very large organizations—ones that have been very
successful with microservices—frequently give talks claiming that
microservices are more modular, more scalable, and more fault-tolerant than
their monolithic progenitors. But these teams rarely appreciate the costs,
particularly the costs for smaller orgs. Specifically, there is a fixed
<em>operational</em> marginal cost to each new service, and a fairly large fixed
operational overhead to the infrastructure for an organization deploying
microservices at all.</p>
<p>With a large enough team, the operational cost is easy to absorb. Since the
overhead is fixed, its cost relative to the whole trends towards zero as your
total team size and system complexity trend towards infinity. Also, in very
large teams, the enforced
isolation of components in separate services reduces complexity. It does so
specifically by <strong>intentionally causing <a href="https://en.wikipedia.org/wiki/Conway%27s_law">the software architecture to mirror the
organizational structure of the team that deploys
it</a></strong>. This — at the cost of
<em>increased</em> operational overhead and <em>decreased</em> efficiency — allows
independent parts of the organization to make progress independently, without
blocking on each other. Therefore, in smaller teams, as you’re building, you
should bias towards building a monolith until the complexity costs of the
monolith become apparent. Then you should build the infrastructure to switch
to microservices.</p>
</blockquote>
<p>I still stand by all of this. However, it’s incomplete.</p>
<p>What does it mean to “switch to microservices”?</p>
<p>The biggest thing that this advice leaves out is a clear understanding of the
“micro” in “microservice”. In this framing, I’m implicitly understanding
“micro” services to be services that are <em>too small</em> — or at least, too small
<em>for your team</em>. But if they <em>do</em> work for large organizations, then at some
point, you need to have them. This leaves open several questions:</p>
<ul>
<li>What size is the right size for a service?</li>
<li>When should you split your monolith up into smaller services?</li>
<li>Wait, how do you even measure “size” of a service? Lines of code? Gigabytes
of memory? Number of team members?</li>
</ul>
<p>In a specific situation I could probably look at these questions <em>for</em> that
situation, and make suggestions as to the appropriate course of action, but
that’s based largely on <em>vibes</em>. There’s just a lot of drawing on complex
experiences, not a repeatable pattern that a team could apply on their own.</p>
<p>We can be clear that you should always <em>start</em> with a monolith. But what
should you do when that’s no longer working? How do you even <em>tell</em> when it’s
no longer working?</p>
<h2 id=bilithification>Bilithification</h2>
<p>Every codebase begins as a monolith. That is a single (mono) rock (lith).
Here’s what it looks like.</p>
<p><img alt="a circle with the word “monolith” on it" src="https://blog.glyph.im/images/monolith.png"></p>
<p>Let’s say that the monolith, and the attendant team, is getting big enough that
we’re beginning to consider microservices. We might now ask, “what is the
appropriate number of services to split the monolith into?” and that could
provoke endless debate even among a team with total consensus that it might
need to be split into <em>some</em> number of services.</p>
<p>Rather than beginning with the premise that there is a correct number, we may
observe instead that splitting the service into N services, where N is more than
one, may be accomplished by splitting the service in half N-1 times.</p>
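<p>That observation can be captured in a toy model. Here a “service” is either
an atomic name (a string) or a pair of halves; the structure and the service
names are purely illustrative, not anything from r0ml’s talk:</p>

```python
def bilithify(service):
    """Recursively split a service along its seams until each piece is atomic."""
    if isinstance(service, str):  # no seam left; this piece is the right size
        return [service]
    front, back = service         # one binary split along a single seam
    return bilithify(front) + bilithify(back)

# Two binary splits yield three services: N-1 splits for N services.
monolith = (("frontend", "backend"), "batch-jobs")
print(bilithify(monolith))  # ['frontend', 'backend', 'batch-jobs']
```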
<p>So let’s <em>bi</em> (two) <em>lithify</em> (rock) this monolith, and take it from 1 to 2
rocks.</p>
<p>The task of splitting the service into <em>two</em> parts ought to be a manageable
amount of work — two is a definitively finite number, as opposed to the
infinite point-cloud of “microservices”. Thus, we should search, first, for a
single logical seam along which we might cleave the monolith.</p>
<p><img alt="a circle with the word “monolith” on it and a line through it" src="https://blog.glyph.im/images/monolith_split.png"></p>
<p>In many cases—as in the specific case that r0ml gave—the easiest way to
articulate a boundary between two parts of a piece of software is to
conceptualize a “frontend” and a “backend”. In the absence of any other clear
boundary, the question “does this functionality belong in the front end or the
back end” can serve as a simple razor for separating the service.</p>
<p>Remember: the reason we’re splitting this software up is because <em>we are also
splitting the team up</em>. You need to think about this in terms of people as
well as in terms of functionality. What division between halves would most
reduce the number of lines of communication, blunting the <a href="https://en.wikipedia.org/wiki/Brooks%27s_law">quadratic
increase</a> in required
communication relationships that comes along with the linear increase in team
size? Can you identify two groups who need to talk amongst themselves, but do
<em>not</em> need to talk with all of each other?<sup id=fnref:2:bilithification-2023-7><a class=footnote-ref href=#fn:2:bilithification-2023-7 id=fnref:2>2</a></sup></p>
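<p>The arithmetic behind that quadratic increase is worth making concrete. A
back-of-the-envelope sketch (the function name is mine, not from Brooks):</p>

```python
def communication_lines(team_size: int) -> int:
    # Pairwise communication channels grow quadratically: n * (n - 1) / 2.
    return team_size * (team_size - 1) // 2

# Splitting one 12-person team into two 6-person halves that need only a
# single inter-team channel between them:
before = communication_lines(12)        # 66 channels
after = 2 * communication_lines(6) + 1  # 31 channels
print(before, after)
```

<p>More than half the communication burden disappears, which is the payoff of
finding two groups that don’t need to talk with all of each other.</p>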
<p><img alt="two circles with the word “hemilith” on them and a double-headed arrow
between them" src="https://blog.glyph.im/images/hemilith.png"></p>
<p>Once you’ve achieved this separation, we no longer have a single rock, we have
two half-rocks: <em>hemiliths</em> to borrow from the same Greek root that gave us
“monolith”.</p>
<p>But we are not finished, of course. Two may not be the correct number of
services to end up with. Now, we ask: can we split the frontend into a
frontend and backend? Can we split the backend? If so, then we now have four
rocks in place of our original one:</p>
<p><img alt="four circles with the word “tetartolith” on them and double-headed arrows
connecting them all" src="https://blog.glyph.im/images/tetartolith.png"></p>
<p>You might think that this would be a “tetralith” for “four”, but as they are of
a set, they are more properly a
<a href="https://www.collinsdictionary.com/us/dictionary/english/tetarto"><em>tetarto</em></a><em>lith</em>.</p>
<h2 id=repeat-as-necessary>Repeat As Necessary</h2>
<p>At some point, you’ll hit a point where you’re looking at a service and asking
“what are the two pieces I could split this service into?”, and the answer will
be “none, it makes sense as a single piece”. At that point, you will know that
you’ve achieved services of the correct size.</p>
<p>One thing about this insight that may disappoint some engineers is the
realization that service-oriented architecture is much more an <em>engineering
management</em> tool than it is an engineering tool. It’s fun to think that
“microservices” will let you play around with weird technologies and niche
programming languages consequence-free at your day job because those can all be
“separate services”, but that was always a fantasy. Integrating multiple
technologies is expensive, and introducing more moving parts always introduces
more failure points.</p>
<h2 id=advanced-techniques-a-multi-stack-microservice-environment>Advanced Techniques: A Multi-Stack Microservice Environment</h2>
<p>You’ll note that <em>splitting a service</em> heavily implies that the resulting
services will still all be in the same programming language and the same tech
stack as before. If you’re interested in deploying multiple stacks (languages,
frameworks, libraries), you <em>can</em> proceed to that outcome via bilithification,
but it is a multi-step process.</p>
<p>First, you have to complete the strategy that I’ve already outlined above. You
need to get to a service that is sufficiently granular that it is atomic; you
don’t want to split it up any further.</p>
<p>Let’s call that service “X”.</p>
<p>Second, you identify the additional complexity that would be introduced by
using a different tech stack. It’s important to be realistic here! New
technology always seems fun, and if you’re investigating this, you’re probably
predisposed to think it would be an improvement. So identify your costs
<em>first</em> and make sure you have them enumerated before you move on to the
benefits.</p>
<p>Third, identify the concrete benefits to X’s problem domain that the new tech
stack would provide.</p>
<p>Finally, do a cost-benefit analysis where you make sure that the costs from
step two are <em>clearly exceeded</em> by the benefits from step three. If you can’t
readily identify that in advance — sometimes experimentation is required — then
you need to treat this <em>as</em> an experiment, rather than as a strategic
direction, until you’ve had a chance to answer whatever questions you have
about the new technology’s benefits.</p>
<p>Note, also, that this cost-benefit analysis requires not only doing the
technical analysis but getting buy-in from the entire team tasked with
maintaining that component.</p>
<h2 id=conclusion>Conclusion</h2>
<p>To summarize:</p>
<ol>
<li>Always start with a monolith.</li>
<li>When the monolith is too big, <em>both</em> in terms of team and of codebase, split
the monolith in half until it doesn’t make sense to split it in half any
more.</li>
<li>(Optional) Carefully evaluate services that want to adopt new technologies,
and keep the costs of doing that in mind.</li>
</ol>
<p>There is, of course, a world of complexity beyond this associated with managing
the cost of a service-oriented architecture and solving specific technical
problems that arise from that architecture.</p>
<p>If you remember the tetartolith, though, you should at least be able to get to
the right number and size of services for your system.</p>
<hr>
<p class=update-note>Thank you to <a href="/pages/patrons.html">my patrons</a> who are
supporting my writing on this blog. If you like what you’ve read here and
you’d like to read more of it, or you’d like to support my <a href="https://github.com/glyph/">various open-source
endeavors</a>, you can <a href="/pages/patrons.html">support my work as a sponsor</a>! I am also <a href=mailto:consulting@glyph.im>available for
consulting work</a> if you think your organization
could benefit from more specificity on the sort of insight you've seen here.</p>
<div class=footnote>
<hr>
<ol>
<li id=fn:1:bilithification-2023-7>
<p id=fn:1>AKA “<a href="https://blog.glyph.im/2011/06/blog-post.html">my father</a>” <a class=footnote-backref href=#fnref:1:bilithification-2023-7 title="Jump back to footnote 1 in the text">↩</a></p>
</li>
<li id=fn:2:bilithification-2023-7>
<p id=fn:2>Denouncing “silos” within organizations is so common that it’s a tired
trope at this point. There is no shortage of vaguely inspirational articles
across the business trade-rag web and on LinkedIn exhorting us to
“<a href="https://www.salesforce.com/products/sales-cloud/resources/breaking-the-silo-mentality/">break</a>
<a href="https://hbr.org/2021/07/7-strategies-to-break-down-silos-in-big-meetings">down</a>
<a href="https://www.indeed.com/career-advice/career-development/breaking-down-silos">silos</a>”,
but <em>silos are the point of having an organization</em>. If everybody needs to
talk to everybody else in your entire organization, if no silos exist to
<em>prevent</em> the necessity of that communication, then you are definitionally
operating that organization at <em>minimal efficiency</em>. What people
<em>actually</em> want when they talk about “breaking down silos” is a re-org into
a functional hierarchy rather than a role-oriented hierarchy (i.e., “this
is the team that makes and markets the Foo product, this is the team that
makes and markets the Bar product” as opposed to “this is the sales team”,
“this is the engineering team”). But that’s a separate post, probably. <a class=footnote-backref href=#fnref:2:bilithification-2023-7 title="Jump back to footnote 2 in the text">↩</a></p>
</li>
</ol>
</div></body>