Unsigned Commits

I’m not going to cryptographically sign my git commits, and you shouldn’t either.

I am going to tell you why I don’t think you should sign your Git commits, even though doing so with SSH keys is now easier than ever. But first, to contextualize my objection, I have a brief hypothetical for you, and then a bit of history from the evolution of security on the web.


(Image: a paper reading “Sign Here:” with a pen poised over it.)


It seems like these days, everybody’s signing all different kinds of papers.

Bank forms, permission slips, power of attorney; it seems like if you want to securely validate a document, you’ve gotta sign it.

So I have invented a machine that automatically signs every document on your desk, just in case it needs your signature. Signing is good for security, so you should probably get one, and turn it on, just in case something needs your signature on it.

We also want to make sure that verifying your signature is easy, so we will have them all notarized and duplicates stored permanently and publicly for future reference.

No? Not interested?


Hopefully, that sounded like a silly idea to you.

Most adults in modern civilization have learned that signing your name to a document has an effect. It is not merely decorative; the words in the document being signed have some specific meaning and can be enforced against you.

In some ways the metaphor of “signing” in cryptography is bad. One does not “sign” things with “keys” in real life. But here, it is spot on: a cryptographic signature can have an effect.

It should be an input to some software, one that is acted upon. Software does a thing differently depending on the presence or absence of a signature. If it doesn’t, the signature probably shouldn’t be there.


Consider the most venerable example of encryption and signing that we all deal with every day: HTTPS. Many years ago, browsers would happily display unencrypted web pages. The browser would also encrypt the connection, if the server operator had paid for an expensive certificate and correctly configured their server. If that operator messed up the encryption, it would pop up a helpful dialog box that would tell the user “This website did something wrong that you cannot possibly understand. Would you like to ignore this and keep working?” with buttons that said “Yes” and “No”.

Of course, these are not the precise words that were written. The words, as written, said things about “information you exchange” and “security certificate” and “certifying authorities” but “Yes” and “No” were the words that most users read. Predictably, most users just clicked “Yes”.

In the usual case, where users ignored these warnings, it meant that no user ever got meaningful security from HTTPS. It was a component of the web stack that did nothing but funnel money into the pockets of certificate authorities and occasionally present annoying interruptions to users.

In the case where the user carefully read and honored these warnings in the spirit they were intended, adding any sort of transport security to your website was a potential liability. If you got everything perfectly correct, nothing happened except the browser would display a picture of a small green padlock. If you made any small mistake, it would scare users off and thereby directly harm your business. You would only want to do it if you were doing something that put a big enough target on your site that you became unusually interesting to attackers, or were required to do so by some contractual obligation like credit card companies.

Keep in mind that the second case here is the best case.

In 2016, the browser makers noticed this problem and started taking some pretty aggressive steps towards actually enforcing the security that HTTPS was supposed to provide, by fixing the user interface to do the right thing. If your site didn’t have security, it would be shown as “Not Secure”, a subtle warning that would gradually escalate in intensity as time went on, correctly incentivizing site operators to adopt transport security certificates. On the user interface side, certificate errors would be significantly harder to disregard, making it so that users who didn’t understand what they were seeing would actually be stopped from doing the dangerous thing.

Nothing fundamental1 changed about the technical aspects of the cryptographic primitives or constructions being used by HTTPS in this time period, but socially, the meaning of an HTTP server signing and encrypting its requests changed a lot.


Now, let’s consider signing Git commits.

You may have heard that in some abstract sense you “should” be signing your commits. GitHub puts a little green “verified” badge next to commits that are signed, which is neat, I guess. They provide “security”. 1Password provides a nice UI for setting it up. If you’re not a 1Password user, GitHub itself recommends you put in just a few lines of configuration to do it with either a GPG, SSH, or even an S/MIME key.
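
For reference, the few lines of configuration in question, in the SSH case, look roughly like this (the key path is illustrative, and gpg.format ssh requires a reasonably recent Git):

git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/id_ed25519.pub
git config --global commit.gpgsign true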

But while GitHub’s documentation quite lucidly tells you how to sign your commits, its explanation of why is somewhat less clear. Their padlock is the word “Verified”; it’s still green. If you enable “vigilant mode”, you can make the blank “no verification status” option say “Unverified”, but not much else changes.

This is like the old-style HTTPS verification “Yes”/“No” dialog, except that there is not even an interruption to your workflow. They might put the “Unverified” status on there, but they’ve gone ahead and clicked “Yes” for you.

It is tempting to think that the “HTTPS” metaphor will map neatly onto Git commit signatures. It was bad when the web wasn’t using HTTPS, and the next step in that process was for Let’s Encrypt to come along and for the browsers to fix their implementations. Getting your certificates properly set up in the meanwhile and becoming familiar with the tools for properly doing HTTPS was unambiguously a good thing for an engineer to do. I did, and I’m quite glad I did so!

However, there is a significant difference: signing and encrypting an HTTPS request is ephemeral; signing a Git commit is functionally permanent.

This ephemeral nature meant that errors in the early HTTPS landscape were easily fixable. Earlier I mentioned that there was a time where you might not want to set up HTTPS on your production web servers, because any small screw-up would break your site and thereby your business. But if you were really skilled and you could see the future coming, you could set up monitoring, avoid these mistakes, and rapidly recover. These mistakes didn’t need to badly break your site.

We can extend the analogy to HTTPS, but we have to take a detour into one of the more unpleasant mistakes in HTTPS’s history: HTTP Public Key Pinning, or “HPKP”. The idea with HPKP was that you could publish a record in an HTTP header where your site commits2 to using certain certificate authorities for a period of time, where that period of time could be “forever”. Attackers gonna attack, and attack they did. Even without getting attacked, a site could easily commit “HPKP Suicide” where they would pin the wrong certificate authority with a long timeline, and their site was effectively gone for every browser that had ever seen those pins. As a result, after a few years, HPKP was completely removed from all browsers.
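
Schematically, the header in question looked like this (the pin values stand in for base64-encoded hashes of public keys):

Public-Key-Pins: pin-sha256="<hash of your CA's key>"; pin-sha256="<hash of a backup key>"; max-age=63072000; includeSubDomains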

Git commit signing is even worse. With HPKP, you could easily make terrible mistakes with permanent consequences even though you knew the exact meaning of the data you were putting into the system at the time you were doing it. With signed commits, you are saying something permanently, but you don’t really know what it is that you’re saying.


Today, what is the benefit of signing a Git commit? GitHub might present it as “Verified”. It’s worth noting that only GitHub will do this, since they are the root of trust for this signing scheme. So, by signing commits and registering your keys with GitHub, you are, at best, helping to lock in GitHub as a permanent piece of infrastructure that is even harder to dislodge because they are not only where your code is stored, but also the arbiters of whether or not it is trustworthy.

In the future, what is the possible security benefit? If we all collectively decide we want Git to be more secure, then we will need to meaningfully treat signed commits differently from unsigned ones.

There’s a long tail of unsigned commits several billion entries long. And those are in the permanent record as much as the signed ones are, so future tooling will have to be able to deal with them. If, as stewards of Git, we wish to move towards a more secure Git, as the stewards of the web moved towards a more secure web, we do not have the option that the web did. In the browser, the meaning of a plain-text HTTP or incorrectly-signed HTTPS site changed, in order to encourage the site’s operator to change the site to be HTTPS.

In contrast, the meaning of an unsigned commit cannot change, because there are zillions of unsigned commits lying around in critical infrastructure and we need them to remain there. Commits cannot meaningfully be changed to become signed retroactively. Unlike an online website, they are part of a historical record, not an operating program. So we cannot establish the difference in treatment by changing how unsigned commits are treated.

That means that tooling maintainers will need to introduce some difference in behavior that provides an incentive. With HTTPS, the binary choice was clear: don’t present sites with incorrect, potentially compromised configurations to users. The question was just how to achieve that. With Git commits, the difference in treatment of a “trusted” commit is far less clear.

If you will forgive me a slight straw-man here, one possible naive interpretation of a “trusted” signed commit is that it’s OK to run in CI. Conveniently, it’s not simply “trusted” in a general sense: if you signed it, it’s trusted to be from you, specifically. Surely it’s fine if we bill the CI costs for validating the PR that includes that signed commit to your GitHub account?

Now, someone can piggy-back off a 1-line typo fix that you made on top of an unsigned commit to some large repo, making you implicitly responsible for transitively signing all unsigned parent commits, even though you haven’t looked at any of the code.

Remember, also, that the only central authority that is practically trustable at this point is your GitHub account. That means that if you are using a third-party CI system, even if you’re using a third-party Git host, you can only run “trusted” code if GitHub is online and responding to requests for its “get me the trusted signing keys for this user” API. This also adds a lot of value to a GitHub credential breach, strongly motivating attackers to sneakily attach their own keys to your account so that their commits in unrelated repos can be “Verified” by you.

Let’s review the pros and cons of turning on commit signing now, before you know what it is going to be used for:

Pro:

  • Green “Verified” badge

Con:

  • Unknown, possibly unlimited future liability for the consequences of running code in a commit you signed
  • Further implicitly cementing GitHub as a centralized trust authority in the open source world
  • Introducing unknown reliability problems into infrastructure that relies on commit signatures
  • A temporary breach of your GitHub credentials now leads to potentially permanent consequences if someone can smuggle a new trusted key in there
  • New kinds of ongoing process overhead as commit-signing keys become new permanent load-bearing infrastructure, like “what do I do with expired keys”, “how often should I rotate these”, and so on

I feel like the “Con” list is coming out ahead.


That probably seemed like increasingly unhinged hyperbole, and it was.

In reality, the consequences are unlikely to be nearly so dramatic. The status quo has a very high amount of inertia, and probably the “Verified” badge will remain the only visible difference, except for a few repo-specific esoteric workflows, like pushing trust verification into offline or sandboxed build systems. I do still think that there is some potential for nefariousness around the “unknown and unlimited” dimension of any future plans that might rely on verifying signed commits, but any flaws are likely to be subtle attack chains and not anything flashy and obvious.

But I think that one of the biggest problems in information security is a lack of threat modeling. We encrypt things, we sign things, we institute rotation policies and elaborate useless rules for passwords, because we are looking for a “best practice” that is going to save us from having to think about what our actual security problems are.

I think the actual harm of signing git commits is to perpetuate an engineering culture of unquestioningly cargo-culting sophisticated and complex tools like cryptographic signatures into new contexts where they have no use.

Just from a baseline utilitarian philosophical perspective, for a given action A, all else being equal, it’s always better not to do A, because taking an action always has some non-zero opportunity cost even if it is just the time taken to do it. Epsilon cost and zero benefit is still a net harm. This is even more true in the context of a complex system. Any action taken in response to a rule in a system is going to interact with all the other rules in that system. You have to pay complexity-rent on every new rule. So an apparently-useless embellishment like signing commits can have potentially far-reaching consequences in the future.

Git commit signing itself is not particularly consequential. I have probably spent more time writing this blog post than the sum total of all the time wasted by all programmers configuring their git clients to add useless signatures; even the relatively modest readership of this blog will likely transfer more data reading this post than all those signatures will take to transmit to the various git clients that will read them. If I just convince you not to sign your commits, I don’t think I’m coming out ahead in the felicific calculus here.

What I am actually trying to point out here is that it is useful to carefully consider how to avoid adding junk complexity to your systems. One area where junk tends to leak into designs and cultures particularly easily is in intimidating subjects like trust and safety, where it is easy to get anxious and convince ourselves that piling on more stuff is safer than leaving things simple.

If I can help you avoid adding even a little bit of unnecessary complexity, I think it will have been well worth the cost of the writing, and the reading.

Acknowledgments

Thank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support my work as a sponsor! I am also available for consulting work if you think your organization could benefit from expertise on topics such as “What else should I not apply a cryptographic signature to?”.


  1. Yes yes I know about heartbleed and Bleichenbacher attacks and adoption of forward-secret ciphers and CRIME and BREACH and none of that is relevant here, okay? Jeez. 

  2. Do you see what I did there. 

Your Text Editor (Probably) Isn’t Malware Any More

Updating a post from 2015, I briefly discuss the modern editor-module threat landscape.

In 2015, I wrote one of my more popular blog posts, “Your Text Editor Is Malware”, about the sorry state of security in text editors in general, but particularly in Emacs and Vim.

It’s been nearly a decade now, so I thought I’d take a moment to survey the world of editor plugins and see where we are today. Mostly, this is to allay fears, since (in today’s landscape) that post is unreasonably alarmist and inaccurate, but people are still reading it.

  • Problem: vim.org is not available via https. Fixed? Yep! http://www.vim.org/ redirects to https://www.vim.org/ now.
  • Problem: Emacs’s HTTP client doesn’t verify certificates by default. Fixed? Mostly! The documentation is incorrect and there are some UI problems1, but it doesn’t blindly connect insecurely.
  • Problem: ELPA and MELPA supply plaintext-HTTP package sources. Fixed? Kinda. MELPA correctly responds to HTTP only with redirects to HTTPS, and ELPA at least offers HTTPS and uses HTTPS URLs exclusively in the default configuration.
  • Problem: You have to ship your own trust roots for Emacs. Fixed? Fixed! The default installation of Emacs on every platform I tried (including Windows) seems to be providing trust roots.
  • Problem: MELPA offers to install code off of a wiki. Fixed? Yes. Wiki packages were disabled entirely in 2018.

The big takeaway here is that the main issue of there being no security whatsoever on Emacs and Vim package installation and update has been fully corrected.

Where To Go Next?

Since I believe that post was fairly influential, in particular in getting MELPA to tighten up its security, let me take another big swing at a call to action here.

More modern editors have made greater strides towards security. VSCode, for example, has enabled the Chromium sandbox and added some level of process separation. Emacs has not done much here yet, but over the years it has consistently surprised me with its ability to catch up to its more modern competitors, so I hope it will surprise me here as well.

Even for VSCode, though, this sandbox still seems pretty permissive — plugins still seem to execute with the full trust of the editor itself — but it's a big step in the right direction. This is a much bigger task than just turning on HTTPS, but I really hope that editors start taking the threat of rogue editor packages seriously before attackers do, and finding ways to sandbox and limit the potential damage from third-party plugins, maybe taking a cue from other tools.

Acknowledgments

Thank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support my work as a sponsor!


  1. the documentation still says “gnutls-verify-error” defaults to nil and that means no certificate verification, and maybe it does do that if you are using raw TLS connections, but in practice, url-retrieve-synchronously does appear to present an interactive warning before proceeding if the certificate is invalid or expired. It still has yet to catch up with web browsers from 2016, in that it just asks you “do you want to do this horribly dangerous thing? y/n” but that is a million times better than proceeding without user interaction. 

Never Run ‘python’ In Your Downloads Folder

Python can execute code. Make sure it executes only the code you want it to.

One of the wonderful things about Python is the ease with which you can start writing a script - just drop some code into a .py file, and run python my_file.py. Similarly it’s easy to get started with modularity: split my_file.py into my_app.py and my_lib.py, and you can import my_lib from my_app.py and start organizing your code into modules.
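
To make that concrete, a minimal sketch of that split (the function is invented purely for illustration):

# my_lib.py
def greet(name):
    return "Hello, " + name

# my_app.py
import my_lib

print(my_lib.greet("world"))

Running python my_app.py prints the greeting, because Python finds my_lib.py sitting right next to my_app.py.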

However, the details of the machinery that makes this work have some surprising, and sometimes very security-critical consequences: the more convenient it is for you to execute code from different locations, the more opportunities an attacker has to execute it as well...

Python needs a safe space to load code from

Here are three critical assumptions embedded in Python’s security model:

  1. Every entry on sys.path is assumed to be a secure location from which it is safe to execute arbitrary code.
  2. The directory where the “main script” is located is always on sys.path.
  3. When invoking python directly, the current directory is treated as the “main script” location, even when passing the -c or -m options.
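
Assumptions 1 and 3 combine in a way that is easy to demonstrate (the surprise.py filename is invented for illustration, and this assumes a default Python configuration):

~$ cd Downloads
~/Downloads$ echo 'print("importable from the current directory")' > surprise.py
~/Downloads$ python -c "import surprise"
importable from the current directory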

If you’re running a Python application that’s been installed properly on your computer, the only location outside of your Python install or virtualenv that will be automatically added to your sys.path (by default) is the location where the main executable, or script, is installed.

For example, if you have pip installed in /usr/bin, and you run /usr/bin/pip, then only /usr/bin will be added to sys.path by this feature. Anything that can write files to that directory can already make you, or your system, run stuff, so it’s a pretty safe place. (Consider what would happen if your ls executable got replaced with something nasty.)

However, one emerging convention is to prefer calling /path/to/python -m pip in order to avoid the complexities of setting up $PATH properly, and to avoid dealing with divergent documentation of how scripts are installed on Windows (usually as .exe files these days, rather than .py files).

This is fine — as long as you trust that you’re the only one putting files into the places you can import from — including your working directory.

Your “Downloads” folder isn’t safe

As the category of attacks with the name “DLL Planting” indicates, there are many ways that browsers (and sometimes other software) can be tricked into putting files with arbitrary filenames into the Downloads folder, without user interaction.

Browsers are starting to take this class of vulnerability more seriously, and adding various mitigations to avoid allowing sites to surreptitiously drop files in your downloads folder when you visit them.1

Even with mitigations though, it will be hard to stamp this out entirely: for example, the Content-Disposition HTTP header’s filename* parameter exists entirely to allow the site to choose the filename that a download is saved under.
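
Schematically, a single response header is all it takes for a server to pick the name a file gets saved under:

Content-Disposition: attachment; filename*=UTF-8''pip.py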

Composing the attack

You’ve made a habit of using python -m pip to install stuff. You download a Python package from a totally trustworthy website that, for whatever reason, offers a Python wheel as a direct download instead of on PyPI. Maybe it’s internal, maybe it’s a pre-release; whatever. So you download totally-legit-package.whl, and then:

~$ cd Downloads
~/Downloads$ python -m pip install ./totally-legit-package.whl

This seems like a reasonable thing to do, but unbeknownst to you, two weeks ago, a completely different site you visited had some XSS JavaScript on it that downloaded a pip.py with some malware in it into your downloads folder.

Boom.

Demonstrating it

Here’s a quick demonstration of the attack:

~$ mkdir attacker_dir
~$ cd attacker_dir
~/attacker_dir$ echo 'print("lol ur pwnt")' > pip.py
~/attacker_dir$ python -m pip install requests
lol ur pwnt

PYTHONPATH surprises

Just a few paragraphs ago, I said:

If you’re running a Python application that’s been installed properly on your computer, the only location outside of your Python install or virtualenv that will be automatically added to your sys.path (by default) is the location where the main executable, or script, is installed.

So what is that parenthetical “by default” doing there? What other directories might be added?

Any entries on your $PYTHONPATH environment variable. You wouldn’t put your current directory on $PYTHONPATH, would you?

Unfortunately, there’s one common way that you might have done so by accident.

Let’s simulate a “vulnerable” Python application:

# tool.py
try:
    import optional_extra
except ImportError:
    print("extra not found, that's fine")

Make 2 directories: install_dir and attacker_dir. Drop this in install_dir. Then, cd attacker_dir and put our sophisticated malware there, under the name used by tool.py:

# optional_extra.py
print("lol ur pwnt")

Finally, let’s run it:

~/attacker_dir$ python ../install_dir/tool.py
extra not found, that's fine

So far, so good.

But, here’s the common mistake. Most places that still recommend PYTHONPATH recommend adding things to it like so:

export PYTHONPATH="/new/useful/stuff:$PYTHONPATH";

Intuitively, this makes sense; if you’re adding project X to your $PYTHONPATH, maybe project Y had already added something, maybe not; you never want to blow it away and replace what other parts of your shell startup might have done with it, especially if you’re writing documentation that lots of different people will use.

But this idiom has a critical flaw: the first time it’s invoked, if $PYTHONPATH was previously either empty or unset, this then includes an empty string, which resolves to the current directory. Let’s try it:

~/attacker_dir$ export PYTHONPATH="/a/perfectly/safe/place:$PYTHONPATH";
~/attacker_dir$ python ../install_dir/tool.py
lol ur pwnt

Oh no! Well, just to be safe, let’s empty out $PYTHONPATH and try it again:

~/attacker_dir$ export PYTHONPATH="";
~/attacker_dir$ python ../install_dir/tool.py
lol ur pwnt

Still not safe!

What’s happening here is that if PYTHONPATH is empty, that is not the same thing as it being unset. From within Python, this is the difference between os.environ.get("PYTHONPATH") == "" and os.environ.get("PYTHONPATH") == None.
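
A quick way to check which of those states you’re in, and a reminder of why the set-but-empty case still matters:

import os

value = os.environ.get("PYTHONPATH")
if value is None:
    print("PYTHONPATH is unset")
elif value == "":
    # Set but empty: that empty entry is what resolves to the current directory.
    print("PYTHONPATH is set, but empty")
else:
    print("PYTHONPATH is:", value)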

If you want to be sure you’ve cleared $PYTHONPATH from a shell (or somewhere in a shell startup), you need to use the unset command:

~/attacker_dir$ unset PYTHONPATH
~/attacker_dir$ python ../install_dir/tool.py
extra not found, that's fine

Setting PYTHONPATH used to be the most common way to set up a Python development environment; hopefully it’s mostly fallen out of favor, with virtualenvs serving this need better. If you’ve got an old shell configuration that still sets a $PYTHONPATH that you don’t need any more, this is a good opportunity to go ahead and delete it.

However, if you do need an idiom for “appending to” PYTHONPATH in a shell startup, use this technique:

export PYTHONPATH="${PYTHONPATH:+${PYTHONPATH}:}new_entry_1"
export PYTHONPATH="${PYTHONPATH:+${PYTHONPATH}:}new_entry_2"

In both bash and zsh, this results in

$ echo "${PYTHONPATH}"
new_entry_1:new_entry_2

with no extra colons or blank entries on your $PYTHONPATH variable now.

Finally: if you’re still using $PYTHONPATH, be sure to always use absolute paths!
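
For instance, building on the idiom above (the project path is invented for illustration):

# A relative entry depends on wherever the shell happens to start; avoid it.
# An absolute entry always points at the same place:
export PYTHONPATH="${PYTHONPATH:+${PYTHONPATH}:}$HOME/src/my_project"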

There are a bunch of variant unsafe behaviors related to inspecting files in your Downloads folder by doing anything interactive with Python. Other risky activities:

  • Running python ~/Downloads/anything.py (even if anything.py is itself safe) from anywhere - as it will add your downloads folder to sys.path by virtue of anything.py’s location.
  • Jupyter Notebook puts the directory that the notebook is in onto sys.path, just like Python puts the script directory there. So jupyter notebook ~/Downloads/anything.ipynb is just as dangerous as python ~/Downloads/anything.py.

Get those scripts and notebooks out of your downloads folder before you run ’em!

But cd Downloads and then doing anything interactive remains a problem too:

  • Running a python -c command that includes an import statement while in your ~/Downloads folder
  • Running python interactively and importing anything while in your ~/Downloads folder

Remember that ~/Downloads/ isn’t special; it’s just one place where unexpected files with attacker-chosen filenames might sneak in. Be on the lookout for other locations where this is true. For example, if you’re administering a server where the public can upload files, make extra sure that neither your application nor any administrator who might run python ever does cd public_uploads.

Maybe consider changing the code that handles uploads to mangle file names to put a .uploaded at the end, avoiding the risk of a .py file getting uploaded and executed accidentally.
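
A minimal sketch of that renaming idea (the helper name and suffix handling are mine, not a drop-in implementation):

import os

def quarantined_name(original_filename):
    # Discard any attacker-supplied directory components, then append a
    # suffix so the result can never be imported or run as a .py module.
    return os.path.basename(original_filename) + ".uploaded"

print(quarantined_name("pip.py"))  # pip.py.uploaded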

Mitigations

If you have tools written in Python that you want to use while in your downloads folder, make a habit of typing the path to the script (/path/to/venv/bin/pip) rather than invoking the module (/path/to/venv/bin/python -m pip).

In general, just avoid ever having ~/Downloads as your current working directory, and move any software you want to use to a more appropriate location before launching it.

It’s important to understand where Python gets the code that it’s going to be executing. Giving someone the ability to execute even one line of arbitrary Python is equivalent to giving them full control over your computer!

Why I wrote this article

When writing a “tips and tricks” article like this about security, it’s very easy to imply that I, the author, am very clever for knowing this weird bunch of trivia, and the only way for you, the reader, to stay safe, is to memorize a huge pile of equally esoteric stuff and constantly be thinking about it. Indeed, a previous draft of this post inadvertently did just that. But that’s a really terrible idea and not one that I want to have any part in propagating.

So if I’m not trying to say that, then why post about it? I’ll explain.

Over many years of using Python, I’ve infrequently, but regularly, seen users confused about the locations that Python loads code from. One variety of this confusion is when people put their first program that uses Twisted into a file called twisted.py. That shadows the import of the library, breaking everything. Another manifestation of this confusion is a slow trickle of confused security reports where a researcher drops a module into a location where Python is documented to load code from — like the current directory in the scenarios described above — and then loads it, thinking that this reflects an exploit because it’s executing arbitrary code.

Any confusion like this — even if the system in question is “behaving as intended”, and can’t readily be changed — is a vulnerability that an attacker can exploit.

System administrators and developers are high-value targets in the world of cybercrime. If you hack a user, you get that user’s data; but if you hack an admin or a dev, and you do it right, you could get access to thousands of users whose systems are under the administrator’s control or even millions of users who use the developers’ software.

Therefore, while “just be more careful all the time” is not a sustainable recipe for safety, to some extent, those of us acting on our users’ behalf do have a greater obligation to be more careful. At least, we should be informed about the behavior of our tools. Developer tools, like Python, are inevitably power tools which may require more care and precision than the average application.

Nothing I’ve described above is a “bug” or an “exploit”, exactly; I don’t think that the developers of Python or Jupyter have done anything wrong; the system works the way it’s designed and the way it’s designed makes sense. I personally do not have any great ideas for how things could be changed without removing a ton of power from Python.

One of my favorite safety inventions is the SawStop. Nothing was wrong with the way table saws worked before its invention; they were extremely dangerous tools that performed an important industrial function. A lot of very useful and important things were made with table saws. Yet, it was also true that table saws were responsible for a disproportionate share of wood-shop accidents, and, in particular, lost fingers. Despite plenty of care taken by experienced and safety-conscious carpenters, the SawStop still saves many fingers every year.

So by highlighting this potential danger I also hope to provoke some thinking among some enterprising security engineers out there. What might be the SawStop of arbitrary code execution for interactive interpreters? What invention might be able to prevent some of the scenarios I describe above without significantly diminishing the power of tools like Python?

Stay safe out there, friends.


Acknowledgments

Thanks very much to Paul Ganssle, Nathaniel J. Smith, Itamar Turner-Trauring and Nelson Elhage for substantial feedback on earlier drafts of this post.

Any errors remain my own.


  1. Restricting which sites can drive-by drop files into your downloads folder is a great security feature, except the main consequence of adding it is that everybody seems to be annoyed by it, not understand it, and want to turn it off.

Careful With That PyPI

PyPI credentials are important. Here are some tips for securing them a little better.

Too Many Secrets

A wise man once said, “you shouldn’t use ENV variables for secret data”. In large part, he was right, for all the reasons he gives (and you should read them). Filesystem locations are usually a better operating system interface to communicate secrets than environment variables; fewer things can intercept an open() than can read your process’s command-line or calling environment.

One might say that files are “more secure” than environment variables. To his credit, Diogo doesn’t, for good reason: one shouldn’t refer to the superiority of such a mechanism as being “more secure” in general, but rather, as better for a specific reason in some specific circumstance.

Supplying your PyPI password to tools you run on your personal machine is a very different case than providing a cryptographic key to a containerized application in a remote datacenter. In this case, based on the constraints of the software presently available, I believe an environment variable provides better security, if you use it correctly.

Popping A Shell By Any Other Name

If you upload packages to the python package index, and people use those packages, your PyPI password is an extremely high-privilege credential: effectively, it grants a time-delayed arbitrary code execution privilege on all of the systems where anyone might pip install your packages.

Unfortunately, the suggested mechanism to manage this crucial, potentially world-destroying credential is to just stick it in an unencrypted file.
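
Concretely, the file in question is traditionally ~/.pypirc, which ends up looking something like this (values invented):

[distutils]
index-servers =
    pypi

[pypi]
username: my-pypi-username
password: my-extremely-important-secret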

The authors of this documentation know this is a problem; the authors of the tooling know too (and, given that these tools are all open source and we all could have fixed them to be better about this, we should all feel bad).

Leaving the secret lying around on the filesystem is a form of ambient authority; a permission you always have, but only sometimes want. One of the worst things about this is that you can easily forget it’s there if you don’t use these credentials very often.

The keyring is a much better place, but even it can be a slightly scary place to put such a thing, because it’s still easy to put it into a state where some random command could upload a PyPI release without prompting you. PyPI is forever, so we want to measure twice and cut once.

Luckily, even more secure places exist: password managers. If you use https://1password.com or https://www.lastpass.com, both offer command-line interfaces that integrate nicely with PyPI. If you use 1password, you’ll really want https://stedolan.github.io/jq/ (apt-get install jq, brew install jq) to slice & dice its command-line.

The way that I manage my PyPI credentials is that I never put them on my filesystem, or even into my keyring; instead, I leave them in my password manager, and very briefly toss them into the tools that need them via an environment variable.

First, I have the following shell function, to prevent any mistakes:

function twine () {
    echo "Use dev.twine or prod.twine depending on where you want to upload.";
    return 1;
}

For dev.twine, I configure twine to always only talk to my local DevPI instance:

function dev.twine () {
    env TWINE_USERNAME=root \
        TWINE_PASSWORD= \
        TWINE_REPOSITORY_URL=http://127.0.0.1:3141/root/plus/ \
        twine "$@";
}

This way I can debug Twine, my setup.py, and various test-upload things without ever needing real credentials at all.

But, OK. Eventually, I need to actually get the credentials and do the thing. How does that work?

1Password

1password’s command line is a little tricky to log in to (you have to eval its output, it’s not just a command), so here’s a handy shell function that will do it.

function opme () {
    # Log this shell in to 1password.
    if ! env | grep -q OP_SESSION; then
        eval "$(op signin "$(jq -r '.latest_signin' ~/.op/config)")";
    fi;
}

Then, I have this little helper for slicing out a particular field from the OP JSON structure:

function _op_field () {
    jq -r '.details.fields[] | select(.name == "'"${1}"'") | .value';
}

And finally, I use this to grab the item I want (named, memorably enough, “PyPI”) and invoke Twine:

function prod.twine () {
    opme;
    local pypi_item="$(op get item PyPI)";
    env TWINE_USERNAME="$(echo ${pypi_item} | _op_field username)" \
        TWINE_PASSWORD="$(echo "${pypi_item}" | _op_field password)" \
        twine "$@";
}

LastPass

For LastPass, you can just log in (for all shells; it’s a little less secure) via lpass login; if you’ve logged in before you often don’t even have to do that, and it will just prompt you when running commands that require you to be logged in; so we don’t need the preamble that 1Password’s command line needed.

Its version of prod.twine looks quite similar, but its plaintext output obviates the need for jq:

function prod.twine () {
    env TWINE_USERNAME="$(lpass show PyPI --username)" \
        TWINE_PASSWORD="$(lpass show PyPI --password)" \
        twine "$@";
}

In Conclusion

“Keep secrets out of your environment” is generally a good idea, and you should always do it when you can. But, better a moment in your process environment than an eternity on your filesystem. Environment-based configuration can be a very useful stopgap for limiting the lifetimes of credentials when your tools don’t support more sophisticated approaches to secret storage.1

Post Script

If you are interested in secure secret storage, my micro-project secretly might be of interest. Right now it doesn’t do a whole lot; it’s just a small wrapper around the excellent keyring module and the pinentry / pinentry-mac password prompt tools. secretly presents an interface both for prompting users for their credentials without requiring the command-line or env vars, and for saving them away in keychain, for tools that need to pull in an API key and don’t want to make the user manually edit a config file first.
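
For a sense of the layer underneath, the keyring module’s API that secretly builds on is about this simple (the service and account names are invented):

import keyring

keyring.set_password("pypi.example", "my-username", "my-secret-password")
assert keyring.get_password("pypi.example", "my-username") == "my-secret-password"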


  1. Really, PyPI should have API keys that last for some short amount of time, that automatically expire so you don’t have to freak out if you gave somebody a 5-year-old laptop and forgot to wipe it first. But again, if I wanted that so bad, I should have implemented it myself... 

Sourceforge Update

Authenticate downloaded binaries from sourceforge a little more.

When I wrote my previous post about Sourceforge, things were looking pretty grim for the site; I (rightly, I think) slammed them for some pretty atrocious security practices.

I invited the SourceForge ops team to get in touch about it, and, to their credit, they did. Even better, they didn't ask for me to take down the article, or post some excuse; they said that they knew there were problems and they were working on a long-term plan to address them.

This week I received an update from said ops, saying:

We have converted many of our mirrors over to HTTPS and are actively working on the rest + gathering new ones. The converted ones happen to be our larger mirrors and are prioritized.

We have added support for HTTPS on the project web. New projects will automatically start using it. Old projects can switch over at their convenience as some of them may need to adjust it to properly work. More info here:

https://sourceforge.net/blog/introducing-https-for-project-websites/

Coincidentally, right after I received this email, I installed a macOS update, which means I needed to go back to Sourceforge to grab an update to my boot manager. This time, I didn't have to do any weird tricks to authenticate my download: the HTTPS project page took me to an HTTPS download page, which redirected me to an HTTPS mirror. Success!

(It sounds like there might still be some non-HTTPS mirrors in rotation right now, but I haven't seen one yet in my testing; for now, keep an eye out for that, just in case.)

If you host a project on Sourceforge, please go push that big "Switch to HTTPS" button. And thanks very much to the ops team at Sourceforge for taking these problems seriously and doing the hard work of upgrading their users' security.