Like performance, designing software for security requires at least a little bit of forethought. (Or rather, it requires being a consideration throughout the design process, since design isn’t really a thing you do fully ahead of time.) Charging ahead without regard for performance or security is likely to land you with significant investment in unworkable approaches, or mis-designed system boundaries that you can no longer actually fix.

And like a lot of other design considerations, there are traps we can fall into that lead us to make poor decisions.

A laundry list of common and mistaken beliefs

“Security is just about fixing bugs.” Usually this idea is contradicted by pointing to larger aspects of building secure systems, but this is supposed to be an essay specifically about software design. The trouble with seeing security bugs as just like any other kind of bug is that it fundamentally misses the most important part of security engineering. Bugs will happen. The best thing you can do besides fix them is to try to ensure they won’t be catastrophic. That is a design problem.

“Our strategy for building more secure software is to be really smaht and not make mistakes.” You’re human, you’ll fail.

“Everything is an absolute garbage fire and security is impossible!” Good design for security is absolutely possible, but it does take actual doing. This is an industry where bugs are expected, so things will be forever “broken” in the sense that there will always be more bugs; you have to have a frame of mind that doesn’t see that as an absolute failure. The actual engineering here is in carrying on well despite all that.

“Security is impossible man, NSA backdoored your CPU!” Telling the difference between paranoid conspiracy theory and actual fact can be somewhat difficult at times in the security industry, but there’s a simple razor for cutting through the stupid noise: security is economic. Locking the door to your house does next to nothing to stop someone determined to break in, but it raises the cost of attacking your house just enough to be effective. It’s nice that someone out there will look into whether CPUs have backdoors hidden away, but you don’t have to worry about it, because if the NSA wanted something from you, they’d show up and ask for it. (And you’d give it to them.) That’s way cheaper and easier, so of course that’s how they’d do it. A massive conspiracy to modify everyone’s CPUs and keep it quiet is expensive as all hell. Don’t sweat it.

“Something is either secure or it isn’t!” It’s not a binary. Again, it’s economic. You’re doing well when the potential benefits of a compromise are outweighed by the costs/risks of compromising the system. Making this judgment requires threat modeling, though. If you underestimate the value of a compromise, you could be in for a surprise. Attackers are fond of lateral movement and creative exploitation, but doing those things involves risk and cost, too.

Sometimes we have to pretend that something is “secure or not,” but that’s where best practices come in. If you’ve done your duty on hashing passwords, preventing SQL injection, and updating dependencies, and someone still runs off with your password database, you’re still doing better than most! (But much more likely, those measures will have been enough to prevent that from happening in the first place.)

“And for EXTRA security, we’ll do something bizarre!” A good example is the people who “salt AND PEPPER passwords!” This is a fundamental failure to appreciate which practices actually have security benefits; it’s just flailing around at anything that looks, at a cursory glance, like it might help. There’s a double failure in “password peppering.” First, the only situation it really helps in is a SQL injection vulnerability, because if attackers can compromise the software more thoroughly, they can just get the pepper too and nothing has changed. But SQL injection is one of those things that’s an absolute failure of good engineering practice, because it’s 100% solvable. So go solve it.
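
“Solving it” means never building queries out of strings. Here’s a minimal sketch with Python’s standard-library sqlite3 module; the database file, table, and columns are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect("app.db")  # hypothetical database

def find_user(username: str):
    # Parameterized query: the driver treats the username strictly as data,
    # never as SQL, so there is nothing to "inject."
    cur = conn.execute(
        "SELECT id, username FROM users WHERE username = ?",
        (username,),
    )
    return cur.fetchone()

# Never do this; string interpolation is exactly what makes injection possible:
# conn.execute(f"SELECT id FROM users WHERE username = '{username}'")
```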

Second, it rests on the mistaken belief that “more is better.” Peppering is like putting the biggest, baddest deadbolt known to man on your front door. Okay, so an attacker just uses the window instead. Proper salting and hashing (with something like bcrypt) and preventing use of the most common passwords (there’s a service for that now) are enough to solve the problem. You’ve locked the door. Your security vulnerabilities are now elsewhere; don’t waste more time on this one.
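
To make “proper salting and hashing” concrete, here is a minimal sketch using the third-party bcrypt package, with a tiny hard-coded denylist standing in for a real breached-password service (such as Pwned Passwords):

```python
import bcrypt

# Stand-in for a real common/breached-password check; this tiny set is
# only for illustration.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}

def store_password(plaintext: str) -> bytes:
    if plaintext.lower() in COMMON_PASSWORDS:
        raise ValueError("password is too common")
    # gensalt() produces a unique salt per password, and the salt is stored
    # inside the returned hash, so there is nothing extra to manage.
    return bcrypt.hashpw(plaintext.encode("utf-8"), bcrypt.gensalt())

def check_password(plaintext: str, stored_hash: bytes) -> bool:
    return bcrypt.checkpw(plaintext.encode("utf-8"), stored_hash)
```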

“Lol, even passwords are just security by obscurity, man.” No they aren’t. Don’t reason by glib reference to popular sayings. Think.

“We’re a Fortune 500 company, of course we’re the most important people around and we have the biggest concerns and need the best to-“ You know the type. They’ve got a security command center with lots of computer screens on walls and hanging from the ceiling. Very fancy looking. Meanwhile, their whole C-suite gets phished randomly, not even by a concerted effort.

Just as important as not underestimating the value of a target is not overestimating it. And the ego of many people and businesses leads them to do the stupidest shit imaginable. Combine this with a general failure to actually employ the basics, and you get ridiculous displays that accomplish nothing useful. When Russia wanted access to some computers to mess with elections in the US, they sent some phishing emails. Cost, benefit. It’s astounding the degree to which many decisions about security can be driven by looking like the decision makers are doing something important, instead of actually doing anything useful at all.

“Ugh, well obviously that’s the user’s fault, we’ll just have to train them better.” Every single widespread failure of “users” is actually a technology failure. The users didn’t update? Why? The root cause is your fault, not theirs. The users fell for phishing? Why? The root cause is your fault, not theirs.

Taking updates seriously—making sure your updates actually work and that you roll them out automatically (usually) without user intervention—is a solution to the updates problem. Complaining “ugh, why do you people suck?” isn’t. Rolling out $15 U2F security keys to all your employees is a solution to the phishing problem. Hiring people to deliberately try to phish your own employees and sending them to training when they fall for it… isn’t.
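
To illustrate the first point, here is a minimal sketch of an update check that needs no user intervention. The https://updates.example.com manifest endpoint and the manifest fields are hypothetical; a production updater would also verify a cryptographic signature on the artifact, not just a checksum:

```python
import hashlib
import json
import urllib.request

CURRENT_VERSION = "1.4.2"                                   # hypothetical
MANIFEST_URL = "https://updates.example.com/manifest.json"  # hypothetical endpoint

def check_and_fetch_update():
    """Return the new build's bytes, or None if we're already up to date."""
    with urllib.request.urlopen(MANIFEST_URL) as resp:
        manifest = json.load(resp)

    if manifest["version"] == CURRENT_VERSION:
        return None  # nothing to do; the user never gets nagged

    with urllib.request.urlopen(manifest["artifact_url"]) as resp:
        artifact = resp.read()

    # Refuse to install anything that doesn't match the published digest.
    # (A real updater should check a signature as well.)
    if hashlib.sha256(artifact).hexdigest() != manifest["sha256"]:
        raise RuntimeError("update artifact failed integrity check")

    return artifact  # hand off to the installer / restart logic
```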

The zeroth rule of security engineering is: it’s always the simple stuff

These are some of the things I’ve seen screwed up. This is a blog about software design, and this post grew until I had to split it into two parts. So we’ll talk next week about things to do better. But let’s conclude this week with a few things specifically about designing software that you should NOT do:

  • Don’t ignore security (or performance, for that matter) until the very end. So many of the best tools we can use are things we can do early. The most important question in software modularity is which modules are NOT dependencies. If we include thinking about privileges, we can further clarify our understanding of the system. What a module cannot do is just as important as what it does. The more we constrain a system, the easier it is to understand, and the less valuable it is to an attacker.

  • Don’t create “game over man” designs where one bug ends the world. The biggest security innovation of the past 20 years has been creating more security layers. Hardware (like TPMs) can protect itself from software. The kernel can protect itself even from root (e.g. secure boot). Root can protect itself from users. Users can protect their own accounts from their own software with sandboxing. Reduce the value of a compromise. Increase the cost of an attack. (There’s a small privilege-separation sketch after this list.)

  • Don’t ignore visibility. What’s your system doing? Knowing that is good for detecting bugs (logging exceptions and such). It’s good for monitoring performance (e.g. profiling). And it’s good for post-incident forensics. And heck, it’s good for finding out you were even hacked in the first place. (There’s a small logging sketch after this list, too.)

  • Don’t ship insecure defaults. Don’t have a default password; have a default process for randomly generating a new temporary password (sketched after this list). Don’t bind to an open port with no authentication, even if it’s only on the loopback interface. (Tesla got hit by an open Kubernetes admin service. No authentication by default, accidentally put it on the public internet, and oops!)

  • Don’t neglect the basics. Update dependencies. Deploy your own updates. Authenticate and encrypt communications. Properly store passwords. Support 2-factor authentication, especially hardware keys like U2F. Use least privilege, especially for the obviously dangerous code.
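
On the layering bullet above, here is a minimal sketch of reducing the value of a compromise, assuming a Unix-like system and a hypothetical unprivileged “worker” account. The process does the one thing that needs root, then permanently gives up root before touching anything attacker-controlled:

```python
import os
import pwd

def drop_privileges(username: str = "worker") -> None:
    # Assumes we started as root and that an unprivileged "worker" account exists.
    user = pwd.getpwnam(username)
    os.setgroups([])          # drop supplementary groups
    os.setgid(user.pw_gid)    # drop the group first, while we still can
    os.setuid(user.pw_uid)    # now irrevocably give up root

# Do the one thing that genuinely needs root...
secret = open("/etc/myservice/secret.conf", "rb").read()  # hypothetical file

# ...then drop privileges before handling any untrusted input.
drop_privileges()

# From here on, a bug that lets an attacker run code gets them the "worker"
# account, not the whole machine: the compromise is worth much less.
```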
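
On visibility: the same log stream can serve bug detection, performance monitoring, and forensics. A minimal sketch with Python’s standard logging module, around a hypothetical payment-handling function:

```python
import logging
import time

logging.basicConfig(
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
    level=logging.INFO,
)
log = logging.getLogger("payments")  # hypothetical subsystem name

def charge(account_id: str, amount_cents: int) -> None:
    start = time.monotonic()
    log.info("charge started account=%s amount_cents=%d", account_id, amount_cents)
    try:
        ...  # talk to the payment processor (omitted)
    except Exception:
        log.exception("charge failed account=%s", account_id)  # bug detection
        raise
    finally:
        # Performance visibility, and a forensic record of who charged what, when.
        log.info("charge finished account=%s elapsed_ms=%.1f",
                 account_id, (time.monotonic() - start) * 1000)
```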
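
And on insecure defaults: generating a per-install temporary password instead of shipping a fixed one is a few lines with Python’s secrets module. In a real installer you would store only a hash of this (see the bcrypt sketch above) and force a change on first login:

```python
import secrets

def generate_temp_password() -> str:
    # Unique and unguessable for this installation; never a fixed default.
    return secrets.token_urlsafe(16)

if __name__ == "__main__":
    print(f"Initial admin password (shown once): {generate_temp_password()}")
```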

Next week

This week is a little bit on the negative side. Here are mistakes people make; here are things commonly done that you should avoid. Next week we’ll talk a little bit more on the positive side of how to think about security during the design process.