Why automated tools won't help you write secure code

Whenever I mention to other developers that I'm interested in web application security, they almost always bring up automated scanning tools that check your config and dependencies for reported vulnerabilities.

These are good tools to be using. They are worth your time to investigate and will, if used regularly, reliably defend you from vulnerabilities discovered in your dependencies. This is a good thing.

And yes, in the short term, your infrastructure will be more secure. But no one ever talks about the security vulnerabilities in their own code.

Automated tools won't make you stop writing insecure code.

Keeping your software up to date with security patches is relatively easy. All it requires from you is that you pay attention and apply patches promptly. It's the bare minimum required to not have a trivially exploitable infrastructure.

Keeping the software you write yourself free from security flaws is much more difficult. A lot of developers believe they don't write code with security bugs, yet a whole industry exists whose sole occupation is to find and exploit insecure software.

Someone has to be writing the insecure software. The security bugs reported in your dependencies must have been introduced by a real-life developer somewhere.

All software has security vulnerabilities in it. If you disagree, I'd be happy to debate this with you, but for now I'd point out that history is not on your side. Even security experts who go out of their way to make secure software produce code with new classes of security vulnerability.

You write insecure software too. We all do. You can either ignore that fact and continue to write bad software, hoping that some automated tool will save you having to think about it, or you can get serious about making your code better.

How do I know my software is secure?

The only yardstick we have for measuring the security of a bit of software is how much time and attention has been spent on trying to find and exploit vulnerabilities in it.

That means that ideally, the security of your software should be measured in time/attention spent on finding bugs in it. Instead of flaunting how many bits your encryption uses, I'd feel much more secure about your software if you said you spent three days a month actively looking for vulnerabilities.

Cryptographic standards work in the same way. After a standard is released, it takes time for experts to attempt to break it before it can be recommended for general use. bcrypt is a good example of this: it was created as a key derivation function in 1999, but its use has only become widespread more recently.
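
As a concrete illustration, here's what using bcrypt for password storage typically looks like. This is a minimal sketch using the Python bcrypt package; the library choice is my assumption, not something the article prescribes.

    import bcrypt

    # Hash a password for storage. gensalt() generates a per-password salt and
    # embeds it, along with the work factor, in the resulting hash value.
    password = b"correct horse battery staple"
    hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

    # Verify a login attempt against the stored hash.
    if bcrypt.checkpw(password, hashed):
        print("password accepted")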

Look for vulnerabilities in your code

Focusing on automated tools allows developers to avoid the one thing they hate doing the most: actually thinking about the problem at hand.

While security researchers often hit a web application once, in a specific state, we developers preside over software as it changes over time. It grows as more developers join the team and as it accumulates more and more features. As it grows, so does its attack surface. As you produce software, you produce vulnerabilities.

Your mission, should you choose to accept it, is to fix more vulnerabilities than you produce.

For every new feature you implement, think about ways it could be twisted into abusing your application. If you're ever sending emails to users with content they've provided, could your application be used to relay spam? If you have users with privileges in your application, can they upgrade or modify those privileges by modifying request parameters?
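
To make that last question concrete, here's a minimal sketch of the privilege-escalation case as a Flask handler. Flask, the field names, and the in-memory user record are all illustrative assumptions, not anything from the article; the point is the difference between applying whatever parameters arrive and allowing only an explicit whitelist.

    from flask import Flask, request, abort

    app = Flask(__name__)

    # Illustrative in-memory user record; a real application would load this
    # from a database for the authenticated user.
    current_user = {"display_name": "alice", "email": "alice@example.com", "role": "member"}

    # The vulnerable pattern is to apply every submitted parameter:
    #     current_user.update(request.form)
    # which lets a client POST role=admin and quietly upgrade themselves.

    # Safer: only accept fields this endpoint is meant to change.
    ALLOWED_FIELDS = {"display_name", "email"}

    @app.route("/profile", methods=["POST"])
    def update_profile():
        unexpected = set(request.form) - ALLOWED_FIELDS
        if unexpected:
            abort(400)  # reject parameters this endpoint never exposed
        for field in ALLOWED_FIELDS & set(request.form):
            current_user[field] = request.form[field]
        return "ok"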

What if I don't find any vulnerabilities? Does that mean my code is secure?

If you don't find any vulnerabilities in your code, then I'd put forward that you aren't looking hard enough.

Charles Babbage and Bruce Schneier make this case better than I can. While they're talking about cryptosystems in particular, the same applies to the security mechanisms in your software:

One of the most singular characteristics of the art of deciphering is the strong conviction possessed by every person, even moderately acquainted with it, that he is able to construct a cipher which nobody else can decipher. Charles Babbage, inventor of the mechanical computer

Or, rephrased as "Schneier's Law":

Anyone can invent a security system that he himself cannot break. I've said this so often that Cory Doctorow has named it "Schneier's Law": When someone hands you a security system and says, "I believe this is secure," the first thing you have to ask is, "Who the hell are you?" Show me what you've broken to demonstrate that your assertion of the system's security means something. Bruce Schneier - computer security specialist

There are vulnerabilities in your software, and given enough time and attention, someone will find them. You have to be the judge of when looking for more exploits has diminishing returns, but if you've only just started looking for security errors, you're probably not there yet.

I'm also an amateur at this. Exploits that would take a security researcher hours to put together take me weeks or months to formulate. Often the exploits I come up with chain together multiple features in a totally non-obvious way that no one would have thought to combine.

The point is: for developers, finding vulnerabilities is hard.

Destroy more software

As developers, our natural inclination is to create rather than destroy. Finding creative and interesting ways to exploit software does not come naturally to us.

I'm not suggesting that you change specialty and worry about security full-time. All I'm asking from you is the following:

  1. That you accept that most software you write is probably insecure.
  2. That you occasionally spend a few cycles thinking about ways to exploit your software.

I think that if more developers did this, we would have ever so slightly more secure software out in production today. Not a huge improvement, but an improvement all the same.