
Falsehoods developers believe about security

I talk to many software developers about security. The following is a set of falsehoods that, in my experience, a lot of developers believe.

Developers that create insecure software are uniformly stupid

You might follow a news aggregator called Hacker News. Occasionally a startup founder who has just learned to code will request feedback on their first ever public-facing project. The projects they build are almost always vulnerable to forced browsing, i.e. being able to view or modify the data of other users by changing URL parameters and/or form fields. I've personally helped such founders fix these errors over email on at least three occasions.
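To make the failure concrete, here is a minimal sketch of the pattern in Python using Flask. The toy data store, routes and session handling are assumptions for illustration rather than code from any real project; the point is that the vulnerable handler trusts whatever ID appears in the URL, while the fixed handler checks ownership before returning anything.

# A minimal forced browsing (insecure direct object reference) sketch.
# The in-memory "database" stands in for whatever your ORM provides.
from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "example-only"

# invoice id -> owning user id and payload
INVOICES = {
    1: {"owner_id": 10, "amount": 120},
    2: {"owner_id": 11, "amount": 450},
}

# Vulnerable: the handler trusts the ID in the URL, so user 10 can read
# invoice 2 simply by changing /invoices/1 to /invoices/2.
@app.route("/invoices/<int:invoice_id>")
def show_invoice(invoice_id):
    invoice = INVOICES.get(invoice_id)
    if invoice is None:
        abort(404)
    return jsonify(invoice)

# Fixed: scope the lookup to the logged-in user before returning anything.
@app.route("/my/invoices/<int:invoice_id>")
def show_my_invoice(invoice_id):
    invoice = INVOICES.get(invoice_id)
    if invoice is None or invoice["owner_id"] != session.get("user_id"):
        abort(404)  # don't reveal whether the record exists at all
    return jsonify(invoice)

The same check applies to form fields and hidden IDs in POST requests: a client-supplied identifier should never be what establishes who owns a record.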

Bob, another developer who takes an interest in web application security, was working with a large organisation that employed some of the smartest developers he had ever worked with. The developers were the sort who speak at conferences and write white papers on industry best practices. These developers were obviously smarter, more experienced and simply better programmers than Bob, by any metric we have for measuring developer expertise. On day three, Bob found a forced browsing vulnerability that gave him the ability to drop the users table.

Even when Bob explained in detail how to fix the vulnerability, complete with a proof of concept, the developer who picked up the task failed to fix it. Eventually Bob had to go outside the usual process to get the commit that fixed the vulnerability into the codebase.

The failure was exactly the same, down to the line of code. In the first instance it was written by developers with months of experience; in the second, by developers with careers spanning decades. Based on Bob's experiences, I find it difficult to entertain the idea that the likelihood of writing secure software goes up with developer skill and experience. How good you are at creating software has little to do with how good you are at exploiting it.

It is possible to create completely secure software

Maybe it's possible that completely secure software exists. But it would take wilful ignorance of the history of our field to believe anyone has ever written anything like it.

All open source software of significance has been found to contain security vulnerabilities. Google for any open source software you can think of with 'security vulnerabilities' appended and try to find a single example that is free from security flaws.

Now think about your application software. Has it received anywhere near the level of peer review that popular open source projects have? Do you really think that what you and your team put together is going to be more secure than software worked on by hundreds of people?

Believing that you write perfectly secure software is like believing that you write software with no bugs in it. It's a nice fantasy, but it isn't borne out by material reality. All developers, regardless of experience or skill level, write insecure software. Once you realize this, you can begin taking steps to limit the damage it causes and to find and fix the security vulnerabilities in your code.

If only this software had security 'baked into it from the start' it would be secure

There is only one way to build secure software, and that is to regularly apply your creative and intellectual faculties to attacking it in production installations.

The security of running software is not measured by how much time the developers spent thinking about security. There's a single, cold, brutal metric that determines how secure it is: whether or not it contains exploitable security flaws.

Whether you're a brand new developer fresh out of university or a grizzled industry veteran, you are going to write insecure software at some point or another. The amount of time you spend thinking about security isn't going to change that. Your software either contains vulnerabilities or it doesn't, and it probably does.

If you're thinking about the lowest-hanging security issues from the start, then of course your software will be more secure than it would be otherwise. But that is no guarantee that the final product will be free from security errors.

All we have to do is use X new service or technology and our software will be secure

You should use all means at your disposal to detect vulnerabilities in your software. This includes static analysis, automated scanning and tools that check your dependencies for reported security vulnerabilities.
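As a rough sketch of the last of those, here is one way you might wire a dependency check into a build step using Python. It assumes the pip-audit tool is installed and follows the usual convention of exiting non-zero when it finds packages with reported vulnerabilities; swap in the equivalent tool for your own stack.

# Sketch of failing a build when a dependency audit reports known
# vulnerabilities. Assumes pip-audit is installed and exits non-zero
# when it finds vulnerable packages.
import subprocess
import sys

result = subprocess.run(["pip-audit"], capture_output=True, text=True)
print(result.stdout)
print(result.stderr, file=sys.stderr)

if result.returncode != 0:
    print("Dependency audit reported vulnerabilities; failing the build.")
    sys.exit(1)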

If you've ever found yourself watching late-night television, you've probably seen infomercials for various exercise and weight loss products. You'll have noticed that they're all prefaced with the phrase "to be used alongside a balanced diet and exercise regimen".

The balanced diet and healthy exercise regimen for your codebase is regular auditing by people who understand the common ways that software security fails.

I would trade all of the tools mentioned above for two or three developers on your team who are knowledgeable about security and occasionally take the time to probe the software for vulnerabilities. Aside from a team of on-call penetration testers, nothing will highlight potential weaknesses faster or more thoroughly.

No one will attack my little cat photo sharing application

Most developers assume that if their product is unheard of and has only just launched, a determined and sophisticated attacker wouldn't think to target it.

Reality isn't as simple as that. No matter how insignificant you think your site is, it is at the very least worth an automated scan, if only to add your application server to a botnet. Whenever a new vulnerability in server software is disclosed, attackers scan entire ranges of IP addresses for exploitable web servers, including the ones your app runs on.

Being added to a botnet means that the attacker likely has complete control over your app server: access to your database and the ability to run arbitrary code on your machine. Aside from launching denial-of-service attacks, there are many other nasty things your server might end up being used for. Attackers don't care what you were doing with your computers.

The opinion of application developers on software security matters

Bruce Schneier says it better than I could ever hope to:

Anyone can invent a security system that he himself cannot break. I've said this so often that Cory Doctorow has named it "Schneier's Law": When someone hands you a security system and says, "I believe this is secure," the first thing you have to ask is, "Who the hell are you?" Show me what you've broken to demonstrate that your assertion of the system's security means something.

Bruce Schneier, Schneier's Law

In your career as a developer, I'm willing to bet you've come across people who are confident even though their skills leave much to be desired. We're probably all guilty of it at one point or another when we're starting out. This is a well-documented phenomenon that you may have heard of: the Dunning-Kruger effect.

If you ever find yourself uttering the sentence "this software is secure" or some variant of it, unless you've spent at least a few years as a software security specialist, you probably don't know what you're talking about.

I'm writing a book on security, and the highest endorsement I can give any piece of software is "I couldn't find a vulnerability in it". That doesn't mean I believe it's secure. It means a security hobbyist had a crack at it and didn't find anything. I'm not qualified to assert anything further, and if you're reading this then you probably aren't either.

Wrap Up

A healthy attitude to security assumes that all software you build has security vulnerabilities in it somewhere. If you take the time to understand how software security mechanisms fail, you can protect yourself from the easiest forms of attack by regularly probing your applications for security flaws and promptly applying security patches to any software in your stack.