Web Application Security Talk at LRUG

Video of the talk on Skills Matter.

Intro

I'm Ali, a web application developer who's been building websites for nearly seven years. I'm not a security expert, more of a hobbyist, and this talk is the culmination of a few years of reading around the subject and applying it to my work where I can.

Motivation

As a developer, I don't think you can be extrinsically motivated to build secure software.

Imagine the following two scenarios:

  • You do everything right. You regularly probe for vulnerabilities and carefully consider the impact of each code change on your application's security. You welcome researchers to poke holes in your software and fix vulnerabilities as soon as possible. You keep your software stack patched and are able to roll out security updates at a moment's notice.
  • Your system is thoroughly compromised. You're completely oblivious to issues of security, and your servers have been taken over by attackers who leave no trace.

Both of these scenarios, from a non-technical stakeholder's point of view, look exactly the same. That is, you don't get a carrot for getting security right, and you don't get a stick for messing it up.

There are, however, a few intrinsic motivations for wanting to build more secure software:

  • The proud craftsman. Taking a feeling of pride in the work you produce might lead you to focus on making your software more robust.
  • A moral obligation towards your users. Users place their trust in the systems we build and expect that their data will remain confidential and untampered with.
  • Finding security vulnerabilities is fun. Finding and exploiting security flaws is an intellectual battle between you and whoever wrote the software. As battles go, it also happens to be really easy to win, most of the time.

Mindset

As developers, our mindset tends to be focused on analyzing problems, boiling them down to their core ideas and delivering solutions. We try to do this in a way that solves the problem, keeps accidental complexity in check and hopefully produces reusable software. This is the developer's mindset, and it's a good mindset to have.

In the book Cryptography Engineering, Bruce Schneier and his co-authors talk about the adversarial mindset. They give the metaphor of an engineer designing and building a bridge. To do so, the engineer has to work within certain tolerances: the bridge should be able to withstand high winds, take a reasonable load and perhaps survive a lightning strike.

The engineer does not have to consider that the wind might intentionally work towards destroying the bridge. He doesn't need to worry about it probing for weaknesses at certain times of day. He also doesn't need to worry that the wind and the lightning will conspire to destroy the bridge.

In the military and in software, we do have to consider an adversary who wants to intentionally compromise the integrity of the systems we build. It's this adversarial mindset that you'll need to adopt when looking for vulnerabilities in your system.

I used to work at a company called moo.com, and it was the first time I'd worked with a dedicated QA team. Whenever I marked a story as ready for QA, the QA staff would find some fault with it that I would have to fix. Over time, I figured out the majority of the classes of errors they were looking for and managed to pre-empt their objections.

This gave them a lot more time to find more interesting classes of bugs. An example might be that in a certain browser, if one were to get to the third stage of the order process, open a new tab and change their store and language, then flip back to the original tab and complete their order, their order would be delivered to the UK with US shipping and handling. The QA staff had to adopt an adversarial mindset to find this bug, above and beyond the attention a developer gives to finding and mitigating edge cases.

The end result of this is very robust software that handles edge cases gracefully. This is a good thing, and a direct result of the adversarial mindset in action.

Application Responsibilities

A web application usually has three primary security responsibilities:

Authentication

"Is this user who they claim to be?"

We typically implement this with a registration form for users to sign up with our application, and then authenticate users with an email/password combination that they select.

Intrinsically this system has a few conceptual problems, but let's assume it serves our purposes for now. In Rails land we tend to use Devise to implement authentication, and I think that if you stick to the Devise defaults then for the most part your authentication will be OK.

If you have hand-rolled your authentication, then a particular place to watch out for vulnerabilities is in your forgotten password functionality.
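If you do hand-roll it, here's a minimal sketch of the shape a safer reset flow can take (the User columns here are hypothetical). The important properties: tokens are long and random, only a digest of the token is stored, and tokens expire and are single-use.

    require "securerandom"
    require "digest"

    # Assumes hypothetical User columns: reset_token_digest and
    # reset_token_expires_at.
    class PasswordReset
      TOKEN_TTL = 3600 # tokens are valid for one hour

      # Generate an unguessable token, store only its digest, and return
      # the raw token so it can be emailed to the user.
      def self.create_for(user)
        token = SecureRandom.urlsafe_base64(32)
        user.update!(
          reset_token_digest:     Digest::SHA256.hexdigest(token),
          reset_token_expires_at: Time.now + TOKEN_TTL
        )
        token
      end

      # Look the user up by the token's digest; reject expired tokens and
      # clear the token on use so it can't be replayed.
      def self.redeem(token)
        user = User.find_by(reset_token_digest: Digest::SHA256.hexdigest(token))
        return nil if user.nil? || user.reset_token_expires_at < Time.now
        user.update!(reset_token_digest: nil, reset_token_expires_at: nil)
        user
      end
    end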

Session Management

"Is this the same user that just logged in?"

Since HTTP is a stateless protocol and we don't want to present users with a username/password challenge on every request, we need some way of authenticating a user for the duration of a session.

One way this is achieved is by generating a random value called a session id, and then storing data about that user's session in some sort of hash table (be that memcached, a database session store, etc.). We send the session id to the user in a cookie, and the user is expected to send the session id back with subsequent requests.
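A stripped-down illustration of the mechanism (Rails handles all of this for you; this just makes the moving parts visible):

    require "securerandom"

    SESSIONS = {} # stand-in for memcached, a database session store, etc.

    # Issue a session: the id must be long and random, or it can be guessed.
    def create_session(user_id)
      session_id = SecureRandom.hex(32) # 256 bits of randomness
      SESSIONS[session_id] = { user_id: user_id, created_at: Time.now }
      session_id # sent to the browser in a Set-Cookie header
    end

    # On each request, the id from the cookie is all that identifies the user.
    def current_user_id(session_id_from_cookie)
      session = SESSIONS[session_id_from_cookie]
      session && session[:user_id]
    end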

This means that if you can guess a session id, or find some way to trick a user into giving you theirs, you can masquerade as that user. You can then modify their information, change their password or email address, and go on to further compromise the system.

Access Control

"Is this user allowed to do what he's attempting?"

Access control (a.k.a. Authorization) is when you decide if a user should or shouldn't be allowed to perform a given action. In Ruby on Rails applications, this tends to be implemented through a combination of calling methods on current_user, whitelisting parameters (note: do more of that please!) and, if you're lucky, cancan.
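As a rough sketch of what that can look like with cancan (Project is a made-up model):

    # Abilities live in one place...
    class Ability
      include CanCan::Ability

      def initialize(user)
        user ||= User.new # not logged in
        can :read, Project, public: true
        can :manage, Project, user_id: user.id # users manage only their own
      end
    end

    # ...and controllers ask whether the current user may perform the action.
    class ProjectsController < ApplicationController
      def destroy
        @project = Project.find(params[:id])
        authorize! :destroy, @project # raises CanCan::AccessDenied otherwise
        @project.destroy
        redirect_to projects_path
      end
    end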

By far, this is where I see the majority of application-level vulnerabilities in Ruby on Rails apps. This ranges from just forgetting to do access control at all, to allowing users to modify fields and completely bypass authentication. The case studies we're about to go through are mostly about errors here.

Case Studies

Icebox

Icebox was a company that allowed you to back up your Dropbox files to Amazon Glacier. To do so you had to sign up with a username and password, and then enter your Dropbox and AWS credentials.

This involved a regular Rails form for editing your AWS credentials, with a URL structure of /aws_credentials/:id. However, no access control checking was done: by changing the URL you could view and edit other people's AWS credentials.

This meant that you could grab other people's credentials and consume compute resources on their accounts. Somewhat more insidiously, you could also secretly modify their data and keep copies of their Dropbox files on your own AWS account.

You might write this off as a newbie error or just carelessness on the part of the developers, but this is the most common way I see developers messing up access control (i.e. by not doing it at all).

Moral of this story: You should be able to point to the code that stops users from doing things they're not allowed to.
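In Rails terms the fix is often a one-liner. This is a reconstruction rather than Icebox's actual code, but assuming an AwsCredential model that belongs_to :user, the difference looks like this:

    # Vulnerable: any logged-in user can load any record by changing :id.
    def edit
      @aws_credential = AwsCredential.find(params[:id])
    end

    # Fixed: the lookup is scoped through the logged-in user, so the query
    # can only ever return credentials they own (and raises RecordNotFound,
    # i.e. a 404, otherwise).
    def edit
      @aws_credential = current_user.aws_credentials.find(params[:id])
    end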

Github

Disclaimer: I'm a huge Github fanboy and have nothing against them. This is in no way an indictment of Github, but it makes for an interesting case study.

Like most web applications, Github requires that you authenticate with a username and password. However, when working with git from the command line, Github also allows you to authenticate with an RSA keypair over SSH.

To make this work, as an authenticated user you can register public keys through a web-based UI. Anyone holding the corresponding private key can then authenticate with Github as the user who registered the public key.

Internally, Users have many Public keys, i.e. there's a database table called public_keys that has a column called user_id on it.

It turns out that through the web-based UI for managing your own public keys, you could modify the user_id field on any of your public keys. Since you hold the private halves of all of your own keypairs, switching the user_id on one of your public keys lets you authenticate as whichever user you pointed it at.

A guy called Egor Homakov discovered this, set the user_id of one of his public keys to that of DHH, and pushed a commit to Rails core to demonstrate the attack.

Moral: Always whitelist request parameters, even when users are modifying data they own. If there's no good reason a user should be allowed to modify a given bit of their own data, don't let them.
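At the time the Rails idiom for this was attr_accessible; in modern Rails it's strong parameters (which, incidentally, grew out of this very incident). Either way the shape is the same. A hypothetical sketch with strong parameters:

    class PublicKeysController < ApplicationController
      def update
        # Scope the lookup to keys the current user owns...
        @public_key = current_user.public_keys.find(params[:id])
        # ...and whitelist the attributes they may set. :user_id is not in
        # the permitted list, so a forged user_id field is simply dropped.
        @public_key.update!(params.require(:public_key).permit(:title, :key))
        redirect_to @public_key
      end
    end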

Anonymous todo-list Application

Imagine a todo-list application. The founders of this startup are very user-focused, and in their infinite wisdom decide that they will allow users to register for the app without verifying their email address.

The app launches, and three months later a new feature comes through the pipeline: users should be able to set a deadline on any given todo-list item, and once the deadline is a day or so away, the user should receive an email reminder containing the text of the task.

Now, taken on its own, this feature doesn't represent much of a security risk. But since we don't verify user emails, it would be trivial to write a small script that takes an email address per line on STDIN and, for each email, creates an account, sets up a todo-list item with the text "buytastyviagra.com" and then sets the deadline to a few days from now. The result is that you can use the todo-list application to send spam to whoever you want (as long as they're not already registered).
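A Ruby sketch of such a script (the endpoints and parameter names are made up):

    require "net/http"
    require "securerandom"

    # One throwaway account and one "reminder" per victim email address.
    # (https://todo.example.com and the form fields are hypothetical.)
    STDIN.each_line do |line|
      email = line.strip
      Net::HTTP.post_form(URI("https://todo.example.com/users"),
        "user[email]"    => email,
        "user[password]" => SecureRandom.hex(16))
      # In reality you'd keep the session cookie from the signup response
      # and send it with the next request.
      Net::HTTP.post_form(URI("https://todo.example.com/todos"),
        "todo[text]"     => "buytastyviagra.com",
        "todo[deadline]" => (Time.now + 2 * 24 * 3600).to_s)
    end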

Moral: Pay attention to how new features can impact the security of your application.

Recommendations

1. Find vulnerabilities, fix them if you can

  • Spend time actually looking for vulnerabilities. Code review usually highlights the low-hanging fruit.
  • Think about access control when you build software. If a user shouldn't be allowed to perform a certain action, you should be able to point to the code that stops him doing so. If you can't, it's likely that no one else can either.

2. Log strangeness, review regularly, build exploits

  • Log any security risks. These tend to be things that seem odd but you can't think of a way they could be used to compromise your security just yet.
  • Over time, develop exploits. In isolation these risks don't tend to be that big a deal. Over the course of a project, however, coupled with one or two other weird things, they sometimes flower into more interesting vulnerabilities.
  • Build a POC (if required), and fix them. For more interesting vulnerabilities, it's definitely a good idea to build a proof of concept and run it before and after you make a fix, making sure it no longer works after the code gets updated.

3. Keep your stack up to date (or at least patched)

  • Periodically check for security updates. Some of these happen without much fanfare in a pull request or changelog of one of your libraries. Once a week/month, check the security mailing lists for all of the software in your stack and make sure any security patches are applied.
  • Be ready to apply patches by hand. You don't want to be left at the mercy of your OS's package manager for security updates. You should be comfortable compiling your stack from source and applying patches. Hint: the time to learn how to do that is now, not when you have a remote code execution vulnerability in nginx and have to patch before someone thinks to scan port 80 on the IP range your infrastructure is in.

4. Get help

  • Have a page on your website called 'Security'. On it, describe your responsible disclosure policy and invite security researchers to investigate vulnerabilities in your software.
  • Start a bug bounty, where you offer a reward of between X and Y amounts depending on the severity of uncovered bugs. You can stipulate some rules of engagement, e.g. no DoS attacks, and please be gentle with user data.
  • Publicly thank researchers who find vulnerabilities. Have a 'special thanks to' page that calls out researchers who've found vulnerabilities in your software.

Cryptography

My opinion on cryptography in general comes from three activities:

  • Reading about cryptography. The number theory went somewhat over my head, but I think understanding the basic primitives and how they fit together is important for using cryptography effectively.
  • Breaking cryptography. I'm just getting started with breaking commonly used cryptography implementations, but this provides a lot of vital context. You can know how cryptography works by studying its internals, but to know why crypto works the way it does, you need to learn the attacks.
  • Observing other developers use cryptography. Developers who are not me tend to be very gung-ho about their use of cryptography. This must mean they are either extremely knowledgeable about cryptography, or hopelessly misguided.

Without learning common attacks on cryptography, it's very easy to get a feeling of security just from being in the proximity of big scary-looking blocks of Base64-encoded data.

Once you start learning how to break cryptography, though, your reaction is to start analyzing ciphertext to see if you can get any information out of it at all. If you can draw one or two conclusions from the ciphertext, you can build that up into an attack that compromises it entirely.

My advice on using cryptography:

  • Don't use cryptography. It is fraught with peril, and the chances of you getting it right are tiny. If your goal is to feel more secure in the presence of big blocks of Base64 encoded data, you can achieve the same effect by calling SecureRandom.base64, printing out the results and posting them up around your office.
  • No seriously, don't use cryptography. That includes HMACs and it especially includes data you stuff into cookies you encrypt with AES.
  • If you must use cryptography... use TLS (with client-side verification) for data in motion and PGP for data at rest. If your problem doesn't fit those solutions, then refactor your problem until it does. You're still probably going to fuck it up.

Further Reading