Happy Bear Software

Web Application Security Talk at LRUG

Video of the talk on Skills Matter.

Intro

I'm Ali, a web application developer who's been building websites for nearly seven years. I'm not a security expert, more of a hobbyist, and this talk is the culmination of a few years of reading around the subject and applying it to my work where I can.

Motivation

As a developer, I don't think you can be extrinsically motivated to build secure software.

Imagine the following two scenarios:

1. You ship a feature and take the extra time to get its security right.

2. You ship the same feature with a security hole that nobody has noticed yet.

Both of these scenarios, from a non-technical stakeholder's point of view, look exactly the same. That is, you don't get a carrot for getting security right, and you don't get a stick for messing it up.

There are, however, a few intrinsic motivations for wanting to build more secure software:

Mindset

As developers, our mindset tends to be focused on analyzing problems, boiling them down to their core ideas and delivering solutions. We try to do this in a way that solves the problem, keeps accidental complexity in check and hopefully produces reusable software. This is the developer's mindset, and it's a good mindset to have.

In the book Cryptography Engineering, Bruce Schneier and his co-authors talk about the adversarial mindset. They give the metaphor of an engineer designing and building a bridge. To do so, the engineer has to work within certain tolerances: the bridge should be able to withstand high winds, take a reasonable load and perhaps survive a lightning strike.

The engineer does not have to consider that the wind might intentionally work towards destroying the bridge, or that it might probe for weaknesses at certain times of day. He also doesn't need to worry that the wind and the lightning will conspire to destroy the bridge.

In the military and in software, we do have to consider an adversary who wants to intentionally compromise the integrity of the systems we build. It's this adversarial mindset that you'll need to adopt when looking for vulnerabilities in your system.

I used to work at a company called moo.com, and it was the first time I'd worked with a dedicated QA team. Whenever I marked a story as ready for QA, the QA staff would find some fault with it that I would have to fix. Over time, I figured out the majority of the classes of errors they were looking for and managed to pre-empt their objections.

This gave them a lot more time to find more interesting classes of bugs. An example: in a certain browser, if a user were to go to the third stage of the order process, open a new tab and change their store and language, then flip back to the original tab and complete their order, the order would be delivered to the UK with US shipping and handling. The QA staff had to adopt an adversarial mindset to find this bug, above and beyond the attention a developer gives to finding and mitigating edge cases.

The end result of this is very robust software that handles edge cases gracefully. This is a good thing, and a direct result of the adversarial mindset in action.

Application Responsibilities

A web application usually has three primary security responsibilities:

Authentication

"Is this user who they claim to be?"

We typically implement this with a registration form for users to sign up to our application, and then authenticate users with the email/password combination they selected.

This system has a few intrinsic conceptual problems, but assume it serves our purposes for now. In Rails land we tend to use Devise to implement authentication, and I think that if you stick to the Devise defaults then for the most part your authentication will be OK.
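
For illustration, a minimal Devise setup looks something like this (the particular module list below is my choice of sensible defaults, not something from the talk):

```ruby
# Gemfile
gem 'devise'

# app/models/user.rb
# The default modules give you bcrypt-hashed passwords, registration,
# password recovery and email/password validations out of the box.
class User < ActiveRecord::Base
  devise :database_authenticatable, :registerable,
         :recoverable, :rememberable, :validatable
end
```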

If you have hand-rolled your authentication, then a particular place to watch out for vulnerabilities is in your forgotten password functionality.
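
If you do hand-roll it, the token handling is where things usually go wrong. A minimal sketch, assuming a recent Rails and reset_token and reset_sent_at columns (my naming, not from the talk):

```ruby
require 'securerandom'

class User < ActiveRecord::Base
  def send_password_reset!
    # The token must be unguessable: use SecureRandom, never rand() or a
    # hash of something an attacker can predict (email, timestamp, id).
    update!(reset_token: SecureRandom.urlsafe_base64(32),
            reset_sent_at: Time.now)
    # ...then email the token only to the address already on file...
  end

  def reset_token_valid?(candidate)
    reset_token.present? &&
      ActiveSupport::SecurityUtils.secure_compare(reset_token, candidate) &&
      reset_sent_at > 2.hours.ago # expire tokens quickly
  end
end
```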

Session Management

"Is this the same user that just logged in?"

Since HTTP is ostensibly a stateless protocol and we don't want to present users with a username/password challenge at every request, we need some way of authenticating a user for a session.

One way this is achieved is by generating a random number called a session id, and storing data about that user's session in some sort of hash table (be that memcached, a database session store, etc.). We send the session id to the user in a cookie, and the user is expected to send it back with subsequent requests.
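
As a sketch of the mechanism, stripped of any particular framework (the store here is a plain in-memory hash; real apps use memcached, the database or signed cookies):

```ruby
require 'securerandom'

SESSIONS = {} # session_id => session data

def log_in(user_id)
  session_id = SecureRandom.hex(32) # 256 bits: infeasible to guess
  SESSIONS[session_id] = { user_id: user_id }
  session_id # set as a cookie on the response
end

def current_user_id(cookies)
  session = SESSIONS[cookies['session_id']]
  session && session[:user_id]
end
```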

This means that if you can guess a session id, or find some way to trick a user into giving you theirs, you can masquerade as that user. You can then go on to modify their information, change their password or email address, and further compromise the system.

Access Control

"Is this user allowed to do what he's attempting?"

Access control (a.k.a. authorization) is when you decide whether a user should be allowed to perform a given action. In Ruby on Rails applications, this tends to be implemented through a combination of calling methods on current_user, whitelisting parameters (note: do more of that please!) and, if you're lucky, CanCan.
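
As a rough sketch of what the CanCan side of that looks like (Project is an illustrative model, not from the talk):

```ruby
# app/models/ability.rb
class Ability
  include CanCan::Ability

  def initialize(user)
    user ||= User.new # guest user, owns nothing
    can :read, Project, public: true       # anyone may read public projects
    can :manage, Project, user_id: user.id # owners may do anything to their own
  end
end

# app/controllers/projects_controller.rb
class ProjectsController < ApplicationController
  # Loads @project and raises CanCan::AccessDenied if the rules above
  # don't permit the current action.
  load_and_authorize_resource
end
```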

This is by far where I see the majority of application-level vulnerabilities in Ruby on Rails apps. They range from forgetting to do access control at all to letting users modify fields that completely bypass authentication. The case studies we're about to go through are mostly about errors here.

Case Studies

Icebox

Icebox was a company that let you back up your Dropbox files to Amazon Glacier. To do so you had to sign up with a username and password, and then enter your Dropbox and AWS credentials.

This involved a regular Rails form for editing your AWS credentials, with a URL structure of /aws_credentials/:id. However, no access control checking was done: by changing the :id in the URL you could view and edit other people's AWS credentials.

This meant that you could grab other people's credentials and consume computing resources on their accounts. Somewhat more insidiously, you could also secretly modify their credentials and have copies of their Dropbox files backed up to your own AWS account.

You might write this off as a newbie error or just carelessness on the part of the developers, but this is the most common way I see developers messing up access control (i.e. by not doing it at all).

Moral of this story: You should be able to point to the code that stops users from doing things they're not allowed to.
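
In Rails terms the fix is often one line. A sketch of both versions (the model and action names are my guesses, not Icebox's actual code):

```ruby
# Vulnerable: any signed-in user can load any record by changing :id.
def edit
  @aws_credential = AwsCredential.find(params[:id])
end

# Safe: scoping the lookup through current_user makes Rails raise
# ActiveRecord::RecordNotFound for credentials the user doesn't own.
# This is the line you can point to.
def edit
  @aws_credential = current_user.aws_credentials.find(params[:id])
end
```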

GitHub

Disclaimer: I'm a huge GitHub fanboy and have nothing against them. This is in no way an indictment of GitHub, but it makes for an interesting case study.

Like most web applications, GitHub requires that you authenticate with a username and password. However, when working with git from the command line, GitHub also lets you authenticate with an RSA keypair over SSH.

To make this work, as an authenticated user you can register public keys through a web-based UI. If you hold the corresponding private key, you can then authenticate over SSH as the user who registered the public key.

Internally, users have many public keys; i.e. there's a database table called public_keys that has a user_id column.

It turns out that through the web-based UI for managing your own public keys, you could modify the user_id field on any of your public keys. Since you own the private keys for all of your keypairs, being able to switch the user_id of a public key lets you authenticate as whichever user you switched it to.

A guy called Egor Homakov discovered this, set the user_id of one of his public keys to DHH's, and made a commit to Rails core to demonstrate the attack.

Moral: Always whitelist request parameters, even when users are modifying data they own. If there's no good reason a user should be allowed to modify a given bit of their own data, don't let them.
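
This is the classic Rails mass assignment bug. A sketch of the fix using strong parameters (attr_accessible played the same role at the time; the attribute names are assumptions):

```ruby
class PublicKeysController < ApplicationController
  def update
    @public_key = current_user.public_keys.find(params[:id])
    # The vulnerable version, update_attributes(params[:public_key]),
    # would happily write a user_id supplied in the form data.
    @public_key.update_attributes(public_key_params)
  end

  private

  # user_id is deliberately absent from the whitelist, so it can never
  # be mass-assigned, whatever the request contains.
  def public_key_params
    params.require(:public_key).permit(:title, :key)
  end
end
```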

Anonymous todo-list Application

Imagine a todo-list application. The founders of this startup are very user-focused and in their infinite wisdom decide to let users register for the app without verifying their email address.

The app launches, and three months later a new feature comes through the pipeline: users should be able to set a deadline on any given todo-list item, and once the deadline is a day or so away, the user should receive an email reminder containing the text of the task.

Taken on its own, this feature doesn't represent much of a security risk. But since we don't verify user emails, it would be trivial to write a script that takes an email address per line on STDIN and, for each email, creates an account, sets up a todo-list item with the text "buytastyviagra.com" and sets the deadline to a few days from now. The result is that you can use the todo-list application to send spam to whoever you want (as long as they're not already registered).
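
A sketch of that script in Ruby, against entirely hypothetical endpoints (the point is how little it takes, not the specifics):

```ruby
require 'net/http'
require 'uri'

# One email address per line on STDIN; each becomes an unverified account
# with a reminder that the app will dutifully deliver as email.
STDIN.each_line do |line|
  email = line.strip
  Net::HTTP.post_form(URI('https://todo.example.com/signup'),
                      'email' => email, 'password' => 'hunter2')
  # ...authenticate, then POST a todo item with the spam text and a
  # deadline a few days out...
end
```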

Moral: Pay attention to how new features can impact the security of your application.

Recommendations

1. Find vulnerabilities, fix them if you can

2. Log strangeness, review regularly, build exploits

3. Keep your stack up to date (or at least patched)

4. Get help

Cryptography

My opinion on cryptography in general comes from three activities:

Without learning common attacks on cryptography, it's very easy to get a feeling of security just from being in the proximity of big scary-looking blocks of Base64-encoded data.

Once you start learning how to break cryptography, though, your reaction is to start analyzing ciphertext to see if you can get any information out of it at all. If you can draw one or two conclusions from the ciphertext, you can often build that up into an attack that compromises it entirely.
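
A concrete example of drawing a conclusion from ciphertext alone (my example, not from the talk): AES in ECB mode encrypts identical plaintext blocks to identical ciphertext blocks, so repetition in the plaintext leaks through without the key.

```ruby
require 'openssl'

cipher = OpenSSL::Cipher.new('AES-128-ECB')
cipher.encrypt
cipher.key = OpenSSL::Random.random_bytes(16)

# Two identical 16-byte plaintext blocks...
ciphertext = cipher.update('A' * 16 + 'A' * 16) + cipher.final

# ...produce two identical 16-byte ciphertext blocks, visible to anyone.
blocks = ciphertext.unpack('H*').first.scan(/.{32}/)
puts blocks[0] == blocks[1] # => true
```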

My advice on using cryptography:

Further Reading