By Najaf Ali
In multiple projects across various industries, I've been the paranoid security guy who brings up potential risks and vulnerabilities.
The following are made-up examples that combine and remix elements from real-life conversations I've had with developers and clients on actual projects. The vulnerabilities are real, but my reactions are somewhat abridged.
We don't give a shit about our users
Me: So /orders/ABC361727 is a publicly available URL that shows you all the data about a given order, including email address, full name, products and price. We should probably authenticate the user and make sure they're allowed to view those orders.
Client Stakeholder: Yeah, so MVP, we don't really care about security. Startups!
Client Developer: Besides, what can you do with the order information? It's not like you can change the order. And the URL isn't available anywhere so there's no way you could find it. I think you're being overly paranoid.
Me: It may seem paranoid, but the order key is just ABC followed by a six-digit number. Anyone who placed two orders and compared the URLs could work that out. The data could be collated and used to mount a spear-phishing attack on your users.
Client Developer: Well in my experience I've never seen that sort of attack so it's not really worth focusing on at the moment. It's not like we're a bank and we're displaying sensitive data.
Client Stakeholder: This is how McDonald's do it, so we should be fine.
Me: Wow, OK.
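The fix being argued over above is small. A minimal sketch in plain Ruby, with hypothetical names (`Order`, `find_order_for` — none of this is the project's real code): scope the lookup to the signed-in user, and use an unguessable token rather than a sequential key.

```ruby
require "securerandom"

# Illustrative order record. In the real app this would be a database row.
Order = Struct.new(:token, :user_id, :email)

# Unguessable tokens instead of ABC000001, ABC000002, ...
ORDERS = [
  Order.new(SecureRandom.urlsafe_base64(16), 1, "alice@example.com"),
  Order.new(SecureRandom.urlsafe_base64(16), 2, "bob@example.com"),
]

# Only return an order if it belongs to the requesting user.
# Guessing (or leaking) a token is not enough on its own.
def find_order_for(user_id, token)
  ORDERS.find { |o| o.user_id == user_id && o.token == token }
end
```

With this in place, `find_order_for(2, alices_token)` returns `nil`: even a valid token is useless to anyone but the order's owner.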
We don't understand the difference between Authorization and Authentication
Me: So on this edit profile screen we don't whitelist the parameters on update, so the user can change anything about their record in the database.
Client Developer: Yes that's fine, they're authenticated so they should be allowed to change their record.
Me: So they should be able to set their role to admin and change their account_id? Doesn't that mean that you can gain admin rights on any account by modifying your profile?
Client Developer: Ah, good point. I think attr_accessible takes care of that though.
Me: Not quite, because admins can edit other admins on the same account, so it makes sense in some cases. Since we're using different controllers we can whitelist the parameters as required in those controllers and we should be OK.
Client Developer: Actually, I don't want to clutter up controller code for an edge case like this. There's no way to modify those parameters through the UI so I'm not too worried about it.
Me: Very well.
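The per-controller whitelisting I was pushing for can be sketched in a few lines of plain Ruby. This mirrors what Rails strong parameters (`params.require(...).permit(...)`) do for you; the attribute names below are illustrative, not the project's actual schema.

```ruby
# Attributes a user may set on their own profile. Note that :role and
# :account_id are deliberately absent.
PROFILE_PARAMS = [:name, :email, :bio].freeze

# Keep only whitelisted keys from the incoming params hash.
def profile_params(raw)
  raw.select { |key, _| PROFILE_PARAMS.include?(key) }
end

attack = { name: "Mallory", role: "admin", account_id: 1 }
profile_params(attack) # => { name: "Mallory" }
```

An admin-facing controller would define its own, wider whitelist, which is exactly why doing this per controller (rather than once on the model with `attr_accessible`) makes sense.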
Playing the ball where it lies
Based on my experience, I don't doubt that the stories above will resonate with most web developers. For me at least, the amount of creative effort put into reasons why these security risks were fine to ignore was at once astonishing and frustrating.
I've been in so many situations like this that I've come to accept that this is the state of security concerns in modern web application development, and that it's not worth getting too upset about.
If you're not a security researcher, there's no way you can visibly provide or lose business value by focusing on application security. If you strive to build an application that has no obvious vulnerabilities, no one will ever hear about it. If you instead build an application that leaves the front door open and broadcasts all of your users' data, then in the worst case, no one will ever hear about that either.
I'm not sure where to draw the line on my obligations as a professional. How far should I go to make sure that security vulnerabilities are dealt with? Doctors and lawyers have licenses that can be revoked in cases of malpractice or gross negligence, but as far as I know there exists no such license for programmers.
Strategies for sneaking in better security
In my experience, security is never a first class concern when developing web applications. In any case, I've found a combination of the following strategies can be good for picking off low-hanging fruit:
Keep the stack up to date
Periodically make sure that your stack, including web server software and any dependencies, is updated to the latest versions. Vulnerabilities in any part of your stack are vulnerabilities in your application, so it pays to merge in upstream security fixes in a timely manner.
This is by far the easiest strategy to sneak past other developers, as it sounds like the right thing to do and is a job that most other devs avoid if they can.
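For a Ruby project, the routine might look something like the following. This is a sketch of one possible workflow, not a prescription; the bundler-audit gem checks your Gemfile.lock against a database of known advisories.

```shell
gem install bundler-audit     # one-time setup
bundle-audit check --update   # fetch the latest advisory database, then scan
bundle outdated               # list gems with newer releases available
bundle update --patch         # pull in patch-level (often security) releases
```

Running the audit step in CI means a newly published advisory fails the build, which turns "we should update sometime" into a concrete, hard-to-argue-with task.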
Where possible, quietly fix vulnerabilities
For things like parameter filtering that are a one-line change in a Rails application, write tests, write the patch, commit it, and then have the discussion about whether or not it's a good idea. While other devs may disagree, they normally don't disagree enough to revert a commit.
Document known risks
For larger, logic-level security issues that you can't or aren't allowed to fix quickly, keep a list of known security risks on file in a project wiki somewhere. For each risk, list:
- The nature of the risk
- How it could be fixed
Over the course of six months to a year, this file will tend to grow to three or four risks that individually may not amount to a security vulnerability. Put together, though, with a bit of creativity you'll likely be able to develop a working exploit.
With an actual exploit in hand, it becomes easier to force the issue and convince other developers and stakeholders to let you fix long-standing security problems.
This is no guarantee, however. Even with a fully fledged exploit in hand, "Security is not a priority for us" is a line I've heard all too often.
The above strategies are no substitute for a few developers who are vigilant about security (to date I've worked on exactly one team like this). With a bit of forethought, however, you can make your application resilient against the most basic attacks without getting buy-in from everyone on your team.