Bug bounties are the new(ish) hotness in security assessment, and bounty companies are making fairly bold claims about how great and effective their crowd-sourced services are. Whilst I think they have well and truly earned their place amongst the classic broad spectrum of assessment products and services, there’s one particular area where they face challenges that I haven’t seen tackled in any sensible form yet: IoT (the S in IoT stands for security).
Here is a small handful of things that are preventing IoT from succeeding in the bug bounty space:
- Cost of entry: If bounty hunters have to shell out $300 to have an IoT widget sent to them before they can get started, you’ve shrunk the pool of talent to a tiny fraction. If the bounty hunter earns $300 off a bug, they’ve only broken even; or they're in the red once they value their time.
- Patch cycle duration: A lot of bounty schemes only pay out when the bug is fixed, not when it is confirmed. In the IoT/embedded space, development lifecycles commonly stretch for months, unlike online services, which are routinely patched in hours. Bounty hunters might be without rent money for half a year. A further problem is that duplicate submissions go through the roof: if a bug sits in “remediation” for 3 months, the chances that someone else spends time re-discovering that bug are fairly significant.
- Skillsets: The learning curve for picking up a copy of Burp and The Web Application Hacker’s Handbook is fairly gentle, and the pool of people who understand web development is fairly sizable. The same cannot be said for embedded systems or reverse engineering natively compiled binaries. Chances are, if you can reverse ARM binaries, you’ve got a decent enough job that a few hundred dollars for a day’s effort bounty hunting doesn’t light you up with excitement.
- Systemic/Design vulnerabilities: Anyone who has taken a look at a typical IoT device knows they’re full of bad choices: by the time you’ve tripped over the 154th call to system(), you realize that what these developers need is not a list of bugs to patch ad hoc, but secure development training and a redesign of the product. An optimist might argue that they’re using the bug bounty as a feedback loop for incremental improvements, but really they’re just taking the slow road to learning a bunch of lessons the rest of the tech industry learnt in 1994.
Fortunately, for most of the above I think there are improvements that can be made; some lie directly with the bug bounty companies, but most of the pressure should fall on the vendors.
If vendors stopped treating security as an ambulance at the bottom of the cliff and involved security teams earlier in the process, they’d have far fewer bugs coming out during bounties. That would allow them to increase the bounty and attract a lot more high-quality submissions.