Medical technology is better than it’s ever been. Where once the tools of the medical trade were limited to bone-saws and the occasional jar of leeches, these days millions of people have high-tech computerised equipment placed inside their bodies, running sophisticated software and getting complex, useful feedback.
However, this new technology brings new vulnerabilities, and the unregulated nature of the industry means that people are placing themselves in positions where hackers could harvest their data or, in the worst case, commit murder.
The idea of someone tapping away at a keyboard and causing you to drop dead might initially seem far-fetched, but it’s nothing of the sort. Cybersecurity expert Evgeny Chereshnev, CEO at Biolink.Tech, talked us through the problem as he sees it, and how he thinks the world will ultimately solve it.
Hi Evgeny! So what kind of devices are we talking about here?
Pacemakers, wireless insulin pumps that automatically inject insulin into people’s bodies, smart inhalers for asthmatics… all kinds of medical devices.
Why are these devices capable of going online? Does a pacemaker really need internet access?
Pacemakers, and artificial hearts with cybernetic elements, are often connected to external servers for software updates and data exchange. For that they need wireless capability, and as soon as a device has wireless, it’s potentially a gateway for a hacker.
How would a hacker use your device to kill you?
About five years ago I went to the Black Hat conference [a large computer security symposium] and a white hat [ethical] hacker showed how he could hack into an insulin pump he himself was wearing. He could have used that access to administer a lethal dose. A pacemaker is a medical device controlling the heart. It has brains in it, and if you control its brains, you can use it to deliver a shock - pacemakers have such a function - or you can play with someone’s heartbeat. It’s not sci-fi; it could happen tomorrow.
This all sounds kind of familiar…
In 2007 or so, US Vice President Dick Cheney had the wireless capabilities of his pacemaker disabled, because he was convinced that someone might try to hack into it. In fact, they ended up using that as a storyline on the TV show Homeland.
Why is this worth worrying about now?
It wasn’t a big problem before because these devices were so rare that they weren’t worth hackers’ time. Before there were millions of internet-connected medical devices out there, nobody really cared. Last year, a pacemaker manufacturer announced that it was recalling half a million pacemakers. It had been shown that they could be hacked, so the company recalled them to administer updates and fix the vulnerability. But that was half a million lives at stake. It’s pretty important.
Is making your device kill you the only way hackers can put you in danger?
There’s another problem, beyond internal devices. There are many other medical devices out there, and a huge amount of confidential personal data. I did a TEDx talk about digital DNA.
If you’re a guy with bad intentions, there’s a lot of data out there. There’s such huge potential for crimes. I’m really scared. If you find out from someone’s data that they have a deadly allergy, you own them. That person is your hostage now.
But it’s not just about allergies or physically harming people. You can find embarrassing information about people - STIs, erectile dysfunction, a lot of information that is kept private for a reason. Hackers have started to attack not just corporations and extremely wealthy individuals, but normal people. The rise of ransomware, where hackers encrypt your data until you pay a fee to unlock it, shows that they’re widening their targets. Digital blackmail scares the shit out of me.
Why do all these devices have such vulnerabilities?
These devices are designed with cybersecurity as a minor concern, and I think that is a huge mistake. Everything else in the medical world is regulated, for a reason. Think about X-rays and all the safety regulations surrounding them. Everybody understands that for something to be used on a human being it has to be clinically tested multiple times. But nobody seriously tests the cybersecurity of implants, pacemakers, smart inhalers or insulin pumps. There are no clear standards that have to be adhered to.
What’s the solution?
To me it’s absolutely clear what has to be done, and it has to be enforced and regulated. The way to do it is to create a global cyber authority which would regulate this market and at least enforce certain standards. It can’t be done company by company - there aren’t enough specialists - so we need something close to an open-source solution.
Cybersecurity needs a body like the World Bank or the World Health Organisation - credible and powerful enough to make things mandatory for the whole industry. It’ll affect pricing, as doing anything properly will, but it’ll end up being, what, 5% more expensive? That’s a reasonable cost, and it should be non-negotiable because lives are at stake.
It’s mind-boggling to think that so many people are vulnerable to attack because of the unregulated nature of the industry. Evgeny compares it to buying a car without a seatbelt, which is unthinkable. His solution doesn’t sound out of the question at all, but placing rules on an industry after it’s established is always going to be hard. Hopefully the powers that be recognise the seriousness of the situation, because otherwise a murder by hacking feels like an inevitability.