Georgios Selimis, Senior Security Engineer
Wearable devices are creating an increasingly large footprint in the healthcare world. Growing demand for remote monitoring and diagnostics is driven in part by healthcare cost inflation and an aging population in developed countries – projections indicate 20 percent of the United States population will be age 65 or older by 2030. The efficiency and convenience advantages provided by such devices are clear.
What is not so clear is whether that efficiency and convenience are worth the security tradeoff. A lot can happen between a remote heart monitor and the doctor who is meant to read its results.
Medical Wearable Devices: The Challenge
Remote human body monitoring allows users to track their own conditions, eliminates the need for repeated visits to the doctor, and supports customized treatment plans. A broad range of physiological signals, including pulse/electrocardiography, motion and orientation, glucose levels, blood pressure, temperature, brain activity and skin conductance can all be measured by tiny medical wearable devices attached to the human body. Such devices digitize the sensory inputs, perform digital signal processing and transmit the result to gateways such as smartphones, tablets and computers. Then the gateways forward the data to a cloud service where the authorized users (patient, healthcare providers, insurance companies, etc.) remotely access the data via a dashboard. This communication scenario is presented in Figure 1.
In the current concept of medical wearable devices, security is limited to the protection of the data stream from the medical device to the “application/data consumer.” But is it secure?
Consider a common scenario: A medical wearable device is connected using a low-range/low-rate connectivity protocol to the patient’s smartphone. Bluetooth Low Energy (BLE) connectivity technology is a popular choice due to its low energy performance and its broad adoption within the healthcare industry. BLE comes with data security (data confidentiality and data integrity), and it uses pairing- (or bonding-) based authentication schemes between the device and the smartphone.
As we see in Figure 2, the flow spanning from medical device to application lacks an end-to-end data communication security mechanism. Although data protection is provided on each individual link, the information is conveyed unencrypted through the intermediate participants, and if some of them are outside the chain of trust (e.g. the cloud) they could access the unencrypted data or modify it. This vulnerability is one of the critical weaknesses of medical wearable monitoring ecosystems.
Lack of End-to-End Security Mechanisms – Data Protection and Data Authentication
Figure 2 highlights that an end-to-end security mechanism from the device to the application is missing. This means that data is exposed to unauthorized users and can be manipulated easily. Moreover, the data recipient (e.g. a doctor) is unable to verify that the authentic medical device is the origin of the data and must assume that the entire link is trustworthy—not at all a desirable situation. A real-world example is when a physician decides to prescribe a certain treatment plan or drug dosage based on potentially falsified data. Further, the recipient could manipulate the information for their own advantage. For example, a health insurance company could impose a higher premium – or, even worse, deny a claim – based on potentially falsified data.
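As a sketch of what such an end-to-end mechanism could look like, the device can attach a message authentication code computed with a device-held key, so that the application can verify both integrity and origin regardless of how many intermediate hops the data crosses. The key, message format and function names below are illustrative assumptions, not part of any existing product:

```python
import hashlib
import hmac
import json

# Hypothetical pre-shared device key; in practice this would be derived
# on-device (e.g. from an SRAM PUF) and provisioned to the application backend.
DEVICE_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")

def sign_reading(reading: dict, key: bytes) -> dict:
    """Device side: attach an HMAC tag so the recipient can verify origin."""
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"payload": reading, "tag": tag}

def verify_reading(message: dict, key: bytes) -> bool:
    """Application side: recompute the tag and compare in constant time."""
    payload = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

msg = sign_reading({"patient": "p-042", "bpm": 72}, DEVICE_KEY)
assert verify_reading(msg, DEVICE_KEY)

# Any modification by an intermediate hop invalidates the tag.
msg["payload"]["bpm"] = 40
assert not verify_reading(msg, DEVICE_KEY)
```

Note that a MAC alone provides authenticity, not confidentiality; a complete end-to-end design would also encrypt the payload from device to application.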
In addition to the lack of end-to-end security, two other critical security issues must be addressed, namely device security and device authentication mechanisms.
Weak Device Security
In this article, device security refers to all design countermeasures that protect the software programs, sensitive medical data, software intellectual property and keys that reside on the IoT platform.
Microsoft’s 2017 report, “The Seven Properties of Highly Secure Devices,” says: “Security by definition should be based upon a framework of trust, where everything in the system ultimately agrees that some root point in the system can be trusted, providing a secure foundation upon which the rest of the system can be securely built.” Secure boot is missing from most medical wearable devices, which allows an attacker to hack into a device and replace the existing software with software containing malware. If the system then runs the malicious code, the device is compromised.
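The role of secure boot can be illustrated with a deliberately simplified sketch. A real secure-boot chain verifies a cryptographic signature against a public key anchored in immutable ROM; the hash comparison below only approximates that idea, and all names and image contents are hypothetical:

```python
import hashlib

# Simplified stand-in for a root of trust: an immutable digest of the only
# firmware image the boot ROM accepts. A production chain would instead
# verify an asymmetric signature over the image with a ROM-anchored key.
TRUSTED_FIRMWARE_DIGEST = hashlib.sha256(b"genuine firmware v1.0").hexdigest()

def secure_boot(firmware_image: bytes) -> bool:
    """Refuse to run any image whose digest does not match the trusted root."""
    digest = hashlib.sha256(firmware_image).hexdigest()
    return digest == TRUSTED_FIRMWARE_DIGEST

# The genuine image boots; a tampered image is rejected before it can run.
assert secure_boot(b"genuine firmware v1.0")
assert not secure_boot(b"genuine firmware v1.0 + malware")
```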
Some will claim this is an extreme scenario due to the difficulty of physical attacks, but large-scale remote attacks are very practical. Since these devices have communication interfaces, they are connected to the Internet and they are part of a larger, interconnected device ecosystem. This fact makes them attractive to attackers, because hackers can remotely steal data or conduct a large-scale DDoS attack.
Real-world examples include attackers disclosing patient medical data to the world, or holding it ransom. Even worse, they could hold health/life ransom by threatening to turn off or otherwise disrupt the operation of medical implants such as pacemakers and insulin pumps.
Apart from secure boot, strong requirements include intellectual property protection and anti-cloning mechanisms for software. Medical devices come with sophisticated algorithms for processing human body signals, and their integration on embedded devices makes them vulnerable to intellectual property theft for cloning or unauthorized overproduction.
Finally, medical devices should come with key management that gives access to multiple application providers without revealing sensitive information to competitors, and with secure storage to protect sensitive information such as cryptographic keys and medical data.
Vulnerable Device Authentication Methods
As medical wearable technology quickly expands in applications and volume, there is an increasing need for strong, scalable and cost-effective authentication between devices: Machine-to-Machine (M2M) authentication. There are several reasons for this urgent call to action. First, the data in question is highly sensitive and private in nature. Second, M2M communication cannot depend on the user being available to authenticate 24×7; the device should operate transparently to the user and should authenticate itself without human action before critical procedures such as secure software updates. And finally, passwords or other authentication mechanisms based on shared secrets can be stolen or even forged by the device user.
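A minimal challenge-response exchange illustrates how a device can authenticate itself without any human action, assuming a symmetric key shared between device and server. All names below are illustrative; with an SRAM PUF, the device key would be reconstructed on demand rather than stored:

```python
import hashlib
import hmac
import secrets

# Hypothetical symmetric key shared between device and update server.
SHARED_KEY = secrets.token_bytes(32)

def issue_challenge() -> bytes:
    """Server side: a fresh random nonce prevents replay of old responses."""
    return secrets.token_bytes(16)

def respond(challenge: bytes, key: bytes) -> bytes:
    """Device side: prove possession of the key without revealing it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes) -> bool:
    """Server side: accept only a response keyed with the genuine secret."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = issue_challenge()
assert verify(nonce, respond(nonce, SHARED_KEY), SHARED_KEY)

# A device holding the wrong key fails authentication.
assert not verify(nonce, respond(nonce, b"\x00" * 32), SHARED_KEY)
```

Because the nonce changes every run, a recorded response cannot be replayed later, which is what lets the exchange run unattended before, say, a firmware update.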
Validation Through INSTET Project
Intrinsic ID and Maastricht Instruments are working together in the context of the European project INSTET to add advanced security to medical wearable devices and to solve these security challenges in a straightforward way. As the reference design for this security hardening, we selected Maastricht Instruments’ MOX2, a 24×7 physical activity monitor. MOX2 is built on the STM32L496QGI6, a 32-bit Cortex-M4 microcontroller.
Intrinsic ID’s security solution is built around the company’s core technology, SRAM Physical Unclonable Function (PUF). SRAM PUFs use the unpredictable start-up behavior of uninitialized standard SRAM cells, available in any digital chip, to differentiate chips from each other. SRAM can be found in any microprocessor (MPU) or microcontroller (MCU), and its start-up patterns are practically impossible to duplicate or predict. This method is an alternative to key storage solutions such as ROM, OTP, flash and Secure Element (SE), which require specific hardware on chip and a separate source of entropy. Traditionally, keys are stored on the chip in flash memory, e-fuses or other non-volatile memory; SRAM PUF technology has proven to be more secure than these approaches.
There are two very serious security issues related to the incumbent approach. First, the data in the memory may be read out by an adversary, even if the chip is not powered up. Second, in the case of an external SE chip, the exposed key has to be transferred to the MCU chip. An embedded SRAM PUF in the MCU can be a giant step toward cheaper and more robust security.
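To give a feel for how a stable key can be reconstructed from noisy SRAM start-up values, here is a toy model. Real SRAM PUF implementations use an error-correcting fuzzy extractor with public helper data rather than the naive majority voting shown here, and the bit-flip probability, pattern size and read count below are arbitrary assumptions:

```python
import hashlib
import random

random.seed(1)  # deterministic toy example

# Toy model: each chip's uninitialized SRAM powers up to a device-unique
# bit pattern, but a few bits flip from one power-up to the next (noise).
CHIP_FINGERPRINT = [random.getrandbits(1) for _ in range(256)]

def noisy_powerup(fingerprint, flip_prob=0.03):
    """Simulate one SRAM power-up: the fingerprint with a few flipped bits."""
    return [b ^ (random.random() < flip_prob) for b in fingerprint]

def reconstruct_key(fingerprint, reads=15):
    """Majority-vote over several power-ups to cancel the noise, then hash
    the stabilized pattern into a key. No key is ever stored in memory;
    it is rebuilt from silicon behavior each time it is needed."""
    votes = [sum(bits) for bits in zip(*(noisy_powerup(fingerprint) for _ in range(reads)))]
    stable = bytes(v * 2 > reads for v in votes)
    return hashlib.sha256(stable).hexdigest()

# The same chip reproduces the same key across power cycles...
assert reconstruct_key(CHIP_FINGERPRINT) == reconstruct_key(CHIP_FINGERPRINT)

# ...while a different chip's SRAM yields a different key.
other_chip = [random.getrandbits(1) for _ in range(256)]
assert reconstruct_key(other_chip) != reconstruct_key(CHIP_FINGERPRINT)
```

The point of the sketch is the absence of stored key material: an adversary reading out non-volatile memory on a powered-down chip finds nothing, which is exactly the weakness of the incumbent approach described above.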
As you can see, security risks exist in a range of forms, with significant negative potential outcomes. Now that we’ve identified the problem scenario and motivation for better security, in Part 2 of this blog series we’ll learn the steps to make medical wearable devices secure using SRAM PUF technology from Intrinsic ID.
References
Connected Wearable Devices in Healthcare: Wearables in Medical, Wellness, and Fitness Markets by Device Type, Body Area, Solution Type (Prevention, Monitoring, and Treatment), and Health Concerns 2019 – 2024. ResearchAndMarkets.com.
Hunt, G., Letey, G., & Nightingale, E. (2017). The Seven Properties of Highly Secure Devices. Tech. report MSR-TR-2017-16.
Georgios Selimis is a Senior Security Engineer at Intrinsic ID, working on embedded security projects and serving as technical leader on a range of R&D projects. His expertise includes applied cryptography, PKI, embedded systems security and IoT secure connectivity. Prior experience includes serving as Research & Development Engineer at the Imec/Holst R&D Centre in Eindhoven, working on IoT projects including lightweight implementations for secure connectivity, PUF and digital signal processing. He studied Electrical & Computer Engineering at the University of Patras in Greece and later obtained his PhD from the same university.