
Pros and Cons of Secure Elements

Q&A with Anton Sabev

We sat down with Anton Sabev, Principal System Security Architect at Intrinsic ID, to discuss different approaches to securing IoT devices and protecting confidentiality.


Q: One approach to security is the use of a secure element, which is a standalone security chip with secret keys programmed on it. When it comes to assessing the advantages and disadvantages of this approach, what are the nuances we should look at?

Anton: Let’s focus on the typical use of secure elements in IoT devices: a microprocessor accompanied by a discrete secure element chip. In that context, from a pure logistics standpoint, you just add the chip and it looks as if you’ve solved most of your security problem. The reality is that while a secure element can do a reasonably good job of carrying pre-provisioned keys and protecting them, complications remain unaddressed. The separate CPU or MCU, a separate SoC, has to talk to that secure element over a board-level interface. So the rest of the system, in order to use those keys, has to send commands to the chip and ask it to perform operations with them. And this brings us to the problem: the channel between the host SoC and the secure element is not protected by the secure element.
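The exposure Anton describes can be sketched in a toy simulation. This is purely illustrative Python, not any vendor's API: the `SecureElement` class, the `mcu_request_mac` function, and the `bus` list are all invented here to stand in for a host MCU, a discrete secure element, and the physical I2C/SPI traces between them.

```python
# Toy model: the key never leaves the secure element, but every command
# and result crosses the board-level bus, where a probe can capture it.
import hmac, hashlib

bus = []  # stands in for the PCB traces; electrically exposed

class SecureElement:
    """Holds a pre-provisioned key that never leaves the chip."""
    def __init__(self, key: bytes):
        self._key = key

    def mac(self, message: bytes) -> bytes:
        return hmac.new(self._key, message, hashlib.sha256).digest()

def mcu_request_mac(se: SecureElement, message: bytes) -> bytes:
    # The host MCU must ship the plaintext command and data across the bus.
    bus.append(("cmd:mac", message))
    tag = se.mac(message)
    bus.append(("resp", tag))  # and the result comes back the same way
    return tag

se = SecureElement(key=b"\x5a" * 32)
tag = mcu_request_mac(se, b"unlock door 42")

# The key stayed inside the secure element, but an attacker sniffing the
# bus still sees the full request and response, and can replay or tamper.
sniffed = [payload for _, payload in bus]
assert b"unlock door 42" in sniffed and tag in sniffed
```

In real designs this is exactly the gap that bus probing targets: the secure element's internal protections do not extend to the traffic on those pins.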


Q: So the use of a secure element does not address security for the connection between the microprocessor and the secure element. And so even though the chip is encrypting data, the actual connection between the chip and the source of that data is not protected?

Anton: Correct. You could say it’s electrically exposed. At that point, how well protected the secure element is becomes irrelevant, because the exposed channel creates a much lower bar for breaching the device. The device remains vulnerable to physical and side-channel attacks.

Q: In other words, that risk, that point of exposure, has not been addressed at all because, even though it might not be the first place an adversary would think of to attack, it’s still a way in – there is a way around it.

Anton: Yes. A good analogy would be putting a bank vault door on a tent. An adversary trying to get into the tent wouldn’t even bother trying to get past the door, he’d just use a knife and cut through the tent wall. In the same way, the secure element could be super secure but it’s not protecting the perimeter. So it leaves some operations on the microprocessor unprotected.


Q: What are situations where the secure element approach does make sense?

Anton: It could make sense for an operation that can be localized within that secure element. For example, a retinal scanner application, where a secure element could hold the private key and the sensor could encrypt your biometric data so only the secure element could read it and make the comparison inside. That way it can store, and be the only thing that understands, your data.

But the minute you start distributing security decisions outside the secure element you end up with a safe door on a tent.


Q: If secure elements don’t fully ensure security, there must be reasons product managers and designers have been using them, and continue to.

Anton: One argument is that a secure element is very convenient and easy: “just add this chip and you’re done.” As a result, designers stick with the approach. But the reality is that talking to a secure element, a discrete piece of hardware, requires drivers on the MCU or CPU. Those drivers need to be ported to every CPU and integrated into the operating system and environment. So you actually have to do software work on every new design to adapt it to the secure element. It’s not as simple as adding the hardware and being done.


Q: So this hardware approach still requires software. Porting firmware to a CPU, for instance.

Anton: Yes, plenty of software work. And once you look at secure elements that way, the advantages of SRAM PUF become clearer. Porting our piece of software is just as easy, and you avoid that exposure point entirely. It provides robust physical security, and the overall result is better: there is no physical exposure left. The component that performs operations on the keys, the keys themselves, and the actual user of the keys are all software within the same device. None of these operations ever appears on the pins of the chip.


Q: Even though it’s software.

Anton: Yes, but that’s a misconception, because SRAM PUF is rooted in hardware; it’s not just software. The keys are still device-unique and rooted in silicon. Our software firmly attaches to the SRAM PUF of the device, so it cannot simply be taken and used on another device.
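The idea of software “firmly attaching” to device-unique SRAM behavior can be illustrated with a toy fuzzy-extractor sketch. This is a deliberate simplification invented for this article, not Intrinsic ID’s actual algorithm: real SRAM PUF products use far stronger error correction, but the principle is the same — noisy, chip-unique SRAM power-up values plus public helper data reconstruct the same key on the same chip, and a different key anywhere else.

```python
# Toy fuzzy extractor: each key bit is spread over a small repetition
# code so noisy SRAM power-up bits still yield the same key every boot.
import hashlib, random

REP = 5  # each key bit mapped onto 5 SRAM cells (toy parameter)

def enroll(sram_bits, key_bits):
    """One-time step: public helper data binding the key to this chip."""
    code = [b for b in key_bits for _ in range(REP)]   # repetition encode
    return [c ^ s for c, s in zip(code, sram_bits)]    # helper = code XOR SRAM

def reconstruct(noisy_sram, helper):
    """Every boot: recover the key from noisy SRAM plus helper data."""
    code = [h ^ s for h, s in zip(helper, noisy_sram)]
    key_bits = []
    for i in range(0, len(code), REP):
        chunk = code[i:i + REP]
        key_bits.append(1 if sum(chunk) > REP // 2 else 0)  # majority vote
    return hashlib.sha256(bytes(key_bits)).digest()         # hash to a key

rng = random.Random(1)
sram = [rng.randint(0, 1) for _ in range(80)]  # this chip's startup pattern
key = [rng.randint(0, 1) for _ in range(16)]
helper = enroll(sram, key)

# Same chip, a couple of bits flipped by noise: the same key comes back.
noisy = list(sram)
noisy[3] ^= 1
noisy[40] ^= 1
assert reconstruct(noisy, helper) == reconstruct(sram, helper)

# A different chip has a different startup pattern: a different key.
other = [b ^ 1 for b in sram]
assert reconstruct(other, helper) != reconstruct(sram, helper)
```

The helper data can be stored publicly; without the physical SRAM that produced the enrollment pattern, it reconstructs nothing useful.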


Q: So is it accurate to say that at least one big advantage of an SRAM PUF approach is there is no physical exposure? Whereas with a secure element there is no way to avoid a physical exposure?

Anton: A secure element will always leave an exposure at the electrical interface, exactly. With SRAM PUF you avoid that.


Q: Back to logistics for a moment – another issue for device makers who are considering secure elements is the fact that if they choose to go with a secure element their board manufacturing must anticipate adding the secure element.

Anton: Yes. And with SRAM PUF of course you completely avoid that.


Q: And along with that, the device maker who might need to upgrade security after deployment cannot use a secure element.

Anton: No, they can’t. One thing that can happen with a secure element is that it displays negative characteristics that were not known ahead of time. In certain heavy-use applications, a chip’s memory might become defective in just over a year, a short lifespan for most deployments. So that’s a point of failure they’d be adding.


Q: Given such scenarios, why do some cling to use of secure elements? Is it just something they’re comfortable with? Is it legacy designs?

Anton: For some it’s the legacy aspect, the “we already went through the complications of figuring out how to use secure elements” argument. That thinking will, not “can”, fall apart in certain situations. One is a device maker producing units in low volume, for whom the secure element has not been a problem. He or she might have a $100 part shipping in the thousands, or a $1,000 part shipping in the hundreds, and is thinking “who cares how much my secure element costs?” But then the product becomes a big success and production has to scale, and they start looking at the bill of materials and counting pennies. By getting rid of the secure element you eliminate not only that part but also its power supply requirements and any surrounding parts such as resistors or capacitors. All of that cost goes away, and the savings on the surrounding components can even exceed the cost of the secure element itself.

Another situation is a new product where you’re looking at security for the first time. You think “I want something simple, how about a secure element?” Well, an SRAM PUF implementation is even simpler. You avoid tying up the hardware engineers and board-layout people a secure element would require. Both secure elements and SRAM PUF need some software adaptation, but with a secure element you also need hardware work: a new design, a new layout. A secure element is much more involved, and so would slow deployment of a new product.


Q: The use case would seem to drive the choice for security.

Anton: Yes. For certain low-end applications, think of key fobs that open doors, SRAM PUF might not make sense. But in many applications where a secure element might be considered, SRAM PUF comes in at a lower price point. And it stays high on security: the key is rooted in hardware, so you can move all the software somewhere else, but the SRAM is not there. It’s hardware, and without it, the key to the lock, you can’t breach the device.


Anton Sabev is Principal System Security Architect at Intrinsic ID and has extensive experience in cryptography, computer security and embedded digital signal processing. Prior experience includes positions with LSI Logic, ST Microelectronics and Intel. He is also a licensed pilot and conducts pilot training.


Do you have thoughts on the use of secure elements? Let us know in the Comments section below.
