posted by Bryon Moyer
If you’re building a Thing for the Things’ Internet (consumer-edition – i.e., the CIoT), then, even though you may do your heavy computing work in the Cloud, you’ll still need something to make your thing act more intelligent than the assembly of metal and plastic that it is.
Perhaps you’ll need it for management, perhaps for sensor fusion; it’s not likely to be a difficult computing challenge, but you’ll need something. To address this need, Microchip recently tossed a new PIC device into the fray: their PIC24F “GB2” family. Consistent with a growing IoT trend to integrate and make things simple, this one incorporates two critical elements for Thing computing: security and low power.
For security, they’ve built an encryption engine into the device, with one-time-programmable (OTP) key storage. Critically, the key is inaccessible to anything except the encryption engine itself, as shown in the drawing below. In fact, this version of the drawing is my Microchip-approved edit to their original drawing, which showed (for simplicity) both the key and the random number generator (RNG – note, this is true random, not pseudo-random) as also hanging off the peripheral bus, which would be a big security hole.
Image courtesy Microchip
Programming the key is done… programmatically. (Duh!) That is, there's no separate port that you plug a programmer into; the CPU itself writes the key.
There is an example in their documentation showing code that would do this. I assume it’s for explanatory purposes only, since, in that code, the desired key is effectively defined as a constant. If you actually used that code, stored in the Flash, then the unsecured Flash could happily dish up the key to anyone wishing to explore the memory. But it got me thinking: exactly how do you program this OTP without exposing the key in the process?
I asked Microchip, and they provided a number of scenarios. Having digested that, it seems to me that there are three considerations:
- Does every unit end up with the same key, or does each unit have its own unique key?
- Where does the key come from?
- What happens to the code that does the programming?
Let’s take those in order.
Microchip recommends that, for best security, each unit have its own key. That way, any successful hack (the extraction process is destructive) yields only the key to the now-destroyed part: a pyrrhic victory fit for a newbie hacker.
One scenario where every unit gets the same key arises for folks wanting to secure boot code. The easiest way to do that is to encrypt the .hex file once with a single key and then use that image on all units. More complex approaches could allow unique keys, but then, for instance, the factory would need to keep a database of key/serial-number pairs so that, if a customer requested an updated version, it could be sent encrypted with that unit's individual key.
Of course, if you’re a customer and have to update several units, then you’d receive one update image per unit (vs. one total if they all had the same key). And you’d have to make dang sure that the right image went in the right unit!
That moves the point of failure, of course, to that database – how secure is it? An alternative is to generate the image encryption key by encrypting the serial number (which is unique per device) with a secret factory key that only developers know. That can be calculated on the fly in the factory, eliminating the database. But, of course, it also assumes no disgruntled employees will divulge the key. Not airtight.
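The derive-from-serial idea can be sketched in a few lines. This is a minimal illustration, not Microchip's actual procedure: the article describes encrypting the serial number with a secret factory key, and here an HMAC stands in for that keyed operation (it plays the same role of deterministically mixing a secret with a unique serial). The factory key value and function names are made up for the example.

```python
import hmac
import hashlib

# Hypothetical factory secret known only to developers (illustrative value).
FACTORY_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")

def derive_unit_key(serial: bytes) -> bytes:
    """Derive a per-unit 128-bit key from the unit's unique serial number.
    HMAC-SHA256 stands in for 'encrypt the serial with the factory key'."""
    return hmac.new(FACTORY_KEY, serial, hashlib.sha256).digest()[:16]

# The same serial always yields the same key, so the factory can recompute
# it on the fly -- no key database to secure.
assert derive_unit_key(b"SN-000123") == derive_unit_key(b"SN-000123")
assert derive_unit_key(b"SN-000123") != derive_unit_key(b"SN-000124")
```

The appeal is that nothing needs to be stored per unit; the weakness, as noted, is that anyone who learns the factory key can derive every unit's key.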
Coming back to the three considerations, the source of the key is important. If you store it in the Flash code, as in the example, it can be read in the clear. If you deliver it via some communication from a host PC or something similar, it is traveling in the clear, and is therefore vulnerable.
You can certainly operate that way, but Microchip recommends something different: use the RNG. That way each unit generates its own unique key, and no one knows what that key is. It simply works. This is airtight (except for the newbie hacker and his pyrrhic victory).
Finally, the programming code. Here's the scenario: when you program the OTP key, a bit is also set saying that the OTP has been programmed. Once set, that bit can never be cleared. So the first time the unit is powered on, it may not have a key yet, and the first thing you want to do is quickly program one. By checking that flag, you know whether or not you need to program the key before moving on to the rest of the application.
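That first-boot flow can be sketched as follows. This is a toy software model, not Microchip's API: the class names, the 16-byte key size, and the `boot` function are all invented for illustration, and Python's `secrets` module stands in for the chip's true hardware RNG.

```python
import secrets

class OtpKeyStore:
    """Toy model of the OTP key slot and its programmed flag.
    (Names and structure are illustrative, not Microchip's API.)"""
    def __init__(self):
        self.programmed = False   # one-time flag: can be set, never cleared
        self._key = None          # in hardware, readable only by the crypto engine

    def program(self, key: bytes):
        if self.programmed:
            raise RuntimeError("OTP already programmed")  # writes are one-shot
        self._key = key
        self.programmed = True

def boot(otp: OtpKeyStore):
    # First boot: no key yet, so generate one from the (true) RNG and burn it.
    if not otp.programmed:
        otp.program(secrets.token_bytes(16))  # 128-bit key nobody ever sees
    # ...then fall through to the rest of the application.

otp = OtpKeyStore()
boot(otp)   # first boot burns a fresh random key
boot(otp)   # later boots see the flag and skip straight to the application
```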
Simple enough, but this is a one-time operation. At the very least, you'll be allocating scarce code space (a few hundred bytes) to a function that runs only once, occupying room both in Flash and, while it executes, in RAM.
If that’s an issue, another option is to have two applications: one for programming the key and the other being the main application. Store them on separate Flash pages and run them separately (so they’re not in RAM at the same time). On first power-up, run the key programming code and then erase that Flash page, destroying that code and freeing the page up for other use. Then load the main application, and off you go.
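The erase-after-use lifecycle is easy to model. Again, this is a simulation sketch only: page size, page count, and the stored byte strings are all placeholders, and "running" an app is reduced to a comment.

```python
ERASED = b"\xff" * 64        # toy 64-byte pages; Flash erases to all 1s

class Flash:
    """Minimal model of paged Flash (sizes and names are illustrative)."""
    def __init__(self, n_pages: int):
        self.pages = [ERASED] * n_pages

    def write_page(self, n: int, code: bytes):
        self.pages[n] = code.ljust(64, b"\xff")

    def erase_page(self, n: int):
        self.pages[n] = ERASED

flash = Flash(4)
flash.write_page(0, b"key-programming app")  # one-shot code on its own page
flash.write_page(1, b"main application")

# First power-up: run the key programmer once...
one_shot = flash.pages[0]     # (the device would execute this here)
# ...then erase its page, destroying the code and freeing the space.
flash.erase_page(0)

assert flash.pages[0] == ERASED
```

The point of keeping the two applications on separate pages is exactly this: a page erase wipes the one-shot code cleanly without touching the main application.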
So, as you can see, there are a number of ways of handling this, some more airtight than others, and, as with anything having to do with security, you can make it as complicated as you want.
As to that other critical IoT function, low power, the chip uses their “XLP” (eXtreme Low Power) technology, with Idle, Doze, Sleep, and Deep Sleep modes that monkey with what’s on or off and the clock rate. In Deep Sleep mode, it can draw as little as 40 nA.
You can get more info in their announcement.
By the bye, at the same time, they also released a new Bluetooth module, the RN4020. While they already have modules for various other Bluetooth flavors, this one supports Bluetooth Low Energy (BLE). You can find more about it in their other announcement.
posted by Bryon Moyer
As noted in today’s article on some of the characteristics of the DDS data transport standard, it’s missing a rather important component: formalized security. Proprietary schemes have been layered on top of it, but the OMG has a beta standard that they’re now finalizing (a process that could take up to a year).
But that doesn’t stop early adoption. RTI has announced an implementation of the new OMG security standard for DDS – something likely made easier since, by their claim, they contributed much of the content of the standard.
There are a couple of particular challenges with respect to security on DDS. First, because DDS is decentralized, there are no brokers or single points of security (which would also be single points of failure). This means that each device or node has to handle its own security.
Second, DDS runs over many different transport protocols, some of which may or may not have their own security. Because of that, you can’t rely on the underlying transport security for protection. This means adding DDS-level security (which may complement security at a lower level).
We usually think of security as protecting the privacy of a message so that only the intended receiver can read it. While this is true, RTI points out that, in many cases, the content isn’t really secret – you just want to be sure that it’s authentic. They use as an example a weather data transmission: you may not care if anyone else sees it, but you want to be sure you’re getting the real thing and not some spoofed message that’s going to send your boats out into the heart of a hurricane. (I hear that competition amongst fishermen is fierce!)
So RTI’s Connext DDS Security includes authentication, access control, encryption (using encryption standards), data tagging (user-defined tags), and logging.
Image courtesy RTI
If all you’re interested in is authentication, you can improve performance by taking a hash of the message (much faster than encrypting) and then encrypting only the hash (much smaller – hence quicker – than the entire message). Full encryption (needed to obscure the entire payload) can be 100 times slower.
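The hash-then-protect-the-hash idea can be sketched as below. This is not RTI's API: the pre-shared key, the function names, and the use of HMAC as the keyed step are all assumptions made for illustration (the article describes encrypting the hash; a keyed MAC over the digest plays the same authentication role here). Either way, the expensive keyed operation touches only a fixed 32-byte digest, not the whole payload.

```python
import hmac
import hashlib

SHARED_KEY = b"hypothetical pre-shared key"  # illustrative, not RTI's scheme

def auth_tag(message: bytes) -> bytes:
    # Hash the full message (fast), then apply the key only to the
    # fixed 32-byte digest rather than to the entire payload.
    digest = hashlib.sha256(message).digest()
    return hmac.new(SHARED_KEY, digest, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(auth_tag(message), tag)

weather = b"wind 12 kt, seas 2 ft, no hurricane"
tag = auth_tag(weather)
assert verify(weather, tag)                   # genuine reading accepted
assert not verify(b"all clear, head out", tag)  # spoofed message rejected
```

Note that the message itself travels in the clear here; anyone can read the weather report, but nobody can forge one without the key.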
You can also customize your own encryption and authentication code if you wish.
They claim that this is the first “off the shelf” security package; the prior proprietary approaches ended up being written into the applications explicitly. Here it’s provided as a library for inclusion in the overall DDS infrastructure.
You can find more in their announcement.
posted by Bryon Moyer
Any new foundry would want to grow up to be a megalith like TSMC, right? Isn’t that how you prove you’ve “made it”? Well, not if you’re Novati. They’re a different sort of foundry, one you don’t hear about so often over the noise of the Big Guys.
Here’s the thing: when you’re in the foundry mainstream, you do one thing: you chase Moore’s Law and try to keep it going. You figure out what the masses want, and you trim everything extraneous away so that you can sate the masses in enormous volumes at competitive costs.
But what if you’re in the market for something that can’t be made using the techniques that suit the masses? That’s where smaller… ok, I’m going to use the dreaded word (investors: please cover your ears): niche players can find plenty of business, even if, by so doing, they can maybe achieve only kilolithic or decalithic status.
I met with them at Sensors Expo. Sensors are a typical opportunity for a more flexible fab, since they may use unusual techniques and materials, and each one may be slightly different, making it hard to put everyone onto one high-volume recipe.
Novati does CMOS and MEMS (particularly silicon microfluidics) – jointly and severally. When jointly, with both on the same wafer, they typically do MEMS-last, placing the MEMS elements above the CMOS circuitry. They can do this either by growing more silicon epitaxially over the CMOS or by stacking a separate wafer.
They also work on silicon photonics projects and 2.5D (silicon interposer) and 3D integration.
Most of what they do leverages a common set of equipment (largely for 200-mm wafers, with some 300-mm ones), but where the diversity really comes in is with materials. They can work with 60 different elements – far more than would be found in your average foundry.
Most foundries want to keep the number of elements they allow through the door to the absolute minimum. A new material, if not handled carefully, brings with it the risk of unexpected contamination with potentially calamitous results – something that’s just not worth messing with if you’re spinning oodles of wafers an hour.
But smaller guys need to be more flexible, and a willingness to work with more materials can be a boon to developers trying new ideas. Gold is the one element that Novati is particularly careful with: they segregate it in a separate room. For all the others, they study each candidate material and develop specific protocols to ensure that it goes only where they want it to, which may be a nanolayer just a few atoms thick laid down on a wafer by atomic layer deposition (ALD).
Once a project gets to production volumes, they can handle it to an extent, but they may also hand off to a partner that can handle higher volumes. Of course, if the volume production involves odd materials, then they’ll need to work with someone willing to handle that material.
As with any business, there’s always opportunity on the fringes of the mainstream. In this case, they’re entertaining many of those opportunities; they’re just being careful not to step on Moore’s toes.
You can find out more on their site.
(Image courtesy Novati)