Dec 18, 2014

IoT Business Objects

posted by Bryon Moyer

We do this thing here where we take occasional stock of the structure of the Internet of Things (IoT) to try to make sense of the various pieces that come together to work with, or compete against, each other. And I usually try to generalize or abstract some of the mess into a broader structure that’s hopefully easier to parse (or that at least offers an easier entry point).

We did that a while ago when looking briefly at Xively. Well, another opportunity came about when I was contacted by a company called Zebra regarding their IoT infrastructure offering, Zatar (not sure if the name comes from za’atar [the apostrophe representing a pharyngeal consonant with no Latin-script equivalent], which would give it a flavorful veneer). And my usual first question is, “Where does this fit in the high-level scheme of things?”

Zatar would appear to implement business objects, although they use a different vocabulary, referring to their abstractions of devices as “avatars.” So they would appear to play at a higher level than, say, Xively. As with any high-level entity, however, it’s built on a stack below it. One of the top-level supporting protocols they use is OMA’s Lightweight M2M protocol (LWM2M).

I did some brief digging into LWM2M, and I’m glad they have a whitepaper, because they don’t have a single protocol doc. They have a collection of chapters (dozens of them) all sorted in alphabetical order, so it’s really tough to tell which (if any) is a top-level document from which to get started. I may dig into this protocol more in the future.

But, at a high level, with Zatar and LWM2M, I’m refining how I think of the “business objects” layer. In general, this layer is where specific object semantics exist: thermostats vs. door locks vs. washing machines. Below it, only generic messages exist, with meaning that’s opaque to the protocol.

It appears that LWM2M enables the notion of an object without standardizing specific objects. So it lets you create an abstract entity and give it properties or interactions – essentially, an API – without saying what the specifics should be.

Zatar comes pre-equipped with a base avatar from which users can define their own specific ones. This is done without any explicit coding. By contrast, other folks (like Ayla Networks, from a while back) include pre-defined objects. So I’ve split the “business objects” concept into two layers: generic and specific. The generic layer simply enables the concept of a business object; the specific layer establishes the details of an object.

So, for instance, given a generic capability, three lighting companies could go and define three different models or objects representing lighting, each of which would adhere to the generic protocol. If someone wanted to standardize further – say office management folks got tired of having to figure out which lighting protocol various pieces of equipment followed – then someone could go further and standardize a single lighting protocol; this would be a specific standard.
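To make that generic-vs.-specific split concrete, here’s a minimal sketch – in Python, purely illustrative, and emphatically not Zatar’s or LWM2M’s actual data model (all the names are mine) – of a generic object layer that three hypothetical lighting vendors use to define three different specific lighting objects:

```python
# Illustrative only: a toy "generic business object" layer.
# Not the actual LWM2M or Zatar data model; names are made up.

class BusinessObject:
    """Generic layer: enables the *concept* of an object -- a named set of
    properties (values) and actions (invocable operations) -- without saying
    anything about what those properties or actions should mean."""
    def __init__(self, kind):
        self.kind = kind
        self.properties = {}   # name -> current value
        self.actions = {}      # name -> callable

    def define_property(self, name, initial):
        self.properties[name] = initial

    def define_action(self, name, fn):
        self.actions[name] = fn

    def invoke(self, name, *args):
        return self.actions[name](*args)


# Specific layer: three vendors each define "lighting" differently,
# yet all of them conform to the same generic object model.
vendor_a_light = BusinessObject("lighting")
vendor_a_light.define_property("on", False)
vendor_a_light.define_action(
    "toggle",
    lambda: vendor_a_light.properties.update(on=not vendor_a_light.properties["on"]))

vendor_b_light = BusinessObject("lighting")
vendor_b_light.define_property("brightness_pct", 0)      # dimmable model
vendor_b_light.define_action(
    "set_brightness",
    lambda pct: vendor_b_light.properties.update(brightness_pct=pct))

vendor_c_light = BusinessObject("lighting")
vendor_c_light.define_property("color_temp_k", 2700)     # tunable-white model
vendor_c_light.define_action(
    "set_color_temp",
    lambda k: vendor_c_light.properties.update(color_temp_k=k))

# A facilities app now has to know which of the three "lighting" APIs it's
# talking to -- exactly the pain that a *specific* lighting standard would remove.
vendor_a_light.invoke("toggle")
vendor_b_light.invoke("set_brightness", 80)
```

The generic layer above never mentions lighting at all; the moment someone standardizes one of those three vendor definitions as “the” lighting object, that becomes a specific standard.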

It’s important to keep in mind, however, that LWM2M is a protocol standard, while Zatar is not; it’s a product that implements or builds over that and other protocols.

Biz_object_drawing.png

The other thing that Zatar has is an enterprise focus. We’ve peeled apart the notions of the consumer IoT vs. the industrial IoT a bit, but the notion of yet a third specialized entity, the enterprise IoT, is something I haven’t quite come to grips with. Part of it is simply a matter of scale – large entities with lots of data that has to be shared globally. This bears further investigation; watch these pages over the next few months for more on that.

One last point: saying that these products and standards simply implement business objects is a gross over-simplification. As you can see if you browse the OMA docs – or even from the following figure from Zebra – there are many, many details and supporting services and applications that get wrapped up in this. For LWM2M, it includes lower-level concepts of interaction over various networking media and how, for instance, browsers should behave. For Zatar, there’s the cloud service and other applications. I’m almost afraid to try to abstract some of this underlying detail. We’ll see…

Zatar_figure.png

Meanwhile, you can learn more about the specifics of Zatar here; you can learn more about OMA’s LWM2M protocol here.

Dec 17, 2014

Beefed-Up Sensor Subsystem

posted by Bryon Moyer

You may recall that, about a year ago, Synopsys released a sensor subsystem. You could think of it as the IP needed to hook sensors up to an SoC and process their data.

So this year they’ve announced a “Sensor and Control IP Subsystem.” And the obvious question is, “How does this relate to last year’s announcement?”

Well, at the top level, you can think of it as an upgrade. When available in January, it will essentially replace last year’s edition.

So what’s different about it? They listed the following as some of the enhancements:

  • They’ve beefed up the DSP options, now including their ARC EM5D and EM7D cores. Last year’s subsystem could handle basic sensor processing, whereas the new one can do voice, audio, and facial recognition, all of which take substantially more horsepower. They’ve also added support for the EM6 for customers that want caching for higher performance.
  • They’ve added IEEE 754 floating-point math support, in case you’ve got floating-point code (for instance, code generated by MATLAB).
  • More peripherals. In addition to the I2C, SPI, and ADC interfaces that they had last year for connecting to sensors, they’ve addressed the actuator side of things by including PWM, UART, and DAC support. They also support a tightly coupled AMBA Advanced Peripheral Bus (APB) interface.
  • A big part of this whole actuator focus is motor control. So they’ve added a library of software functions for motor control. This includes “‘Clarke & Park’ transforms (and inverse versions), vector modulation, PMSM decoupling and DC bus ripple elimination routines.” I honestly have no idea what those are; in this moment, I’m simply your humble (humiliated?) reporter.
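For the curious (and to partly redeem myself): the Clarke and Park transforms convert a motor’s three phase currents into a two-axis frame that rotates with the rotor, which is the foundation of the field-oriented control these routines support. The sketch below is purely illustrative – my own example, not Synopsys’s library code:

```python
import math

# Illustrative sketch of the Clarke and Park transforms (amplitude-invariant
# form); not Synopsys's motor-control library.

def clarke(ia, ib, ic):
    """Clarke: three phase currents -> stationary two-axis (alpha, beta) frame."""
    alpha = (2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic)
    beta = (ib - ic) / math.sqrt(3.0)
    return alpha, beta

def park(alpha, beta, theta):
    """Park: stationary (alpha, beta) -> rotating (d, q) frame at rotor angle theta."""
    d = alpha * math.cos(theta) + beta * math.sin(theta)
    q = -alpha * math.sin(theta) + beta * math.cos(theta)
    return d, q

# A balanced three-phase current set collapses to (nearly) constant d/q values,
# which is what makes the downstream control loop simple.
theta = 0.4
ia = math.cos(theta)
ib = math.cos(theta - 2.0 * math.pi / 3.0)
ic = math.cos(theta + 2.0 * math.pi / 3.0)
print(park(*clarke(ia, ib, ic), theta))   # ~ (1.0, 0.0)
```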

sensor_and_control_subsystem_block_diagram.jpg

Image courtesy Synopsys

You can find out more in their announcement.

 

Dec 16, 2014

The Power of the Pen

posted by Bryon Moyer

This year’s Touch Gesture Motion (TGM) conference had a surprising focus on pens. Which I like, actually. While most of my professional time is spent with a keyboard, I still take notes manually on paper. Partly it’s because, in an interview situation, I feel like it’s rude and impersonal to be typing away as if I’m some bureaucrat entering data into a form.

But, even though I’m a fast typist (on a real keyboard, not a virtual one), I can write even faster (depending on how much legibility I’m willing to sacrifice). So it seems more efficient to write. But I write in a book, and then I need to keep track of which pages have notes for which topics when I come back to turn them into some kind of piece. I’d love to be able to write on a digitizing surface and then simply save note files.

N-Trig outlined other reasons why handwriting is useful:

  • Annotating other work
  • Expressing things other than text: art, graphs (in lab notebooks, e.g.)
  • Math formulas, for instance… can you imagine trying to futz with, say, the Microsoft Word equation capabilities – which are nice when you want clean typeset formulas – when frantically taking notes from a quantum mechanics lecture? It would never work. You need to be able to scribble them.

Writing notes digitally like that, unfortunately, isn’t practical today, and some of the challenges that remain were highlighted by N-Trig at the TGM conference.

This is a topic you might expect to see at such a conference. What I wasn’t expecting was information on studies that link handwriting to brain function. It appears that the process of writing activates various parts of the brain that help solidify information. Studies suggest, among other things, that:

  • “Students without consistent exposure to handwriting are more likely to have trouble retrieving letters from memory; spelling accurately; extracting meaning from text or lectures; and interpreting the context of words and phrases.”
  • “Elementary-age students who wrote compositions by hand rather than by keyboarding, one researcher found, wrote faster, wrote longer pieces, and expressed more ideas.”

Source: N-Trig presentation. He listed numerous sources, although it was tough to ascribe specific sources to specific points.

Now… this is from a company that sells digital pens, so the information serves them well, but it didn’t feel like simply self-serving research.

What he also confirmed is that pens still have some work to do to provide the kind of writing experience that we’re used to with real pens and pencils on paper. And that appears to be the gold standard. It’s honestly not clear to me if that’s an arbitrary standard that comes from what we’ve gotten used to or if there’s something more fundamental.

But there are a number of dimensions that have to be optimized for it all to work, including:

  • You’ve got to be able to hold a pen at various angles and have it work properly.
  • You want just the right amount of friction – and what’s “right” depends on what kind of pen you think you’re working with – felt tip, ballpoint, rollerball, etc.
  • The digitizer response has to be fast – latency screws up eye-hand coordination.
  • The digitizer has to be precise so as to capture all of the correct data points.
  • The pen tip has to be long-wearing under continual usage.
  • The palm has to be rejected accurately. Trying to write while keeping the palm up simply doesn’t work – it’s not how we’ve learned to write, and the large-motor “noise” swamps the fine motor control of the fingers; you end up with writing that looks way worse than mine (which is hard to imagine, believe me).

They’re studying pen performance by measuring speed and accuracy – and even comparing pen-and-paper to electronic pens. One way of doing that was to use copy paper under an electronic pen/pad and then compare the digital result against the actual imprinted writing on the copy paper. An example is shown below, and you can see where the two align and where they miss.

Pen_figure.png

Image courtesy N-Trig

As the picture suggests, while there’s a fair bit of green, indicating agreement between the digital and copy versions, there’s still a lot of blue and gray. The latter are near each other, so it’s mostly not a total miss, but ideally you want all green. And they’re not quite there yet.
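As a very rough idea of how such a comparison can be scored (this is my own toy illustration, not N-Trig’s methodology, and the distance thresholds are made up): digitize both strokes into point lists, then bucket each reference point by how far it sits from the nearest digitized point – close points agree (the green), nearby-but-off points are small misses (the blue and gray), and anything farther out is a real miss.

```python
# Toy illustration of scoring agreement between a digitized stroke and a
# reference (copy-paper) stroke; not N-Trig's actual methodology.
import math

def classify_points(reference, digitized, agree_mm=0.3, near_mm=1.0):
    """For each reference point, find the nearest digitized point and bucket
    it as 'agree', 'near miss', or 'miss' based on distance thresholds (mm)."""
    counts = {"agree": 0, "near miss": 0, "miss": 0}
    for rx, ry in reference:
        nearest = min(math.hypot(rx - dx, ry - dy) for dx, dy in digitized)
        if nearest <= agree_mm:
            counts["agree"] += 1
        elif nearest <= near_mm:
            counts["near miss"] += 1
        else:
            counts["miss"] += 1
    return counts

# Made-up sample strokes (coordinates in mm): the digitized stroke is the
# reference shifted slightly, as if the digitizer were a bit off.
reference = [(i * 0.5, math.sin(i * 0.5)) for i in range(20)]
digitized = [(i * 0.5 + 0.1, math.sin(i * 0.5) - 0.2) for i in range(20)]
print(classify_points(reference, digitized))
```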

And that’s just to get the pixels all in the right place. Once that’s in place, then handwriting recognition would be a nice bonus. Then again, if I can’t always recognize my own writing, I can’t reasonably expect a computer to.

So, in summary, the bad news: there’s still work to be done. The good news: it would appear that N-Trig and others are taking this seriously. We may yet get usable electronic pens that rival the real thing. Which would make my life easier, but, more importantly, would allow us to proceed with going digital in school and out without giving up handwriting.
