Oct 23, 2014

Elliptic Labs’ 3rd Gesture Dimension

posted by Bryon Moyer

Some time back we briefly introduced Elliptic Labs’ ultrasound-based gesture technology. They’ve added a new… layer to it, shall we say, so we’ll dig in a bit deeper here.

This technology is partially predicated on the fact that Knowles microphones, which are currently dominant, can sense part of the ultrasonic range. That means you don’t necessarily need a separate microphone to include an ultrasound gesture system (good for the BOM). But you do need to add ultrasound transmitters, which emit the ranging signal. They do their signal processing on a DSP hub, not on the application processor (AP) – important, since this is an always-on technology.

With that in place, they’ve had more or less a standard gesture technology, just based on a different physical phenomenon. They see particular advantage for operation in low light (where a camera may be blind), full sun (which can also blind a camera), and where power is an issue: they claim to use 1/100th the power of a camera-based gesture system. So… wearables, anything always-on. As long as you don’t need the resolution of a camera (which, apparently, they don’t for the way they do gestures), this competes with light-based approaches.


What they’ve just announced is the addition of a 3rd dimension: what they’re calling multi-layer interaction (MLI). It’s not just the gesture you perform, but how far away from the screen you perform it. Or what angle you are from the screen.

For instance, starting from far away, with your hand approaching, at one point it would wake up. Come in further and it will go to the calendar; further still takes you to messages; and finally on to email. Of course, Elliptic Labs doesn’t define the semantics of the gestures and positions; an equipment maker or application writer would do that.

And it strikes me that, while this adds – literally – a new dimension to the interface, the semantic architecture will be critical so that users don't have to mentally map out the 3D space in front of their screen to remember where to go for what. There will have to be a natural progression so that it will be "obvious." For example, if you've gotten to the point of email, then perhaps it will show the list of emails, you can raise and lower your hand to scroll, and then go in deeper to open a selected email. Such a progression would be intuitive (although I use that word advisedly).
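Just to make the layering concrete, here's a rough sketch of the kind of distance-to-layer mapping an application might define. Everything in it – the thresholds, the layer names, even the idea of getting a simple distance reading – is my own illustration, not Elliptic Labs' API.

```python
# Illustrative only: map a measured hand distance to a UI "layer."
# The thresholds and layer names are invented; how you'd actually obtain a
# distance reading depends on the vendor's SDK, which isn't shown here.

LAYERS = [
    (0.60, "wake"),      # hand first detected far out: just wake the screen
    (0.40, "calendar"),  # closer in: show the calendar
    (0.25, "messages"),  # closer still: show messages
    (0.00, "email"),     # closest layer: open email
]

def layer_for_distance(distance_m: float) -> str:
    """Return the UI layer for a hand distance_m meters from the screen."""
    for threshold, layer in LAYERS:
        if distance_m >= threshold:
            return layer
    return LAYERS[-1][1]

print(layer_for_distance(0.50))  # calendar
print(layer_for_distance(0.10))  # email
```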

A bad design might force a user to memorize that 1 ft out at 30 degrees left means email and at 30 degrees right means calendar and you open Excel with 90 degrees (straight out) 2 ft away and… and… A random assignment of what’s where that has to be memorized would seem to be an unfortunate design. (And, like all gesture technologies, care has to be taken to avoid major oopses…)

Note that they don’t specifically detect a hand (as opposed to some other object). It’s whatever’s out there that it registers. You could be holding your coffee cup; it would work. You could be using your toes or a baseball bat; it would work.

You can also turn it off with a simple gesture so that, for example, if you’re on your phone gesticulating wildly, you don’t inadvertently do something regrettable in the heat of phone passion. Or in case you simply find it annoying.

You can find out more in their announcement.

 

(Image courtesy Elliptic Labs)

Oct 21, 2014

What Does ConnectOne’s “G2” Mean?

posted by Bryon Moyer

ConnectOne makes WiFi modules. And they recently announced a "G2" version. Being new to the details of these modules, I got a bit confused by the number of products bearing the "G2" label as well as the modes available – were they all available in one module, or were there different modules for different modes? A conversation with GM and Sales VP Erez Lev helped put things in order.

As it turns out, you might say that ConnectOne sells one WiFi module in multiple form factors. Of the different modules I saw, it was the form factor – pins vs. board-to-board vs. SMT; internal vs. external antenna – that differed, not the functionality.

There are multiple modes that these modules can take on, and these are set up using software commands that can be executed in real time. So this isn't just a design-time configuration; it can be changed after deployment in the field (there's a rough sketch of what that might look like after the list below).

The modes available are:

- Embedded router
- Embedded access point
- LAN to WiFi bridge
- Serial to LAN/WiFi bridge
- Full internet controller
- PPP emulator
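As a sketch of what run-time mode selection could look like from a host processor, here's a minimal example that pushes a command string over a serial link. The command text is a placeholder – ConnectOne's actual command syntax lives in their documentation and isn't reproduced here; only the pyserial usage is real.

```python
# Hypothetical sketch: selecting a module mode at run time over a serial link.
# "SET_MODE=SERIAL_TO_WIFI_BRIDGE" is a placeholder, not ConnectOne's syntax.
import serial  # pyserial

def set_module_mode(port: str, mode_command: str) -> str:
    """Send a mode-selection command and return the module's one-line reply."""
    with serial.Serial(port, baudrate=115200, timeout=2) as link:
        link.write((mode_command + "\r\n").encode("ascii"))
        return link.readline().decode("ascii", errors="replace").strip()

print(set_module_mode("/dev/ttyUSB0", "SET_MODE=SERIAL_TO_WIFI_BRIDGE"))
```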

But what about this “G2” thing? Their first-generation modules were based on Marvell’s 8686 chip. And that chip has been end-of-lifed. Or, perhaps better said, it’s been 86ed. So in deciding where to go next, they settled on a Broadcom baseband chip – something they said gave Broadcom a boost in an area they’re trying to penetrate.

[Image: G2N2 module, top and bottom]

But the challenge was in making this change transparent to users: existing software has to invoke the new chip just as it did the old one, and that took a fair bit of work. They say they were successful, however, so that upgrading from the older to the newer version takes no effort; it just plugs in.

So “G2” reflects this move to the Broadcom chip as their 2nd-generation module family. From a feature standpoint, the big thing it gets them is 802.11n support. But they also have a number of unexposed features in their controller. Next year they’ll be announcing a “G3” version, with higher performance and… well, he didn’t share all of what’s coming. But G3 will have all of the same pinouts, form factors, APIs, etc. for a seamless upgrade from G2 (or G1, for that matter).

You can get more detail in their announcement.

 

Image courtesy ConnectOne

Oct 16, 2014

IPSO Alliance Provides IoT Objects

posted by Bryon Moyer

Some time back we took a look at Internet-of-Things (IoT) communications in an attempt to digest some of the vague marketing messages from various companies participating in that business. I identified three layers: formal protocols overlaid by abstract messaging overlaid by business objects.

The “formal protocols” layer is typically referred to generically as the “transport” (even though it may or may not contain formal OSI transport-layer functionality). When IoT comms folks talk about being standards-based, this is typically where most of the standards lie, whether it’s TCP/IP or Zigbee or whatever.

Above that is the generic messaging layer, which is simply a way to encode information for shipment elsewhere. There are no semantics, and the receiving entity needs to understand how the message was built in order to unpack it properly; the contents themselves aren't standardized. We identified Xively as an example of this, but other standards work at this level as well – MQTT and DDS, for instance. Note that, at this level, there may be some prescribed format, but there are no prescribed semantics for specific types of endpoints.
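As a quick illustration of how little the messaging layer itself defines, here's a minimal MQTT publish using the paho-mqtt package's helper. The broker hostname, topic, and payload encoding are all things this particular application made up; MQTT just moves the bytes.

```python
# Illustration: the messaging layer carries opaque, application-defined bytes.
# Broker hostname, topic, and payload encoding are assumptions for this sketch.
import paho.mqtt.publish as publish

publish.single(
    topic="building/floor1/sensors",   # topic name chosen by the application
    payload=b"\x01\x17",               # meaning known only to sender/receiver
    hostname="broker.example.com",
)
```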

Such semantics belong in the layer above, where we find business objects. Just to clarify the difference here, let’s take 3 examples:

- In one case, a generic message protocol might be leveraged to carry, say, instructions to a thermostat. Let's say there are a couple of header bytes and then a message field. The designer could use the first byte of the message field to identify that this is going to a thermostat, the second byte could identify which thermostat, and the following bytes would carry the instruction and any data (for instance, "set temperature to 72"). This structure has been defined for this system only, and both ends of the system need to know what the various bytes signify in order to communicate (there's a code sketch of this layout after the list).

- In another case, more like DDS, there may already be a provision for generic "topics." So, in this case, the designer could encode instructions just as above, but rather than having to build in a field for "thermostats," he or she could simply use a thermostat topic to which interested subscribers could subscribe. The specific thermostat instructions would still be custom, but some of the infrastructure for getting the messages to interested parties is built into the protocol.

- Those prior two fall short of containing business-object semantics because the thermostat-specific instructions are custom. The third example would, by contrast, include a thermostat object, and that object would have a pre-defined API. You wouldn't "invent" codes for "set temp," "turn on," "turn off," etc.; they would be part of the protocol. The benefit here is that any system could talk to a thermostat from any vendor supporting the protocol; there's no vendor lock-in (which may not be viewed as a benefit by some vendors).
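Here's the promised sketch of that first, roll-your-own layout. Every field size and code value below is invented for illustration; the whole point is that both ends have to agree on all of it out of band.

```python
# Sketch of a purely application-defined message layout (no shared semantics).
# Header bytes, device codes, and command codes are all invented here.
import struct

DEVICE_THERMOSTAT = 0x01
CMD_SET_TEMPERATURE = 0x10

def build_thermostat_message(thermostat_id: int, temperature_f: int) -> bytes:
    """Two header bytes, then device type, device ID, command, and one data byte."""
    header = b"\xAA\x55"  # made-up header/sync bytes
    body = struct.pack(
        "BBBB",
        DEVICE_THERMOSTAT,    # first message byte: this is for a thermostat
        thermostat_id,        # second byte: which thermostat
        CMD_SET_TEMPERATURE,  # the instruction
        temperature_f,        # the data: "set temperature to 72"
    )
    return header + body

print(build_thermostat_message(thermostat_id=2, temperature_f=72).hex())
# -> aa5501021048
```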

The IPSO Alliance has put together a "starter pack" of IoT objects. They've done so given that their main objective, the proliferation of IP, now extends to constrained-resource Things via IPv6 and 6LoWPAN. The reference implementation leverages the Lightweight M2M (LWM2M) protocol, designed for device management and services, which is itself based on the new CoAP protocol; CoAP provides messaging for low-bandwidth, low-power devices with constrained resources.

That said, the objects can be implemented over other protocols as well. There’s nothing about them that constrains them to IP-based transport.

[Image: IPSO object drawing]

They’ve created 18 different objects. Some of them are rather generic:

  • Digital input
  • Digital output
  • Analog input
  • Analog output
  • Generic sensor
  • Power measurement
  • Actuation
  • Set point
  • Load control

So, for instance, while there’s not a specific thermostat object, the “actuation” object allows for turn on/off, and the “set point” object allows for setting a value, like the temperature.
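To see how those two generic objects could stand in for a thermostat, here's a minimal sketch of that pairing. The dictionary representation and the helper function are mine; only the object and resource names echo the IPSO list above, and they're simplified at that.

```python
# Minimal sketch: a "thermostat" built from generic IPSO-style objects.
# The data structure and resource names are illustrative simplifications.

thermostat = {
    "actuation": {"on_off": False},                        # turn on / turn off
    "set_point": {"set_point_value": 72.0, "units": "F"},  # target temperature
}

def set_temperature(device: dict, value: float) -> None:
    """Turn the device on and update its set point."""
    device["actuation"]["on_off"] = True
    device["set_point"]["set_point_value"] = value

set_temperature(thermostat, 68.0)
print(thermostat)
```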

Then there are some specific objects:

  • Illuminance sensor
  • Presence sensor
  • Temperature sensor
  • Humidity sensor
  • Light control
  • Power control
  • Accelerometer
  • Magnetometer
  • Barometer

Each object has defined “resources.” For example, the Illuminance sensor object has the following resources:

  • Sensor value
  • Units
  • Min measured value (since last reset)
  • Max measured value (since last reset)
  • Min range value
  • Max range value
  • Reset min/max measured values

Each resource has its own ID. Names and IDs are registered through the Name Authority of the Open Mobile Alliance (the group that defined LWM2M).
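To picture how an object and its resources get addressed, here's a small sketch using LWM2M-style /object/instance/resource paths. The numeric IDs below are placeholders; the real ones come from the registry just mentioned.

```python
# Sketch of addressing the illuminance sensor's resources via
# /<object id>/<instance>/<resource id> paths. All numeric IDs are placeholders;
# the registered values live with the Open Mobile Alliance Name Authority.

ILLUMINANCE_OBJECT_ID = 9999   # placeholder object ID

RESOURCES = {
    1: "Sensor value",
    2: "Units",
    3: "Min measured value (since last reset)",
    4: "Max measured value (since last reset)",
    5: "Min range value",
    6: "Max range value",
    7: "Reset min/max measured values",
}

def resource_path(instance: int, resource_id: int) -> str:
    """Build the path used to read or execute a particular resource."""
    return f"/{ILLUMINANCE_OBJECT_ID}/{instance}/{resource_id}"

# Address the "Sensor value" resource on the first sensor instance.
print(resource_path(instance=0, resource_id=1))   # -> /9999/0/1
```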

You can read more about the announcement here, and the “starter pack” guidelines are available here.
