Feb 19, 2013

Multicore Best Practices

posted by Bryon Moyer

The biggest thing multicore has had going against it is the perception that it’s hard. OK, that plus the fact that it is, in fact, hard. Or it can be, although familiarity and tools are improving that. Nevertheless, it’s been a slow slog as multicore has gradually made its way into the embedded consciousness.

Part of the problem is that there is no one right answer for multicore anything. No one right architecture, no one right core, no one right set of tools, no one right way to write software. It all depends on what you’re trying to do. So it’s impossible simply to say, “Here’s how you do it, now off you go.”

The alternative, as envisioned by the Multicore Association, has been to compile a set of best practices, assembled by early adopters, for the benefit of relative newbies. And, frankly, the not-so-newbies – there’s always something more to learn. (And perhaps even debate.) That compilation has just been announced: a snapshot of multicore dos and don’ts summarized in a mere 100-odd pages.

After some basic overviews, it deals with high-level design, then low-level design, followed by debug and performance tuning. As you might suspect, covering so many topics in so succinct a fashion makes this less a multicore primer and more a hand up once you’ve got your arms around multicore basics. In fact, it’s probably one of those things that’s best taken on after you’ve screwed up a project or two (hopefully as learning exercises, not as business disasters). OK, “screwed up” is possibly too strong; let’s say that some months of struggle with various non-obvious multicore issues will make this a rather more accessible document. And one that bears re-reading from time to time, since you’ll likely pick up more with each pass.

You can find your way to more information and the document itself via their announcement.

Feb 14, 2013

Navigation extremes

posted by Bryon Moyer

I have been exposed to two navigational extremes over the last month or so. These aren’t specifically competing approaches (although I suppose they could be), but rather represent navigation with a minimal set of sensors and with a full complement of assistance.

On the more minimal side, Movea put together a demo for CES that led me on a pedestrian voyage, courtesy of the guidance of a cell phone. The phone had 10 sensor axes (three axes each of accelerometer, gyroscope, and magnetometer, plus a pressure sensor). They had also mapped out the hotel they were in based on blueprints they had obtained. (That must have been a fun one for security to vet…)

The idea was that we’d go from near the entrance of the building to the elevator, up to the right floor (OK, the phone didn’t try to push elevator buttons…), and then continue on to the room. We used the phone as a guide or orienting device, holding it out in front as it showed us the way.

The sensor results and map mostly worked together to factor out errors, although there appeared to be a couple of “checkpoints” where the phone “viewed” a poster or image (I frankly don’t remember what the specific icon was). Such a checkpoint, if accurately placed in the map database, could zero out accumulated errors and give the sensors a fresh start.
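To make the checkpoint idea concrete, here’s a minimal sketch in C of how a landmark fix can cancel accumulated dead-reckoning drift. To be clear, this is my own illustration of the general principle, not Movea’s algorithm; the names (dead_reckon_step, checkpoint_reset) and the coordinates are invented.

    #include <stdio.h>

    /* Hypothetical 2D dead-reckoning state: position is estimated by
     * accumulating step vectors derived from the IMU. */
    typedef struct {
        double x, y;   /* estimated position, meters */
    } position_t;

    /* Accumulate one pedestrian step; small per-step errors build up. */
    static void dead_reckon_step(position_t *pos, double dx, double dy)
    {
        pos->x += dx;
        pos->y += dy;
    }

    /* On recognizing a known landmark (say, that poster in the map
     * database), snap the estimate to the landmark's surveyed location,
     * zeroing whatever drift has accumulated since the last fix. */
    static void checkpoint_reset(position_t *pos, const position_t *landmark)
    {
        pos->x = landmark->x;
        pos->y = landmark->y;
    }

    int main(void)
    {
        position_t pos = { 0.0, 0.0 };
        const position_t poster = { 12.0, 3.5 };  /* from the map database */

        for (int i = 0; i < 20; i++)
            dead_reckon_step(&pos, 0.61, 0.18);   /* steps with a small bias */

        printf("before fix: (%.2f, %.2f)\n", pos.x, pos.y);
        checkpoint_reset(&pos, &poster);          /* drift zeroed here */
        printf("after fix:  (%.2f, %.2f)\n", pos.x, pos.y);
        return 0;
    }

A checkpoint like this is just the strongest, most absolute form of the error-factoring the map is doing continuously along the route.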

If the TV had been on and properly set when we entered the room, then the phone would have automatically coupled with the TV to provide a welcome message or something.

The trip wasn’t without incident; the route was rife with magnetic anomalies (like inside the elevator), but, for an early demonstrator, it did get us through on this minimum of information.

The other extreme is a chip from CSR called SiRFstarV, which can work with a broad set of inputs to provide navigation. Its focus appears to be satellite: GPS and GLONASS as well as other GNSS systems; satellite augmentation (which appears to me to be a side system that broadcasts what I would call metadata alongside the satellite signals to improve the quality of the calculation); and “extended ephemeris,” the ability to download ephemeris (satellite orbit) data for dates as much as a month out.

But they also handle IMU and pressure sensor inputs as well as cellular and WiFi signals for triangulation, and they have a cloud-based CSR Positioning Center from which the device can obtain other information to assist in determining position.

The idea here is also to allow constant navigation, indoors and out, in open terrain and surrounded by tall buildings, relying on every possible source of data, with all of it implemented in an SoC.
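As a rough illustration of what relying on multiple position sources can look like, here’s a toy one-dimensional sketch in C using inverse-variance weighting, one standard way to blend independent fixes. This is my own illustrative example, not CSR’s disclosed method; the fix_t type and the numbers are invented.

    #include <stdio.h>

    /* A position fix along one axis, with a variance expressing how
     * much we trust it (smaller means more trusted). */
    typedef struct {
        double pos;      /* meters */
        double variance; /* meters squared */
    } fix_t;

    /* Blend two independent fixes by inverse-variance weighting. */
    static fix_t fuse(fix_t a, fix_t b)
    {
        double wa = 1.0 / a.variance;
        double wb = 1.0 / b.variance;
        fix_t out;
        out.pos = (wa * a.pos + wb * b.pos) / (wa + wb);
        out.variance = 1.0 / (wa + wb);  /* fused fix is tighter than either */
        return out;
    }

    int main(void)
    {
        fix_t gnss = { 100.0, 25.0 };  /* noisy among tall buildings */
        fix_t wifi = { 103.0,  9.0 };  /* tighter indoors */
        fix_t both = fuse(gnss, wifi);
        printf("fused: %.2f m (variance %.2f)\n", both.pos, both.variance);
        return 0;
    }

Even in this toy, the appeal of piling on sources shows up: each independent fix can only tighten the combined variance, never loosen it.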

Part of the reason you can’t directly compare these two examples as competitors is that the Movea demo was specifically about indoor navigation, where GNSS data simply doesn’t apply. It highlights the challenges of, and the progress in, exploiting and augmenting the IMUs so many of us already own.

Indoor and pedestrian navigation are getting their fair share of development effort these days, as numerous companies (certainly more than the two just mentioned) tune algorithms in different ways to optimize cost, power, and flexibility.

Another recent conversation further illustrated some of the nuances of IMU-based navigation; I’ll talk about that in a future post or two.

You can find out more about Movea on their site and about the SiRFstarV on the CSR site.

Feb 12, 2013

A New Verb for Hardware Engineers

posted by Bryon Moyer

Ever since malloc() (and its other-language counterparts), software engineers have had an extra verb that is foreign to hardware engineers: “destroy.”

Both software and hardware engineers are comfortable with creating things. Software programs create objects and abstract entities; hardware engineers create hardware using software-like notations in languages like Verilog. But that’s where the similarity ends. Software engineers eventually destroy that which they create (or their environment takes care of it for them… or else they get a memory leak). Hardware engineers do not destroy anything (unless they’re intentionally blowing a metal fuse or rupturing an oxide layer as part of an irreversible non-volatile memory-cell programming operation).
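For the software half of that asymmetry, here is the create/destroy pair in its most basic C form. The widget_t type is just an invented stand-in for whatever the program allocates.

    #include <stdlib.h>

    /* A hypothetical runtime object; any dynamically allocated
     * structure would do. */
    typedef struct {
        int id;
        double value;
    } widget_t;

    int main(void)
    {
        /* Create: the object comes into being at runtime. */
        widget_t *w = malloc(sizeof *w);
        if (w == NULL)
            return 1;
        w->id = 42;
        w->value = 3.14;

        /* Destroy: the object must eventually be released. Skip this
         * step and the memory leaks; nothing analogous happens in
         * hardware, where the gates described in Verilog never go away. */
        free(w);
        return 0;
    }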

So “destroy” is not in the hardware engineer’s vocabulary. (Except in those dark recesses perambulated only on those long weekends of work when you just can’t solve that one problem…)

This is mostly not a problem, since software and hardware engineers inhabit different worlds with different rules and different expectations. But there is a place where they come together, creating some confusion for the hardware engineer: interactive debugging during verification.

SystemVerilog consists of much more than some synthesizable set of constructs. It is rife with classes from which arise objects, and objects can come and go. This is obvious to a software engineer, but for a hardware engineer in the middle of an interactive debug session, it can be the height of frustration: “I know I saw it, it was RIGHT THERE! And now it’s gone! What the…”

This was pointed out by Cadence when we were discussing the recent upgrades to their Incisive platform. The verification engineers who set up the testbenches are generally conversant in the concepts of both hardware and software, but the designer doing debug may get tripped up by this. Their point: hardware engineers need to remember that the testbench environment isn’t static in the way that the actual design is; they must incorporate “destroy” into their vocabulary.
