Mar 27, 2014

Cleaning Up the Verification Shop

posted by Bryon Moyer

It’s one thing when different tools from different divisions of the same company don’t talk seamlessly to each other. That’s generally considered poor form. While it used to be common, EDA folks have cleaned that up a lot over the years.

It’s generally better accepted when tools from one company don’t necessarily integrate well with tools from another company. If there are good strategic reasons, it will happen. If not, then, as a designer or EDA manager, you’re on your own for patching the tools together.

But what about when, as a company, you go on a multi-year shopping spree? Now tools that used to be made by different companies have magically transformed into tools from different – or even combined – divisions within the company. So what might have looked tolerable amongst multiple companies starts to look messy within a single company.

Of course, we know who our intrepid EDA shopper is: They of the Endlessly Open Purse, Synopsys. They recently announced that they are bringing their various verification technologies together under the unified moniker “Verification Compiler.” This unites, to a degree,

  • Static and formal analysis
  • Simulation
  • Coverage management/analysis
  • Verification IP
  • Debug

The nature of how this comes together seems to take a couple of forms, with more yet to come. To a certain extent, this is a packaging/licensing thing, where what used to be separate products can now be purchased and managed together as a bundle.

From an outside user’s view, however, you will still run the tools as you always did – this isn’t an integration into a seamless, consistent, unified GUI – although that’s the part that’s likely to come in the future. For now, use models will remain similar.

But it’s not only a marketing thing. Underneath, these tools have had engines upgraded, and, in particular, they have been made to talk much more efficiently to each other using native integration rather than slower, less efficient (but more portable) approaches like PLI. The entire suite of tools can be scripted into a unified flow, rather than the current situation where each tool has a distinct flow.

The big win from these nuts-and-bolts improvements is performance. They post some pretty impressive gains – summarizing them as 5 times faster (yielding 3 times the productivity). One formal project run by an unnamed customer ran 21 times faster. Capacity has also improved – in some cases by as much as 4 times.

One important message in the face of this inter-tool bonding: Verdi is remaining open. You may recall that one of the items in Synopsys’s shopping cart was SpringSoft, and the Verdi debug tool has a popular open interface and ecosystem. Even though they’re tightening their internal integration with Verdi, they’re not closing off access to outsiders.

In case you’re bringing out your checkbook right now, heads-up: unless you are amongst the anointed, you probably can’t get it yet. This is targeted for end-of-year broad availability; for now, it’s being wrung out by “limited customers.” I’ll leave it to you and Synopsys to decide whether you’re one of them.

And you can find out more about this in their release.

Mar 26, 2014

Ban Power Consumption

posted by Bryon Moyer

“How much power does it consume?”

This has been a key question ever since I started work as a product engineer many years ago. Heck, back then we even published power consumption numbers, although we used ICC as a proxy – we didn’t actually publish power, but you could easily do the multiplication with VCC to get it. (Yes, this was bipolar.)
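(For anyone who never did that arithmetic, the whole trick is one multiply. A trivial Python sketch, with made-up datasheet-style numbers rather than anything from a real part:)

    # Power from supply-current and supply-voltage numbers; values are purely illustrative.
    i_cc = 0.045              # supply current in amps (45 mA)
    v_cc = 5.0                # supply voltage in volts
    power_w = i_cc * v_cc     # P = I x V
    print(f"{power_w:.3f} W") # 0.225 W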

These days, the concept is even more important, what with all the focus on battery-powered whats-itses. But in deconstructing a lot of what’s going on now, there’s an interesting nuance coming to the fore: energy vs. power.

  • Energy is a “thing.” It’s something physical that has a measurable quantity.
  • Power, by contrast, is not a thing; it represents the rate of flow of a thing, namely energy.

This is more than just an academic difference. Batteries and fuel cells can store more energy than a supercapacitor can, but they release that energy at a slower rate than the supercap. So one is capable of higher energy capacity; the other of higher power. The distinction actually matters.

So I find myself tripping more and more over the familiar phrase, “power consumption.” Power isn’t a “thing,” so it can’t be consumed. Energy is a “thing,” and it can be consumed.

So “power consumption” makes no conceptual sense; “energy consumption” makes a ton of sense.

An electronic device consumes energy, but, from a practical standpoint, you can’t know the energy consumed until you know how long you’ve run the device. And you have to be able to serve up the energy to the device from your energy store at the rate the device expects, or else you’ll starve it. So “power” is ultimately involved as a critical device requirement; energy consumption not so much.
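To make the distinction concrete, here’s a minimal Python sketch with made-up numbers: the energy store is a quantity (watt-hours or joules), the power requirement is a rate (watts), and runtime is what ties them together.

    # Illustrative numbers only: energy capacity vs. power requirement.
    battery_wh   = 10.0   # energy stored in a small battery, watt-hours
    device_watts = 2.5    # power the device requires while running

    runtime_hours = battery_wh / device_watts   # 4.0 hours of operation
    energy_joules = battery_wh * 3600.0         # 36,000 J of stored energy
    print(runtime_hours, energy_joules)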

So if “power consumption” is off the table, “power requirement” seems a suitable replacement.

I will therefore labor to use either “energy consumption” or “power requirement” henceforth.

And no, I don’t expect the world to follow. (One of my many quixotic attempts to apply logic to language… like the perennial abuse of the plural of “die” and the silly overuse of @...)

Mar 25, 2014

Wide-Ranging Approaches to Ranging

posted by Bryon Moyer

As I’ve mentioned before, there are constants at ISSCC (e.g., sessions on image processing and sensors) and then there are the circuits-of-the-month. Ranging seemed to be one of the latter, showing up in both image-processing and sensor sessions. So I thought I’d summarize some of the widely differing approaches to solving issues related to ranging for a variety of applications.

For those of you following along in the proceedings, these come from sessions 7 and 12.

Session 7.4 (Shizuoka University, Brookman Technology) offered a background-cancelling pixel that can determine the distance of an object using time-of-flight (ToF). As you may recall, ToF is more or less like light radar (LIDAR?), where the arrival time of the reflection of a known emitted light pulse gives you the distance.

There are four lateral gates in this pixel, directing charge from impinging light into one of three floating diffusion areas (the fourth gate simply discharges the pixel).

Background cancellation has historically been done by comparing adjacent frames, but quick motion can create strange artifacts. So at the beginning of the capture cycle for this work, the background is measured and stored in the first diffusion for subtraction. Then the emitter turns on and collection moves to the second diffusion. The reflection may also return during that time; when the emitter shuts off, then collection changes to the third diffusion. The difference between those two charge amounts gives the distance.
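The paper has the pixel-level details; as a rough guide to the math, here’s a hedged Python sketch of the textbook two-tap pulsed-ToF calculation with background subtraction. The function name, charge values, and pulse width are mine for illustration, not the authors’.

    C = 3.0e8  # speed of light, m/s

    def tof_distance(q2, q3, q_bg, pulse_ns):
        """q2: charge collected while the emitter is on (second diffusion)
           q3: charge collected just after the emitter turns off (third diffusion)
           q_bg: background charge measured beforehand (first diffusion)
           pulse_ns: emitted light pulse width in nanoseconds"""
        s2 = max(q2 - q_bg, 0.0)  # background-cancelled taps
        s3 = max(q3 - q_bg, 0.0)
        if s2 + s3 == 0:
            return None           # no reflection detected
        delay_s = (pulse_ns * 1e-9) * s3 / (s2 + s3)  # fraction of the pulse landing in the late window
        return C * delay_s / 2    # halve the round trip

    print(tof_distance(q2=900.0, q3=300.0, q_bg=100.0, pulse_ns=30.0))  # ~0.9 m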

Session 7.5 (Shizuoka University) addresses the challenge of doing high-precision ranging for the purposes of, say, modeling an object. The problem is that, to get higher resolution, you ordinarily need to separate the light source from the imager by a wide angle. That’s hard to do in a small device. Such devices typically have resolution in the few-cm range, which isn’t much use for object modeling; this work achieved 0.3-mm resolution.

The keys were three:

  • They use an extremely short (< 1 ns) light pulse.
  • They use a drain-only modulator (DOM) – by eliminating the lateral pass gate, they get a faster response. The pixel itself can only accumulate or drain.
  • They capture all of the pixels at once, but the tight timing brings another issue: skew between pixels is no longer noise, but can screw up the measurement. So they implemented a column deskew circuit and procedure.

Microsoft weighed in in Session 7.6 (they couldn’t help putting a flashy brand on their opening slide – something you generally don’t see at ISSCC, but I guess the marketing guys need something to prove their value, even if it meant being tasteless). This was an improved Kinect ranging system where the challenge is in accommodating both distant, low-reflectivity (i.e., low-light) and close-in, high-reflectivity (i.e., high-light) objects. Pretty much your classic dynamic-range issue complicated by the distance thing.

They decoupled the collection of charge in a floating diffusion from the “A or B” assignment that will be used to calculate the distance. They use A and B rows as inputs to a differential cell. A high-frequency clock alternates A and B activation during collection; this means that the assignment to A or B, determined by the clock, happens simultaneously with charge collection. The transfer to a floating diffusion can then happen afterwards, at a leisurely pace (to use their word).

They also implemented a common-mode reset to neutralize a bright ambient. And each pixel can set its gain and shutter time; this is how they accommodate the wide dynamic range.
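The session summary doesn’t spell out the distance math, but the usual continuous-wave demodulation that sits behind A/B taps looks roughly like the Python sketch below. The four-phase scheme and the modulation frequency here are my assumptions for illustration, not details from the paper.

    import math

    C = 3.0e8  # speed of light, m/s

    def cw_tof_distance(d0, d90, d180, d270, f_mod_hz):
        """d0..d270: differential (A - B) readings at 0/90/180/270-degree clock phases."""
        i = d0 - d180          # in-phase component; common-mode ambient cancels out
        q = d90 - d270         # quadrature component
        phase = math.atan2(q, i) % (2 * math.pi)
        return C * phase / (4 * math.pi * f_mod_hz)

    print(cw_tof_distance(0.8, 0.3, -0.8, -0.3, 80e6))  # ~0.11 m for these made-up readings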

Meanwhile, over in Session 12, folks are using other sensors for ranging. In Session 12.1 (UC Berkeley, UC Davis, Chirp Microsystems), they built a pMUT (piezoelectric micro-machined ultrasonic transducer) array to enable gesture recognition. Think of it as phased-array radar on a minuscule scale. They process the received signals by phase-shifting – basically, beamforming – in an attached FPGA.
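For a sense of what that FPGA is doing, here’s a minimal delay-and-sum beamforming sketch in Python/NumPy. The uniform 1-D array, element pitch, and sample rate are assumptions for illustration, not the Chirp Microsystems design.

    import numpy as np

    def delay_and_sum(rx, pitch_m, angle_deg, fs_hz, c=343.0):
        """rx: (n_elements, n_samples) received waveforms from the array
           pitch_m: spacing between elements; angle_deg: direction to listen toward
           fs_hz: sample rate; c: speed of sound in air, m/s"""
        n_el, n_samp = rx.shape
        out = np.zeros(n_samp)
        for k in range(n_el):
            # delay each element so a wavefront arriving from angle_deg adds coherently
            tau = k * pitch_m * np.sin(np.radians(angle_deg)) / c
            out += np.roll(rx[k], -int(round(tau * fs_hz)))
        return out / n_el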

Within the array, some pMUTs (think of them as ultrasonic pixels, sort of) are actuated to send a signal, others listen for the reflection, and some do both. Which elements do which can be chosen to optimize for a given application.

They also want to sample at 16x the resonant frequency of the sensors to lower in-band quantization noise and simplify the cap sizing. (No relation to an unfortunate boating incident.) But that means they need to know the actual, not approximate, resonant frequency for a given device – natural variation has to be accommodated, as does response to changing environmental conditions like temperature.

To do this, they have a calibration step where they actuate the sensors and measure their ring-down, using the detected frequency to set the drive frequency of the actuator. This calibration isn’t done with each capture; it can be done once per second or minute, as conditions for a given application warrant.
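In software terms, the calibration amounts to something like the Python sketch below: excite the transducer, digitize the ring-down, and estimate the resonant frequency from the decay. The zero-crossing method and the idea of doing this off-chip are my simplifications; the paper does it in the device itself.

    import numpy as np

    def ringdown_frequency(samples, fs_hz):
        """samples: digitized ring-down waveform; fs_hz: sample rate in Hz."""
        crossings = np.where(np.diff(np.sign(samples)) != 0)[0]  # zero-crossing indices
        if len(crossings) < 2:
            return None
        half_period_s = np.mean(np.diff(crossings)) / fs_hz      # crossings sit half a period apart
        return 1.0 / (2.0 * half_period_s)

    # The detected frequency then sets the drive frequency (and the 16x sampling clock).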

As always, the details on these sessions are in the proceedings.
