ReadyIP Announcement Has Bigger Implications
While the word “ecosystem” is happily bandied about by major FPGA vendors, history would indicate that FPGA companies are less than perfect participants in the care and feeding of the “ecosystems” that support their products. The turmoil associated with the love/hate, competitor/partner, customer/supplier relationships between FPGA companies and the others providing various products and services to the FPGA community is well documented.
Commercial EDA companies are a perfect case in point. While trying to make a business of creating and selling design tools to FPGA designers, they must cooperate closely with FPGA companies in the creation of their tools and supporting libraries -- and then they have to compete with those same FPGA companies, who provide rival tools directly to their customers at virtually no cost.
The Multicore Expo Bulks Up
A couple of years ago a small, ragtag conference took place in Santa Clara just before the relative behemoth that is the Embedded Systems Conference. “Ragtag” might be a bit unfair -- it seemed that way only when compared with the much larger and better-funded conferences; perhaps “scrappy” is a better characterization. This was just a start, the first edition of the Multicore Expo, and at the time, many in multicore seemed to be grasping for relevance. The participants were all sure that multicore’s future was guaranteed, but there was no swagger in the strut. Would multicore go mainstream this year? Next year?
The 2008 edition just finished, and one step into the presentation room provided a very different feel. According to organizer Markus Levy, this year’s conference was about double the size of last year’s. And the atmosphere was much more confident; this is the real deal. They had to add chairs to some of the conference rooms to accommodate the number of people in attendance.
The Multicore Expo, as the name suggests, focuses exclusively on multicore issues. In particular, embedded multicore is the central topic; there’s not much interest in discussing how to get Microsoft Excel to run faster on your quad-core desktop box. While multicore has historically been used in embedded applications more than elsewhere, those have typically been very specialized applications using very specialized processors, with a small cadre of very specialized programmers who knew how to wring out performance using some very complex, specialized programming models. Now that you pretty much can’t buy a sophisticated microprocessor consisting of a single core anymore, however, everyone is being forced into multicore.
A Quick Look at isQED
Nestled amongst the big noisy conventions like CES, ISSCC, and DAC can be found some more modest, highly focused conferences. These shows may cast a smaller shadow, but they may also benefit from the lack of attendant hoopla, since marketing pays less attention and engineers can focus on the business at hand. One such show that just took place was isQED, the International Symposium on Quality Electronic Design. Now in its ninth year, isQED focuses on the interactions between the design, test, quality, and manufacturing disciplines in the effort to improve such aspects as yield, quality, and robustness.
The technical sessions were dominated by university presentations and were highly focused. Sharing time with these were a number of higher-level industry presentations that were clearly trying to tread a fine line between presenting a topic of general relevance and featuring the companies’ products. Somewhat surprisingly, Microsoft was the first up. This isn’t a company that one usually expects to see at a smallish conference focused on semiconductors. But from their presentation it’s clear that they’re looking to develop a unified collaboration platform to bring together all aspects of system design and sales, including engineering (of course), manufacturing, marketing, field personnel, and even customers (via appropriate firewalls, presumably). Whether they’re able to leverage success in this market remains to be seen, but it appears that, one way or another, they plan to make some noise.
FPGA Design Diversifies
About a decade ago, FPGA design followed in the footsteps of ASIC design and went language-based. For a very long time, the only question we asked ourselves was “VHDL or Verilog?” It was reminiscent of the “Paper or plastic?” question at the grocery checkout line. Gradually, however, people who weren’t FPGA designers sneaked into the FPGA-designing fold. Who are these folks, anyway? We’ve got DSP engineers, embedded systems designers, board designers, supercomputing folks… the list goes on and on.
Apparently all those new engineers didn’t get the memo about conforming to our established design methodologies, or else they just didn’t feel like becoming experts in VHDL and Verilog. Compounding the problem was the fact that FPGA and EDA companies -- money-grubbing monsters that they are -- decided to actually cater to these interlopers by giving them gold-plated, easy-as-pie design entry mechanisms that allowed them to almost completely forgo the time-honored traditions of entities and architectures.
Aonix Makes Real Time Almost Safety Critical
It happened just like that. In the middle of a conversation, he got a kind of misty look in his eyes, like something wasn’t quite right. His breathing became more labored, he hunched forward a bit, and the next thing you knew, he was in full heart attack mode. An ambulance was quickly called for; this is where seconds count. As the ambulance was en route, efforts were made to clear the way for the EMTs so that they could get to work as quickly as possible. The main door was propped open, and an attempt was made to reserve the spot in front for the ambulance. But just as the ambulance was getting close, a garbage truck came by and blocked access. The garbage men casually jumped down from the truck and started collecting the garbage. Attempts to get them to back off even for a moment were in vain; they were scheduled to collect the garbage, and by George, that’s what they were gonna do. The ambulance would just have to wait.
And this is why standard old off-the-shelf Java isn’t used in real-time or safety-critical applications. While its more restrictive design, compared to C or C++, actually helps system reliability in many respects, reducing the chances of system failure, it’s got a few unpredictable, non-deterministic characteristics -- the garbage collector chief among them -- that just won’t fly. Literally.
A few weeks ago, we started looking at ways of reducing power consumption when designing SoCs. We divided the world into the front end, where the big payoff is, and the back end, with useful techniques that have less dramatic impact. We looked at architecture and system design, hardware/software allocation and C-to-RTL, multicore, Multi-Voltage Supply (MVS), power switching, Dynamic Voltage/Frequency Scaling (DVFS), and Adaptive Voltage Scaling (AVS) -- techniques that can give power savings in the range of 30-50%. With those addressed, there are numerous back-end techniques that can give more modest, but nonetheless valuable, power savings. We’ll look at some of those here, in no particular order. The savings from these techniques will vary widely by application but will generally be in the 5-15% range.
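To see why the front-end techniques tend to dominate, recall that dynamic switching power scales roughly as CV²f, so lowering the supply voltage pays off quadratically -- which is exactly what DVFS exploits. Here’s a minimal back-of-the-envelope sketch; the capacitance, voltage, and frequency numbers are invented for illustration and don’t describe any particular chip:

```python
def dynamic_power(cap_farads, volts, freq_hz):
    """Classic switching-power estimate: P = C * V^2 * f."""
    return cap_farads * volts ** 2 * freq_hz

# Hypothetical operating points: full speed vs. a DVFS low-power state.
nominal = dynamic_power(1e-9, 1.2, 1.0e9)   # 1.2 V at 1 GHz
scaled = dynamic_power(1e-9, 0.9, 600e6)    # 0.9 V at 600 MHz

savings = 1 - scaled / nominal
print(f"DVFS savings: {savings:.0%}")       # prints "DVFS savings: 66%"
```

Note that dropping the voltage by 25% and the frequency by 40% yields well over half the dynamic power back -- the quadratic voltage term does most of the work, which is why voltage-centric techniques sit in the high-payoff front-end bucket.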
Design it right from the beginning
One technique that has been used for quite a while is to provide transistors with different thresholds in the design kit -- so-called multi-VT design. Low-threshold transistors are faster but also leak more. Not all transistors need to be the same speed -- in fact, a majority of the transistors are not likely to be in the critical path, so higher-VT transistors can be used for them. While in the past extra speed meant extra breathing room, today extra speed means wasted power. So if a path is faster than it needs to be, it can be slowed down by swapping out transistors (among other things).
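The swap decision above amounts to a simple slack check per path. A toy sketch of the idea: give every path the slower, low-leakage high-VT cell unless timing forces the fast, leaky low-VT cell. All of the delay figures, leakage figures, path names, and stage counts below are made up purely for illustration -- a real flow works on individual cells with library-characterized timing, not whole paths:

```python
# Invented cell characteristics: low-VT is fast but leaky,
# high-VT is slower but frugal.
LOW_VT = {"delay_ps": 80, "leak_nw": 50}
HIGH_VT = {"delay_ps": 110, "leak_nw": 10}

def assign_vt(paths, clock_ps):
    """Pick high-VT cells wherever the slower delay still meets timing."""
    choice = {}
    for name, stages in paths.items():
        if stages * HIGH_VT["delay_ps"] <= clock_ps:
            choice[name] = "high-VT"   # slack to spare: save leakage
        else:
            choice[name] = "low-VT"    # critical: pay the leakage for speed
    return choice

# Hypothetical design: stage counts per path, 1 ns clock period.
paths = {"critical": 12, "side_a": 6, "side_b": 4}
result = assign_vt(paths, clock_ps=1000)
print(result)  # {'critical': 'low-VT', 'side_a': 'high-VT', 'side_b': 'high-VT'}
```

Only the 12-stage critical path needs the leaky cells here; the shorter paths were faster than they needed to be, so slowing them down with high-VT cells costs nothing in timing and recovers leakage power.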