If you were able to record the development of a town as it grew into a city over years and decades and then speed up the film in a super-fast-mo replay, you’d notice, assuming you weren’t thrown into an epileptic seizure by the rapid day/night flashing, that things start in a small center and move out for a while. Farmlands are replaced by tract homes, forests are cut down, hills may be leveled or developed, and the town inexorably creeps outward like mold in a Petri dish.
At some point, a limit starts to impede the amoebic outward spread. The constraining factor may be geographical; perhaps the extent of a valley has finally been covered, or open space or a greenbelt was declared, halting further encroachment. It may be sociological; commute times from the outskirts to where the jobs are may have become intolerable. If nothing else, the community may decide they’ve become dull as dirt and want to inject a little urban spirit into their wan suburban style. The reasons may vary, but little by little, the outward push will give way to an upward push.
This doesn’t come without a cost; clearly it costs more to dig into the ground in order to put in a parking garage with two underground floors and three above-ground floors than it does simply to pave over a chunk o’ mud and call it parking. Making a tall building earthquake-proof and stabilizing it against winds is harder than throwing together some ugly tilt-up walls and slapping a Wal-get sign on it. But building up eventually becomes cheaper than the alternatives.
There is an old story about two shoe salespeople sent to a desert island. The first looks around and sends a message back to head office: “No one here wears shoes. Coming home on next ship.” The second sends a message to head office: “No one here wears shoes. Send several hundred pairs on next ship.” The mood at the Paris International Automotive Electronics Congress (IAEC), earlier this month, was more like that of the second salesperson. The tribulations of the mainstream auto market, particularly in the US, were recognised, but the view was that the tier-one suppliers (those who supply the car manufacturers directly) and their specialist subcontractors were likely to continue, at least for the time being, to invest in R&D. The reasons are many. They include: the recession may be short-lived; there are manufacturers outside Detroit and Western Europe (analyst Ian Richards of Strategy Analytics said that he was tracking at least 30 Asian car manufacturers outside Japan); people in what we think of as developing economies want to get off two wheels and into four; legal requirements coming into force over the next few years can be met only with electronics; environmental pressures on vehicles are going to be addressed only with electronics; and in a hugely competitive market, electronics is a way of providing differentiation and choice.
Design reviews conjure images of engineers carrying reams of code printouts, filing single file and head down into a room to be judged by others. The positive impact of design reviews has been proven through many studies, but do the preparation and process of the review have to be so painful? This paper provides a practical approach to design reviews that eases the process and actually results in a positive experience.
The key to a painless design review process is to use techniques and tools, as code is developed, that contribute to a quality solution, minimizing the time spent in the review. Review time can then be spent on important project debates, such as algorithmic or architectural implementation decisions, instead of drowning in the minutiae of line-by-line code discussions. In a perfect world, automated design review techniques would take the place of most manual ones. In reality, there is always a mix of automated and manual techniques, as Figure 1 shows.
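As a minimal sketch of the idea, one mechanical checklist item (overly long functions) can be automated so that reviewers never spend meeting time on it. The 40-line limit and the function itself are hypothetical illustrations, not tied to any particular review toolchain:

```python
import ast

MAX_FUNC_LINES = 40  # hypothetical team limit; tune per project

def long_functions(source: str, limit: int = MAX_FUNC_LINES):
    """Return (name, line_count) for each function exceeding the limit.

    Flagging these before the meeting lets the review itself focus on
    algorithmic and architectural questions rather than style policing.
    """
    offenders = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > limit:
                offenders.append((node.name, length))
    return offenders
```

A check like this would run in a pre-commit hook or CI step; anything it catches is fixed before reviewers ever see the code.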
Case Study with Altera's ASIC
Altera has carved a unique niche in the market with their HardCopy ASIC offering. As we all know, FPGAs offer some compelling benefits when compared with traditional ASICs - short design cycles, zero-NRE, and in-field re-programmability are the ones most often cited. For low- to medium-volume applications, FPGAs can be a wise choice compared with a high-risk, high-NRE, low-volume ASIC run.
FPGAs are not a panacea, however. Low-cost FPGAs don't have the speed, capacity, or rich feature set of their high-end brethren. If you need advanced capabilities, you'll pay an advanced price, and the economics of FPGA use change completely. There has long been a hole in the market for designs that require high-capacity, high-performance devices at moderate production volumes, where FPGAs are too expensive on a per-unit basis and the high NRE costs of an ASIC are almost impossible to amortize over a modest production run.
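The economics behind that hole come down to a simple break-even calculation: an FPGA has no NRE but a higher unit cost, while an ASIC front-loads NRE in exchange for cheap units. A back-of-the-envelope sketch (all dollar figures below are purely illustrative, not real device pricing):

```python
def break_even_volume(fpga_unit: float, asic_unit: float, asic_nre: float) -> float:
    """Volume above which an ASIC's total cost drops below the FPGA's.

    FPGA total cost:  fpga_unit * v          (no NRE)
    ASIC total cost:  asic_nre + asic_unit * v
    Break-even:       v = asic_nre / (fpga_unit - asic_unit)
    """
    if fpga_unit <= asic_unit:
        raise ValueError("ASIC never pays off if its unit cost isn't lower")
    return asic_nre / (fpga_unit - asic_unit)

# Illustrative numbers only: a $200 high-end FPGA vs. a $20 ASIC with $1M NRE
volume = break_even_volume(200.0, 20.0, 1_000_000.0)
```

With those made-up numbers the crossover sits in the mid-thousands of units, which is exactly the "moderate volume" territory the text describes: too many units for an expensive FPGA, too few to comfortably absorb ASIC NRE.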
A Closer Look at MRAMs
Some time back we took a brief look at MRAM technology, mostly from the standpoint of contrasting it with FRAMs, triggered by a specific paper regarding an MRAM-based flip-flop. That was a somewhat unusual implementation of the MRAM concept that happened to provide a quick technology contrast but didn’t really get to the heart of what’s going on with MRAMs. So we’re going to dive deeper here in the hopes that spin technology ends up meaning more than technology that causes my head to spin. Something at which it excels.
First of all, let’s review why this even matters. MRAM has the potential to challenge SRAMs, DRAMs, and the various flash-like technologies. The hope is that it will be fast, inexpensive, and non-volatile, with effectively infinite endurance. It could be used in stand-alone memories or for embedded memories in SoCs. The promise of MRAM has attracted a variety of participants over the last decade, but it has taken longer to come to fruition than many hoped, causing some companies to fall back and stop development.
But work continues, and several flavors of MRAM exist, either in production or in research (or somewhere between). Understanding the different kinds requires us to go back to some fundamentals and depart from the more comfortable IC concepts. And bear in mind that this is an area of research; there are aspects of this technology that aren’t yet fully explained by physical models, so people way smarter than I are still scratching their heads. Which makes me feel better. Some.
Choices of Interconnect in Embedded, New and Old
The number of options for getting from point A to point B keeps growing. It’s one of those areas where the concept of “standard” is somewhat loose, since there are so many of them you might wonder if the word even applies. Connectivity in larger embedded systems historically took advantage of backplane standards that allowed different cards to communicate with each other; smaller form factor devices often didn’t need the kind of data transfer rates that would warrant a complex protocol.
As miniaturization has shrunk erstwhile cabinets into our palms, the whole concept of plugging in boards or putting modules on baseboards starts to fall apart. But because many of these new small systems have ancestry rooted in the larger systems, they bring forward the legacy of those interconnect schemes. So while different data rates, stacks, physical media, and form factors add up to a smorgasbord that can be a challenge to navigate if you’re starting from scratch, in reality, legacy and affinities tend to dictate a smaller range of options for a given system.