Designing Code, Breaking Code, and the Verification in Between
Like the venerable Kenny Rogers once said, “You have to know when to hold ‘em, know when to fold ‘em…” In the verification game, much is the same. You have to know how to make the code, and you have to know how to break it. In this week’s Fish Fry, David Hsu (Synopsys) joins us to discuss the challenges of static verification and formal verification, how to “shift left”, and how to make code just to break it. Also this week, we investigate how hierarchical timing analysis may solve your sign-off timing troubles once and for all.
Helpful Hot-Rodding Hints
Most of us engineers are at least closet hot-rodders. It’s in our DNA. No matter how good a contraption is from the factory, we just can’t resist the temptation to tweak a few things in our own special way, and often that’s all about speed.
FPGA design, it turns out, is a big ol’ blank canvas for hot-rodding. Even though we (fortunately) don’t have glossy convenience-store magazines adorned with scantily-clad models standing next to the latest tricked-out dev boards, FPGAs have all the tools we need to rev our creative motors in the never-ending quest for that extra little bit of personalized performance.
But, where do we start? Do FPGAs have a set of go-to hop-ups? Is there a “chopping and channeling” baseline for programmable logic design?
It turns out the answer is “yes.” And, just to get you started, here are five tips for turning up the boost on your next project:
Ecosystem for Interposer-based Design?
We’ve talked a lot lately in these pages about the impending demise of Moore’s Law. Consensus is that, somewhere around the half-century mark, one of the most astounding prophecies in human history will have finally run its course. Next year, we’ll have a round of FinFET devices that will be so exotic and expensive that only a handful of companies will be able to use them. In the decade that follows, we may or may not reach 10nm and 7nm production - using either esoteric unlikelies like EUV or extreme-brute-force multi-patterning techniques - to solve just some of the multitude of barriers to continued downscaling.
Sci-fi techniques like carbon nanotubes, graphene-based devices, quantum computing, and that other-one-you-read-about are so far from production practicality that we may not see any of them in widespread use in our lifetimes. While incredible research shows great promise for many of these ideas, they are all back in the silicon-equivalent of the early 1960s in their evolution. The time and engineering it will take them to catch up with and eventually surpass what we can do with silicon today is substantial.
HLS and Sub-atomic Particle Jitter
Dateline: The 5th of September. Time: 2100 hours. We're on the hunt. No, we’re not hunting the mysterious Yeti, the Loch Ness monster, or, heck, even the ever-elusive EUV. This time, we're looking for some HLS. My guest this week is Mark Milligan from Calypto. Mark joins Fish Fry for the very first time to bring HLS into the light, into the world, and into the caring hands... of Google? Oh yes. Also this week, we delve into the deeply nerdy realm of sub-atomic particle jitter and investigate how the U.S. Department of Energy's Fermi National Accelerator Laboratory is hoping to solve an age-old existential question: How many dimensions do we really live in? (Spoiler alert: The space-time continuum may actually be a quantum system made up of countless tiny bits of information.)
Mentor’s RealTime Designer Rises to RTL
There are a lot of reasons why we can create so much circuitry on a single piece of silicon. Obvious ones include hard work developing processes that make it theoretically doable. But someone still has to do the design. So if I had to pick one word to describe why we can do this, it would be “abstraction.” And that’s all about the tools.
In fact, my first job out of college came courtesy of abstraction. Prior to that, using programmable logic involved figuring out the behavior you wanted, establishing (and re-establishing) Boolean equations that described the desired behavior, optimizing and minimizing those equations manually, and then figuring out which individual fuses needed to be blown in order to implement those equations. From that fuse map, a programmer (the hardware kind) could configure a device, which you could then use to figure out… that it’s not working quite like you wanted, allowing you to throw the one-time-programmable device away and try again.
Advanced vs. Established Process Geometries
It's time to saddle up and ride into the semiconductor sunset! Whether you're hitchin' your wagon to a young whipper-snapper node, or lassoin' a long-in-the-tooth workhorse process, the time it takes to get your IC design up and out of the corral may depend more on the software you use to verify your design than on the silicon itself. In this week's Fish Fry, Mary Ann White (Synopsys) and I get down to the very heart of semiconductor design: process geometries. We have ourselves a good ol' time chatting about challenges of FinFET designs, the tricky bits of working with both advanced and established process nodes, and how the right tools can make all the difference when it comes to winning the big product-to-market rodeo.
What if it Happened Again?
We sit here in our dazed, progress-drunk technology buzz looking back at the half-century rocket ride that transformed not only our industry and engineering profession, but also all of modern civilization. Nothing in recorded history has had as much impact on the world as Moore’s Law. It has re-shaped global culture, dramatically altered politics, and even affected fundamental aspects of the ways human beings work, think, feel, and relate to each other. If it wasn’t the single biggest change driver in the history of civilization, it was right up there with democracy, monotheism, combining caramel and chocolate, and some other really heavy hitters. Innovation in electronics has spilled over into just about every other aspect of our collective lives, and the change is profound.
But, what if it happened again - not in electronics this time, but somewhere else?
To answer that question, we should look at what caused Moore’s Law in the first place. It was a single innovation, really. Just one idea.
Are FPGAs Harbingers of a New Era?
The title may have put you off. In fact, it probably should have. After all, most of us in the press/analyst community have - at one time or another during the past decade or two - been walking around like idiots wearing sandwich signs saying, “The End is Nigh!” And, we got just about as much attention as we deserved. “Yawn, very interesting, press and analysts, and now back to planning the next process node…”
It gets worse. Predicting that Moore’s Law will end is pretty much a no-brainer. It’s about as controversial as predicting that a person will die… someday. There is obviously some point at which the laws of physics and the reality of economics will no longer allow us to double the amount of stuff we put on a single chip every two years. The question is - when will we reach that point, and how will we know we are there?
What’s Coming When?
As we continue to try (and succeed at) stuffing more circuitry into a tinier space than physics allows without great cleverness, we are drifting more and more into the use of multiple patterning. We’ve looked at this a number of times, starting with the simplistic litho-etch-litho-etch (LELE) approach and then digging deep into the far less intuitive self-aligned (or spacer-assisted) double patterning (SADP).
As we’ve mentioned here and there, these technologies are, to some extent, in production – and more is coming. What’s a bit confusing is what’s coming when and why. Today’s musings attempt to sort that out.
But before we do that, let’s do a quick review (with more details available in the prior pieces linked above). Multiple patterning is a trick we play so that we can place features closer together than can be done with a single exposure. The solution? Split the mask pattern in half and do two exposures.
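That "split the pattern in half" step can be thought of as a coloring problem: any two features spaced more tightly than a single exposure can resolve must land on different masks. Here's a minimal sketch of that idea for toy 1-D feature positions; the function name, geometry, and pitch numbers are all illustrative assumptions, not any real decomposition tool's API:

```python
from collections import deque

def split_masks(features, min_pitch):
    """Assign each 1-D feature position to mask A or B (0 or 1).

    features:  sorted list of feature coordinates
    min_pitch: minimum spacing printable in a single exposure
    Returns a list of mask ids, or None if two masks aren't enough.
    """
    n = len(features)
    # Build the "conflict graph": an edge joins any pair of features
    # spaced more tightly than one exposure can print.
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if features[j] - features[i] < min_pitch:
                adj[i].append(j)
                adj[j].append(i)
    # 2-color the graph with BFS: neighbors get opposite masks.
    color = [None] * n
    for start in range(n):
        if color[start] is not None:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return None  # odd conflict cycle: two masks can't do it

    return color

# Features at a 40-unit pitch where a single exposure needs >= 80:
print(split_masks([0, 40, 80, 120], 80))  # → [0, 1, 0, 1]
```

When the conflict graph has an odd cycle, no two-mask split exists, which is exactly the situation that pushes real layouts toward triple patterning or layout changes.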
Education Meets High Tech
This week Fish Fry is all about technological innovation in education. From kindergarten to college, from Malaysia to Texas, we look into recent technological advances that aim to even the educational playing field in the United States and across the globe. My first guest is Scott McDonald (Rorke Global Solutions). Scott unveils Rorke’s new digital learning system and discusses with me how Rorke was motivated to break ground on this high tech education revolution. (We also throw in some basketball trash talk.) Keeping with our education theme, Silicon Cloud International CEO Mojy Chian joins Fish Fry to explore the future of cloud computing and how Silicon Cloud International's educational cloud centers hope to create a whole new generation of chip designers.
Cadence Rolls New Protium Platform
System on Chip (SoC) design today is an incredibly complicated collaborative endeavor. By applying the label “System” to the chips we design, we enter a realm of complex, interdisciplinary interactions that span disciplines like analog, digital, communications, semiconductor process, and - with increasing dominance - software. Since the first SoCs rolled out a mere decade or so ago, the composition of design teams has shifted notably, with the percentage of cubicles occupied by software developers increasing much more rapidly than those of any of the other engineering disciplines. In most SoC projects today, software development is the critical path, and the other components of the project are merely speed bumps in the software development spiral.
Pulsic Automates Analog Layout
You are now entering the “It can’t be done” zone. But, at least for the moment, I’ll ask that you relax that axiom, even if only slightly, to something less absolute, like “We’re pretty sure it can’t be done.”
That’s because we are approaching the Holy of Holies, Mystery of Mysteries, Most Unapproachable of That Which is Unapproachable: analog design automation.
Before we dive in, let’s set up the contrasts first by revisiting the highly automated world of digital design. Heck, digital designers practically don’t need to know what a transistor looks like. They can specify their logic in text format, send that into a toolchain, and voilà: a completed layout.
Mentor’s Power Tester Accelerates Diagnosis
We talk a lot about transistors in these pages. But usually our discussions center around billions of microscopic transistors acting in concert. This article is not about those. Today, we are going to discuss transistors (and diodes and other components) about the size of your smartphone. These BATs (or “IGBTs” - Insulated-Gate Bipolar Transistors - as the industry seems to insist on calling them) are used in power electronics applications - like electric and hybrid cars, wind and solar power, and that amplifier the kid across the street is building in his basement (the one you’ll be able to hear two states away).
Typically, devices in these high-power applications are subjected to large thermal loads and repeated heating/cooling cycles. Also typically, we want them to last a long time. Nobody wants to be climbing up wind turbines every few months to replace power electronic components. Unfortunately, these repeated thermal cycles cause expansion and contraction, which puts mechanical stresses on the components, the substrates, and the connections. When we design a system, we want to have a pretty good idea how well and how long our components will operate - under the conditions we are expecting in our application.
Or Is It Just Another Step in Evolution?
It used to be so simple. A group of chip designers would sit around drinking coffee and gently mulling things over when one would say, "You know what would be really cool? If we add a backward splurge feature to the K11 widget, it would allow users to do some awesome things."
After a bit of engineering discussion, the sales team would go off and chat to a few friendly customers, come back and say, "They aren't against it." After this, management would buy into the project. When the device was launched, the marketing team would make a lot of noise about user input and then the company would sit back and wait for orders. Sometimes they came, sometimes they didn't.
Ansys’s Latest Redhawk Has to Work Harder
There’s promise, and then there’s reality.
The promise of FinFETs has been one of higher performance with lower power than would have been possible if we had stayed on the same track as before and tried to keep scaling. This promise seems, more or less, to be realizable as companies start integrating these new devices into their aggressive-node designs.
The accompanying reality, in this case, has to do with all of the other details that you get along with the benefits. In other words, those benefits come at a price – and one of the costs has to do with power noise. Ansys has released a new version of their Redhawk power analysis tool (you may think of them as Apache, but they’re now owned by Ansys), and much of what they’ve done in this version has been due to the needs of designs incorporating FinFETs.