posted by Bryon Moyer
Right at the beginning of this last year, we took a look at WiSpry; they make a MEMS-based tuning product that dynamically adjusts a phone's antenna matching characteristics as conditions change. Conditions like grabbing the phone in the natural (but wrong) place. Not that that would ever happen; I'm sure such phones would always "just work." But… just in case…
Such capabilities sound great in theory, but WiSpry recently ran an actual test, and not one that can be accused of being a best-case, idealized one either. They purchased two phones and literally opened one of them up, replacing the stock tuner with theirs, and then compared the two phones' bandwidths. The stock phone had 25 MHz of bandwidth; the modified phone? 150 MHz.
At issue specifically here is LTE data, which requires a separate diversity antenna in the phone. It's one of about six antennas (four for cellular, one for GPS, and one for WiFi/Bluetooth), and there's not a lot of room for all of them. If the bandwidth of the voice antenna, for instance, can be broadened to make it effective at data as well, then one antenna can be removed.
The lower frequency bands are the most troublesome because they require larger antennas. According to WiSpry, prior to LTE, the total bandwidth range was from 824 to 2170 MHz; their product can extend the lower range down to 700 MHz. LTE’s top end goes to 2700 MHz; their next product will support the entire 700-2700 MHz range.
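A rough sketch (my illustration, not from WiSpry) of why the lower bands need larger antennas: a quarter-wave antenna's length scales inversely with frequency, so pushing the low end from 824 MHz down to 700 MHz means accommodating a noticeably longer resonant structure.

```python
# Quarter-wave antenna length vs. frequency -- illustrative only.
C = 299_792_458  # speed of light, m/s

def quarter_wave_cm(freq_mhz: float) -> float:
    """Approximate quarter-wavelength in centimeters at a given frequency."""
    wavelength_m = C / (freq_mhz * 1e6)
    return wavelength_m / 4 * 100

for f in (700, 824, 2170, 2700):
    print(f"{f} MHz -> ~{quarter_wave_cm(f):.1f} cm quarter-wave")
# 700 MHz works out to roughly 10.7 cm, versus about 2.8 cm at 2700 MHz.
```

Real handset antennas are folded, loaded, and matched rather than simple quarter-wave whips, but the inverse scaling is the underlying pressure on board space.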
You can see more on their release…
posted by Bryon Moyer
We talked in a separate piece today about Synopsys’s multi-source clock synthesis technology, but that was only one of several pieces of new technology in their new package. Among other parts of the release were the following non-trivial items:
- Double patterning: it’s real now. It’s not used for every layer, just the bottom few layers. But it’s no longer something that will come someday: it’s here.
- In the past, metal was metal. When routing, each layer was useful, and all layers were considered to be the same. That’s not the case anymore: lower layers have skinny, resistive lines; upper layers have broader, less-resistive lines. So now it matters which layer gets used. If the router is going to pick a layer, it has to consider the performance implications.
- IP blocks are standard today. The problem is, they don’t usually fit together particularly well, leaving narrow routing channels that tend to get congested. They’ve improved their router so that it restricts that area only to signals that really need to be there; everything else is routed a different way to make the best use of that limited space.
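The layer-awareness point above can be made concrete with a little arithmetic (hypothetical dimensions of my own choosing, not foundry data): wire resistance goes as R = ρ·L/(W·T), so the skinny, thin wires on lower metal layers are far more resistive per unit length than the wide, thick wires up top. That resistance gap is exactly the performance cost a layer-aware router has to weigh.

```python
# Illustrative wire-resistance comparison between metal layers.
# Dimensions below are made up for illustration, not real process values.
RHO_CU = 1.7e-8  # bulk resistivity of copper, ohm*m

def wire_resistance(length_um: float, width_nm: float, thickness_nm: float) -> float:
    """Resistance in ohms of a rectangular wire: R = rho * L / (W * T)."""
    cross_section = (width_nm * 1e-9) * (thickness_nm * 1e-9)
    return RHO_CU * (length_um * 1e-6) / cross_section

lower = wire_resistance(100, 32, 60)    # narrow, thin lower-layer metal
upper = wire_resistance(100, 400, 800)  # wide, thick upper-layer metal
print(f"lower layer: {lower:.0f} ohm, upper layer: {upper:.2f} ohm")
```

With these toy numbers the same 100 µm run is two orders of magnitude more resistive on the lower layer, which is why "metal is metal" no longer holds for the router.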
A brief discussion of what's new for 20 nm in IC Compiler can be found in their release announcing their work with Renesas; what's new in their other tools is outlined briefly in their release announcing their work with Samsung.
posted by Bryon Moyer
We love the underdog. David slays Goliath. All of that. And we love the myth that hard work and a better idea will always win. When we win, we take credit for deserving the win due to our hard work. (We tend not to credit any accompanying luck or support from others or the existence of infrastructure for any of our success.)
So if that’s the case, then anyone should be able to knock us off our pedestal with yet harder work and a yet better idea, right? Well, not in real life. If any of you have tried to leave the comfort of working for an established company (so tempting, but I just can’t bring myself to use the phrase “the man”…) to challenge those incumbents with a new company or even as an individual, you know what I mean. There are structural barriers built into the system that give a sizeable edge to those currently on the pedestal.
It’s like an energy barrier thing. It’s really hard to get there, but, once you’re there, you don’t have to work as hard to stay there as you did to get there.
When talking to Synopsys about their new multi-source clock technology, they described a situation very much like this evolving in EDA. It used to be that all of the players had a more or less equal shot at getting foundry attention when a new process node came up. But not so much anymore.
The new requirements of each node have become so demanding that tool development has to start earlier and earlier, and the foundries can really only work with one company to get everything sorted – it’s just too hard to manage multiple partners.
Which means that whoever was the leader at the prior node for a given piece of the toolchain (physical design tends to come first) becomes the lead for the next node. There’s actually a pretty good rationale for this: the companies with the highest market share get more input from customers as to what their requirements are at the next node, and there’s more opportunity for feedback on the tools being developed.
So it makes sense. But it certainly does reward the winner and make it easier for the winner to keep winning in the future, possibly locking out any contenders.