posted by Bryon Moyer
MEMS elements are delicate. They sit there in their little cavities, expecting to operate in some sort of controlled environment – perhaps a particular gas or pressure (or lack of it). And if they’re collocated with CMOS circuitry, then they need to be protected from any further processing steps. In other words, they need to be sealed off from the rest of the world. And wafer bonding is a common way to do that: bring another wafer (perhaps with etched features) face-to-face with the working wafer and get them to bond.
Covalent molecular bonds are the strongest; if you bring two silicon wafers together, for example, the ideal is to have the silicon atoms at the surface of each wafer bond covalently with their counterparts on the other wafer so that the whole thing starts to look like a continuous crystal. That’s the ideal.
Doing this isn’t trivial, of course, since the surfaces are likely to have imperfections and contaminants. So surface preparation has been an important part of the wafer-bonding process. It has also involved intermediaries like water that establish a preliminary bond; an anneal then drives the reactions that form the final covalent bonds and the out-diffusion of any extraneous elements.
Initially, high temperatures were required for the annealing. But, of course, anything over 450 °C won’t sit well with any CMOS that might be in place, so various surface preparation techniques have been devised to get the anneal temps down below that threshold.
But even these lower temperatures can be an issue when bonding dissimilar materials, or wafers with dissimilar materials in the stack, where the differing rates of thermal expansion create stresses during the anneal.
EVG has recently announced a new way of preparing the surface so that covalent bonding occurs immediately, at room temperature. To be clear, they have announced that they have this new process; they haven’t announced what it is; they’re still being coy on that. This eliminates the annealing step completely, and therefore the thermal expansion issue as well.
Equipment using this new technique should ship sometime this year. You can find out more in their release.
posted by Bryon Moyer
A while back we covered CEVA’s move to multicore for their communications-oriented XC architecture. One of the motivating elements was the complexity of requirements for features like MIMO: the use of more than one antenna at each end, with the number of channels equal to the product of the number of sending and receiving antennas. They say that a software approach provides the flexibility needed for the variety of options – there are too many differences between the options to implement them all in hardware, since too much of the hardware would go unshared, making it inefficient.
Sounds reasonable. But then came a completely separate announcement from Quantenna. They’re also doing MIMO, but in hardware. They can handle up to 4x4 MIMO (that is, 4 antennas sending, 4 receiving; 16 channels). And they say that it’s not reasonable to expect to be able to meet the performance requirements without doing it in hardware.
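To make that channel count concrete, here’s a minimal sketch of how a MIMO channel is typically modeled – generic textbook math in Python/NumPy, not anything from CEVA’s or Quantenna’s implementations. Each of the 16 entries in a 4x4 channel matrix is one complex path gain that has to be estimated and tracked:

```python
import numpy as np

# Generic 4x4 MIMO model (illustrative only): H[i, j] is the complex
# gain of the path from transmit antenna j to receive antenna i.
n_tx, n_rx = 4, 4
rng = np.random.default_rng(0)

# A random channel estimate stands in for a measured one.
H = (rng.standard_normal((n_rx, n_tx))
     + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
print(H.size)  # 16 -- the 16 channels of 4x4 MIMO

# The received vector y mixes all transmitted streams through H;
# the receiver has to undo H to recover the symbol vector x.
x = rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)
noise = 0.01 * (rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx))
y = H @ x + noise
```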
Both companies seem to agree on the complexity of the standards they’re implementing. The thing about this kind of Wi-Fi communication is that the environment is constantly changing, so you have to constantly re-evaluate which channels are working best and where to send things. This re-optimization happens every 100 ms.
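As a rough sketch of what that periodic re-optimization implies structurally – the 100 ms period is the figure quoted above, but the channel numbers, quality metric, and function names here are all hypothetical:

```python
import random
import time

REOPT_INTERVAL_S = 0.100  # the 100 ms re-evaluation period quoted above

def estimate_quality(channel):
    # Hypothetical stand-in for a real measurement (SNR, error rate, ...).
    return random.random()

def pick_best(channels):
    # Re-rank the candidates every cycle: the environment changes
    # constantly, so last cycle's winner may not win this one.
    return max(channels, key=estimate_quality)

def run(channels, cycles=10):
    for _ in range(cycles):
        best = pick_best(channels)
        print(f"steering traffic to channel {best}")
        time.sleep(REOPT_INTERVAL_S)

run(channels=[36, 40, 44, 48])  # 5 GHz Wi-Fi channel numbers, for flavor
```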
In fact, Quantenna says that, if the radar band is unpopulated, it can also be used, although they claim that most boxes don’t take advantage of this, remaining within the crowded non-radar portion, even though the radar portion has the bulk of the available bandwidth.
There is also beam-forming to be done – including “blind” beam-forming, where only one end of the channel can do it. Channel stability has to be rock solid, since there’s no buffering for streaming video. Equalization has to be optimized. And at a higher layer, there’s quality-of-service (QoS) for video.
And most of this isn’t established at design time; it’s a constant real-time re-jiggering of parameters to keep things working as efficiently as possible. And it has to work alongside the earlier standards, 802.11n and below. And Quantenna says they can handle all of this in hardware, without blowing the silicon budget.
You can imagine that being able to do it in software might be quite convenient and space-efficient. You can also imagine that hardware would provide much higher performance. So which is best?
Rather than get into the middle of adjudicating this myself, I offer both sides the opportunity to state their cases in the comments below. And if any of the rest of you have something constructive to contribute to the discussion, please do.
Meanwhile, you can get more details on Quantenna’s announcement in their release.
posted by Bryon Moyer
A new hat has been tossed into the MEMS ring: Maxim. You may be familiar with them as an analog name, or you might be familiar with Sensor Dynamics, which Maxim bought – and which is now the source of Maxim’s first announcement: a gyroscope aimed at the consumer market. (Read: phones and tablets.)
Why a gyroscope? Well, partly because accelerometers have become too inexpensive to be interesting on their own. And partly because gyroscopes are hard: as they put it in a conversation I had with them, “get it right and the rest is easy.”
And they’re doing it all themselves. While an accelerometer wasn’t their first device, they will be including one in a 9-axis combo in the future – which means they’re also tackling magnetics. And they say they’ll shortly have a couple more devices aimed specifically at the optical-image-stabilization (OIS) market.
You can find out more about their initial product in their release…