posted by Bryon Moyer
With all the delicacy involved in the advanced lithography techniques we use for patterning exquisitely small features onto wafers, occasionally we come back to a brute-force approach: nanoimprint lithography (NIL). Instead of painstakingly exposing patterns onto a photoresist, we simply press a patterned die (PS this is the kind of die whose plural is “dies,” not the singulated silicon bits whose plural is “dice”) into a bed of moosh to create a pattern as if making an old-school vinyl record. Harden the material, and we’re good.
While NIL is already used for hard drives, we've also seen it combined with directed self-assembly (DSA) for even more aggressive hard drives. But that's all still research stuff.
EVG recently announced a high-volume production SmartNIL process. It’s a UV-cured approach, although any of you wondering why they get to use UV while EUV is stuck at the starting gate have no reason to be jealous. Unlike EUV, you don’t need a carefully collimated beam of UV. You can just bathe your wafer in incoherent swaths of UV light.
The obvious question then might be, why can’t I use this? And the answer is, maybe you can! From a target-technology standpoint, your odds are good. (From a number-of-designers standpoint, not so much). It’s easier to answer the question, “What can’t this be used for?” than, “What can it be used for?”
The answer to the easier question is, “Transistors.” There are two issues with NIL for advanced transistors: feature size and defectivity.
- Feature size: yes, according to EVG’s Gerald Kreindl, advanced research work in John Rogers’ group at Illinois has actually replicated a carbon nanotube (CNT) using imprint. (Which is interesting, since a CNT is a 3D feature…) The point being, there’s no fundamental limit to feature size. (OK, there is, but I don’t think anyone is going to try to replicate a quark using NIL.) Realistically speaking, SmartNIL is for features in the 20–100-nm (or bigger) range (more like 40 nm and up in high volume). That would leave out fins, for example.
- Defectivity: a slight glitch in a microfluidics channel isn’t going to cause any pain. That same glitch in a transistor may send valuable electrons in the wrong direction.
So if transistors are out, what does that leave? Lots: Optics, photonics, LEDs, bioelectronics…
You can find out more in their announcement.
posted by Bryon Moyer
A couple months ago I did a survey of Internet of Things (IoT) standards – or, more accurately, activities moving in the direction of standards, since it’s kind of early days yet.
And in it, I was a bit harsh with one standard… oneM2M. I found it dense and somewhat hard to penetrate, with language that didn’t seem clear or well explained. The status at the time – and currently, for a bit longer – was as a candidate release, taking input.
To their credit, they accepted my cantankerous grumblings as input. I had a conversation with their Work Programme Management Ad-Hoc Group Chairman, Nicolas Damour, at his suggestion, and we talked about some of the specific questions I had raised in my coverage. The general takeaway was that the language could be made a bit more expansive for readers not from narrow domains.
Doing this can actually be tricky, since standards tend to have two kinds of content:
- “Normative” content: this is the standard itself, the rules. It says what you “must” and “will” and “shall” and “may” do. Changes to this must be well thought out and voted on. You can’t make changes willy-nilly.
- “Informative” content: this is background material intended to give context or examples or perhaps even discuss the thinking that went into the standard: why was one approach approved over another? It’s much easier to make changes here. And if there’s any confusion between what the informative and normative sections say? The normative language always trumps.
A glossary is one good example of informative content, and we agreed that it was a reasonable place to make some clarifications. There might even be room for some glosses concerning how some tough decisions were arrived at. Overall, it was a productive conversation – showing a flexibility that’s not always a hallmark of standards organizations. (After several years of hard-fought work, it’s understandable that a group might resist a bit when outsiders propose last-minute changes… but I didn’t perceive any of that during our talk.)
There were two specific things that I raised in my coverage:
- One was the missing definition of a “reference point.” It turns out that, for people in the telecom world, this is a familiar term, codified by the ITU. It’s what the rest of us might call an “interface.” Problem is, the word “interface” means a lot of different things, so in ITU-land, it refers to an API or a specific physical interface. A reference point indicates an interface between systems, but in a more generic way, and one that could admit multiple protocols. Perhaps “boundary” is a better word than “interface.”
- I questioned the definitions of “field” vs. “infrastructure” domains. In retrospect, this seems clearer: the field refers to deployed devices, and infrastructure means the Cloud or servers. The reason this seems clear now is because I’ve been specifically thinking about that with respect to “IoT Ring Theory.” Before that, it wasn’t so clear. To me, anyway.
They’re taking input through the end of the year, so you still have time to review and make suggestions. You can find the latest candidate release here (via FTP).
Note: there’s a page on the website with an earlier release that says that comments had to be in by Nov. 1, not by the end of the year… but I checked in, and that was for an earlier round of comments. You can still provide input. There’s also an explanatory webcast here.
posted by Bryon Moyer
In spring of last year, we described a new standard from the Multicore Association for use in managing tasks on multicore embedded systems. Called MTAPI, it abstracts away details of exactly where a particular task might run at any given time, allowing for fixed or real-time binding to a core or hardware accelerator.
Well, standards are all well and good, but then someone has to write code that actually implements the standard. Last month, Siemens announced an open-source BSD-licensed implementation that supports homogeneous multicore systems.
The MTAPI implementation was part of a larger multicore support package that they released, called Embedded Multicore Building Blocks (EMB2). It also includes implementations of some popular algorithm patterns, as well as various structures and frameworks focused on streaming applications (an extremely common application type whose performance is hard to get right – meaning that effective multicore utilization makes all the difference).
They’ve segregated the code so that only a bottom base layer interacts with the underlying operating system (OS), making the rest of the code OS-independent. They support Linux and Windows, but porting to other OSes requires changes only to that base layer.
Next year, they plan to support heterogeneous systems – a tougher deal because each node may have a different processing architecture, and memory may be scattered all over the system. In so doing, they’re likely to bring the venerable MCAPI standard into play. That, the first of the Multicore Association standards, handles communication between disparate cores running different OS instances.
You can find more info in their announcement.