posted by Bryon Moyer
Growing high-quality graphene for use on wafers is hard. Chemical vapor deposition (CVD) is the favored approach, but no one has perfected the ability to grow it directly onto the oxide surface of a wafer.
It’s much easier to grow it on a sheet of copper and then transfer it over. But that transfer step can be tricky, and copper isn’t a perfectly uniform, crystalline material either. So defects can easily result.
One obvious trick might be to put copper on the oxide, grow the graphene on that, and then etch the copper away, leaving the graphene on the oxide surface. This technically can work, but the graphene tends to lift off the surface before it can be secured in place.
So… it would be useful to find a way to hold that graphene layer before it’s baked down. And if you were looking for a way to get something to adhere to a surface, where would you look in nature for ideas?
Why, tree frogs, of course!
Image courtesy W.A. Djatmiko (Wikipedia)
It turns out that tree frogs stay attached to underwater leaves thanks to nano-sized bubbles and capillary bridges between leaf and foot. Some beetles do a similar trick.
Well, this idea has now been transferred to graphene. Prior to laying down the copper, the wafer surface is treated with nitrogen plasma. Copper is then sputtered on, and CVD deposits the carbon. The copper is then etched away, and, during that process, nano-bubbles form, creating capillary bridges. These hold the graphene in place as the copper disappears.
A final bake step secures the graphene to the wafer and eliminates the bubbles and capillaries.
You can read more about this in the researchers’ paper, though it’s behind a paywall.
posted by Bryon Moyer
It’s one of those good problems.
You’ve been doing some exploratory MEMS work. Your main focus is biomedical – implants for dealing with prostate cancer. Silicon is too brittle, so you do some exploration with a foundry to experiment with different structures and materials. A nickel alloy looks interesting – more forgiving than silicon (at the expense of a lower Young’s modulus). And there’s some extra space on the die.
On a whim, you and a co-researcher half-jokingly discuss putting a windmill on there. During the discussion, she is watching her daughter play with a pinwheel. Inspiration strikes, and overnight she completes a design that goes onto the die. Despite the auspicious name of the MEMS company you’re working with, WinMEMS (one letter away from WindMEMS), you think it probably won’t work.
Only… it does work. Not only does it function as expected, but someone accidentally drops a few of the devices on the ground – and they still work.
What do you do now?
Most academics would publish. But here’s the deal: you’ve been burned before by companies that have leveraged your work with nothing coming back to you. And universities don’t like this either. So you don’t publish: you patent. And you delay telling the world about it for a couple months until the lawyers relax.
And then you issue a press release.
And then you give up any hope of getting any work done until the phone stops ringing.
This has been life for Dr. Jung-chih Chiao and Dr. Smitha Rao at the University of Texas at Arlington. They’ve been totally sidetracked by the surprising (to them) success of this little side project.
Because no paper has been published, there’s no end of questions about how they achieved their results. There were some pictures, but no details – especially about critical aspects such as how they convert the motion into electrical energy. I discussed that with Dr. Chiao, but apparently I didn’t ply him with enough drink to get him to give up the secret. So it remains a secret.
I was actually the 20th person to talk to him. They’ve been bombarded not just with press, but with companies wanting in on the action. They’re not just calling him; they’re calling colleagues as well. So they’re remaining tight-lipped for now.
He’s pretty confident in the design that they’ve done – they’ve aimed for simplicity in order to ensure reliability, but there are still issues to be solved. The two main ones are figuring out how to keep dust from mucking up the works and new ways of countering stiction.
They will be looking for commercialization partners. He sees the university’s role as solving the basic physics, including the two problems just mentioned. There will be other changes before anything goes into full production, but he sees the partner company doing that work. And he’s confident that this thing is manufacturable. Depending on funding, he sees this as being completed on about a one-year horizon.
After his work on this has been completed, he’s looking at possibly putting together a simulation tool. Depending on where you want to place the micro-windmills – cars, bridges, wherever – you may want to optimize the design. A simulation tool would make that possible.
For right now, it’s more basic: the phone needs to quiet down so they can get back to doing actual research.
And we’re still going to have to wait to figure out how this all works.
posted by Bryon Moyer
Some of you may have come to this link already; if so, you read a piece that voiced some confusion about the positioning of two new products from PointGrab. I did what I could with the information available at the time, but I remained confused. Since then I have received much more specific information, so those questions are resolved, and I have reworked what follows to explain more clearly what’s going on.
You may recall that my discussion with PointGrab last fall covered an evolving gesture approach for screen-and-cursor-based devices. The idea was that existing gesture-based approaches simply allowed the user to move the cursor on the screen as if using a mouse. The new approach was to bypass the mouse, using gestures directly.
The new AirTouch product works along those lines, although rather than using gestures per se, it focuses on transforming the screen into a touchscreen that you don’t have to touch. You simply point and click, like you would on a tablet. Only you’re doing it from a distance.
This actually raises some experiential questions, since there is no specific feedback on the screen, like a cursor trailing around, to show you where you’re pointing. With a touchscreen, you literally touch an icon or button. But if you’re 20 feet away, well, you’ve got this virtual screen in mid-air, and it’s not obvious where everything lies. You can have the interface highlight buttons when you “hover” over them, so that could be a clue. But PointGrab says that, while the effect is hard to explain, after a second or two of trying you feel oriented and it becomes extremely natural.
AirTouch itself does not support any gestures, but it can be combined with gesture products. In other words, electronics that use AirTouch can also provide gesture recognition, but it’s not AirTouch doing the gestures.
How do they figure out where you’re pointing? With a stereo camera that provides depth information and looks at where your pointing finger is relative to your eyes. The camera responds to the IR spectrum so that it can work in any room lighting conditions.
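PointGrab hasn’t published its algorithm, but the eye-and-finger idea can be modeled with simple geometry: cast a ray from the eye through the fingertip and see where it crosses the screen plane. The sketch below assumes the screen lies in the plane z = 0, the user stands at positive z, and coordinates (in meters) come from the depth camera; all names and numbers are illustrative, not PointGrab’s.

```python
# Hypothetical pointing model: intersect the eye->finger ray with the
# screen plane (z = 0). Not PointGrab's actual algorithm.

def screen_hit(eye, finger):
    """Return the (x, y) point where the eye->finger ray meets z = 0,
    or None if the finger isn't between the eye and the screen."""
    ex, ey, ez = eye
    fx, fy, fz = finger
    dz = fz - ez
    if dz >= 0:           # ray runs parallel to or away from the screen
        return None
    t = -ez / dz          # ray parameter at which z reaches 0
    return (ex + t * (fx - ex), ey + t * (fy - ey))

# A user 2 m from the screen, fingertip held halfway to the screen:
hit = screen_hit((0.0, 1.6, 2.0), (0.1, 1.5, 1.0))
print(hit)  # a point slightly right of and below eye height
```

Note that small jitter in the fingertip position gets amplified with distance, which is one reason the “hover” highlighting mentioned above helps.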
This product is targeted at “consumer electronics” – which needs a bit of unpacking. Frankly, these days, any appliance could be considered electronic. More typically, however, “home electronics” has referred to your entertainment center: stereo, speakers, television, VCR – er – DVD player, etc.
In this case, it refers to anything that could have a touchscreen (whether or not the screen is actually touchable). TV, computer, tablet, smartphone, or set-top box (as viewed through the TV). Not audio equipment (unless you somehow operate it through your screen).
Meanwhile, in the other corner, we have PointSwitch. This is a simpler, lower-cost solution for all of the “home environment” devices – frankly, all the other electronics (and lower-tech things like thermostats, dishwashers, lighting, whatever). This is a new market for PointGrab.
PointSwitch supports what I’ll call a “point plus” interface. By that I mean that your primary interaction is pointing, which works for toggle functions like on/off. But you can also raise or lower your pointed finger to do things like brighten or dim lights or change the thermostat temperature. More sophisticated gestures are not currently supported.
First of all, these kinds of interfaces have to be low-cost, so PointGrab has partnered to provide a simple, inexpensive camera module that fits unobtrusively into the devices. One obvious challenge is specificity: if you have a light switch, a thermostat, a stereo, and an apartment-style washer/dryer stack all within view at the same time, how do you keep from operating them all at once with a single pointing event?
They claim to have achieved high selectivity so that this doesn’t happen; items need to be located at least 6” apart to avoid any confusion. One other possible issue could arise if, say, you have a thermostat positioned 8” above a light dimmer switch. You want to brighten the lights, so you point at the dimmer – no problem – and then raise your finger, and the dimmer brings up the lights. Great. But at some point, because you’ve raised your finger, you’re pointing at the thermostat, even though you started at the dimmer below it. The thermostat doesn’t know that; it knows only that it’s now being pointed at. So this kind of positioning issue must also be considered in the overall design of a room.
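One plausible way to sidestep the dimmer/thermostat crossover – purely my sketch, not anything PointGrab has described – is to latch onto whichever device the user points at when a gesture begins, and route the rest of the gesture to that device no matter where the finger drifts. The class and device layout below are hypothetical.

```python
# Hypothetical gesture-latching controller (not PointGrab's design):
# the device selected at gesture start keeps receiving the gesture,
# even if the finger later crosses a neighboring device.

class PointSwitchController:
    def __init__(self, devices):
        self.devices = devices   # name -> (x, y) wall position in meters
        self.active = None       # device latched for the current gesture

    def nearest(self, point):
        """Name of the device closest to the pointed-at wall position."""
        px, py = point
        return min(self.devices,
                   key=lambda d: (self.devices[d][0] - px) ** 2 +
                                 (self.devices[d][1] - py) ** 2)

    def gesture_start(self, point):
        self.active = self.nearest(point)
        return self.active

    def gesture_move(self, point):
        # Adjustment still targets the latched device, even though the
        # finger may now be over a neighbor.
        return self.active

    def gesture_end(self):
        self.active = None

# Thermostat mounted ~8" (0.2 m) above a dimmer:
ctl = PointSwitchController({"dimmer": (0.0, 1.2), "thermostat": (0.0, 1.4)})
ctl.gesture_start((0.0, 1.21))        # user points at the dimmer...
print(ctl.gesture_move((0.0, 1.45)))  # ...and raises a finger past the
                                      # thermostat; prints "dimmer"
```

The tradeoff is that a user who genuinely wants to switch devices mid-motion has to end the gesture and point again.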
While all PointSwitch does right now is handle this pointing interface, it opens the door to future features like detecting when a room is empty and dimming or turning off the lights or lowering the temperature.
Like AirTouch, PointSwitch responds to light in the IR range so that it can work in a completely darkened (or brightly washed-out) room.