One Building at a Time
Car navigation technology has made amazing strides over the last decade. Heck, we may talk about a future of driver-less cars, but we already have navigator-less cars, with humans acting only as vehicular operators following the mindless turn-by-turn instructions of some disembodied Voice. Many drivers will freely admit that they’ve lost all innate sense of where they are or how to get anywhere without the assistance of the Voice.
Get out of the car, however, and things aren’t so straightforward – especially once we go under a roof. Part of it is the fact that we’re on foot instead of in a car, making it harder to track our location. But even more important is the fact that, indoors, we are no longer within the benevolent embrace of GNSS (global navigation satellite systems). We’re like bees, with GPS as our queen: If we lose contact with the queen, then we’re lost.
New PowerVR GPU Includes Ray-Tracing Hardware
Ray tracing is one of those cool things that computer geeks often play with at some point in their careers. I fiddled with ray-tracing software a number of years ago and decided that (a) it was pretty cool technology, and (b) I was no good at it.
If you’re not into graphics, “ray tracing” is a way of producing computer graphics by mathematically calculating how rays of light would actually bounce around a scene if it were real. That is, instead of a graphic artist drawing the scene on his/her computer, you instead model the scene and let physics take its course. Got a desk over here, a few walls over there, and some sunlight coming through the window? Splendid. Let the ray-tracing software take over and it will tell you how the scene appears.
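At the heart of "letting physics take its course" is a lot of geometry: for every ray, you ask which object it hits first. A minimal sketch of that core test, in Python (function and parameter names are my own illustration, not any particular ray tracer's API; real renderers add shading, reflection, and refraction on top of this):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest intersection
    with the sphere, or None if the ray misses it entirely."""
    # Vector from the sphere's center to the ray's origin
    oc = tuple(o - c for o, c in zip(origin, center))
    # Coefficients of the quadratic |origin + t*direction - center|^2 = r^2
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None  # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2.0 * a)  # nearer of the two roots
    return t if t >= 0 else None  # ignore hits behind the ray origin

# A ray fired down the z-axis at a unit sphere centered 5 units away
hit = ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)  # hits at t = 4
```

A full ray tracer simply runs a test like this for every object in the scene, for every pixel, then bounces secondary rays off whatever it hits – which is why the results look so good and the computation costs so much.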
A Compelling Mobile Embedded Vision Opportunity
The prior article in this series, "Embedded Vision on Mobile Devices: Opportunities and Challenges," introduced various embedded vision applications that could be implemented on smartphones, tablet computers and other mobile electronics systems (Reference 1). In this and future articles, we'll delve into greater implementation detail on each of the previously discussed applications. Specifically, this article will examine the processing requirements for vision-based tracking in AR (augmented reality), along with the ability of mobile platforms to address these requirements. Future planned articles in the series will explore face recognition, gesture interfaces and other applications.
Computer graphics pioneer Ivan Sutherland established the basic concepts of AR as known today in his seminal 1968 paper “A Head-Mounted Three Dimensional Display” (Reference 2). Sutherland wrote, “The fundamental idea is to present the user with a perspective image which changes as he moves.”
In part one of this article series, I suggested that datacenter architectures could benefit from revisiting the parallel computing innovations of the 1980s, and I waxed lyrically about the Transputer, which struck a chord with a surprising number of readers - including one reader who wrote “we built a fabulous Transputer board … didn’t sell many of them … I haven’t thought about it for decades.” It was a heartfelt email, though I am not clear if that was because of his fond memory of the Transputer or the commercial failure of his product. In any event, I firmly stand by my belief that much could be learned and leveraged from revolutionary parallel computing architectures.
In mobile computing, I observed that quad+ core CPUs are vastly underutilized in the majority of real-world applications (the notable exception being gaming). Virtually all of these apps are built on a simple client-server model, taking only fractional advantage of the available mobile CPU horsepower.
Self-driving Cars Might Be Better Than What We Have Now
When you're driving at 182 MPH, don't slam on the brakes and expect to survive.
That thought flitted briefly through my mind as I watched the concrete wall surrounding Daytona International Speedway approach my car window at, well, 182 MPH.
This is the sort of thing that happens to me in the winter months, when the racetracks are too wet, the tires are too cold, and the carburetor is too finicky. I wasn’t really racing at the real Daytona. Oh, no. A strong sense of self-preservation runs in my family. That’s why we’re still here. Rather, I was joystick-ing my way around a PlayStation version of the big Florida racetrack, but that was now coming to an abrupt halt.
RTI Updates Their DDS System
The Internet of Things (IoT) is all about Things talking to people and to other Things. This relationship between Things and other Things and People is vague enough that pretty much any product, from transistors to toilet paper, can be marketed as somehow helping to enable the IoT.
While that confusion suggests that some ordering of the IoT might be helpful to those trying to comprehend it (which I’ve attempted before and was originally planning to update), that very scattered nature can make intercommunication between Things a challenge.
Most of the way we’ve approached the IoT has been from a consumer-centric standpoint. Like the smart home concept. Such systems typically involve some kind of hierarchical arrangement: Things that talk to Hubs or the Cloud, on the one hand, and Computers and Phones that talk to the Cloud (and, by proxy, the Things) on the other hand. Perhaps the Phones talk to nearby Things directly, using WiFi.
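That hub-and-spoke pattern can be sketched as a toy publish/subscribe broker. Everything here is hypothetical illustration – real deployments use protocols like MQTT or DDS rather than an in-memory class – but the shape is the same: Things publish readings to named topics, and Phones (or the Cloud) subscribe to the topics they care about:

```python
class Hub:
    """Toy stand-in for the Hub/Cloud tier of the smart-home hierarchy."""

    def __init__(self):
        self.subscribers = {}  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        """A Phone (or Cloud service) registers interest in a topic."""
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, reading):
        """A Thing pushes a reading; the hub fans it out to subscribers."""
        for callback in self.subscribers.get(topic, []):
            callback(reading)

hub = Hub()
received = []
hub.subscribe("thermostat/livingroom", received.append)  # the "Phone"
hub.publish("thermostat/livingroom", {"temp_c": 21.5})   # the "Thing"
```

The interesting part is what this sketch leaves out: discovery, security, and quality-of-service guarantees – exactly the gaps that systems like DDS exist to fill.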
In the Mobile Modem Market, There’s a Different 500-pound Gorilla
In the computer world, we’re long accustomed to Intel’s being the overwhelmingly dominant supplier. If you use the words “computer chip” or “microprocessor” around normal people, they reflexively think of Intel, the way most people equate “Coke” with fizzy cola drinks or “NSA” with creepy surveillance.
It’s a different reality with cellphone makers. In that world, Qualcomm is the proverbial 500-pound gorilla. The San Diego–based company makes 58% of all the baseband chips used in cellphones around the world, regardless of country, wireless standard, or price level. That means Qualcomm alone sells more chips than its dozen or so competitors combined.
The 1980s witnessed a “golden age” of the computer. While the commercially successful x86 architecture continued to evolve at the microarchitecture level, completely new architectures and instruction sets innovated rapidly and set the stage for intense competition. RISC concepts were refined and expanded in the MIPS, Sparc, Power, PA-RISC and Alpha (to name a few) architectures with great success.
In short, there was a lot of Darwinian action taking place. Interestingly, in retrospect, the vast majority of these architectures focused on workstation CPUs: by and large, the architectures were optimized for compute horsepower with a focus on integer and floating-point performance. The race was on to build faster and faster compute engines.
What It Is Can Be Defined By Who You Are
Q: What is the Internet of things, Mr Salesman?
A: Whatever matches my product range.
Perhaps that is a little jaundiced, but after three days in the circus that is embedded world, fighting through the aisles with nearly 27,000 visitors and, on Thursday, over 1000 students, one can easily become jaundiced. It is possible to forgive those who are strolling so that they can see everything, almost possible to forgive those who also drag along bags on wheels that are big enough to smuggle out a body, but the ultimate hate is reserved for those dragging such bags and texting at the same time. Since we all know males are not designed to multitask, texting and dragging a bag requires a man to walk so slowly that he is very close to stationary. And these guys are always in the aisle that you need to rush down to get to your next meeting. AAARGH.
New Pathways and Ambiguous Terms
Those of you in the sensor world are deeply involved with the low-level nuances and intricacies of your devices. How accurate, how linear, how to connect, how to read data, how to fuse data – there’s so much to think about if you put your mind to it.
Of course, the people you’re doing this for – users of phones and tablets and medical devices and industrial sensors – couldn’t care less about that stuff. They want to sleep soundly knowing that, by hook or by crook, those sensors are detecting their assigned phenomena accurately, and the system is correctly reading those data and munging them into whatever form is necessary to provide a simple, meaningful result at the application level.
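That "munging" step often amounts to sensor fusion. A classic textbook example is the complementary filter for tilt sensing, which blends a gyroscope (smooth but drift-prone) with an accelerometer (noisy but drift-free). This is a generic illustration of the idea, not any specific OS's sensor API; the function name and parameters are my own:

```python
def fuse_tilt(gyro_rates, accel_angles, dt, alpha=0.98):
    """Track a tilt angle from two imperfect sensors.

    gyro_rates   -- angular rates in deg/s (integrate to get angle; drifts)
    accel_angles -- angles in degrees from gravity (noisy, but no drift)
    dt           -- sample period in seconds
    alpha        -- how much to trust the gyro each step (0..1)
    """
    angle = accel_angles[0]  # initialize from the drift-free sensor
    fused = []
    for rate, accel in zip(gyro_rates, accel_angles):
        # Integrate the gyro, then nudge the result toward the accelerometer
        angle = alpha * (angle + rate * dt) + (1 - alpha) * accel
        fused.append(angle)
    return fused

# A stationary device: zero rotation rate, accelerometer reads 10 degrees
angles = fuse_tilt([0.0] * 5, [10.0] * 5, dt=0.01)
```

A few lines like this are exactly what stands between raw register reads and the "simple, meaningful result" the application sees.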
And in between you and that user lies, among other things, the operating system (OS). OSes are now wise to the ways of sensors, and they’re laying down some rules of the road.