Today’s post is inspired by recent advancements in AR and VR technology. I’m also a personal fan of Palmer Luckey, who, quite literally, Kickstarted the current trend toward AR/VR everything.

Augmented Reality is a commonly used umbrella term for a collection of technologies that enable human-to-machine interaction by integrating supplementary information into the perceived world through visual, audio, and haptic feedback.

Figure 1: IBM 1620 data processing machine on display at the Seattle World’s Fair

The transistor computer, successor to the vacuum tube computer, was a significant step forward in the 1950s. Its key enabling technology was the transistor; before it, vacuum tubes were the state of the art. I believe the current state of Augmented Reality closely resembles that transition between vacuum tubes and transistors: the practical implementations readily available for purchase are bulky, finicky, and unreliable, yet the industry’s leading minds can already conceive of significant advancements hinging on a few key enabling technologies. These so-called generation 2.0 AR devices are proposed to arrive by the start of the next decade. Rumors of “smart glasses” from Apple, a “HoloLens 2” from Microsoft, and so on all point to a step change from current technology.

It is fair to say, however, from an engineering perspective, that the level of sophistication achieved by modern computers, such as the latest smartphones, is far beyond what AR devices can hope to match in the near term.

The reason is the complexity and sophistication of the human visual processing system: the eyes, the visual cortex, and the connecting regions of the brain. In this sense AR is far more difficult than VR (virtual reality), because the human visual system is already very well adapted and finely tuned to the real world (what constitutes the real world is a fascinating topic in itself, but beyond the scope of this post). Creating an entirely virtual realm, with no trace of “reality,” is conceptually simple. Creating a pseudo-reality that integrates with existing vision, however, amounts to creating another “real” version of the world. Practically speaking, AR has to be good enough to fool the human visual system even when it can be compared, literally side by side, with the real world.

It doesn’t take a philosophy background to understand that creating a visually just-as-good version of the real world requires technology not yet invented, or even conceived of. This is why I compared vacuum tubes to transistors at the outset, and not to integrated circuits or beyond. The sophistication of modern circuits is so far beyond that of vacuum tubes that even the most speculative science fiction of the era did not conceive of it. Science fiction can dream up fabulous stories involving “cyber realities,” “neural uplinks,” and the like, but their enabling technologies remain undreamt of (and, based on historical precedent, will likely be very different from what actually comes to pass). I believe that in the long future ahead there will eventually be technology enabling completely immersive realities, indistinguishable from reality itself, but not in this century!

So where does that leave AR? In continuous evolution: progressing from its current, very simple state to a level of sophistication comparable to a near-seamless, but still distinguishable, overlay on the real world. Drawing a parallel with the evolution of computers, we can expect foreseeable and unforeseeable advances in various components, trending toward miniaturization and greater complexity. I can offer no timeline for this progress, as I am not in the industry. I am, however, excited to experience this profound change first hand, and from the beginning.