In The Beginning.

It is true.  Building a pair of glasses for everyday people that is enhanced with a miniature computer inside is a bold challenge.  We are not building this product because it’s easy; we’re building it because we have an authentic dream, a vision for the future of mankind — to create an incredible, spectacular world where human abilities are increased and enhanced using computers they can wear as glasses and visors — a future where, through both design and engineering, the interaction layer between human and computer becomes unified.  Where the input is a truly natural interface driven by biometric signals, eye motion, gestures, and thought.  Where the display is complete volumetric 3D visual immersion, and the information you can access within this new computer is integrated with the real world, actualized all around you.  A future that takes a giant leap towards a brighter, more intelligent world.

The vector of change.

Humanity is near the entrance of an incredible intersection — a point in our space-time history where global distributed computation, 3D computer graphics, information and humanity will converge — in the form of vision-enhancing, non-invasive, networked wearable computational systems & sensors.

Image credit: Ray Kurzweil

It is true that transistors and hardware components, semiconductors, memory and storage are all shrinking and yet getting faster simultaneously.  What many don’t realize is that other factors of technology have the direct effect of compounding this exponential growth.  The computer network gives us access to information and the ability to process it in a distributed manner across the world faster than ever before.  Humans are learning faster, and as a result technological advancements and scientific discoveries occur at an ever-increasing rate.  The computer network distributes and spreads data, creating outcomes no one ever expected due to the underlying nature of network effects.  Software algorithms are advancing and contributing in their own way to amplify this exponential acceleration – research into the simplification and optimization of software algorithms lets them run faster than they ever could before, allowing more software to run simultaneously and further increasing the pace of change.  For example, 3D visual computations for computer graphics that once took days or weeks to render on an incredibly high-end workstation can now be rendered in sub-seconds, or perceptually in real time, creating near photo-real 3D graphics instantly.

So computational capability and capacity are growing yet becoming ever more miniature, modular, vectorized and parallel; the computer network is making us smarter; and as a result we’re building better software algorithms.  All of these things are known elements – our capabilities increase at an exponential rate technologists now expect; yet despite this open knowledge, few are able to keep up with it.  These are all key indicators that something bigger than ourselves is shifting.  The momentum of technological acceleration has become its own physical force, like the tides of the ocean or the gravitational force of the Earth; transitions into new paradigms have begun moving faster, now at a pace of their own, like a true force of nature.  Causation can be linked to various complex economic and human underpinnings, and although moderately prone to black swan events, technology has become a force of nature with the ability to truly make us better as human beings.  Today we are experiencing the powerful economic forces of Globalization and Commoditization combined with the uncertainties of innovative technological disruption, while Moore’s 5th Paradigm is ever accelerating; and within the next big paradigm shift exists the potential to create a true renaissance of technological advancement that will save our planet.

This is all happening today, all around us as we watch; but believe — there is incredible beauty in the chaos and the flowering nature of it. We must recognize that we are free to expose reality; to redefine reality; and, further, to enhance humanity. We can adapt; we will thrive and prosper. We can further ourselves by building a globally sustainable digital ecosystem, and create rejuvenating value within the new dimension of the interactive, immersive 3D universe. We can integrate with today’s powerful stakeholders through the virtualization of objectified life itself into data. We can transition to create new channels and new models that allow for health and prosperity, models that do not suffer the flawed tribulations of the 1990s and the MP3 disaster; we are no longer playing a zero-sum game.

True, this is an expansive, broad, and huge goal; but it is an authentic one. We are starting boldly, but we are also starting rationally and realistically. Our first eyewear product is designed for today’s market and today’s social consumer, but our goals and vision reach much farther into the horizon of humanity.  It is this vector of change that is truly important to understand — because the direction we’re pointed at the outset will have an incredible impact on where we end up in the future.

Begin with the end in mind.

The future has already arrived — and it is moving and diffusing faster than ever.  Unlike many others, we did not just jump onto an existing bandwagon; our product vision is unique, and we began by looking at what is really missing in order to build a future where everyday consumers can experience Virtual and Augmented Reality — so that this big dream can reach the full potential its promise holds.  As we began our mission, we brainstormed and stumbled upon many new ideas, problems, challenges and concepts that ultimately allowed us to understand that what we are setting out to do is, literally, to create an entirely new type of computer.  Something that had never really been done in its entirety before; and within this search for greatness, we realized that our mission had become something truly important:

“Redefine the future of the human-computer paradigm.”

If we compare the state of the art of what mankind has achieved today with what will continue to occur, we see several ways forward: advances that are coming but have not yet been fully realized.  Next-generation semiconductor fabrication (photolithography) can be pushed to make transistors as small as 10nm (Intel’s present best is ~20nm), further pushed into the 3rd dimension, and then made massively parallel.  AI, computer vision, machine learning and machine vision will see a multitude of advances in object recognition, image segmentation, photogrammetry, probabilistic navigation, and natural language processing; these advances, along with new electrical, physical, and kinematic engineering advances, will fully enable revolutionary new devices, such as domestic robots far beyond vacuum cleaners, to become market-ready, producing goods, supplying services and living in your home.

New hardware and algorithm advances will also lead to an enormous supply of new “smart” camera-enabled devices, including computers and cameras in your glasses and other products such as widespread “smart” consumer-grade robots and drones, which will in turn produce an abundance of photos and videos that can be analyzed quickly, easily and instantaneously across the planet.  An “Internet of Things” is beginning to arise, in which computer chips identify objects and report their location and status across a computer network.  Software frameworks (such as Kinect, Oblong, Leap Motion) will popularize natural gesture interfaces.  Measurement and understanding of brain activity via EEG and fMRI are already very reliable and will become even better understood, more miniaturized, more stable and higher-resolution.  Adoption of a multitude of new sensors into new networked devices will grant us the capability to detect and read new bio-electrical patterns, hear new frequencies, see new waves of light, understand new patterns and process new dimensions of information, giving humans more than their original set of natural senses.  An abundance of new computer chips, sensors, advances in screen resolution and computing power, together with new, more effective cutting-edge algorithms and greater infrastructure investment in fiber-optic wired and broadband wireless networking, will enable the rendering and transmission of complex 3D objects within the real world, and 3D virtual worlds — anywhere — instantly.
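At heart, the “Internet of Things” described above is a stream of small identify-and-report messages. As a rough illustration only (the device name, field names and format below are assumptions, not any real standard), such a status report might look like this:

```python
import json
import time

def status_report(object_id: str, lat: float, lon: float, **state) -> str:
    """Serialize the kind of identify / locate / status message a networked chip
    might broadcast. Field names here are illustrative, not an actual protocol."""
    return json.dumps({
        "id": object_id,                      # what the object is
        "timestamp": time.time(),             # when it reported
        "location": {"lat": lat, "lon": lon}, # where it is
        "state": state,                       # whatever the object wants to say about itself
    })

# e.g. a hypothetical networked thermostat announcing itself
print(status_report("thermostat-42", 37.77, -122.42, temperature_c=21.5, battery_pct=88))
```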

Vergence Labs maintains a deep vision for the future. It is audacious yet truly authentic; boldly meaningful yet singularly simple:

“Enhance Humanity.  Redefine Reality.”

We believe that these fundamental technologies can be integrated into a new spread of devices that will eventually incorporate and submerge a human user into the Internet.  There is a nearly unlimited number of uses and applications for such a state, but we see three main ones: quantifying the user’s mental and physical state and surroundings so that the user interface can make better decisions; immersing the user in a virtual or augmented world for information retrieval, virtual object interaction, communication, education or entertainment; and regulating the user’s mental state using visual and auditory stimulation to amplify learning and boost productivity, intelligence, empathy, and creativity.  Further, the true impact lies in what may become known as the extension of “self” — the ability to network all of the sensors, cameras, devices, robots, and ultimately humanity itself, into what we believe to be the true potential of the internet: an integrated network that will enable you to access and control robotic devices using an immersive volumetric visor computer, a visual interface that will allow you to communicate with people right next to you or across the world using eye tracking, gesture and thought.
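To make the first of those three uses concrete, here is a minimal, purely illustrative sketch (the signal names, thresholds and modes are invented for this example, not product behavior) of how a quantified snapshot of the wearer’s state could drive a simple interface decision:

```python
from dataclasses import dataclass

@dataclass
class UserState:
    """Hypothetical snapshot of the wearer's quantified state."""
    heart_rate_bpm: float      # from a pulse sensor
    eeg_focus_score: float     # 0.0 (distracted) .. 1.0 (focused), from an EEG band
    walking_speed_mps: float   # from accelerometer / GPS
    ambient_lux: float         # from a light sensor

def choose_display_mode(state: UserState) -> str:
    """Pick a display density from the wearer's state (illustrative rules only)."""
    if state.walking_speed_mps > 1.5:
        return "minimal"   # moving quickly: show only critical alerts
    if state.eeg_focus_score < 0.3 or state.heart_rate_bpm > 120:
        return "calm"      # stressed or distracted: reduce visual clutter
    return "full"          # relaxed and stationary: full augmented overlay

print(choose_display_mode(UserState(72, 0.8, 0.2, 300)))  # -> "full"
```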

Enhance Humanity.  Redefine Reality.

Make people smarter and better; give them enhanced memory recall.  Give people the ability to see the world in a higher resolution than the natural human eye.  Imagine for a moment the ability to zoom into objects that are miles away through a computer network and an advanced, ultra-high-resolution digital immersive volumetric display.  Imagine having night vision, automatically — or the ability to zoom into the surface of the sun by simply looking up and activating a zoom command with your mind.  Imagine having instant recall of any event that occurred in your life and being able to share that experience with others.  Finally, imagine if every single object that you interacted with in the world had additional information associated with it, viewable only through the computer network: information that helps you understand that object, such as extra nutrition content on food, reviews of physical retail establishments, historical and origin information, or simply additional rendered 3D virtual content associated with that object that adds to the real-world object’s functionality.
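Mechanically, that last idea reduces to a lookup: a recognized object maps to a record of network-hosted metadata that the display renders beside it. A toy sketch, with invented identifiers and fields standing in for whatever a real annotation store would hold:

```python
# A toy, in-memory stand-in for the networked annotation store described above.
# Object identifiers, fields and values are illustrative assumptions, not a real API.
ANNOTATIONS = {
    "barcode:0123456789": {"type": "food", "calories_per_serving": 150, "allergens": ["peanuts"]},
    "place:corner-coffee-shop": {"type": "retail", "avg_review": 4.5},
    "landmark:golden-gate-bridge": {"type": "landmark", "completed": 1937, "span_m": 1280},
}

def annotate(object_id: str) -> dict:
    """Return the extra information the display would render next to a recognized object."""
    return ANNOTATIONS.get(object_id, {"note": "no annotation available yet"})

print(annotate("landmark:golden-gate-bridge"))
```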

People will be able to record their lives: not only what they see, but how they feel; the ability to record and correlate their point of view with their biomechanical and psychological state of being at every moment of every day.  They will use this data for self-improvement, with machine learning algorithms that look for correlations between certain biomechanics and phenotypes and then adjust the computer’s settings, adding visual and auditory stimulation of the information being taught, and perhaps even applying transcranial stimulation, so that the wearer can absorb information more effectively, learn faster and better, and understand new, complex information at an accelerated pace.  Wearable computers will be incredible data acquisition platforms, capturing terabytes upon terabytes of video, audio, location, and metadata about the world that could be used immediately in applications similar to Bing Maps, Google Street View, or Microsoft Photosynth.  When enough data is collected throughout the world, as our entire civilization begins wearing computers, it can be processed using automated, advanced photogrammetry, turning the data into rich, navigable 3D models of the entire world: a documented history, accessible by future civilizations, that amounts to a nearly complete 3D model of the subtle details of our inhabited world as it progresses through history and transforms through time.
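As a rough sketch of that self-improvement loop (all numbers and signal names below are invented for illustration; a real system would need far more data and far more care), the simplest version just looks for a correlation between a logged focus signal and later recall, and adapts only when the correlation is strong:

```python
import numpy as np

# Hypothetical logs: one entry per study session.
eeg_focus   = np.array([0.42, 0.55, 0.61, 0.70, 0.35, 0.80, 0.66, 0.50])  # average focus score
recall_rate = np.array([0.48, 0.60, 0.72, 0.75, 0.40, 0.85, 0.70, 0.55])  # fraction recalled later

# Pearson correlation between measured focus and how well the material was retained.
r = np.corrcoef(eeg_focus, recall_rate)[0, 1]
print(f"focus/recall correlation: {r:.2f}")

# If the correlation is strong, the wearable could adapt: for example, delay new
# material until the live focus score rises above the wearer's personal threshold.
if r > 0.7:
    personal_threshold = eeg_focus[recall_rate > recall_rate.mean()].min()
    print(f"suggest presenting new material only when focus > {personal_threshold:.2f}")
```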

Moreover, we will grant people sensory and mental abilities far beyond what is possible within our current limitations as humans.  People will be able to live 24×7 in an augmented world composed of real objects mixed with virtual objects that are rendered to provide the optimal information the user needs, incorporating new visual sensors covering a new spectrum of data such as infrared, downloaded maps for instant navigation, access to location-based video and metadata streams from nearby networked “Internet of Things” devices transmitting data, not to mention their own richly annotated social graph.

Memory augmentation will be possible by anticipating what information the user needs in a particular situation and presenting it to them, as well as by allowing the user to instantly search their entire personal archive of information and experiences — and, if they don’t find the experience they need, to access others’ experiences that have been made public — creating the ability for us to remember each other’s memories and understand each other’s experiences, to learn from each other faster.  Advanced UI technology will allow users to effortlessly control computer programs, physical robot servants and prostheses with their minds.  Finally, transcranial magnetic stimulation will give people the ability to retain information more effectively, learn faster and remember things better.  The question then becomes whether the people of the future will use their limited human memory to remember the actual experience, or rather use it as a fast indexing mechanism, simply remembering the most effective way to access and recall the information from the computer; similar to how we as humans have already become reliant on automated driving directions, or on search engines and the rich data of the internet, to quickly access and retrieve our information.
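The “instant search of a personal archive” piece is, at its core, an indexing problem. A minimal sketch, with made-up life-log entries (nothing here reflects a real product or data format):

```python
from collections import defaultdict

# Hypothetical life-log entries: (timestamp, transcript/description of the moment).
archive = [
    ("2012-05-01T09:14", "locked the front door, keys in the blue jacket"),
    ("2012-05-03T18:30", "dinner with Maria, discussed the photogrammetry demo"),
    ("2012-05-07T11:02", "parked the car on level 3 of the airport garage"),
]

# Build a tiny inverted index: word -> list of entries that mention it.
index = defaultdict(list)
for timestamp, text in archive:
    for word in set(text.lower().split()):
        index[word].append((timestamp, text))

def recall(query: str):
    """Return every logged moment whose description mentions the query word."""
    return index.get(query.lower(), [])

print(recall("keys"))  # finds the moment the keys went into the blue jacket
```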

Learn from the past to invent the future.

Ivan Sutherland was a pioneer in the creation of computer graphics; many consider him the father of the field because of his enormous contributions to it.  From 1965 to 1968 Ivan was an Associate Professor of Electrical Engineering at Harvard University, and in 1968 he and a student there, Bob Sproull, created a head-mounted display that is widely known as the first virtual reality and augmented reality head-mounted display system.  It was called ‘The Sword of Damocles’.

In 1965, Ivan Sutherland wrote about what he called “The Ultimate Display” for a computer system, a “kinesthetic display” as he described it here:

“A display connected to a digital computer gives us a chance to gain familiarity with concepts not realizable in the physical world. It is a looking glass into a mathematical wonderland.”

He goes on to explain further:

“If the task of the display is to serve as a looking-glass into the mathematical wonderland constructed in computer memory, it should serve as many senses as possible.  [...] I want to describe for you a kinesthetic display. [...] By use of such an input/output device, we can add a force display to our sight and sound capability. The computer can easily sense the positions of almost any of our body muscles.  Our eye dexterity is very high [...] Machines to sense and interpret eye motion data can and will be built. It remains to be seen if we can use a language of glances to control a computer. An interesting experiment will be to make the display presentation depend on where we look.  Such experiments will lead not only to new methods of controlling machines, but also to interesting understandings of the mechanisms of vision. There is no reason why the objects displayed by a computer have to follow the ordinary rules of physical reality with which we are familiar. The user of one of today’s visual displays can easily make solid objects transparent – he can ‘see through matter!’  Concepts which never before had any visual representation can be shown [...] By working with such displays of mathematical phenomena we can learn to know them as well as we know our own natural world.  Such knowledge is the major promise of computer displays. The ultimate display would, of course, be a room within which the computer can control the existence of matter. “

This was written in 1965.

Ivan Sutherland went on to co-found one of the most influential and groundbreaking computer graphics companies, Evans and Sutherland, pioneering work in real-time, hardware-accelerated 3D computer graphics, and more.  The impact of Ivan Sutherland, and of Evans and Sutherland, diffused through the world in many notable ways, both technically and through the people they influenced.  Former employees of Evans and Sutherland included the future founders of Adobe (John Warnock) and Silicon Graphics (Jim Clark).  While teaching as professors at the University of Utah, Evans and Sutherland counted among their students Alan Kay, inventor of the Smalltalk language; Henri Gouraud, who devised the Gouraud shading technique; Frank Crow, who went on to develop antialiasing methods; and Edwin Catmull, the computer graphics scientist, co-founder of Pixar and now President of Walt Disney and Pixar Animation Studios.

 

Mission: Reinvent the future of the Human-Computer Paradigm.

Vergence Labs’ audacious goal is to reinvent the future of the human-computer paradigm — by building bioelectric human-computer interfaces and immersive volumetric displays that enhance users’ mental and sensory abilities beyond the human norm, in the form of a wearable HMD computer. With 3D gesture, eye tracking and brain interfaces, we’re creating a new, natural 3D human interface for that wearable computer. For our first years, we are focusing on recording, streaming and sharing the human perspective through vision.

Next, we will augment vision by sending “smart” information into the eyes, and then make it fully immersive, so your vision contains data that is richer, more interactive and more informative than the raw world around you. Later, we plan to interface with and control robotic devices and small remote robot servants with basic gesture and thought.

Never Compromise on Design.

Smooth interfaces. Fluid interaction.

Hardware that’s “bio-electric” but never invasive.

Personal privacy isn’t just words; it’s a basic human right.

Trust and Privacy: Choice, Equality, Openness & Truth, Integrity, and Ethics must coexist with absolute control over privacy.