Future/Virtual Reality

Virtual reality is, plainly speaking, perceiving an imaginary world rather than the real one: seeing, hearing, smelling, tasting, feeling it. The imaginary world is a simulation running on a computer, and the sense data is fed by some system to our brain. The term itself has somewhat fallen out of fashion, but games really are our current VR. Virtual reality is going to be very important: various technologies (communications, AI, computing, interfaces) will affect us, and together they will shape society in 2015-2020.

Scale and evolution[edit | edit source]

Virtual reality worlds run on clusters of servers (sometimes distributed) and usually allow users to create custom content and programs. As of 2005, more than ten million people play MMORPGs and about 100 thousand "play" in general-purpose worlds. Overall, more than 100 million people play 3D computer and video games online (45 million in 2002 [1]).

The next step (2010–2015) is going to be the development of more open systems, where content can be moved across platforms and separate worlds can be linked (for example, a room in a virtual building could be simulated on a private server running different simulation software, yet still be accessible to people walking through the virtual city) [2]. Open source may play a role here [3]. Eventually virtual reality worlds will integrate into a global Metaverse running on a distributed grid.

The step after that will be the integration of these worlds with input/output technologies, such as VR goggles and brain-computer interfaces. By then most people will spend a significant part of their lives in virtual reality (playing, communicating, working, having sex). Eventually, uploading will make a full migration into virtual reality feasible, while robotic bodies will make the reverse possible.

Content acquisition[edit | edit source]

To bridge the gap between reality and virtual reality, we need methods to quickly (not slowly and manually [4]) convert objects from physical reality into digital models and back. This will have much wider implications than just more realistic games; it is going to gradually change what we consider reality.

Some technologies already exist: laser scanners and 3D printers handle small objects, and crude methods can already quickly generate 3D models of larger real-world scenes (using image processing and LIDAR), including urban landscapes [5] [6] [7] [8] [9] and indoor environments [10].

It is already feasible and cost-effective to acquire photographic data for Yellow Pages [11] [12] using the "drive and shoot" model. High-resolution satellite images of urban areas are being incorporated into MSN Virtual Earth [13]. Google is quietly doing similar work [14] and may do much more in the future [15] [16] [17].

Using a combination of these approaches, 3D models of cities will soon (est. 2007-2009) be built cheaply and quickly. To create a realistic virtual environment, one would only need to clean up the raw data a bit, combine aerial photos (for rooftops and large structures) with ground-level images (for details), and add virtual pedestrians and traffic to the streets.

Photosynth is an upcoming technology from Microsoft [18] (videos, live demo, etc.) to recreate 3D environments from unstructured collections of photographs. In essence, it can take hundreds of photos of the Eiffel Tower from Flickr and automatically create a detailed 3D model.
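
Under the hood this kind of reconstruction is structure from motion: detect the same feature points across overlapping photographs, recover the relative camera poses from the matches, and triangulate the points into a sparse 3D cloud. Below is a minimal two-view sketch using OpenCV; the filenames and camera intrinsics are illustrative assumptions, and a real pipeline like Photosynth chains many views together and refines the result with bundle adjustment.

```python
# Minimal two-view structure-from-motion sketch (pip install opencv-python numpy).
# The image files and the camera matrix K are illustrative assumptions.
import cv2
import numpy as np

K = np.array([[1000.0, 0.0, 640.0],   # assumed focal length and principal point
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("photo1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo2.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Detect distinctive feature points in each photo and match them.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # ratio test

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# 2. Recover the relative camera pose from the matched points.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# 3. Triangulate the matches into a sparse 3D point cloud.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first camera at the origin
P2 = K @ np.hstack([R, t])                          # second camera, relative pose
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (pts4d[:3] / pts4d[3]).T                    # N x 3 points in space
print(f"Reconstructed {len(cloud)} 3D points from {len(good)} matches")
```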

Current realism of computer games[edit | edit source]

As of 2005, we are on the threshold of realism in computer games. It is finally possible to simulate certain aspects of reality in real time with enough precision to call the result an accurate simulation.

For example, the Forza Motorsport racing simulation for Xbox is physically realistic. It is mostly on par with reality, even though it's not indistinguishable yet. To achieve this, programmers at Microsoft Game Studios take into account between 3,000 and 10,000 variables and simulate all aspects of driving, running the simulation at 240 ticks per second. For "Race Against Reality", Popular Science asked a veteran gamer and a professional race driver to extensively test-drive both real cars and their virtual counterparts. The conclusion was that the game's simulation is accurate.
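
Forza's 240 ticks per second points at a standard technique: run the physics on a fixed timestep, decoupled from the rendering frame rate, so the simulation stays deterministic no matter how fast the graphics run. A minimal sketch of such a loop follows; the state dictionary and step function are placeholders, not Forza's actual code.

```python
# Minimal fixed-timestep simulation loop: physics runs at a fixed 240 Hz
# regardless of how fast or slow the rendering happens to be.
# physics_step() and render() are placeholders, not any real game's code.
import time

TICK_RATE = 240            # physics updates per second
DT = 1.0 / TICK_RATE       # fixed timestep, about 4.17 ms

def physics_step(state, dt):
    # Placeholder: integrate one tick of car dynamics (engine, tires, drag...).
    state["t"] += dt
    return state

def render(state):
    pass                   # placeholder: draw the current state

state = {"t": 0.0}
accumulator = 0.0
previous = time.perf_counter()

while state["t"] < 1.0:    # simulate one second of game time, then stop
    now = time.perf_counter()
    accumulator += now - previous
    previous = now
    # Consume real elapsed time in fixed DT slices, so the physics stays
    # deterministic even when rendering stutters.
    while accumulator >= DT:
        state = physics_step(state, DT)
        accumulator -= DT
    render(state)
```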

A similar level of realism is available in flight simulators, again from Microsoft [19]. Some simulators are so realistic that pilots are allowed to log virtual hours just like real ones.

However, these simulations are not completely realistic yet. Several things still need to improve before we have perfect VR:

  • Graphics aren't perfect yet. One of the bigger problems is lighting and shadowing. To make materials realistic, technologies such as RealReflect need to be developed.
  • Sound - there is still no good programmatic sound generation; it is mostly all samples (see the sketch after this list).
  • Global physics - it's possible to simulate several objects (cars, planes) very accurately, but an all-encompassing simulation is still too complex for the tech we have.
  • Simulation of acceleration, tactile contact and everything else related to physically "being there".
  • AI to make the world come alive.
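
To make the sound point concrete: programmatic sound generation means computing the waveform from equations at runtime instead of playing back recorded samples. The minimal Python sketch below synthesizes a single decaying "pluck" and writes it to a WAV file; the frequency and envelope are arbitrary illustrative choices, not taken from any real engine.

```python
# Minimal procedural (programmatic) sound sketch: synthesize a waveform
# from equations rather than playing back a recorded sample.
# The tone and decay parameters are illustrative choices.
import math
import struct
import wave

SAMPLE_RATE = 44100
DURATION = 1.0                       # seconds

samples = []
for i in range(int(SAMPLE_RATE * DURATION)):
    t = i / SAMPLE_RATE
    # A 440 Hz tone with an exponential decay envelope: a crude "pluck".
    value = math.sin(2 * math.pi * 440 * t) * math.exp(-4 * t)
    samples.append(int(value * 32767))

with wave.open("pluck.wav", "w") as f:
    f.setnchannels(1)                # mono
    f.setsampwidth(2)                # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes(struct.pack(f"<{len(samples)}h", *samples))
```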

The shader model (introduced in 2002-2005) made it possible to move graphics a step up from polygonal textured environments to much more realistic worlds. Games released in 2005 realistically simulate such incidental details as raindrop splashes and smoke clouds (Call of Duty 2). Water shaders and 3D textures further enhance the realism.
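
To illustrate what a simple water shader computes: the sketch below sums a few sine waves to displace a flat grid of vertices into animated waves. A real shader would evaluate this function per vertex on the GPU every frame; the wave amplitudes, frequencies, and speeds here are invented for the example.

```python
# Sketch of the math behind a simple water shader: a sum of sine waves
# displaces a flat vertex grid into an animated surface. All wave
# parameters are invented for the example.
import numpy as np

def water_height(x, z, t):
    """Height of the water surface at position (x, z) and time t."""
    return (0.10 * np.sin(1.3 * x + 1.7 * t) +
            0.05 * np.sin(2.1 * z + 2.3 * t) +
            0.02 * np.sin(3.7 * (x + z) + 4.1 * t))

# Displace a 64 x 64 grid of vertices for one animation frame (t = 0.5 s).
x, z = np.meshgrid(np.linspace(0.0, 10.0, 64), np.linspace(0.0, 10.0, 64))
y = water_height(x, z, 0.5)
print(y.shape, float(y.min()), float(y.max()))
```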

Video-realistic graphics based on general-purpose, stable rendering systems (i.e. no more custom-made rendering engines for every new project) will come around 2010-2015. Programmatic sound may be delivered somewhere between 2015 and 2025. Global physics may be done sufficiently well around 2015-2020. Realistic simulations of all senses may come somewhere between 2015 and 2025. Sufficiently good non-human and domain-specific human AI (i.e. an NPC that can perform realistically in a narrowly defined context) may come around 2015-2020. Good human-level AI (in the context of video games, a companion that you can interact with closely for many hours in a variety of situations, including free-form talking) is a more complex problem and will probably not be achieved until the 2030s.

Still, we have already entered the realm of virtual reality. In some aspects, although not in all, virtual environments are already as good as real ones.

Interfaces[edit | edit source]

  1. Currently, external stimulation is possible. Large VR gaming stations are being developed [20]. Alternatively, a user can wear glasses, headphones, and virtual reality gloves. Ultimately this should lead to high-quality retinal projectors (for vision).
  2. Progress is being made on direct neural connections, mostly in cochlear and retinal implants. Other senses can be controlled too, such as the vestibular system [21] [22].
  3. Ideally, the interface would be a direct brain-computer link. At first it will be a connection to the cortex, allowing the computer to "read thoughts" and send information directly to the mind. Eventually the whole brain will become random-access memory, with nanodevices able to control each and every neuron.

Nanomedical Virtual Reality[edit | edit source]

From Nanotech.biz:

Question 5: Ray Kurzweil has proposed having billions of nanorobots positioned in our brains, in order to create full-immersion virtual reality. Do you think that such a scenario will ever be feasible?

Yes of course. I first described the foundational concepts necessary for this in Nanomedicine, Vol. I (1999), including noninvasive neuroelectric monitoring (i.e., nanorobots monitoring neuroelectric signal traffic without being resident inside the neuron cell body, using >5 different methods), neural macrosensing (i.e., nanorobots eavesdropping on the body’s sensory traffic, including auditory and optic nerve taps), modification of natural cellular message traffic by nanorobots stationed nearby (including signal amplification, suppression, replacement, and linkage of previously disparate neural signal sources), inmessaging from neurons (nanorobots receiving signals from the neural traffic), outmessaging to neurons (nanorobots inserting signals into the neural traffic), direct stimulation of somesthetic, kinesthetic, auditory, gustatory, olfactory, and ocular sensory nerves (including ganglionic stimulation and direct photoreceptor stimulation) by nanorobots, and the many neuron biocompatibility issues related to nanorobots in the brain, with special attention to the blood-brain barrier.
The key issue for enabling full-immersion reality is obtaining the necessary bandwidth inside the body, which should be available using the in vivo fiber network I first proposed in Nanomedicine, Vol. I (1999). Such a network can handle 10¹⁸ bits/sec of data traffic, capacious enough for real-time brain-state monitoring. The fiber network has a 30 cm³ volume and generates 4-6 watts of waste heat, both small enough for safe installation in a 1400 cm³, 25-watt human brain. Signals travel at most a few meters at nearly the speed of light, so transit time from signal origination at neuron sites inside the brain to the external computer system mediating the upload is ~0.00001 millisec, which is considerably less than the minimum ~5 millisec neuron discharge cycle time. Neuron-monitoring chemical sensors located on average ~2 microns apart can capture relevant chemical events occurring within a ~5 millisec time window, since this is the approximate diffusion time for, say, a small neuropeptide across a 2-micron distance. Thus human brain state monitoring can probably be “instantaneous”, at least on the timescale of human neural response, in the sense of “nothing of significance was missed.”
I believe Ray was relying upon these earlier analyses, among others, when making his proposals.
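
The timing figures quoted above survive a quick sanity check. The back-of-the-envelope calculation below assumes signal paths of ~3 m traversed at roughly the speed of light, and a diffusion coefficient of D ≈ 4×10⁻¹⁰ m²/s for a small neuropeptide (our assumption, in the typical range for small molecules in tissue):

```latex
% Signal transit time over a few meters at near light speed:
\[
  t_{\mathrm{transit}} \approx \frac{3\ \mathrm{m}}{3\times10^{8}\ \mathrm{m/s}}
    = 10^{-8}\ \mathrm{s} = 10^{-5}\ \mathrm{ms},
\]
% far below the ~5 ms neuron discharge cycle time.
% Diffusion time for a small neuropeptide across x = 2 microns:
\[
  t_{\mathrm{diff}} \approx \frac{x^{2}}{2D}
    = \frac{(2\times10^{-6}\ \mathrm{m})^{2}}{2 \times 4\times10^{-10}\ \mathrm{m^{2}/s}}
    = 5\times10^{-3}\ \mathrm{s} = 5\ \mathrm{ms}.
\]
```

Both results reproduce the quoted ~0.00001 millisec transit time and the ~5 millisec chemical-sensing window.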

Completeness and complexity of simulation[edit | edit source]

Currently, most games (and professional simulations) take into account only a few aspects of reality. A car racing game has a detailed simulation of the engine, tires, traction, drag, etc., but "pedestrians" are glued to the ground and all other objects (e.g. planes) move along predetermined paths. A real-time strategy or tycoon game simulates social dynamics and resource processing to some extent, but ignores the physics of individual characters moving around.

But the big trend is that the engines games use are becoming more and more similar. Nowadays a strategy game and a shooter can use the same graphics engine and the same physics engine (such as Havok 2) and look and feel rather similar (compare that with Dune 2 vs. Doom 2). John Carmack believes that universal engines will emerge around 2010-2015 and that he will probably program only two more generations of custom game engines.

Of course, as long as content creation and programming are expensive, games will avoid simulating anything unnecessary for the core gameplay. But the inevitable emergence of a common engine base will make it possible to integrate different games into one world, and eventually this will be done. A crude example is Second Life, where the complexity is not limited, at least in principle. More and more games also use completeness as a selling point, such as the GTA series and the upcoming Spore from Will Wright.

The increased completeness will eventually make the virtual world real. In that virtual reality a "player" will be able to race, shoot, socialise, control armies, play with "physically real" objects and do a very large subset of what is possible in reality.

Uses of virtual reality[edit | edit source]

  • tourism
  • entertainment, emerging from FPS games on one end and interactive attractions at Disneyland [23] on the other.
  • social interaction, emerging from MMORPGs and from the first feeble attempts at online virtual conferences.

Social issues[edit | edit source]

Computer and video games are relatively non-controversial (bar some violent games). Virtual reality hardware, while clumsy and awkward, is accepted too. While full-scale VR à la The Matrix would scare most people if implemented today, gradual development will probably be accepted easily. For example, Sony has discussed future neural interfaces several times.

  • A World of Warcraft World - a good description of MMORPG-related trends for VR. Of course, it suffers horribly from the Single factor problem.

Timeline[edit | edit source]

Technological development[edit | edit source]

  • 2010-2015: video-realistic graphics based on general-purpose stable rendering systems.
  • 2015-2020: integrated persistent worlds.
  • 2015-2020: global physics with unlimited world complexity and simulation of most physical aspects.
  • 2015-2020: sufficiently good non-human and domain specific human AI.
  • 2015-2025: programmatic sound. Most aspects of reality can be simulated sufficiently well.
  • 2015-2025: realistic simulations of all senses (through brain-computer interface).
  • 2030+ : good human-level artificial intelligence.
  • 2045+ : uploading and life in virtual reality.

Japanese NISTEP forecast, 2001[edit | edit source]

The NISTEP report [24] lists the following predictions related to virtual reality:

  • 2010: Widespread use of electronic travel pamphlets and product catalogs that use virtual reality.
  • 2012: Emergence of electronic media that stimulate the pleasure center in the brain, causing a social problem similar to narcotic drugs. (this isn't VR per se, but similar technologies will be used)
  • 2015: Widespread use of multimedia-based virtual leisure, leading to a decline in the development of ecosystem-threatening resorts.
  • 2015: Sales from on-line shopping through a digital network (shopping through virtual malls) account for more than 50% of total sales by retail shops.

Description[edit | edit source]

The use of VR in the period covered by the NISTEP forecast (2010–2015) will probably include video-realistic virtual worlds with limited physical and AI realism, delivered through light, comfortable, high-quality VR goggles and possibly some primitive neural interfaces (maybe for motor control or mood enhancement). Incidentally, this will be the PlayStation 5 era.

By 2025-2030, both simulation and interface technologies will likely advance to a stage sufficient for a perfect Matrix-like simulation indistinguishable from reality, though the virtual avatar will still be controlled by the actual human brain. While the Matrix scenario of naked, immobile humans floating in a nutrient medium and permanently immersed in virtual reality is possible, it is likely that most people will still spend much of their time in physical reality.

References[edit | edit source]

External links[edit | edit source]