Reality: Extended, Virtual, Augmented and Mixed

Basic Knowledge
Extended Reality or Cross Reality (XR) is an umbrella term for a series of immersive technologies: electronic, digital environments where data are represented and projected. XR includes Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) [3]. In all these XR facets, humans observe and interact in a fully or partially synthetic digital environment constructed by technology.

VR is an alternate, completely separate, digitally created artificial environment. Immersed in VR, users feel located in a different world and operate much as they do in their physical surroundings [4]. With the help of specialized multisensory equipment such as immersion helmets, VR headsets and omnidirectional treadmills, this experience is amplified through the modalities of vision, sound, touch, movement and natural interaction with virtual objects.

AR adopts a different approach towards physical spaces; it embeds digital inputs, virtual elements into the physical environment so as to enhance it [7]. It spatially merges the physical with the virtual world [8]. The end outcome is a spatially projected layer of digital artifacts mediated by devices, e.g., smartphones, tablets, glasses, contact lenses or other transparent surfaces [9]. Moreover, AR can also be implemented in VR headsets with pass-through mode capability by displaying input from integrated camera sensors.

MR is a more complex concept and its definition has fluctuated over time, reflecting contemporary technological trends and dominant linguistic meanings and narratives [10]. MR is sometimes presented as an advanced iteration of AR, in the sense that the physical environment interacts in real time with the projected digital data [10]. For instance, a scripted non-player character in an MR game would recognize the physical surroundings and hide under a desk or behind a couch. Like VR, MR requires special glasses. For the purpose of this article, however, we accept the conception of MR as any combination of AR and VR, as well as intermediate variations such as augmented virtuality [3]. The rationale behind this decision is the long-term technological evolution and maturation of AR to include interactive affordances. Therefore, AR and VR remain the two fundamental technologies, and MR their combination.
To comprehend and visualize how these immersive technologies interact with the environment, we point to Milgram and Kishino’s one-dimensional reality–virtuality continuum [3]. This continuum is illustrated as a straight line with two ends. On the left extremum of the line, there is the natural, physical environment. The right end marks the fully artificial, virtual environment that the user experiences instead of the physical one. Hence, AR is near the left end of the spectrum while VR occupies the right extremum. MR is a superset of both.
Multimodal Metaverse Interactions
The Metaverse is based on technologies that enable multisensory interactions with virtual environments, digital objects and people. The representational fidelity of an XR system rests on stereoscopic displays that convey the perception of depth [11]. This is achieved with separate, slightly different images for each eye that replicate sight in physical environments [11]. High-resolution XR displays support a wide field of view that can span from 90 to 180 degrees. XR systems also offer superior auditory experiences compared to 2D systems. 3D, spatial or binaural audio allows the construction of soundscapes that decisively enhance immersion in AR and VR [12]. The spatial distribution of sound lets users orient themselves and identify the direction of sound cues, a powerful medium for navigation and for attracting user attention.
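The per-eye image split behind stereoscopic depth can be made concrete with a short sketch: one head-tracked view matrix is turned into two eye views separated by the interpupillary distance (IPD). The function name, the column-vector convention and the default IPD value are illustrative assumptions, not taken from any particular XR SDK.

```python
import numpy as np

def translation_x(x: float) -> np.ndarray:
    """4x4 homogeneous translation along the x axis."""
    m = np.eye(4)
    m[0, 3] = x
    return m

def eye_view_matrices(head_view: np.ndarray, ipd: float = 0.063):
    """Split one world-to-head view matrix into left/right-eye views.

    Each eye sits half the IPD from the head centre along the head's
    local x axis, so the world-to-eye transform shifts points by
    +/- ipd/2 after the head transform is applied.
    """
    half = ipd / 2.0
    # Left eye is to the left of the head centre, so in the left eye's
    # frame the head centre appears shifted toward +x (its right).
    left_view = translation_x(+half) @ head_view
    right_view = translation_x(-half) @ head_view
    return left_view, right_view
```

Rendering the scene once per eye with these two matrices (and slightly asymmetric projection frustums) yields the binocular disparity that the brain fuses into depth.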
In addition to the above passive sensory inputs, XR systems allow active interaction with virtual elements through motion controllers. These are handheld input devices with a grip, buttons, triggers and thumbsticks. Using the controllers, users can touch, grab, manipulate and operate virtual objects [13]. This capability renders them active agents in any educational experience. On this front, the development of full hand tracking will further improve the user experience toward a more natural interface. Research is also being conducted on wearable devices such as haptic suits and gloves that respond to touch [13]. Further sensory research efforts concentrate on the digitalization and simulation of smell.
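The grab-and-manipulate interaction described above can be sketched as a small per-frame update loop: while the grip is squeezed, a nearby object attaches to the controller and follows its tracked position. All class names, fields and the grab radius below are illustrative assumptions, not a real XR runtime API.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class VirtualObject:
    name: str
    position: tuple          # (x, y, z) in metres
    held: bool = False

def distance(a: tuple, b: tuple) -> float:
    """Euclidean distance between two 3D points."""
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

@dataclass
class MotionController:
    position: tuple          # tracked controller position (x, y, z)
    grip_pressed: bool = False
    held_object: Optional[VirtualObject] = None

    def update(self, objects: List[VirtualObject],
               grab_radius: float = 0.1) -> None:
        """Run once per frame: grab a reachable object while the grip
        is squeezed, carry it with the controller, release on let-go."""
        if self.grip_pressed:
            if self.held_object is None:
                for obj in objects:
                    if distance(self.position, obj.position) <= grab_radius:
                        self.held_object = obj
                        obj.held = True
                        break
            else:
                # A held object follows the controller's tracked position.
                self.held_object.position = self.position
        elif self.held_object is not None:
            self.held_object.held = False
            self.held_object = None
```

Real systems add orientation, physics and haptic feedback on top, but the grip-driven attach/follow/release cycle is the core of the interaction.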

Interaction in XR environments does not require users to be stationary. Users can engage their entire bodies. Physical movement is transferred into XR environments through positional and rotational tracking [15]. Movement can be tracked either with external, permanently mounted cameras (outside-in) or through built-in headset sensors and cameras that monitor position changes relative to the physical environment (inside-out). The latter is used in stand-alone, wireless headsets. The supported degrees of freedom (DoF) of an XR headset is an essential specification that reflects its motion-tracking capabilities [15]. Early and simpler headsets support three rotational head-movement DoFs. Contemporary, high-fidelity headsets support all six DoFs, adding lateral body movement along the x, y and z axes [15]. One frontier pertaining to occluded VR spaces is perpetual movement translation through omnidirectional treadmills.
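The difference between 3-DoF and 6-DoF tracking can be stated compactly in code: a 3-DoF pose carries only head rotation, while a 6-DoF pose extends it with translation along the three spatial axes. The class and field names are illustrative, not taken from any headset SDK.

```python
from dataclasses import dataclass

@dataclass
class Pose3DoF:
    """Rotational tracking only (yaw, pitch, roll in degrees):
    the user can look around but physical displacement is ignored,
    as on early and simpler headsets."""
    yaw: float = 0.0
    pitch: float = 0.0
    roll: float = 0.0

@dataclass
class Pose6DoF(Pose3DoF):
    """Adds translational tracking along x, y, z (metres), so
    leaning, crouching and walking inside the tracked space are
    reflected in the virtual environment."""
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
```

An inside-out tracking pipeline would update a `Pose6DoF` every frame from the headset's sensors, whereas a 3-DoF device can only ever populate the rotational fields.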
