Although upcoming technologies will be of enormous value for first responders, these technologies are often not yet mature enough for actual use or testing, even in a laboratory environment. However, some of today’s virtual reality (VR) technologies are ready and can be used to prototype new tools that incorporate more advanced technologies.
We propose FirstSimVR, a versatile VR system that can be configured for specific first-responder scenarios in order to prototype and evaluate early virtual and physical designs and to inform their improvement. The following paragraphs provide an overview of our proposed first-responder simulation system.
REAL-WORLD SIMULATION
In addition to simulating future technologies, VR can be used to simulate and control real-world crisis scenarios, which are often dangerous and/or expensive to replicate. In addition to visually rendering the scene, our proposed system supports spatialized audio and the touching and manipulation of physical objects, emphasizing tangible interaction. Section “Realistic Elements – Virtual” describes how we propose to render visual and audio cues that simulate the real world, and Section “Realistic Elements – Physical” describes how we propose to incorporate real physical objects into the experiences.
AVAILABLE TODAY
FirstSimVR will be built with the Unity Game Engine and Commercial Off-The-Shelf (COTS) hardware. Because all the hardware is available today, the system and scenarios can be replicated at multiple sites. Section “Purchase Order List” discusses COTS hardware to be used.
VERSATILE SCENARIOS
FirstSimVR will support a diverse set of first responder scenarios. Section “First Responder Scenes” discusses several example scenarios that our proposed system will support.
TESTING AND MEASURING TOMORROW’S INTERFACES
FirstSimVR will support measurement and testing of ergonomics, interfaces, and usefulness. For example, a VR system can be used to simulate what a firefighter might see with a futuristic heads-up display. Tradeoffs such as field of view, visual fidelity, and display brightness could be studied before the actual heads-up display is built. In addition, FirstSimVR will support automated motion capture for after-action reviews, head/eye tracking for attention mapping, microphones for voice commands and anxiety/breathing analysis, and performance data. Such data can provide invaluable insight to inform design decisions, insight that is not typically available with traditional real-world testing and measurement. Section “Technology Testing” discusses testing of technology, equipment, and interfaces. Section “Measurements” discusses the collection of data and metrics.
REPLICABLE AND REPEATABLE SCENARIOS
Because all required hardware is available today, the system and scenarios can be replicated at multiple sites. Once the core system is set up, scenarios can be changed in as little as 10 minutes by reorganizing the system’s real-world objects, including styrofoam blocks, simple wooden structures, hand-held props, and mannequins, all defined by a scenario floorplan and accompanying documentation. Section “Purchase Order List” discusses the hardware required at each site where scenarios will be replicated. Section “Realistic Elements – Physical” describes how the physical configurations work.
INTEROPERABILITY
FirstSimVR is designed to use arbitrary COTS hardware that meets certain requirements. This means that today’s best hardware can be used but can also be swapped out easily for improved and/or lower-cost hardware if and when it becomes available. Customization options include support for different tracking technologies, different head-mounted displays, and various existing and prototype interfaces/tools. In addition, arbitrary 2D software applications can be spontaneously brought into the virtual environment. See Section “Purchase Order List” for a discussion of COTS hardware and Section “Technology Testing” for a discussion of reusing existing 2D software applications.
SAFETY
First responder situations can be dangerous and expensive to replicate, so it makes sense to simulate those dangerous situations with VR. Unfortunately, VR can itself be dangerous if not designed and implemented well. Section “Safety” discusses some of the challenges and solutions that maximize safety.
MULTI-USER EXPERIENCES AND AFTER-ACTION REVIEWS
FirstSimVR will support multiple users in the same physical space at the same time. It will also support multiple users working remotely at different sites, or it can replay logs from a previous session to simulate working with or evaluating other users/interfaces in after-action reviews. Since the system tracks many points, multiple users will be able to see, hear, and interact with one another. If the users are in the same tracked space, then they can directly interact with each other by shaking hands, passing tools, or even working together to move a large item out of harm’s way.
This section assumes the general FirstSimVR system is already installed at the site location. See Section “Purchase Order List” for a description of the required system software and hardware that applies to all scenarios.
A primary goal of FirstSimVR is to make the system easy to configure, so that any first responder will be able to install it, configure it, and jump into a scenario without issue. We propose easy-to-use software, Commercial Off-The-Shelf (COTS) VR hardware, and simple physical props that enable teams to recreate VR scenarios at arbitrary physical sites.
Because of VR’s complexity and the many unknowns inherent both in creating VR experiences and in the future technologies being simulated, we highly recommend an iterative approach to creating prototype scenarios, with substantial feedback from first responders through experimentation, collaboration, site visits, prototypes, and pilot programs. Easy configuration at multiple sites will support such an approach.
Scenario configuration consists primarily of placing low-fidelity physical objects that correspond to objects in the virtual environment that users might touch. In addition, some customization of virtual elements will be possible through software setting options in order to support different conditions defined by the scenario creator (e.g., different lighting and weather, choosing from a set of predefined interface configurations, or the simulated field of view of an augmented reality heads-up display). Software settings can be saved and loaded from files to ensure identical settings across different testing conditions.
Scenario setup time will depend on scenario complexity and how many physical objects are used. Assuming the general FirstSimVR system is already set up, changing scenarios with minimal physical setup (e.g., only a few styrofoam walls and basic furniture) and minimal changes to software settings (and/or loading of a settings file) should take as little as 10 minutes.
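As a minimal sketch of how such a settings file might be saved and loaded in Unity, assuming a hypothetical ScenarioSettings class (the fields and defaults shown are illustrative examples, not a finalized FirstSimVR API):

    using System.IO;
    using UnityEngine;

    // Hypothetical scenario settings; field names and defaults are
    // illustrative only.
    [System.Serializable]
    public class ScenarioSettings
    {
        public string weather = "clear";     // e.g., "clear", "rain", "snow"
        public float ambientLight = 1.0f;    // 0 = dark, 1 = full daylight
        public float hudFovDegrees = 40.0f;  // simulated AR heads-up-display FOV

        // Save/load via Unity's built-in JSON serializer so the same file
        // can be shared across sites and testing conditions.
        public void Save(string path) =>
            File.WriteAllText(path, JsonUtility.ToJson(this, true));

        public static ScenarioSettings Load(string path) =>
            JsonUtility.FromJson<ScenarioSettings>(File.ReadAllText(path));
    }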
Documentation provided by each scenario’s creators will include the following:
1. A simplified blueprint for placement of static, easy-to-place, low-fidelity physical walls, furniture, and other objects that users might feel and that prevent them from walking through virtual barriers.
2. A spreadsheet listing the physical objects to be placed in the environment (example rows appear after this list). For each object, information such as the following will be provided:
a. type of material (e.g., styrofoam blocks where only minimal physical fidelity is required, or wooden structures where more stability is required),
b. whether the object is tracked (accomplished by attaching markers to the objects) or not,
c. links for ordering online,
d. specialized first-responder hardware, and
e. other sensory-inducing items such as fans or heat lamps.
3. 3D printable files in .stl format for small tangible objects (created from a program like SolidWorks or MakeVR Pro).
4. External software applications to be used (e.g., software mapped onto a virtual hand-held mobile device physically represented by a tracked hand-held plastic panel).
5. Software configuration setting options.
6. Notes from the original creators detailing any special circumstances specific to the scenario.
7. Contact information for support and/or scenario creators in case problems or questions arise.
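For illustration only, a few hypothetical rows of the physical-object spreadsheet from item 2 might look like the following (columns, links, and values are placeholders):

    object,material,tracked,order link,notes
    wall_A,styrofoam block,no,https://example.com/foam-block,minimal fidelity needed
    chair_1,wood,yes (markers attached),https://example.com/chair,moveable by users
    heat_lamp_1,COTS heat lamp,no,https://example.com/heat-lamp,sensory cue near virtual fire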
Sensory conflict can be a challenge for VR systems. Many such conflicts exist in today’s VR systems and applications. The most notable is visual-vestibular conflict, caused by a perceived difference between what a VR user sees in front of them and what their inner ear senses about their physical orientation and balance; it can lead to motion sickness, physical injury from falling, disorientation, and confusion. Such conflicts occur when low-quality tracking is used, when the system is not well calibrated, or when virtual navigation techniques are utilized (e.g., pushing a joystick to move forward virtually without physically moving forward, or virtual rotations that occur without corresponding physical rotations).
VR can also have other sensory conflicts that can lead to discomfort and even injury. For example, if a user leans onto a virtual table that he or she sees (in the virtual space) and assumes is real, but that does not exist physically, then there is danger of falling. FirstSimVR will take advantage of physical props that are congruent with virtual/visual objects in order to help improve safety over today’s typical consumer setups.
Another danger of virtual environments is the potential for negative training aftereffects if the experience is not properly designed or calibrated. Inaccurate training representations risk imbuing the user with useless or detrimental real-world reflexes based on the inaccurate representations of those activities in the virtual experience. This risk is compounded by the aforementioned visual-vestibular conflict, as well as general spatial inconsistencies, which can temporarily confuse a user’s sense of spatial awareness. For example, if a system is not properly calibrated or does not have a one-to-one mapping of real motion to virtual motion, then users may need to readapt to the real world after performing tasks within the virtual world. Negative training effects can be most dangerous when the real equipment being simulated is itself dangerous and requires careful handling (e.g., biological matter, heavy machinery, or sharp objects).
Because of the safety challenges of sensory conflict described above, FirstSimVR will use only one-to-one tracking (e.g., not relying on inertial sensors alone, which drift over time, and not warping space as done with redirected walking or redirected touching) and matching of all sensory stimuli.
In addition to sensory conflict challenges, most of today’s high-end systems are tethered to a PC, which can lead to tripping and/or breaks in presence. Although mobile VR utilizing smartphones or dedicated processors does not require a cable, quality is limited with such hardware due to the computational resources required for realistic scenes. FirstSimVR will use wireless technology that streams data to and from a PC. See Section “Purchase Order List” for a discussion of VR-appropriate wireless technology. In addition, for standing/walking scenarios, we recommend a human spotter who carefully observes and stays near the user(s) to help catch a user who loses balance.
Although presence (the sense of actually being in a virtual world) is not always required for evaluating technology, we believe presence is quite important for first-responder simulations, since a person’s anxiety level very much affects performance. For situations where realism is essential to measuring actual performance, presence should be maximized. This section describes virtual elements that FirstSimVR supports, and the next section, “Realistic Elements – Physical,” describes physical components that can dramatically increase presence.
FirstSimVR will be built on the Unity Game Engine, which we have found to work extremely well for creating a diverse range of VR experiences. In addition to the engine itself, numerous easy-to-use and modifiable third-party assets can be purchased at low prices. Likewise, any custom assets needed for specific scenarios can serve as “plug-ins” for FirstSimVR.
VISUAL RENDERING
The visual elements of VR are extremely important, as the eyes serve as the broadband path into users’ minds. Fortunately, visual rendering is relatively mature compared to other VR sensory technologies, as evidenced by the emphasis on visuals in nearly all of today’s VR experiences.
In addition to standard rendering, FirstSimVR will support visual effects that are especially important to first-responder scenarios. The specific choice of visual effects will depend on the specific scenario requirements.
Special effects will include
1. Fire, smoke, and water
2. Lighting such as police lights, user-held flashlights, car headlights, poor lighting conditions, thermal imaging, pre-baked lightmaps, and real-time lighting.
3. Weather such as snow, rain, fog, and wind.
Where high visual fidelity is important, subject-matter experts (e.g., perception scientists specializing in storm conditions) should be consulted when creating the special effects.
AUDIO
George Lucas has stated that 50% of the motion-picture experience is audio; in our opinion, audio is even more important for VR, and more important still for most first-responder situations, where subtle audio cues can be essential to making decisions. Unfortunately, VR developers often add audio only as an afterthought. Examples of audio FirstSimVR will support include fire, wind, water, traffic, and people in the background; real-world audio capture; communications tools; voices; and text-to-speech for dynamic audio interfaces. In most cases, spatialized audio should be used. Precisely controlling audio across replicated systems can be difficult due to PC configuration, gain levels, and the specific headphones used. All audio settings and hardware should be explicitly defined when documenting scenarios for replication.
For multiple users in the same tracked space, voice spatialization occurs automatically because voices emanate from the true location of the person speaking (and of the virtual representation of that person, since the virtual and physical coincide).
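As a minimal Unity sketch of how a spatialized scenario sound source might be configured (the spatializer plugin itself is selected in Unity’s audio project settings; this component is an illustrative example):

    using UnityEngine;

    // Attach to a scene object (e.g., a virtual fire) to emit spatialized audio.
    public class SpatializedSound : MonoBehaviour
    {
        public AudioClip clip;  // e.g., a looping fire-crackle recording

        void Start()
        {
            var src = gameObject.AddComponent<AudioSource>();
            src.clip = clip;
            src.loop = true;
            src.spatialBlend = 1.0f;  // fully 3D (0 would be flat 2D audio)
            src.rolloffMode = AudioRolloffMode.Logarithmic;  // natural distance falloff
            src.spatialize = true;    // route through the configured spatializer plugin
            src.Play();
        }
    }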
BODY TRACKING
Although we take human motion for granted in the real world, seeing body motion within VR can significantly add to presence.
Embodiment
Most of today’s VR experiences place the user as a disembodied point in space, which is quite different from the real world. Some systems provide a representation of the hands but not the rest of the body. There is good reason for this: consumer-level hardware has limitations that make it difficult to create a compelling experience of seeing one’s own body that is congruent with one’s physical sense of where the body parts are located and how the body feels like it is moving (the proprioceptive sense). The primary challenge of embodiment is that consistent, high-quality tracking of multiple points on the body is required. Since FirstSimVR tracks multiple points, the full body will be represented.
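A minimal sketch of how tracked points might drive a full-body avatar using Unity’s built-in humanoid inverse kinematics (requires an Animator with an IK Pass enabled; the component layout is an illustrative assumption):

    using UnityEngine;

    // Drives a humanoid rig's head and hands from tracked points; feet and
    // torso can be driven the same way, with the remaining joints estimated
    // by inverse kinematics.
    public class BodyFromTrackers : MonoBehaviour
    {
        public Animator animator;                    // humanoid avatar rig
        public Transform head, leftHand, rightHand;  // tracked points

        void OnAnimatorIK(int layerIndex)
        {
            animator.SetLookAtWeight(1f);
            animator.SetLookAtPosition(head.position + head.forward);

            animator.SetIKPositionWeight(AvatarIKGoal.LeftHand, 1f);
            animator.SetIKPosition(AvatarIKGoal.LeftHand, leftHand.position);
            animator.SetIKRotationWeight(AvatarIKGoal.LeftHand, 1f);
            animator.SetIKRotation(AvatarIKGoal.LeftHand, leftHand.rotation);

            animator.SetIKPositionWeight(AvatarIKGoal.RightHand, 1f);
            animator.SetIKPosition(AvatarIKGoal.RightHand, rightHand.position);
            animator.SetIKRotationWeight(AvatarIKGoal.RightHand, 1f);
            animator.SetIKRotation(AvatarIKGoal.RightHand, rightHand.rotation);
        }
    }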
Character motion
As humans we are highly sensitive to the motions of others, although we often don’t consciously notice that motion. Motion capture will be applied to human character rigs in real time for other live users (whether in the same tracked space or at another site) or applied as animations for virtual characters based on scripts and users’ behaviors.
INTERACTION
Being able to affect the environment or control tools provides a sense of agency that significantly adds to the sense of presence compared to passive experiences. Most notably, interaction is required in most testing cases, as described in Sections “Technology Testing” and “Measurements”.
2D SOFTWARE INTEGRATION
Arbitrary 2D software applications can be replicated in the virtual environment at runtime using VNC (see Section “DIY Recipe Kit”). This will allow, for example, a user to bring up a Google search or calculator tool on a virtual iPad during the VR scenario, even if the scenario creators did not plan on users having access to those specific tools.
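A minimal sketch of the display side of this integration; VncClient here is a placeholder for an actual VNC/RFB client library, and its methods are assumptions rather than a real API:

    using UnityEngine;

    // Maps a remote desktop onto a virtual tablet surface each frame.
    public class VncScreen : MonoBehaviour
    {
        public int width = 1024, height = 768;
        private Texture2D screenTexture;
        private VncClient vnc;  // hypothetical wrapper around a VNC/RFB library

        void Start()
        {
            screenTexture = new Texture2D(width, height, TextureFormat.RGBA32, false);
            GetComponent<Renderer>().material.mainTexture = screenTexture;
            vnc = new VncClient("192.168.1.50", 5900);  // example address/port
        }

        void Update()
        {
            byte[] frame = vnc.GetLatestFramebuffer();  // raw RGBA bytes (assumed)
            if (frame != null)
            {
                screenTexture.LoadRawTextureData(frame);
                screenTexture.Apply();  // upload the new frame to the GPU
            }
        }
    }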
FirstSimVR will provide physical sensations that users can feel with their hands or other parts of their bodies. Because vision tends to dominate touch, physical cues do not need to be as high fidelity as their real-world equivalents; tangible objects need only approximate the objects of the real-world scenarios being tested. In most scenarios, true object textures do not need to be replicated; users only need to be aware that a wall is physically in the way or that a panel feels roughly like an iPad.
Because of the need for physical touch, tracking of the hands themselves should be used, rather than hand position implied by a tracked controller alone (although hand-held devices/controllers are also supported where appropriate). The need for physical elements depends on the specific needs of the scenario being tested; for cognitive tasks, fewer physical elements will be required.
PASSIVE HAPTICS
Since FirstSimVR tracks users and objects with high accuracy, our proposed system can take advantage of a very simple solution for adding physical sensations: physical props, also known as passive haptics. Passive haptics are static physical objects that can be touched without any control by a computer. Examples include hand-held props and large physical walls. Passive haptics provide a sense of touch in VR at low cost: the VR creators simply build real-world physical objects and match them to the shape and pose of virtual objects.
It is quite surprising how much such simple physical objects can add to presence and realism. Once users realize some objects are real, they assume that virtual objects they have not touched are also real and would be physically felt if touched. Because of this, only the subset of the virtual environment near the user, which they are likely to touch, needs to be physically represented. Passive haptics have also been shown to improve cognitive mapping of the world and to improve training performance.
FirstSimVR will support the following passive-haptics options. Different scenarios will make use of different options, and FirstSimVR will output blueprints and documentation as described in Section “DIY Recipe Kit” for easy scenario configuration.
-- Passive non-tracked haptics are static physical objects (i.e., they do not move) placed in the world when setting up the physical space to match their virtual representations. Examples include walls, immovable furniture, or a parked car.
-- Passive tracked haptics are tracked physical objects whose pose changes are known to the computer, so the system can update their virtual representations and keep the visual and physical representations in sync (a minimal code sketch follows this list). Examples include hand-held props such as a flashlight, a moveable chair, and a hand-held panel.
-- Mannequins can add significant realism for scenarios requiring physical interaction with a simulated human body (e.g., a heart attack situation). Mannequins could be considered a form of passive tracked haptics, but are a bit more complicated than typical tracked passive haptics because multiple body parts are typically required to be tracked.
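As a minimal sketch of how a passive tracked haptic might be kept in sync, assuming a hypothetical TrackerInterface supplied by the tracking SDK (e.g., the OptiTrack Unity plugin would provide the pose in its own way):

    using UnityEngine;

    // Keeps a virtual object aligned with its tracked physical prop.
    public class TrackedProp : MonoBehaviour
    {
        public int trackerId;  // rigid-body ID assigned by the tracking system

        void Update()
        {
            Vector3 position;
            Quaternion rotation;
            // TrackerInterface is a placeholder for the actual tracking API.
            if (TrackerInterface.TryGetPose(trackerId, out position, out rotation))
            {
                // One-to-one mapping, with no offsets or space warping,
                // per the Safety section.
                transform.SetPositionAndRotation(position, rotation);
            }
        }
    }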
ACTIVE HAPTICS
Active haptics are controlled by a computer. Their advantage is that forces can be dynamically controlled to provide the feel of a wide range of simulated virtual objects. When combined with visual representations, active haptics can provide compelling illusions that are not achievable with haptics or visuals alone. Examples include tactile haptics such as Tactical Haptics’ Reactive Grip technology, single-point haptics such as Sensable’s Phantom, exoskeletons such as the Dexmo glove, and motion platforms. We do not expect most scenarios to use active haptics beyond basic buzzers due to their high cost to replicate (e.g., each site would need its own motion platform) and maintain, but the software will support such devices when needed. If active haptics are used, the precise COTS devices should be documented to ensure consistency across sites. Self-built haptics are not recommended due to the difficulty of precise replication across multiple sites.
WIND AND TEMPERATURE
Wind and cold/hot temperatures can significantly add to adverse-weather and firefighting scenarios through the use of fans, cold or hot rooms, heat lamps, or chilled props. Although it is typically not feasible to match the temperature of the actual scenario being simulated, when combined with visual and audio cues, smaller changes in temperature can produce convincing illusions of cold and heat. We do not recommend mist machines or similar technology, as water can damage hardware.
Testing of the simulated technologies and interfaces will be essential for understanding tradeoffs, informing the design of the futuristic tools, and iteratively improving those tools.
FirstSimVR will support testing of a broad range of technologies and tools. Because experimental conditions can be controlled better in a virtual environment than in laboratory or real-world scenario settings (e.g., live actors whose behavior changes every time), more repeatable conditions can lead to more accurate data and statistical results.
Although FirstSimVR will not be appropriate for testing all aspects of all technologies and interfaces, we do believe it is an ideal fit for testing a broad range of technologies. Below is discussion of some examples that would fit well with FirstSimVR.
-- Physical equipment. Since one of the primary features of FirstSimVR is using physical props or existing devices, testing of new or slightly different-than-existing equipment is a natural fit. The existing or modifiable equipment is simply tracked. In some cases, additional sensors may need to be added to the equipment to inform the system that something has changed (e.g., the twisting of a valve or pulling of a pin). Virtual feedback can be used to simulate different results.
-- 2D applications. FirstSimVR supports real-time screen capture/display and input control of external 2D applications in a simulated context. For example, the most common first-responder interface may be a virtual tablet, represented by a low-cost plastic hand-held prop, whose software executes on a remote machine. Bringing in such applications will support a wide range of software without having to reinvent that software. Thus, existing tools already used by first responders (or software in development) such as smart alerts, data analytics, video conferencing, and tool sharing can easily be used and tested within the virtual environment. This will also allow traditional UI designers and software developers who want to test within VR scenarios to use the development tools they are already familiar with. FirstSimVR will utilize Virtual Network Computing (VNC) to integrate remote desktops into the virtual environment.
-- Augmented reality (AR) heads-up displays. Tradeoffs such as field of view, display brightness, and registration (how well computer-generated visuals match real-world objects) can be simulated with FirstSimVR at higher quality than what is available today and/or before next-generation AR systems are built (a minimal sketch of simulating a limited HUD field of view appears after this list).
-- Cameras. Camera interfaces can be tested via simple hand-held props, 3D printed props, and/or VNC.
-- Sensors. FirstSimVR will support the exploration of ways that sensors can be utilized to help first responders. Examples include 1) thermal imaging where heat maps are overlaid onto geometry as if seen through an augmented reality heads-up display, 2) simulating hand-held sensors, and 3) overlaying abstract data visualizations on the world, with arrows pointing to the virtual sensors in the scene that provide the data.
-- Wearable computing. Technology that always travels with the user, such as smart watches, pendants, and ring input, is a great fit for first responders and FirstSimVR. In addition to the immediate interface of the worn computer, the results of the interaction can be simulated in the scene (e.g., turning lights or an alarm on/off in a house).
-- VR-relevant technologies. Technologies that work well with VR may also work well for first responders. Examples include voice commands and auditory cues, active haptics, biofeedback, and scanning/mapping technologies. As such technologies mature and are better integrated with VR, they can more easily be included for testing as part of first-responder scenarios.
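As a minimal sketch of the AR heads-up-display simulation mentioned above, hiding world-anchored HUD annotations that fall outside a configurable simulated field of view (the 40-degree default is an arbitrary example condition):

    using UnityEngine;

    // Simulates a limited AR heads-up-display field of view inside VR.
    public class SimulatedHudFov : MonoBehaviour
    {
        public Transform head;             // tracked head (HMD) transform
        public float hudFovDegrees = 40f;  // condition variable under study
        public Renderer[] hudAnnotations;  // world-anchored HUD elements

        void Update()
        {
            foreach (var annotation in hudAnnotations)
            {
                Vector3 toAnnotation = annotation.transform.position - head.position;
                float angle = Vector3.Angle(head.forward, toAnnotation);
                annotation.enabled = angle < hudFovDegrees * 0.5f;  // inside the cone?
            }
        }
    }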
FirstSimVR will simulate future technologies using simple props and today's VR technology, as discussed below. Note that some scenarios will not require all options, depending on specific scenario goals.
SOFTWARE
1. A library of FirstSimVR scenario software executables with a user interface for investigators to control variable factors (e.g., weather, virtual asset poses, lighting, etc.). Identical settings should be used when replicating experiments, but different values might be used for testing different conditions.
2. FirstSimVR Scenario Builder software where scenario creators can intuitively build their scenarios and place/modify assets from within VR. Output of these scenarios will feed into the FirstSimVR scenario executables described above and support scenario documentation.
3. The Unity game engine running on Windows 10 with FirstSimVR plugins and assets. For scenarios more customized than what the FirstSimVR Scenario Builder software described above can produce, creators can modify and create assets and scripts, maximizing their ability to iterate and innovate.
4. Virtual Network Computing (VNC), a graphical desktop sharing system used to interact with remote applications (for interaction with arbitrary 2D apps from within the virtual environment).
5. Cloud storage for easy reporting of collected data to a centralized, password-protected shared database accessible by different investigators exploring similar scenarios.
COMPUTING HARDWARE
1. Wireless streaming components from PC to HMD. The TPCast Wireless Adaptor is an example of wireless communication optimized for VR and several other companies are competing with similar technologies.
2. Internet connection for accessing databases, cloud software platforms, voice/video communications, and collaborative simulations across multiple sites.
3. A PC with an Intel i7 processor, 16 GB of memory, and an NVIDIA GTX 1080 graphics card.
4. Head-mounted display (e.g., Oculus Rift or HTC Vive)
5. Headphones (not shared earbuds, for hygiene reasons).
TRACKING OPTIONS
FirstSimVR will support different tracking systems so that there is no reliance on a single vendor, and newer, higher-quality, or lower-cost systems can be adopted as they become available.
Tracking system requirements are as follows (a minimal data sketch appears after the list):
1. Position and orientation data for each tracked point.
2. A single tracked point for each dynamic device or prop in the scene (dependent on scenario needs).
3. Six tracked points per user (head, two hands, back or torso, and two feet). Other body motion will be estimated via inverse kinematics.
4. Inertial data for head tracking, where low latency, prediction, and filtering to remove jitter are important for reducing motion sickness. Inertial data is optional for other tracked points.
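To make these requirements concrete, each tracked point might be represented per frame roughly as follows (a sketch; field names are illustrative):

    using UnityEngine;

    // Minimal per-frame representation of one tracked point.
    public struct TrackedPointSample
    {
        public int pointId;            // e.g., head, left hand, prop #3
        public Vector3 position;       // meters, in the shared tracking frame
        public Quaternion orientation;
        public double timestamp;       // seconds since scenario start
        public bool hasInertialData;   // required for the head, optional elsewhere
    }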
Beyond the above requirements, different tracking systems have their own unique advantages and disadvantages but each ultimately provides the same position and orientation data. Prices vary significantly depending on quality of tracking, number of tracked points, and tracking area. Examples of tracking systems FirstSimVR will support include:
1. OptiTrack. Costs depend very much on the specific scenario. For example, OptiTrack recommends 12 Prime 17W cameras and related equipment for $46,596 to track 14 points in a 40’x40’ tracked space. We suggest OptiTrack as the initial default tracking system due to its proven record, large tracked space, and small, lightweight passive or active markers.
2. WorldViz. Passive and active tracking options. Similar in price to OptiTrack.
3. PhaseSpace. Active tracking system. Similar in price to OptiTrack and WorldViz.
4. HTC Vive Trackers (“pucks”). Low cost ($99 per tracked point) but limited to a tracking area of roughly 15 feet by 15 feet, bulky, and difficult to attach to the body in a stable way.
We do not recommend camera-based markerless tracking (e.g., Microsoft Kinect, Leap Motion, or Intel Perceptual Computing), as today’s technology is not yet robust/consistent enough for VR interactions (e.g., dropped frames, line-of-sight issues, and lighting challenges). Successful tracking acquisition must be achieved for more than 99% of frames for FirstSimVR to be usable without frustration.
PHYSICAL ELEMENTS
The following physical elements will be used; more elaborate hardware such as active haptics and motion platforms could conceivably be added for specific scenarios.
1. Styrofoam blocks for walls, furniture, and counters where little interaction will occur.
2. Simple wooden structures where more stability is needed than what can be provided by Styrofoam.
3. A tracked hand-held plastic panel for simulating the physical sensation of a phone or tablet. The panel can be weighted if important.
4. Scenario-specific devices and/or equipment (e.g., a real fire hose, radio, or voice recorder).
5. Active haptics for scenarios that require dynamic feedback to the user. Examples range from simple buzzers to motion platforms.
FirstSimVR will support a wide range of first responder scenarios, although the system will not be a good fit for all of them. While VR can be a powerful tool, the technology should not be forced where it is not a good fit. Instead, FirstSimVR should be applied where the most value can be added. Below are discussions of how FirstSimVR can support some of the example first responder scenarios described in the NIST Challenge.
FIREFIGHTERS
Public Building / Private Home
Physical walls, tracked firefighter equipment (e.g., hoses, shovels, and axes), debris, and mannequins can be utilized so that firefighters can feel their tools, physically crawl, be blocked by barriers, and take victims to safety. Smoke, fire, and water simulation can be implemented with particle effects and audio. If firefighters are expected to explore more than one floor, then fully stable structures should be used. Because of the expense of tracking and replicating the physicality of large areas across multiple sites, we don’t recommend using FirstSimVR for navigating multiple levels of a building (techniques such as redirected walking can be used but introduce perceptual side effects as discussed in Section “Safety”). Where teamwork is important, firefighters can train together (whether at the same site, remote sites, or at different times using previously recorded experiences). See Figure 2 in “Supporting Documents.”
Investigation
A hand-held camera prop can be used to simulate the feel of a real camera that is rendered visually. Alternatively, a real digital camera capable of running a VNC server could be used, with its input and screen replicated within the virtual environment. Likewise, a voice-recorder prop or a real recorder might be used. Photogrammetry should be considered for modeling and texturing the scene, although tradeoffs should be weighed against the specific goals defined by subject-matter experts, since changing lighting conditions with such a technique can be difficult. See Figure 1 in “Supporting Documents.”
POLICE
Traffic Stop
To simplify hardware (i.e., not require motion platforms), we suggest starting the scenario after the officer has parked his/her car. The physical interior of the car can be mocked up (or a real car or portion of a car might be used). Standard police equipment such as radios should be used as physical props for communication, since officers intimately know such equipment and may sense incongruence if the props don’t feel like what they are accustomed to. Properly weighted props that match the feel of real weapons should also be used.
Documenting a Scene
Similar to the fire investigation scenario discussed above, a tracked physical prop or real camera should be used. Real-world photographs and measurements in harsh weather conditions should be collected to more accurately simulate weather and lighting. If high fidelity is required for simulating weather and lighting, subject-matter experts (e.g., perception scientists specializing in storm conditions) should be consulted when creating the weather simulation system.
EMS (EMERGENCY MEDICAL SERVICES)
Accident Response / Car Crash Triage
Physically navigating multiple-car pileups would require a large tracked space, so only a small subset of the area should be physically built, although vehicles and victims farther in the distance can be rendered. Tracked mannequins should be used for victims that must be physically interacted with, and virtual stretchers and emergency vehicles should match the physicality of real-world equivalents. We don’t suggest simulating the physical motion of emergency vehicles, as large motion platforms can be quite expensive. Pre-recorded voice and video embedded in the virtual environment should be used for voice and video conferencing when the user has little or no impact on what voice/video is played. Where voice/video responses are more dynamic, real-time sound and textures obtained from a real human actor or another user can be integrated into the virtual environment. See Figure 5 in “Supporting Documents.”
Heart Attack in Restaurant
Where users are likely to physically move and touch, bystanders, other personnel, furniture, the victim, and the ambulance should be represented physically. For areas users are unlikely to touch, only virtual representations are necessary. See Figure 4 in “Supporting Documents.”
SEARCH & RESCUE
Terrestrial
Physical debris that can fall on or be moved by the first responder should be tracked. A virtual robot can be used as long as the robot does not move the physical debris; if it does, then both the robot and the debris should be tracked. If a physical robot is required, it can be dramatically simplified relative to the virtual robot being simulated. Local people and personnel can be recorded in advance, simulated, or represented as avatars controlled by other users at the same site or at remote sites.
DEFAULT LOGS
FirstSimVR will support data collection through log files in text format so that data can be easily parsed. Each tracked point will log position, orientation, and a timestamp so that the data can later be analyzed by investigators. In addition, other input and output events (e.g., an explosion, alert, or change in stimuli) can be written to a log file with timestamps to be compared with the tracked motion data and to help determine performance. Microphone recordings can also be logged for verbal parsing and anxiety/fatigue analysis (e.g., changes in voice pitch and breathing). Future versions will provide eye-tracking data for creating attention maps.
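A minimal sketch of such a logger in Unity (the comma-separated column layout is illustrative, not a finalized FirstSimVR format):

    using System.IO;
    using UnityEngine;

    // Writes one line per tracked point per frame: timestamp, point index,
    // position, and orientation.
    public class MotionLogger : MonoBehaviour
    {
        public Transform[] trackedPoints;
        private StreamWriter writer;

        void Start()
        {
            writer = new StreamWriter("session_log.csv");
            writer.WriteLine("time,point,px,py,pz,qx,qy,qz,qw");
        }

        void Update()
        {
            for (int i = 0; i < trackedPoints.Length; i++)
            {
                Vector3 p = trackedPoints[i].position;
                Quaternion q = trackedPoints[i].rotation;
                writer.WriteLine($"{Time.time},{i},{p.x},{p.y},{p.z},{q.x},{q.y},{q.z},{q.w}");
            }
        }

        void OnDestroy() => writer?.Close();
    }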
CUSTOMIZED LOGS
In addition to the data options described above, other data can be collected by the following methods.
1. Modification of FirstSimVR itself for customized data output. This provides the most control over data output but requires the most work. Most appropriate for metrics such as time to completion, distance metrics, and number of mistakes made.
2. Listening to FirstSimVR events by subscribing to a FirstSimVR callback system so that data can be written to a log file with timestamps (a minimal sketch appears after this list). This option is most appropriate for metrics such as interface events (e.g., button presses on a physical or virtual device), selection of objects, or movement of a prop.
3. External software outside the core FirstSimVR system. Data can be recorded with timestamps in order to synchronize it with the data provided by FirstSimVR. This option is most appropriate where data collection is largely independent of the VR experience, such as physiological measures (e.g., heart rate, palm sweat, and EEG) and other external software where system timestamps are available to sync with other log data.
4. Data from arbitrary software that the user interacts with on a simulated device. This option is most appropriate where the software automatically writes its own output in its own way without timestamps and where there is no need to fully integrate with FirstSimVR (e.g., when utilizing 2D software via VNC).
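As a minimal sketch of option 2, assuming a hypothetical ScenarioEvents callback system (the event name and signature shown are illustrative assumptions, not a real API):

    using System.IO;

    // Subscribes to scenario events and logs each one with its timestamp.
    public class EventLogger
    {
        private readonly StreamWriter writer = new StreamWriter("events_log.txt");

        public EventLogger()
        {
            // e.g., button presses, object selections, prop movements.
            ScenarioEvents.OnEvent += HandleEvent;  // placeholder API
        }

        private void HandleEvent(string eventName, double timestamp)
        {
            writer.WriteLine($"{timestamp:F3}\t{eventName}");
            writer.Flush();  // keep the log usable even if the session ends abruptly
        }
    }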
VIEWING OF DATA IN REAL-TIME
In addition to writing data to log files, the system can be configured to show data in real time to the user and/or observers. For example, graphs can be placed on walls in the room, in front of the user’s torso, or on a panel attached to the user’s hand.
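A minimal sketch of such a real-time display, updating a world-space text panel each frame (the ExternalSensors heart-rate source is a placeholder for whatever physiological sensor is used):

    using UnityEngine;

    // Shows live metrics on a world-space TextMesh placed on a wall,
    // in front of the user's torso, or on a hand panel.
    public class LiveDataPanel : MonoBehaviour
    {
        public TextMesh display;

        void Update()
        {
            float bpm = ExternalSensors.GetHeartRate();  // hypothetical data source
            display.text = $"t = {Time.time:F1} s\nHR = {bpm:F0} bpm";
        }
    }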
ARCHIVING & AFTER-ACTION REVIEWS
Logged data can be used for after-action reviews where researchers, other first responders, and the users themselves can review experiences (including scenario events and character motion) by entering the recorded virtual environment as observers. Observers can then view the experience from any perspective, and the lead observer can rewind, pause, or fast-forward the experience via a simple hand-held controller in order to better discuss and evaluate the users, interfaces, simulated technology, and critical events. These archived experiences will allow any investigator to qualitatively review user performance to inform better design, suggest how users might improve, and help train new first responders to use new interfaces (e.g., by watching an expert use the tools).