In this study, the mean FIM-M score of all subjects was 53.0 ± 21.6 points (Table 1), indicating that the subjects needed maximal or moderate assistance for some ADL performance. In the first evaluation of the frequency of neglect in ADL (Table 3), which served as the common clinical test for USN, 75 percent of all subjects exhibited a USN symptom in dressing activities. For example, a patient with USN cannot easily put on clothes on the left side. Moreover, 62.5 percent of the subjects exhibited a USN symptom in transferring and locomotion activities (Table 3). According to the motion analysis of head movement in the common clinical test, the subjects began searching from the right side in both the line and the star cancellation tests. In a normal performance, the head naturally rotates from right to left to follow a movement during the line cancellation test. In both tests, however, the subjects' leftward head movement after searching from the right side was insufficient. For the line cancellation test under the common condition, the mean percentage of the correct

Experimental setup for the HMD (head mounted display)
Each subject underwent 8 training sessions, performed twice a week. Each session included 40 min of training with 2 different games (20 min per game). Optometric tests that were available in the beta version of the software (ocular dominance and suppression) were performed directly in the head-mounted display before each training session. BCVA was tested before the first and after the last training session. Patients did not perform any other visual training during the period of dichoptic training. Ten patients had been treated with patching as children, but they did not remember for how long.
Driving a vehicle is one of the most common yet hazardous daily tasks. One of the great interests in recent research is characterizing a driver's behavior through driving simulation. Virtual reality technology is now a promising alternative to conventional driving simulators, since it provides a simpler, more secure, and more user-friendly environment for data collection. Driving simulators have been used to help novice drivers learn how to drive in a very calm environment, since the driving does not take place on an actual road. This paper provides new insights regarding a driver's behavior, techniques, and adaptability within a driving simulation using virtual reality technology. The framework of this driving simulation was designed with the Unity3D game engine (version 5.4.0f3) and programmed in the C# programming language. To make the driving simulation environment more realistic, the HTC Vive virtual reality headset, powered by SteamVR, was used. Ten volunteers ranging in age from 19 to 37 participated in the virtual reality driving experiment. Matlab R2016b was used to analyze the data obtained from the experiment. The results of this research are crucial for training drivers and gaining insight into a driver's behavior and characteristics. We gathered diverse results for 10 drivers with different characteristics, which are discussed in this study. Driving simulators are not easy for some users to use, owing to motion sickness and difficulties in adapting to a virtual environment. Furthermore, the results of this study clearly show that the performance of drivers is closely associated with an individual's behavior and adaptability to the driving simulator. Based on our findings, a VR-HMD (Virtual Reality Head-Mounted Display) driving simulator enables us to evaluate a driver's "performance errors", "recognition errors", and "decision errors".
All of this will allow researchers and further studies to potentially establish a method to increase driver safety or alleviate "driving errors".
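The paper's data analysis was done in Matlab; purely as an illustration of the kind of post-processing involved, a minimal Python sketch (with hypothetical log fields, not the authors' actual data format) of two common simulator performance measures might look like this:

```python
import statistics

def lane_metrics(samples):
    """Summarize driving performance from logged simulator samples.

    Each sample is a (lateral_offset_m, speed_mps) pair; this layout is
    a hypothetical stand-in for the study's actual log format.
    """
    offsets = [abs(off) for off, _ in samples]
    speeds = [spd for _, spd in samples]
    return {
        "mean_abs_lane_deviation_m": statistics.mean(offsets),
        "speed_stdev_mps": statistics.pstdev(speeds),
    }

# Example: a driver drifting slightly right at a roughly constant speed.
log = [(0.2, 13.0), (0.4, 13.5), (0.3, 12.5), (0.5, 13.0)]
metrics = lane_metrics(log)
```

Measures like these could feed into the "performance error" category mentioned above, though the study's own error taxonomy is not detailed in this excerpt.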
For Patient A, the score on the common cancellation test was 100%. However, USN symptoms were present in eight items of the checklist of frequency of neglect in ADL. The neglect had a profound effect on dynamic ADL, for example dressing, transferring, and locomotion. The common cancellation test did not indicate the problems of ADL related to the patient's neglect. The checklist by Halligan et al. indicated a high frequency of USN symptoms in the subjects' dressing, transferring, and locomotion. The line cancellation score in special tests 1 and 2 was lower than in the common test. When patients with USN concentrated on an object in the OC condition, their USN symptoms were aggravated more for the left test sheet than for the right test sheet. In the EC condition, both the right and left test sheet scores in special test 2 were lower than in the common test. Patient A had a bias to the right space, because the movement of the HMD and CCD camera was synchronized with her head movement. Moreover, the patient moved her head to find the sheet and may then have lost sight of both the right and left sheets on the HMD display. The HMD test may be better able to find a USN symptom that is not easily detected otherwise. This means that the new HMD system
The brittleness of vehicle automation under the challenges imposed by mixed traffic conditions in urban areas signifies the necessity of human intervention and control (Martens and van den Beukel, 2013; Sivak and Schoettle, 2015; Bilger, 2013; Simonite, 2016; Woods and Cook, 2006). In vehicles with an onboard driver, the driver is the likely choice for takeover. In SAVs, such an option is simply not available. Teleoperations could be a means through which vehicles quickly and safely resume operations; because teleoperations offer high service reliability and operability, they are a preferable choice. However, human intervention in an automated process brings a widely documented challenge of its own: the Out-Of-The-Loop (OOTL) performance problem (Endsley and Kiris, 1995), a challenge that is even more accentuated when the operator is remote, as with teleoperations. As Endsley and Kiris (1995) describe, the OOTL performance problem is characterised by a fundamental loss of perception of elements in time and space within a given environment, and of the comprehension of their status and meaning, now and in the near future. For a remote operator to combat the OOTL performance problem, it is important that they deeply understand and grasp the situation and are able to resolve it; therefore an appropriate level of Situation Knowledge (SK) (Andre, 1998; Banbury et al., 2000) is essential. Research on teleoperations and the OOTL performance problem (Endsley, 2017) in the specific context of SAVs is sparse. The application of computer displays in teleoperation systems has been well researched (Kikuchi et al., 1998; Hainsworth, 2001; Grange et al., 2000; Porat et al., 2016). However, the use of a Head-Mounted Display (HMD) in teleoperations is less investigated (Schmidt et al., 2014; Jankowski and Grabowski, 2015). Studies by Meng et al. (2014) and Santos et al. (2009) have shown promising results for using an HMD during navigational tasks.
Studies on the use of an HMD by remote operators of SAVs have, to the author’s knowledge, not been published. These reasons motivate this work to explore the use of an HMD as a human machine interface between the SAV and remote operator.
Besides the advantages of an HMD compared to paper, HMDs also have certain disadvantages that cannot be ignored. First of all, the current HMDs on the market have a limited Field Of View (FOV) and low image resolution (Ateş, Fiannaca, & Folmer, 2015; Hua, Hu, & Gao, 2013). Various industries argue that a 'good enough' HMD should have at least a horizontal FOV of 120 degrees, a vertical FOV of 50 degrees, and an image resolution of 1600x1200 pixels, if not more (Havig, Goff, McIntire, & Franck, 2009). In contrast, the Google Glass has only a 15-degree FOV and an image resolution of 640x630 pixels (Hua et al., 2013), and "the Microsoft Hololens only feels natural when you're not handling anything much bigger than a basketball" (Robertson, 2015). Second, the tracking system is one of the most important problems of an HMD with Augmented Reality capabilities. Although it is able to determine its position up to 1.2 m, it remains difficult to align objects in the real and the virtual world with respect to each other (Zollmann et al., 2014). Third, the HMD is a relatively fragile device compared to a paper-based information carrier, and given the average cost, people may feel reluctant to use such a system for fear of breaking it. Fourth, an HMD could isolate people from their surroundings, although they should be capable of interacting with them. This 'isolation' from the real world could also induce nausea in the HMD user (Havig et al., 2009). Fifth, an HMD requires electronics, which means that it must be charged and thus has a limited battery life. In addition, its weight and heat output could become disturbing to its user. Lastly, the information displayed on the HMD could limit the user's view of the surroundings or distract the user (Fiorentino, Uva, Gattullo, Debernardis, & Monno, 2014; Woods et al., 2012).
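One way to make the FOV/resolution trade-off above concrete is angular resolution in pixels per degree, approximated simply as horizontal pixels divided by horizontal FOV (this crude division ignores lens distortion). A small sketch using the figures quoted above:

```python
def pixels_per_degree(h_pixels, h_fov_deg):
    """Approximate angular resolution: horizontal pixels spread over the FOV."""
    return h_pixels / h_fov_deg

# Figures quoted in the text above.
glass = pixels_per_degree(640, 15)      # Google Glass
target = pixels_per_degree(1600, 120)   # 'good enough' spec (Havig et al., 2009)
```

By this measure, the narrow-FOV Google Glass actually packs about 43 pixels per degree versus roughly 13 for the 'good enough' wide-FOV target, which illustrates why FOV and resolution must grow together rather than one at the expense of the other.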
Vision Systems International (VSI, the Elbit Systems/Rockwell Collins joint venture), along with Helmet Integrated Systems, Ltd., developed the Helmet-Mounted Display System (HMDS) for the F-35 Joint Strike Fighter aircraft. In addition to the standard HMD capabilities offered by other systems, the HMDS fully utilizes the advanced avionics architecture of the F-35 and provides the pilot with video imagery in day or night conditions. Consequently, the F-35 is the first tactical fighter jet in 50 years to fly without a HUD. A BAE Systems helmet was considered when HMDS development was experiencing significant problems, but these issues were eventually worked out. The Helmet-Mounted Display System was fully operational and ready for delivery in July 2014.
Abstract. Augmented Reality (AR) is getting close to real use cases, which is driving the creation of innovative applications and the unprecedented growth of Head-Mounted Display (HMD) devices in consumer availability. However, at present there is a lack of guidelines, common form factors, and standard interaction paradigms between devices, which has resulted in each HMD manufacturer creating its own specifications. This paper presents the first experimental evaluation of two AR HMDs and their interaction paradigms, namely the HoloLens v1 (metaphoric interaction) and the Meta2 (isomorphic interaction). We report on precision, interactivity, and usability metrics in an object-manipulation task-based user study. 20 participants took part in this study, and significant differences were found between interaction paradigms for translation tasks, where the isomorphic mapped interaction outperformed the metaphoric mapped interaction in both time to completion and accuracy, while the contrary was found for the resize task. From an interaction perspective, the isomorphic mapped interaction (using the Meta2) was perceived as more natural and usable, with a significantly higher usability score and a significantly lower task-load index. However, when task accuracy and time to completion are key, mixed interaction paradigms need to be considered.
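The excerpt does not name the usability scale used; assuming it is the widely used System Usability Scale (SUS), which is an assumption on our part, the 0-100 score for one respondent is computed as follows:

```python
def sus_score(responses):
    """Score a single 10-item SUS questionnaire (1-5 Likert responses).

    Odd-numbered items contribute (response - 1), even-numbered items
    contribute (5 - response); the sum is scaled by 2.5 to give 0-100.
    """
    assert len(responses) == 10
    total = 0
    for item, resp in enumerate(responses, start=1):
        total += (resp - 1) if item % 2 == 1 else (5 - resp)
    return total * 2.5

best = sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1])  # best possible answers
```

Per-participant scores like this are what a between-device significance test (as reported above) would be run on.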
output. No matter how good these resulting LDR images are, they cannot exploit as many of the available luminance levels as possible, considering the user is only able to see a small portion of the whole image. Moreover, computationally efficient as these methods are, they are still too expensive to run in real time. That is to say, even when applying these methods only to the small viewable region, the computational cost will severely jeopardize the application's frame rate. When the user is exposed to an immersive VR environment, insufficient frame rate is one of the major causes of virtual reality sickness, a term for symptoms similar to motion sickness: general discomfort, headache, stomach awareness, nausea, vomiting, pallor, sweating, fatigue, drowsiness, disorientation, and apathy. Computational cost combined with hardware limitations makes these methods far from ideal when using an HMD as the display medium.
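The idea of restricting tone mapping to the viewable region can be illustrated with a deliberately simple sketch: a global Reinhard operator (a stand-in, not one of the more elaborate methods the text refers to) applied only to the visible window of an HDR frame:

```python
import numpy as np

def reinhard(hdr):
    """Simple global Reinhard tone map: L / (1 + L), applied per pixel."""
    return hdr / (1.0 + hdr)

def tone_map_viewport(hdr, y0, y1, x0, x1):
    """Tone-map only the region the user can currently see.

    Pixels outside the viewport are left untouched, saving the cost of
    processing the full panorama every frame.
    """
    out = hdr.copy()
    out[y0:y1, x0:x1] = reinhard(hdr[y0:y1, x0:x1])
    return out

hdr = np.full((8, 8), 3.0)          # toy HDR frame with uniform luminance 3
ldr = tone_map_viewport(hdr, 2, 6, 2, 6)
```

Even this cheap per-pixel operator has a per-frame cost proportional to the viewport area, which is the text's point: more sophisticated operators scale worse and threaten the frame rate.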
The smart helmet itself comprises most of the components except the processor, which will execute the operations of the helmet. First, an array of microphones will be installed on the chin protector; the array makes the noise-cancellation process very efficient, since there will inevitably be noise coming toward the rider while riding the bike. The chief component is the Arduino Uno, which will be incorporated at the rear, internally between the polystyrene foam liner and the comfort liner. The HC-05 Bluetooth module will be affixed to the Arduino for communication with the smartphone. A scaled-down heads-up display will be used to show information such as navigation directions, driver alerts, and a power meter. Ultrasonic/infrared sensors will be distributed over four places on the helmet: the first on the front side above the visor (face shield), two others on opposite sides, and the last on the rear of the helmet. Lastly, a solar panel, situated on the upper side of the hard outer shell, will power the entire circuitry inside the helmet. Headphone speakers will be placed near the ears to hear the audio output. Moving to the smartphone, a simple smartphone with Bluetooth will be enough to establish a connection with the helmet; an application will process and execute the voice commands and the sensor data received from the helmet through Bluetooth.
amplitude measured the maximum range of measurements, displacement measured the maximum position, and averaged displacement measured the average position over the trial. Raw measurements for both the force plate and the VRHMD were in millimeters; force-plate measurements corresponded directly to the center of pressure, whereas the VRHMD measurements provided the 3-D position of the VRHMD relative to the measurement camera. For measurements taken from the VRHMD, tilt in degrees was calculated using knowledge of the participant's height, with the head position adjusted by the VRHMD orientation. Head positions and center positions were measured relative to an initial position, which was reset between trials. Measurements of the center of pressure were obtained from the force plate. Area was also calculated as a 95% CI ellipse, as outlined by Duarte and Freitas [27].
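A common formulation of the 95% ellipse area uses the chi-square critical value and the covariance of the two sway axes; this is one reading of the Duarte and Freitas approach, as the study's exact computation is not shown here. Similarly, tilt could plausibly be derived as arctan(displacement/height), assuming the head pivots about the feet:

```python
import math

CHI2_95_2DOF = 5.991  # chi-square critical value, 2 dof, p = .95

def sway_ellipse_area(xs, ys):
    """95% ellipse area of 2-D sway data: pi * chi2 * sqrt(det(covariance))."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    return math.pi * CHI2_95_2DOF * math.sqrt(sxx * syy - sxy ** 2)

def head_tilt_deg(horizontal_displacement_m, height_m):
    """Head tilt from vertical, assuming the body pivots about the feet."""
    return math.degrees(math.atan2(horizontal_displacement_m, height_m))
```

For uncorrelated sway the ellipse reduces to a circle; correlated mediolateral/anteroposterior sway shrinks the area through the covariance term.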
The graphical display of the computer-generated harp will be redesigned to resemble a small concert harp. The harp will consist of thirty-six strings, with the frame designed slim and semi-transparent to give users an easy view of the strings. The strings will have the properties of a concert harp, scaled down to a reasonable viewing and playing size. The animation of the vibrations on the string will be implemented on all thirty-six strings. The sound will still use MIDI as the output, but the string properties will affect how the sound is heard by the users. The users will control the loudness of the sound produced by plucking a string. Each string will have a limit on the distance it can be stretched, based on its properties. The distance the user stretches the string will determine how loud the sound is and how long it lasts. Other features, such as musical songs, colored strings for specific notes, and other features needed to help users play the harp, will be implemented as users test-play the harp.
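The stretch-to-loudness mapping described above could be sketched as follows; the specific ranges and the linear mapping are illustrative assumptions, not the authors' design:

```python
def pluck(stretch_m, max_stretch_m=0.05):
    """Map how far a string is pulled to a MIDI velocity and a decay time.

    The stretch is clamped to the string's limit; velocity scales
    linearly to the MIDI range 0-127, and louder plucks ring longer.
    All numeric ranges here are hypothetical.
    """
    s = max(0.0, min(stretch_m, max_stretch_m))
    velocity = round(127 * s / max_stretch_m)
    decay_s = 0.5 + 2.5 * (s / max_stretch_m)
    return velocity, decay_s

full = pluck(0.05)     # full stretch
clamped = pluck(0.10)  # beyond the limit, clamped to the same result
```

Per-string `max_stretch_m` values would let different strings have different stretch limits, as the text describes.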
Augmented reality, also known as AR, is not a new technology. It has existed for almost 40 years, since Ivan Sutherland introduced the first virtual reality (VR) application. At that time, work and research were mainly concerned with establishing the hardware aspects of the technology. The head-mounted display (HMD), or head-worn display as some call it, is a result of augmented reality research and one of the fundamental pieces of equipment for accessing the technology. As time has gone by, augmented reality technology has begun to mature to a point where hardware cost and capability have converged to deliver a more feasible AR, thus enabling the rapid development of AR applications in many fields, including education. To create a non-commercial AR application specifically for education, ARToolkit can be taken into consideration. ARToolkit is a product of the AR community and is registered under the GNU General Public License. The user is provided with basic source code that lets them easily develop augmented reality applications. Despite the fact that AR is not a new technology, people may be unaware of or unfamiliar with its existence. Therefore this paper is intended to (1) give an overview of augmented reality; and (2) provide solutions to the technical problems that one will face in setting up the open-source augmented reality toolkit.
To induce accommodation, the display system needs to generate an appropriately focused image, in accordance with the result of focus-point detection, by computer graphics. We used OpenGL, developed by Silicon Graphics Inc., for real-time generation of 3D objects. Fig. 9 shows computer graphics images generated on Windows XP using the OpenGL graphics library. In this figure, five balls lie on the same plane, arranged farther and farther away from left to right. With a near focus, the outline of the farthest ball is blurred (Fig. 9(a)); with a far focus, the nearest ball is blurred (Fig. 9(e)); and when focusing on the center ball, the other balls are out of focus (Fig. 9(c)). Thus we can confirm that OpenGL produces virtual 3D images in accordance with the focus depth by computer graphics.
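The depth-dependent blur in Fig. 9 follows thin-lens geometry: the blur-circle (circle of confusion) diameter grows with an object's distance from the focus plane. A sketch of that formula, as our own illustration rather than the authors' OpenGL code:

```python
def circle_of_confusion(aperture_mm, focal_mm, focus_mm, object_mm):
    """Thin-lens blur-circle diameter (mm) for an object at object_mm
    when the lens is focused at focus_mm."""
    return (aperture_mm * focal_mm * abs(object_mm - focus_mm)
            / (object_mm * (focus_mm - focal_mm)))

# Hypothetical f/2 lens (25 mm aperture, 50 mm focal length) focused at 1 m:
in_focus = circle_of_confusion(25, 50, 1000, 1000)   # object at focus plane
far_ball = circle_of_confusion(25, 50, 1000, 2000)   # object beyond focus
```

An object on the focus plane yields a zero-diameter blur circle, while the farther balls in Fig. 9(a) produce progressively larger ones, matching the rendered defocus.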
This report describes the process and design choices made throughout this bachelor assignment. The goal was to create a design for the casing of an HMD. This design has been developed on the basis of research done at the beginning of the assignment, the research phase. This research delivered a direction for the design process in the form of a program of demands and wishes and a set of design guidelines. Through an iterative design process, a concept design and a prototype have been created. The first step is an ideation phase in which ideas have been created and developed; here the focus lies on creating ideas that provide solutions to the problems found in the research phase. The concept-generation phase shows a bundling of similar or compatible ideas to form concept designs; these concepts are further developed to create several coherent design solutions. Concept evaluation is the next phase, in which the concepts are evaluated side by side by experts and against the program of demands and wishes. The chosen concept is further developed in the detailing phase, where design choices on materials, mechanisms, user aspects, and production are elaborated. This detailed final concept is transformed into a 3D model for production of the prototype through rapid prototyping. The report is wrapped up with a chapter of conclusions and recommendations, where the final design proposal and design process are evaluated and recommendations for further development of the design are described.
Figure 3 shows the implementation inside a planetarium demo application. Users are able to change different settings of the software while remaining immersed. Rotation of the wrist changes the displayed submenus, and interaction with the second hand changes the displayed settings. This is done by hit-testing the individual fingers against the plane on which the menu resides. UE4 already has interaction handlers for menus, which can be used in conjunction with a 3D menu. This allows for context-based manipulation, such as selecting an element and changing its ID, name, material, or colour in the scene. It is also possible to display the user's current position inside a model, or to use this to teleport them to different places inside it. Storing special positions and viewports for later presentation is also a feasible application, as cumbersome navigation through large-
In this project it was shown that a surgeon will be able to use an HMD to control an endoscopic robot. However, before this robot enters an operating room, further research has to be done. A proof of concept has been made and the basics for a working system have been laid. The HCSRATS is able to control the TeleFlex and steer the tip of the endoscope. The design of the HCSRATS is modular and can easily be expanded. Thanks to its clear, object-oriented structure, future work can be implemented easily. Choosing the right control module is a personal preference, and writing a custom control module has therefore been made easy.
In this experiment, the location of the LM-attached HMD was set as the variable. That is, optical see-through HMDs that block the HMD display according to the LM location were not used; instead, VR HMDs were used to conduct the experiment. Furthermore, a Kinect sensor was used as a color-IR camera: color to capture the real image of the HMD with the LM, and IR to identify the 2D IR LED locations on the LM and HMD. For the experiment, calibration was conducted directly on the computer using Visual Studio 2013 (for Windows 8.1); the specifications were an Intel i7-6700 3.40-GHz CPU, an NVidia GeForce GTX 1060 GPU, and 16 GB of DDR5 2133 MHz RAM. The Oculus Rift DK2 was used as the HMD device. One man, 29 years old and accustomed to using VR devices, participated in the experiments as the subject.
Observers viewed the display through the goggles, which were mounted in a chin-and-head rest 114 cm from the monitor. They reported the rivalry state continuously during trials using a mouse in their left hand. Each probe presentation was indicated by a beep, after which observers used the left and right arrow keys (with their right hand) to indicate which side of the grating they believed the probe was presented on. They then reported their confidence in this response (high or low) using the up and down arrow keys. Both of these responses were acknowledged by beeps. The next probe was presented between three and five seconds after the observer's confidence response. Each block consisted of 60 probe presentations, divided equally between the 6 probe contrasts. Observers completed 12 blocks for each of the two temporal conditions (transient and sustained), taking around 2-3 hours per observer.