AUGGMED: DEVELOPING MULTIPLAYER SERIOUS GAMES TECHNOLOGY TO ENHANCE FIRST RESPONDER TRAINING
Jonathan Saunders, MSc
CENTRIC (Centre of Excellence in Terrorism, Resilience, Intelligence and Organised Crime Research), Sheffield Hallam University, UK
Researcher
Helen Gibson, PhD
CENTRIC, Sheffield Hallam University, UK; Lecturer in Computing
Roxanne Leitao, MSc
CENTRIC, Sheffield Hallam University, UK, Researcher; University of the Arts, London, UK
Babak Akhgar, PhD
CENTRIC, Sheffield Hallam University, UK; Director of CENTRIC, Professor of Informatics
ABSTRACT
INTRODUCTION
With the ever-changing security threat landscape, the rapid advance of technology, and the need for more advanced and realistic forms of training, modern organisations are looking for novel, state-of-the-art solutions to prepare for terrorist and organised crime threats. Alongside traditional physical dangers posed by terrorists and extremists, a new threat has emerged in recent years: cyber-attacks. As the line between online and offline crime blurs, targeted attacks such as denial of service and ransomware, which can be designed to extort money and information (Veerasamy, Grobler and Von Solms, 2007; Broadhurst et al., 2014), can also be used to cripple critical infrastructure (Miller and Rowe, 2012). Cyber-attacks delivered in conjunction with traditional physical attacks aim to maximise impact and hamper response efforts (Leyden, 2008). Thus any training platform targeting the response to such threats should take the cyber element into account from a high-level training point of view.
well as how this affects others experiencing the simulation in the same exercise.
A serious game is defined as ‘any piece of software that merges a non-entertaining purpose (serious) with a video game structure (game)’ (Djaouti, Alvarez and Jessel, 2011). Serious games have been found to have affective and motivational impacts on the player, facilitate behaviour change, enhance knowledge acquisition and understanding, improve motor skills, and offer perceptual, cognitive and physiological benefits as well as improving social and other soft skills (Connolly et al., 2012). Concerning AUGGMED, serious games have already been shown to improve triage accuracy (Knight et al., 2010) [an intended pilot scenario], whilst the military already have a long history in the use of simulation and serious games (Smith, 2010) which may provide cross-overs into aspects of counterterrorism training.
1. THE VISION FOR AUGGMED
The AUGGMED (Multi-agent counter terrorist training in mixed reality environments with an automated serious game scenario generator) project is designed to enable law enforcement agencies (LEAs), paramedics, firefighters and other first responders to train simultaneously in a single virtual environment. The environment represents real world locations populated with realistic civilian agents who react and respond to events, other trainees and threats. Using modern games and server technology the AUGGMED platform enables users to train from any physical location, allowing multiple organisations to collaborate on training without the requirement of co-locating the trainees.
The project has completed one of three pilots, which are carried out at twelve-month intervals throughout development. These have already been shown to give both trainers and trainees opportunities and capabilities which would not be possible in real-world exercises.
The AUGGMED platform, as shown in Figure 1, provides a single system in which both trainers and trainees can operate in the same environment. The trainees access the platform through a range of devices (mobile, tablet, desktop PC, laptop, virtual reality headsets and haptic vests) while the trainers controlling the overall definition and progression of the scenario also have access to live analytics for feedback and evaluation. The AUGGMED project itself centres on three main scenarios: an airport terror attack and fire scenario, an underground station hot bag and explosion scenario, and a combined cyber/terror attack on a busy port. These are realised through technological components including the Unity games engine, communications layer and simulation layer.
The system incorporates realistic fire and explosion simulations, driven by Exodus, which enable trainees to experience complex life-threatening situations that would not be possible in a real-world training environment. These simulations assess and calculate the impact of these events on both civilians and trainees, realistically replicating the outcomes of smoke inhalation and injuries through negative effects on the trainee avatars.
The AUGGMED platform can be used on touch screen devices, standard PCs and in virtual reality, allowing end users to train using the most appropriate input method for their requirements. Each input method is designed to enable trainees to meet their training requirements depending on the context of the scenario they are using.
Using virtual reality enables trainees to build upon both their technical and decision-making skills as well as developing their emotional resilience to stressful and often psychologically difficult events (Wiederhold and Wiederhold, 2008). The capability to develop the emotional resilience of first responders is a unique aspect of virtual reality training when compared to standard training methods. Alongside this, the system will feature a full immersion mode utilising a virtual reality treadmill, gun controller and haptic feedback vest capable of simulating heat, gunshots and touch. These systems will combine to provide a fully immersive training experience for trainees, further enhancing their training experience and better enabling them to reach their learning objectives.
1.1 AUGGMED ARCHITECTURE OVERVIEW
The AUGGMED platform comprises a set of core systems. The platform itself utilises the Unity® games engine to handle the base game algorithms responsible for rendering, physics simulation, sound, and networking. The trainer tools are built on top of this, enabling trainers and trainees to customise, observe, record, analyse and assess any training scenario and trainee on an individual basis. Civilian intelligence, fire and explosive simulations are processed and handled by the Exodus platform, a civilian population simulation program designed for large-scale evacuation models.
The architecture of AUGGMED consists of a number of individual components, each responsible for a specific aspect of the platform. Figure 2 displays the individual components contained within the AUGGMED system that interact and integrate with one another.
1.2 THE TRAINER TOOLS
To be able to control the simulation and provide benefits to the trainer as well as the trainees, AUGGMED features a ‘trainer tools’ interface and associated functionalities which give each trainer fine-grained control over how the scenario evolves. The trainer tools consist of three individual components, each of which is designed to enable trainers to maximise their capabilities whilst using the platform. The trainer tools have been designed based on Microsoft’s design layout for Windows 10 devices. This ensures interaction with the trainer tools is consistent with the operating system and retains a specific theme on both desktop personal computers and touch screen devices (such as the Microsoft Surface). The trainer tools themselves are then divided into three further components: the configuration tool, real-time view and intervention interface, and the assessment and evaluation tool.
1.2.1 Configuration Tool
The configuration tool enables trainers to generate unique and customised training scenarios by changing setup variables such as time, location, population, potential cyber-attacks, trainee roles and capabilities, and the severity of the threat they face. This is achieved by creating a custom scenario which can be saved and re-used at a later date. Each scenario can contain a set of roles, and each role uses an inventory system to determine the capabilities of the trainees who are performing that role. The interface for setting up such a scenario is displayed in Figure 3, while the character selection interface is shown in Figure 4.
FIGURE 3: AUGGMED CONFIGURATION INTERFACE
FIGURE 4: AUGGMED CHARACTER CONFIGURATION
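Although the paper does not specify the underlying configuration format, a saved scenario of the kind described above might be captured in a structure along the following lines. This is a hypothetical sketch in Python: the field names and values are illustrative only and do not reflect AUGGMED's actual schema.

```python
# Hypothetical sketch of a saved scenario definition; field names and values
# are illustrative and do not reflect AUGGMED's actual configuration schema.
scenario = {
    "location": "airport_terminal",
    "time_of_day": "14:30",
    "population_size": 400,
    "threat_severity": "high",
    "cyber_attacks": ["cctv_outage"],
    "roles": [
        {
            "name": "firearms_officer",
            # each role's inventory determines trainee capabilities
            "inventory": ["pistol", "radio", "body_armour"],
        },
        {
            "name": "paramedic",
            "inventory": ["triage_kit", "radio"],
        },
    ],
}
```

A structure like this would make it straightforward to save a scenario and re-use it at a later date, as the configuration tool allows.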
1.2.2 Real-time View and Intervention
The real-time view and intervention tool enables trainers to observe trainees from a number of perspectives and intervene when necessary. Trainers can observe the entire simulation from a bird’s eye perspective and watch an individual or a group of trainees simultaneously through the zoom controls. They can also set the camera into a “follow” state which automatically tracks and focuses on the movements of the selected trainee.
In addition to trainers, observers are also able to use some of the functionality of this tool to monitor the progress of trainees; however, they do not have access to the intervention aspects such as deployment, fire initialisation or cyber threat controls.
1.2.3 Assessment and Evaluation Tool
The assessment and evaluation tool collects and analyses statistical data about the performance of individuals and groups of trainees. It then collates and outputs this data as a report for trainers and in a visual format immediately after a scenario has concluded. It records metadata for the entire scenario, allowing trainers to replay the scenario and re-observe trainee behaviour, actions and decisions as if it were a live scenario.
The data collected includes statistical information regarding player actions such as bullets shot, enemies hit, and civilians hit, and visual data such as movement heat maps. Trainers will be able to use this data to support them during debriefing sessions and as inputs into future training scenarios. Trainees will have access to a subset of the data relating to their own personal performance at the conclusion of an exercise.
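The record format is not specified in the paper, but the per-trainee data described above might be organised along these lines. This is a hypothetical Python sketch using only the metrics named in the text; the structure and names are illustrative.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical sketch of a per-trainee performance record; only metrics
# mentioned in the text are modelled, and the structure is illustrative.
@dataclass
class TraineeReport:
    trainee_id: str
    bullets_shot: int = 0
    enemies_hit: int = 0
    civilians_hit: int = 0
    # (x, y, timestamp) samples that could feed a movement heat map
    movement_trace: List[Tuple[float, float, float]] = field(default_factory=list)

    def accuracy(self) -> float:
        """Share of shots that hit a legitimate target."""
        return self.enemies_hit / self.bullets_shot if self.bullets_shot else 0.0

report = TraineeReport("trainee_01", bullets_shot=12, enemies_hit=7, civilians_hit=1)
print(round(report.accuracy(), 2))  # 0.58
```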
Due to the multiplayer element of AUGGMED, an interesting use case for the statistical tracking data is the possibility of comparing team and individual performances. Furthermore, it is not always clear what might represent a successful individual performance, as many members of SWAT or counterterrorism teams may play more of a tactical role which may not translate well into ‘good statistics’.
1.3 AUTOMATED GAME SCENARIO ENGINE
problems can all be modified. Monitoring of how the scenario is initialised can then be combined with player behaviour to identify where training efforts may need to be focused.
1.3.1 VR and MR Environments
The Virtual Reality (VR) and Mixed Reality (MR) environments within the AUGGMED platform dictate the virtual geometry, environmental behaviour and available fire locations. Each location has a number of preselected positions where a fire can begin, as well as a variety of fire sizes and types available for trainers to use. Each location is unique and provides opportunities for first responders to train in an environment which represents a real-world location.
1.3.2 AUGGMED Scenarios
A part of the automated game scenario engine is the individual scenarios which are simulated within the AUGGMED platform. There are multiple scenarios per location including terror attack, hot bag, explosion and fire. Each type of scenario changes the capabilities of the trainees and allows for additional elements to be set up before a scenario begins, for instance choosing the location of a fire or placing suspicious items into the scenario.
1.4 UNITY GAMES ENGINE
Unity® is a multi-platform games engine enabling developers to rapidly develop and deploy software on a multitude of platforms simultaneously. Games engines are software frameworks which facilitate the creation and development of games by providing a base set of functionalities and capabilities, such as rendering, audio and physics calculations (Unity Technologies, 2016).
As well as managing the rendering, audio and physics, Unity also provides built-in networking capabilities which enable AUGGMED to create servers and the clients which connect to these servers. Unity handles the entire visual, audio, interface and communications layer within the AUGGMED platform; all other components communicate through and/or are realised using the game engine.
As part of the rendering system for AUGGMED, all the main components of the trainees’ heads-up display (HUD) are represented using Unity’s built-in 2D rendering canvas system. This system, which has been built specifically for developers to create player interfaces, enables the AUGGMED project to create dynamic and intuitive control systems for the trainees to use. An example of this is the player-to-civilian interaction inputs: a trainee can issue commands such as “Get Down” and “Stop” to civilians inside a training scenario at any time. For standard keyboard input this can be achieved through keyboard commands; however, when using a touch screen device the player requires touch-specific methods of issuing commands. Figure 5 displays the touch screen gesture control inputs available to a trainee; the central button, represented by an open palm, is used to reveal and hide the surrounding controls. These controls then map directly to a single command a trainee can give to civilians: stop, move, get down, get up, evacuate now and get out of the way.
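As a minimal, hypothetical sketch (in Python rather than the platform's Unity code), the same civilian command set can be bound to both the keyboard number keys and the touch-screen radial slots described above; the specific bindings shown are illustrative assumptions.

```python
# Hypothetical sketch: the same civilian commands exposed through two
# input methods (number keys on desktop, radial buttons on touch screens).
CIVILIAN_COMMANDS = ["stop", "move", "get_down", "get_up",
                     "evacuate_now", "get_out_of_the_way"]

# Desktop: number keys 1-6 map directly onto the command list (illustrative).
KEYBOARD_BINDINGS = {str(i + 1): cmd for i, cmd in enumerate(CIVILIAN_COMMANDS)}

def issue_command_from_key(key):
    """Resolve a keyboard number key to a civilian command, if bound."""
    return KEYBOARD_BINDINGS.get(key)

def issue_command_from_radial(slot):
    """Resolve a touch-screen radial slot (0-5, revealed by the open-palm
    button) to the same underlying command."""
    return CIVILIAN_COMMANDS[slot] if 0 <= slot < len(CIVILIAN_COMMANDS) else None

print(issue_command_from_key("3"))   # get_down
print(issue_command_from_radial(4))  # evacuate_now
```

Keeping a single command list behind both bindings is one way to ensure the two input methods stay functionally equivalent.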
1.5 EXODUS PLATFORM
The Exodus platform is a civilian simulation system which realistically replicates civilian behaviour during an evacuation event. It also simulates behaviour relating to commands from first responders as well as injuries and fatalities due to fire and explosions (Galea, Owen and Lawrence, 1996).
1.5.1 Civilian Simulation
The civilian simulation system is responsible for realistically replicating civilian behaviour during a scenario; based on real-world data, it replicates civilian movements, reactions and injuries throughout the scenario. The number of civilians in any given scenario can range from tens to high hundreds depending on the requirements of the trainer.
contextual information back to Exodus, such as player movements and actions.
1.5.2 Fire and Explosion Simulation
The Exodus fire and explosion simulations are responsible for calculating and producing data which create realistic representations of these events in the environment provided. In the case of a fire, this includes the build-up of smoke, the effect of smoke inhalation on players and civilians, and the spread of fire throughout a given location. The explosion simulation handles information relating to an explosive's area of influence, depending on the type and amount of material used. This information is built up using a predictive algorithm which determines the effect of an explosion on an actor, including fatalities and injuries.
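As a toy illustration only, and not the Exodus model itself (whose fire and explosion calculations are far more sophisticated), a distance-based effect of the kind described above could be sketched as follows; the falloff and thresholds are invented purely for illustration.

```python
# Toy illustration only: a simple distance-based effect falloff. The actual
# Exodus fire and explosion models are far more sophisticated than this.
def explosion_effect(distance_m, yield_factor):
    """Return an injury severity in [0, 1] that falls off with the square of
    the distance from the blast, scaled by the amount of material used."""
    if distance_m <= 0:
        return 1.0
    return min(1.0, yield_factor / (distance_m ** 2))

def classify(severity):
    """Illustrative thresholds for turning severity into an outcome."""
    if severity >= 0.8:
        return "fatality"
    if severity >= 0.3:
        return "injury"
    return "unharmed"

# Example: an actor 5 m from a blast with an assumed yield factor of 10.
s = explosion_effect(5.0, 10.0)
print(s, classify(s))  # 0.4 injury
```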
1.6 PHYSICAL COMPONENTS
The AUGGMED platform relies on a selection of specialised physical components to deliver the fully immersive virtual reality experience. Whilst not required for all users of AUGGMED, these components enable trainees to experience a significantly more interactive scenario through the use of virtual reality headsets and haptic feedback vests.
1.6.1 Haptic Feedback Vest
1.6.2 Virtual Reality Headsets
The virtual reality headsets which interface with AUGGMED will provide trainees with an immersive method of engaging with the platform. Modern headsets not only display the environment in 3D, they allow 360 degrees of rotation and can track real world movements.
With the recent release of commercial virtual reality headsets, they have become far more affordable and reliable. Both the HTC Vive and Oculus Rift provide accurate head-tracking and high-quality visual fidelity.
Users training in virtual reality will require state-of-the-art computers with powerful rendering technology when compared to standard desktop and touch screen computers.
Virtual reality users will also be able to use other specialised input devices for their training, including replica gun controllers and virtual reality treadmills such as the Virtuix Omni. These, combined with the haptic feedback vest being developed, have the potential to maximise the levels of interaction and immersion. Higher levels of immersion have been found to improve learning of geospatial tasks, including search operations and environmental awareness (Pausch, Proffitt and Williams, 1997).
2. DEVELOPMENT APPROACH
The AUGGMED project follows an agile user-centred design methodology focused on attaining constant end user input and integrating the results with future developments of the platform.
‘Agile development excels in exploratory problem domains — extreme, complex, high-change projects — and operates best in a people centered, collaborative, organizational culture.’ (Cockburn and Highsmith, 2001).
Agile’s focus on rapid iteration and approaching the development process from the bottom up fits precisely with the aims of the AUGGMED project. Through the identification of non-rigid development targets and milestones the platform can focus on developing the highest priority features and easily react to changing priorities during development.
The design methodology followed by AUGGMED utilises an approach of compartmentalising elements of the platform into independent modules assigned to individual technical partners. These modules encompass a specific technical requirement of the project, such as environments, and ensure a single, accountable point of responsibility exists. Each module owner is responsible for developing, testing and integrating their work into the core project, with the technical lead responsible for overseeing the integration process, ensuring AUGGMED remains stable, responsive and reliable.
2.1 DEVELOPMENT
objectives. Following the requirements gathering phase of the project, the outputs were consolidated and prioritised based on the resource requirements of each individual feature weighted against the end users' own promised requirements.
Using this method, deadlines were set out for the systems which were required for the first pilot of the platform. Subsequent features and their deadlines were introduced based on observations, feedback, and further requirements identified during the first pilot. This iterative method of requirements gathering and feature prioritisation has enabled the developers and end users to build the AUGGMED platform to client specifications without significant redundancies in work.
This iterative approach, which is at the core of agile software development methodologies, ensures the project can continuously adapt to end user requirements and promotes the need to constantly integrate, test and pilot the software. This in turn helps ensure the AUGGMED serious game is bug-free, reliable and robust.
2.2 TESTING
New features and large changes to the core of the AUGGMED platform are developed and worked on in separate, independent branches of the project for a maximum of two weeks. Before integration into the main branch at the completion of a feature, the entire branch must be rigorously tested for errors, processing and rendering speeds, and network behaviour, as well as code and project continuity.
Upon passing the testing process the feature is merged into the core project and re-tested to ensure no merge conflicts interfered with the process of merging the two versions of the platform. Failing either of these tests requires the technical developer to address the issues found and restart the testing and merging process.
2.3 INTEGRATION
Integration of every module of the AUGGMED platform is handled using Git version control software and third-party merge conflict management tools. The principal technical developer in charge of a specific module is responsible for carrying out the merge, resolving any conflicts arising during the process and testing the full integration. Every module, task and subtask being developed in a development cycle is discussed during bi-weekly development meetings to ensure all technical developers are aware of coming changes; this reduces the chances of significant merge conflicts and of duplicate work being carried out.
The integration of the separate modules is overseen by the lead technical developer, who handles any significant feature merges or conflicts. This certifies continuity of the merged content and ensures the lead technical developer understands the purpose, implementation and behaviour of each module.
2.4 PILOTING
Piloting is a critical aspect of the AUGGMED development process: it is not only responsible for acquiring a refined set of user requirements, it also serves as a method of disseminating the progress of the project and stress testing the capabilities of the platform.
The critical nature of the piloting process in regard to future development of the platform creates a definitive set of deadlines and milestones which must be adhered to throughout the project. It also ensures the development of the platform is grounded through end user inputs, testing and feedback.
Each pilot is designed to test the entirety of the core AUGGMED serious game alongside a specific input method. The purpose of the first pilot was to test the mouse and keyboard control system for both the
trainers and trainees alongside the core functionalities required for all training scenarios. These core functionalities include network stability, system reliability, agent behaviour, hardware load and avatar interaction capabilities.
The second pilot will once again stress test these core functionalities alongside the mouse/keyboard control method. In addition to these features, the second pilot is also required to test the virtual reality capabilities of the system. This includes assessing the ability of end users to adopt virtual reality as an input method (which is likely to be alien to most users) and comparing its effectiveness against standard inputs.
3. TECHNICAL CHALLENGES
Thus far, the AUGGMED platform has overcome a number of significant technical challenges throughout its development, leading up to, and beyond, the first pilot. These challenges both guided and defined the outcome of the platform including its simulation capabilities, concurrent users and customisability.
3.1 MULTIPLAYER NETWORK SYNCHRONISATION
A key challenge faced by the project was the amount of network capacity required for each client and by the server hosting a scenario.
A scenario could have 200 to 800 civilian agents at any one time. Exodus would simulate the origin and targets of each agent, and send an update to the Unity games engine every tenth of a second. This update contained positional information for every agent. Within the games engine, each agent's movement delta was calculated between the last and next position as a vector, with both direction and magnitude, which could therefore be used to interpolate agent positions. This information would be used to simulate movement of the agents on the server and would be relayed to all clients.
Given the number of civilian agents, the amount of information sent between the server and individual clients has to be optimised at every opportunity. For this reason, all orientation-specific information was omitted from the synchronisation process of artificial intelligence agents and is instead inferred by calculating the vector between the origin and the target of an individual agent. Whilst rotational data is insignificant for an individual agent, it becomes a considerable task to send the rotational data of hundreds of agents multiple times per second.
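As an illustration only (the platform itself is built in Unity, not Python, and the function names here are hypothetical), the sketch below shows how a client might interpolate an agent's position between two updates and infer its heading from the origin-to-target vector instead of receiving rotation data over the network.

```python
import math

UPDATE_INTERVAL = 0.1  # Exodus sends agent updates every tenth of a second

def interpolate_position(origin, target, time_since_update):
    """Linearly interpolate an agent's 2D position between two updates."""
    t = min(time_since_update / UPDATE_INTERVAL, 1.0)  # clamp to [0, 1]
    x = origin[0] + (target[0] - origin[0]) * t
    y = origin[1] + (target[1] - origin[1]) * t
    return (x, y)

def infer_heading(origin, target):
    """Infer the agent's facing direction (degrees) from its movement vector,
    so no rotation data needs to be transmitted for agents."""
    dx, dy = target[0] - origin[0], target[1] - origin[1]
    if dx == 0 and dy == 0:
        return None  # stationary agent: keep the previous heading
    return math.degrees(math.atan2(dy, dx))

# Example: an agent moving from (2.0, 5.0) towards (2.5, 5.5),
# sampled 0.05 s after the last update.
print(interpolate_position((2.0, 5.0), (2.5, 5.5), 0.05))  # (2.25, 5.25)
print(infer_heading((2.0, 5.0), (2.5, 5.5)))               # 45.0
```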
replicated between all clients. Whilst this meant more information was synchronised for a player compared to an agent, the small number of concurrent players ensures it is not a significant load for the network system. The total data required to synchronise players per update is the sum of these variables and, whilst insignificant compared to the collective agents' data requirements, adds to the total network requirements of AUGGMED. The equation below shows how to roughly calculate the total server network load required based on the number of players, while Table 1 shows the amount of data transmitted for each element of position data for each player.
TABLE 1: NETWORK REQUIREMENTS FOR TRANSMITTING PLAYER DATA
Variable              | Type       | Size (bytes) | Symbol
Player Position       | Vector 3   | 12 (4+4+4)   | p
Body Rotation         | Quaternion | 16 (4+4+4+4) | b
Look Rotation         | Quaternion | 16 (4+4+4+4) | l
Total Size Per Player |            | 44           |
TABLE 2: AGENT DATA REQUIREMENTS
Variable             | Type     | Size (bytes) | Symbol
Time                 | Single   | 4            | t
Agent Position       | Vector 2 | 8 (4+4)      | o
Agent Target         | Vector 2 | 8 (4+4)      | T
Stance               | Integer  | 2            | s
Total Size Per Agent |          | 22           |
A = t + o + T + s

where A is the data required per agent per update, t the time stamp, o the agent's current (origin) position, T its target position and s its stance, giving 4 + 8 + 8 + 2 = 22 bytes (Table 2).
A standard simulation could contain around five trainees, three red team players, three observers and a second trainer. Assume such a simulation contains four hundred agents being synchronised between users three times per second (the update rate, u). Equation (3) defines the calculation of the number of bytes per second needed for an individual user, and Equation (4) shows the final method of calculating the rough bandwidth requirement to send the information to all clients effectively.
To discover the total expected bandwidth requirement of the server, the per-user figure must be multiplied by the total number of concurrent users; in the above example this comes to roughly 344,256 bytes per second, or approximately 344.3 KB/s (kilobytes per second).
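Since Equations (3) and (4) are not reproduced here, the following Python sketch illustrates the general shape of the calculation using the per-player and per-agent byte counts from Tables 1 and 2. The player and user counts in the example are assumptions, so the result only approximates the 344,256 bytes-per-second figure quoted above, which depends on exactly how players, observers and trainers are counted.

```python
AGENT_BYTES = 22    # per agent per update (Table 2)
PLAYER_BYTES = 44   # per player per update (Table 1)

def per_user_bytes_per_second(agents, players, updates_per_second):
    """Approximate data one connected user must receive each second."""
    per_update = agents * AGENT_BYTES + players * PLAYER_BYTES
    return per_update * updates_per_second

def server_bytes_per_second(agents, players, users, updates_per_second=3):
    """Approximate total outgoing server bandwidth for all connected users."""
    return per_user_bytes_per_second(agents, players, updates_per_second) * users

# Example in the spirit of the scenario above: 400 agents, around ten players
# and roughly a dozen connected users at 3 updates per second.
# Gives about 333 KB/s with these assumed counts; the figure quoted in the
# text uses a slightly different breakdown of players, observers and trainers.
print(server_bytes_per_second(agents=400, players=10, users=12))
```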
This total bandwidth requirement does not cater for all other messaging systems required to enable player interaction with the system, such as player voice commands.
Due to the amount of information the server is expected to send and receive, developing efficient and minimal methods of updating each client's individual game state was a core challenge the development team would need to overcome.
During the course of the first pilot test of the platform it was discovered that the built-in network system UNET, which is Unity's standard networking API, struggled to handle the level of data and the number of network connections required by the platform. This resulted in an unreliable network with synchronisation problems and inconsistent agent behaviour. As a result, the developers decided to look for alternative networking APIs, such as Photon, which would be better suited to handling the amount of data and concurrent users required by AUGGMED. Whilst this decision would set back development, the requirement for a stable and scalable networking API far outweighed the importance of other features planned in development. This decision meant a re-prioritisation of work, and new targets and milestones were defined.
3.2 MULTI-MODAL PLAYER INTERACTIONS
A core aspect of the AUGGMED platform is the capability for users to train using their preferred method, which can enable them to better meet their learning requirements, style and availability. This includes the ability to train using a standard mouse and keyboard setup, using a touchscreen device or utilising virtual reality headsets. These interaction methods require specific considerations when it comes to player input systems, the on-screen heads-up display (HUD) and their methods of interacting with the environment.
McMahan et al. (2012) have shown that the input methods used to interact with a virtual environment can have a significant effect on the performance of the user. This difference in performance can make it difficult for a trainer to fairly evaluate multiple trainees using different interaction methods while performing the same task. Identifying the strengths and weaknesses of these methods and building upon them is critical in helping trainees achieve their learning requirements.
Alongside different interactions, players using a selection of devices will require diverse levels of information displayed depending on their capability. An example of this is a touch screen user versus a standard desktop user. Players controlling their avatar using a mouse and keyboard are presented with standard controls attributed to first person shooter (FPS) games as well as an on-screen crosshair, whereas a touch screen user needs to see their controls on screen as well as the crosshair.
Table 3 displays the current and planned control systems for each input method; these are based on current best practices and successful serious and standard game controls.
TABLE 3: MULTI-MODAL CONTROLS
Interaction Method        | Movement Control         | Look Control          | Jump/Sprint/Crouch                          | Civilian Interactions
Mouse/Keyboard (Current)  | W, A, S, D keyboard keys | Mouse movement        | Spacebar, Shift, Ctrl                       | Keyboard number keys
Touch Screen (Current)    | Radial joypad control    | Radial joypad control | Jump button, sprint toggle, crouch toggle   | Interaction buttons
Virtual Reality (Planned) | Controller joypad        | Head rotation         | Controller buttons: A, left joypad press, B | Radial selection
These control systems were based on tried and tested methods utilised by standard computer games on personal computers and touch screen devices.
To overcome this, it was decided to accept the strengths and weaknesses of each interaction method and build upon these to tailor each method to specific areas of training.
Standard desktop interaction using a mouse and keyboard allows for the most balanced form of training and the most precise control methods (McMahan et al., 2012). Whilst it loses virtual reality's geospatial and emotional benefits, it has lower resource requirements, is more accessible for trainees, and is easier for remote training. This standard form of input provides greater precision to trainees and is the most accessible of the three interaction methods.
Touch screen device interactions enable even more capabilities for remote training; most touch screen devices are portable, enabling trainees to use AUGGMED on the move, and can further reduce the resource cost of training. However, these benefits are offset by the loss of the precision which a mouse and keyboard provide, and of virtual reality's emotional resilience and geospatial benefits. Training using touch screen devices is the cheapest and most portable of the three interaction methods, enabling remote training capabilities and cost-effective training solutions.
Virtual reality can provide the most immersive experience and is capable of greater emotional resilience training (Wiederhold and Wiederhold, 2008) and development of geospatial skills (Pausch, Proffitt and Williams, 1997; Bowman and McMahan, 2007), which are often overlooked in traditional forms of training. However, these benefits come with greater costs and less portability due to the higher technical requirements of virtual reality and the necessity for more hardware. Virtual reality training delivers the most complete training experience, capable of developing a multitude of skills from situational awareness, geospatial capabilities and tactics to communication and stress management.
CONCLUSION
The AUGGMED project identified and overcame a number of technical challenges during the first year of development. These challenges introduced important questions to the development process utilised during the project and the solutions implemented also require reflection.
The challenge of creating a multiplayer serious game which contains tens of players and hundreds of intelligent agents highlighted limitations of Unity's built-in UNET network protocol, and of standard Wi-Fi connections and hardware. The discovery of these problems during development and during the first pilot highlighted the necessity for constant testing, piloting and refinement during the development process. Utilising the findings of the first pilot, the development process of the platform was changed, with more emphasis on early testing of core systems. Through this process alternative networking APIs were identified as potential replacements for UNET which are better suited to AUGGMED's networking requirements.
Similarly the multi-modal input challenges faced by the project reinforced the requirement for informed development planning, research and end user feedback. Through efficient and effective planning during development, the AUGGMED platform built upon the strengths of using multiple input methods to ensure the platform can deliver a complete training experience, regardless of the devices used to achieve the learning requirements.
ACKNOWLEDGEMENTS
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 653590.
Contacts:
Jonathan Saunders
CENTRIC
Sheffield Hallam University, UK E-mail: [email protected]
Helen Gibson
CENTRIC
Sheffield Hallam University, UK E-mail: [email protected]
Roxanne Leitao
CENTRIC
Sheffield Hallam University, UK; University of the Arts, London, UK E-mail: [email protected]
Babak Akhgar
CENTRIC
REFERENCES AND SOURCES
Allen, P., 1992. Simulation Support of Large-Scale Exercises: A REFORGER Case Study. RAND, R-4156-A.
Bowman, D.A. and McMahan, R.P., 2007. Virtual reality: how much immersion is enough?. Computer, 40(7).
Broadhurst, R., Grabosky, P., Alazab, M. and Chon, S., 2014. Organizations and Cyber crime: An Analysis of the Nature of Groups engaged in Cyber Crime. International Journal of Cyber Criminology, 8(1).
Cockburn, A. and Highsmith, J., 2001. Agile software development, the people factor. Computer, 34(11), pp.131-133.
Connolly, T.M., Boyle, E.A., MacArthur, E., Hainey, T. and Boyle, J.M., 2012. A systematic literature review of empirical evidence on computer games and serious games. Computers & Education, 59(2), pp.661-686.
Djaouti, D., Alvarez, J. and Jessel, J.P., 2011. Classifying serious games: the G/P/S model. Handbook of research on improving learning and motivation through educational games: Multidisciplinary approaches, 2, pp.118-136.
Galea, E.R., Owen, M. and Lawrence, P., 1996. The EXODUS model. Fire Engineers Journal, pp.26-30.
Knight, J.F., Carley, S., Tregunna, B., Jarvis, S., Smithies, R., de Freitas, S., Dunwell, I. and Mackway-Jones, K., 2010. Serious gaming technology in major incident triage training: a pragmatic controlled trial. Resuscitation, 81(9), pp.1175-1179.
Leyden, J., 2008. Russian cybercrooks turn on Georgia. [online] Theregister.co.uk. Available at: http://www.theregister.co.uk/2008/08/11/georgia_ddos_attack_reloaded [Accessed 19 May 2016].
McMahan, R.P., Bowman, D.A., Zielinski, D.J. and Brady, R.B., 2012. Evaluating display fidelity and interaction fidelity in a virtual reality game. IEEE transactions on visualization and computer graphics, 18(4), pp.626-633.
Miller, B. and Rowe, D., 2012, October. A survey of SCADA and critical infrastructure incidents. In Proceedings of the 1st Annual Conference on Research in Information Technology (pp. 51-56). ACM.
Pausch, R., Proffitt, D. and Williams, G., 1997, August. Quantifying immersion in virtual reality. In Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques (pp. 13-18). ACM Press.
Smith, R., 2010. The long history of gaming in military training. Simulation & Gaming, 41(1), pp.6-19.
Unity Technologies, 2016. Unity® Pro.
Wiederhold, B.K. and Wiederhold, M.D., 2008. Virtual reality for posttraumatic stress disorder and stress inoculation training. Journal of CyberTherapy & Rehabilitation, 1(1), pp.23-35.
Veerasamy, N., Grobler, M. and Von Solms, B., 2012, July. Building an ontology for cyberterrorism. In European Conference on Cyber Warfare and Security.