
A.R.I.S.E

Architectural Real-time Interactive Showing Environment

Supervisor: Scott Chase

Kasper Grande, Nina T. Hansen

Kasper J. Knutzen, Anders Kokholm

Long T. Truong


Contents

A Unity
   A.1 Unity in General
      A.1.1 The Unity Editor
      A.1.2 Scripting in Unity
   A.2 Lightmapping
   A.3 First Person Controller
   A.4 Sound Design
      A.4.1 Theory
      A.4.2 Sound Scripts
   A.5 Opening and Closing Doors
   A.6 Glass Shader
      A.6.1 Theory
      A.6.2 GLSL Shader
      A.6.3 GLSL Code Explained
      A.6.4 CG Code Explained
   A.7 Line of Sight
   A.8 Day and Night Cycle

B 3DS MAX
   B.1 The House
   B.2 Texturing Roof and Walls
   B.3 Furniture

C Test 1
   C.1 Process and Decisions
   C.2 Test 1 Material List
   C.3 Questionnaire and Meaning
   C.4 Test 1 Questionnaire
   C.5 Test 1 Data
      C.5.1 Demography
      C.5.2 Test Subject 1
      C.5.3 Test Subject 2
      C.5.4 Test Subject 3
      C.5.5 Test Subject 4
      C.5.6 Test Subject 5
      C.5.7 Test Subject 6
      C.5.8 Test Subject 7
      C.5.9 Test Subject 8

D Test 2
   D.1 Questionnaire and Meaning
   D.2 WebPlayer Frontpage
   D.3 Test 2 Survey Questionnaire
   D.4 Test 2 Survey Data
   D.5 Test 2 Architect Questionnaire


Portfolio A: Unity

A.1 Unity in General

Unity is a software package for creating 3D video games and other interactive content, such as architectural visualizations or real-time 3D animations. Unity consists of both an editor, for developing and designing, and a game engine for compiling and executing the final product. The software is free to download, with an option to upgrade to a Pro license. The Pro license provides additional features that the free version lacks, such as render-to-texture, occlusion culling, global lighting, and post-processing effects. (Technologies [2011b])

A.1.1 The Unity Editor

The Unity Editor uses an integrated graphical environment as the primary method of development, which means the Editor is composed of several useful windows for developing and designing. There are five main windows: the Project folder, the Hierarchy folder, the Inspector window, the Scene window, and the Game window.

The Project folder handles all the "physical" asset files of the project, so to speak. That is, all asset data relevant to the project is stored on the computer's hard drive and is easily accessible through the Project folder. Assets are files that can be imported from other applications and used in the Unity Editor, such as images or 3D models. In the Project folder, assets are template objects, meaning they are the objects which can be copied into the Hierarchy to create the specific objects needed.

Figure A.1 shows the Project folder in the Unity Editor. It shows a simple project named HelloUnity2 with a scene file, material assets used for texturing objects, and three folders with assets inside.

Figure A.2 shows the asset files as located on the hard drive. All the files seen in the Project folder in the Unity Editor are stored here, as this is the path of the asset folder on the hard drive.


Figure A.1:The Project folder in the Unity Editor.

Figure A.2:The Project folder as files located on the hard drive.

The Hierarchy folder handles all the objects in the world scene. An object can take many forms, such as normal geometry, a light source, a texture, or audio. However, it is also possible for an object to be empty, meaning its type has not yet been specified. This is useful for assigning the type later when needed, or for scripts that do not depend on a specific world position. Figure A.3 shows the Hierarchy folder, with the same example project as seen in Figure A.1. In this case, the Hierarchy folder contains two cube objects, a main camera, and a couple of wall objects, along with a mesh which acts as a light source.

Figure A.3:The Hierarchy folder which gives an overview of all objects in the world scene.

The Inspector window is an essential part of the development process in Unity, as it handles all the settings for each individual object in the world scene. Figure A.4 shows the Inspector window; in this case, one of the Wall side objects has been selected. Here it is possible to translate, scale, or rotate the object. In addition, a mesh renderer and a box collider are attached, along with a material to shade the wall side.

The Scene window is the primary viewport for development, since it makes it easy to navigate and orient the objects in the world scene. Figure A.5 shows the Scene window, with the objects placed in the world scene. In this case, the two cubes are inside the surrounding walls and a camera is placed outside, in front of the opening. The example also shows that the red wall side is selected, since there is a green bounding box around the wall and a corresponding gizmo is visible as well. It is also possible to align the viewport whenever needed: the gizmo in the top right part of the scene is clickable and will align the view to whichever axis was clicked.


Figure A.4:The Inspector folder which gives an overview of all the settings and attached scripts for the specific object.

Figure A.6 shows the Game window, which is a preview of the final product: the deployed camera sees the two cubes and the walls through the opening. It is not possible to see which objects are selected or to transform them in the Game window, as it is only a preview.

A.1.2 Scripting in Unity

The Unity development environment provides options to modify and implement scripts and shaders. Scripts can be written in JavaScript, C#, or Boo. The scripts in this project are written in JavaScript, as Java is the programming language the group is familiar with. Scripting in Unity consists of attaching custom script objects, called behaviours, to game objects. Different functions inside the script objects are called on certain events. The most used method is called Update(), which is called before rendering a frame; this is where most game behaviour code goes. Code outside any function is run when the object is loaded, which can be used to initialise the state of the script. It should also be noted that the Unity API script documentation was heavily utilized. (Technologies [2011d])


Figure A.5: The Scene window is the viewport for navigating and orienting objects in the world scene.

Figure A.6:The Game window is the viewport for the final product.

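As a minimal illustration of this event-driven pattern, consider the following behaviour, a sketch written for this appendix rather than one of the project scripts: the variable declared outside any function is initialised when the object loads, one-time setup happens in Start(), and the per-frame work happens in Update().

// Illustrative sketch only; the rotationSpeed variable and log message are made up.
var rotationSpeed : float = 45.0; // initialised when the object is loaded

function Start() {
    // Called once before the first frame; useful for one-time setup.
    Debug.Log("Behaviour initialised on " + gameObject.name);
}

function Update() {
    // Called once per frame, before rendering.
    transform.Rotate(0, rotationSpeed * Time.deltaTime, 0);
}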

There are also specific scripts that are used often in cooperation with other scripts. These scripts handle common but vital operations, such as the positions of objects and the player, or whether the character is inside a collider box. The following sections describe the source code of those common operations.

Object and Character Position

Two scripts were written to handle the position of objects and of the character. Listing A.1 shows the script for Object Position, which gets the attached object's position and returns the coordinate values whenever called. In the first line, a simple variable declaration is made of the type Vector3.


In line 5, the created variable is continuously updated with the attached object's transform position in 3D world space. Line 8 gives the option to return the object's position for use in other scripts that depend on it.

Listing A.1: Object Position

1  var _objectPosition : Vector3;
2
3  function Update() {
4
5      _objectPosition = transform.position;
6  }
7
8  function getObjectPosition() : Vector3 {
9
10     return _objectPosition;
11 }

Listing A.2 shows the Player Position script, which is very similar to Object Position. The main differences are that it handles the character's position and that the script is only attached to the First Person Controller; thus, the variable is continuously updated with the character's position and can be returned whenever called.

Listing A.2: Player Position

1  var _playerPosition : Vector3;
2
3  function Update() {
4
5      _playerPosition = transform.position;
6  }
7
8  function getPlayerPosition() : Vector3 {
9
10     return _playerPosition;
11 }

Collider Triggers

Another heavily used script is the Collider Trigger script. This script checks whether the character is within a collider or not, which is useful for triggering specific events when the player is inside a collider. Listing A.3 shows the collider script. Starting with the first line, a variable declaration of the type boolean is made, to track whether the character is inside or not. Note that in line 3, the Update() method is empty, because no operations besides the trigger checking are needed. For that, Unity provides two default methods called OnTriggerEnter() and OnTriggerExit(), which are called when another collider enters or exits the trigger, respectively.


Lines 7-13 show the OnTriggerEnter() method, which takes a Collider object as parameter; in this case it will be the character's collider. An if-statement in lines 9-12 checks whether the other parameter has the tag Player, and if so sets the enter variable to true. A similar operation is done in lines 16-22 for when the character exits the collider, but in this case the enter variable is set to false. Lastly, lines 24-27 can be used by other scripts to get the return value of the enter variable.

Listing A.3: Collider Trigger

1  var enter : boolean;
2
3  function Update() {
4  }
5
6  // Activate the Main function when the player is near the door
7  function OnTriggerEnter(other : Collider) {
8
9      if (other.gameObject.tag == "Player") {
10
11         enter = true;
12     }
13 }
14
15 // Deactivate the Main function when the player moves away from the door
16 function OnTriggerExit(other : Collider) {
17
18     if (other.gameObject.tag == "Player") {
19
20         enter = false;
21     }
22 }
23
24 function getEnter() {
25
26     return enter;
27 }

A.2 Lightmapping

Lightmaps are data structures that contain the brightness of surfaces in a 3D scene. The lightmaps are rendered offline, meaning they are precomputed and used on static objects. This technique can ease the computation load for real-time 3D applications. Lightmaps are scaled in what are called lumels, or lumination elements, and in Unity this is called the resolution. The smaller the lumels, and therefore the more of them present in the image, the higher the resolution needs to be set, at the cost of speed and performance. It should therefore be noted that there must be a balance between resolution and performance, as too high a resolution would use too many system resources and affect performance negatively.

Although Unity only exposes the resolution setting, lightmap resolution and scale are two different things. The resolution is the area, in pixels, available for storing one or more surfaces' lightmaps. The number of individual surfaces that can fit on a lightmap is determined by the scale: lower scale values mean higher quality and more space taken on a lightmap, while higher scale values mean lower quality and less space taken.

The generated lightmaps are usually colored 2D texture maps, without information about the light's direction. It is possible in Unity to use multiple lightmaps to provide approximate directional information to combine with normal maps; however, that feature is not used in this project, as no normal maps were used. Any lighting model may be used to create lightmaps, because the lighting is entirely precomputed and real-time performance is thus not always a necessity. Unity provides a variety of techniques, including ambient occlusion, direct lighting with sampled shadow edges, and full radiosity bounced light. (Beam [2003])

The specific feature for rendering lightmaps is called Beast Lighting, and comes with three main functionalities. The first functionality is for adjusting the shadows cast from the light source. The second functionality is for adjusting the light settings and resolution. The last functionality is for adjusting the generated lightmaps. (Technologies [2011c])

Figure A.7 shows the Object tab, which handles the shadow functionality. Here it is possible to adjust the samples and the shadow radius, along with which type of shadow should be baked. Figure A.8 shows the Bake tab, which handles the settings for the light from all light sources. Here it is possible to adjust the resolution, how many bounces the light rays should take into account, and how intense the ambient occlusion should be. Unity Pro also has the option to take the sky light into account and to set how large the final gather rays should be (bounces from the surroundings).

Figure A.9 shows the Maps tab, which handles all generated lightmaps. Here it is possible to adjust and modify the lightmaps. Modifying the lightmaps would require an external graphics editing program, such as Adobe Photoshop. (Incorporated [2011])


Figure A.7:The Object tab in the Lightmapping. This tab handles the shadow samples of a light source.


Figure A.9:The Maps tab in the Lightmapping. This tab handles all the generated lightmaps.

A.3 First Person Controller

The first person controller is included in the Unity Standard Assets, which means the controller is already available when downloading the Unity development software. The first person controller is composed of three game objects. The first game object, which is also the parent of the other two, handles all the movement with keyboard and mouse. The second game object handles the graphics rendering of the character, in this case a simple capsule rigid body. The last game object handles the camera functionality, which determines what the user sees through that viewport. (Technologies [2011a])

Figure A.10 shows all the relevant JavaScript scripts that handle the movement of the character. Here it is possible to adjust the movement speed, acceleration, turnaround, and jumping, if enabled. It is also possible to adjust the character collider, i.e. how big its width, height, and slope limit should be.

Figure A.11 shows the Graphics object, which handles the rendering of the character. A mesh renderer is attached, along with a mesh filter. The mesh filter simply determines what kind of polygon object it is, in this case a capsule. The mesh renderer's job is then to draw the capsule, if enabled. It is also possible to enable casting and receiving shadows for the mesh body.

Figure A.12 shows the Main Camera object for the character controller. The camera object handles what will be drawn on the screen and thus determines what the user can see.


Figure A.10:The First Person Controller object. This object handles the movement settings and the character controls.

Figure A.11:The Graphics object. This object handles the rendering of the first person character.

Several options are available for adjustment, including the field of view, which is the parameter that simulates depth in the scene. Other important parameters are the projection and the clipping planes. The projection can be set to either perspective or orthographic. It is set to perspective here, since the orthographic setting would render the 3D world as 2D, which is not desired in this case.


The clipping planes determine how near and far objects must be, in Unity units, before they are drawn to the screen.

Figure A.12:The Main Camera object. This object handles the camera settings for the character.

A.4 Sound Design

This section will describe some theory about the use of sound, and it will describe how the sound part of the product is created.

A.4.1 Theory

Sound is a difficult technique to study, because humans today are accustomed to ignoring many of the sounds that occur in the environment. Most people are used to getting information about their surroundings from sight. Whether it is noticed or not, sound is an important technique to use. Even the old silent films were not shown without musical accompaniment. Connecting image and sound appeals to something in the human consciousness; even babies will spontaneously connect what they hear to what they see. (David Bordwell [2010])


Fidelity

In this context, fidelity refers to the extent to which the sound is faithful to the source as it is conceived. If the sound does not match the picture, there is a lack of fidelity; an example could be a film of a dog barking accompanied by the sound of a cat meowing. If the film of the dog is to have high fidelity, the sound accompanying the image must be the barking of a dog. Fidelity is thus purely a matter of expectation. (David Bordwell [2010])

In this project it will be important to achieve high fidelity to maintain realism in the product. If an animation of a car is implemented, it will be important to have the sound of a car driving by to accompany the animation. It will be equally important to have images matching the sounds in the product.

Space

Sound has a spatial dimension because it comes from a source. An individual's beliefs about the source have a powerful effect on how the individual understands the sound.

• A diegetic sound is a sound that has a source in the story world (David Bordwell [2010]).

• A nondiegetic sound is represented as coming from a source outside of the story world (David Bordwell [2010]).

In this project the sounds will be diegetic, because the sounds used in the product shall have a source in the product. The source of a sound may not be visible at all times; an example could be the sound of children playing. Even though the user is not looking at the playground, the sound of the children playing will still be there.

A.4.2 Sound Scripts

Sound scripts were written to handle the sound sources and depend on collider triggers. There are three different sound scripts: one handling the sound effects of the character walking in the house, another handling the ambient sound effects, and the last handling the sounds of doors opening and closing.

Walk Sound

Listing A.4 shows the script for the sound effect of the character walking. The first two lines declare two audio clips, meaning the sound effect of the walk cycle is split into two segments, one for each footstep. Lines 4-6 are variables for getting the player position from the Player Position script. Line 7 is used to check whether the character has moved or not, to ensure the sound effects only play when the character is moving. Line 9 is needed to remember which footstep is the current one; an int is used for this purpose. Lines 11-14 check whether the character is within the colliders that trigger the sound effects; in this case, two colliders are used to cover the house.

In lines 16-23, a Start() method is used to initialize certain variables.


Within the Update() method, the first if-statement in line 31 checks whether the player is within either of the two colliders. The second if-statement in line 32 checks whether the player has moved. If the player has not moved, the if-statement is skipped and the alternative branch in lines 57-60 is taken, which simply stops the audio. If the player has moved, whichFoot is incremented in line 34 for the next footstep. However, if the whichFoot variable is greater than one, it is set back to zero. The next operation is to play the sound effect corresponding to the current footstep, which is done in lines 41-56. Line 54 updates the last player position to the current player position, so the sound effect is only triggered once for each new footstep.

Listing A.4: Play Walk

1  var channel1 : AudioClip;
2  var channel2 : AudioClip;
3
4  var playerPosition : PlayerPosition;
5
6  var _playerPosition : Vector3;
7  var _lastPosition : Vector3;
8
9  var whichFoot : int;
10
11 var playWalkCollider1 : PlayWalkCollider;
12 var playWalkCollider2 : PlayWalkCollider;
13 var _enter1 : boolean;
14 var _enter2 : boolean;
15
16 function Start() {
17
18     whichFoot = 0;
19     audio.clip = channel1;
20     _lastPosition = playerPosition.getPlayerPosition();
21 }
22
23
24 function Update() {
25
26     _playerPosition = playerPosition.getPlayerPosition();
27     _enter1 = playWalkCollider1.getEnter();
28     _enter2 = playWalkCollider2.getEnter();
29
30
31     if (_enter1 == true || _enter2 == true) {
32         if (_playerPosition != _lastPosition) {
33
34             whichFoot++;
35
36             if (whichFoot > 1) {
37
38                 whichFoot = 0;
39             }
40
41             if (audio.isPlaying == false) {
42
43                 if (whichFoot == 0) {
44
45                     audio.clip = channel1;
46                     audio.Play();
47                 }
48                 else {
49
50                     audio.clip = channel2;
51                     audio.Play();
52                 }
53
54                 _lastPosition = _playerPosition;
55             }
56         }
57         else {
58
59             audio.Stop();
60         }
61     }
62 }

Ambient Sound

The Ambient Sound scripts handle the audio sources for ambient sound effects. Each ambient script has three different audio channels that are played one at a time, and the volume is scaled according to the player's position. The following listings show the Ambient Sound script split into three segments; Listing A.5 shows part one. Lines 1-3 are the declarations of the audio clips. Line 5 stores which audio clip should be played. Lines 13-21 are used for the volume scaling. The Start() method in lines 23-33 initializes the necessary variables; the maximum distance the player can be at before the volume is completely turned off is initialized in line 30.

Listing A.5: Play Ambient part 1

1  var channel1 : AudioClip;
2  var channel2 : AudioClip;
3  var channel3 : AudioClip;
4
5  var whichChannel : int;
6
7  var playAmbientCollider : PlayAmbientCollider;
8  var enter : boolean;
9
10 var playerPosition : PlayerPosition;
11 var objectPosition : ObjectPosition;
12
13 var distPlayer : Vector3;
14 var distObject : Vector3;
15
16 var dist : float;
17
18 var minDist : float;
19 var maxDist : float;
20
21 var volume : float;
22
23 function Start() {
24
25     audio.clip = channel1;
26
27     enter = true; // very weird!
28
29     minDist = 0;
30     maxDist = 35;
31
32     whichChannel = 0;
33 }

Listing A.6 shows part two of the Ambient Sound script, which concerns the Update() method. In line 5, the volumeDistance() method is run, which can be seen in Listing A.7. Lines 7-15 handle audio channel switching whenever the character is outside of the relevant colliders. The channel number, whichChannel, cannot exceed three, or else it is set back to zero. Lines 17-40 play the audio file corresponding to the currently selected audio channel when the character enters the relevant colliders.

Listing A.6: Play Ambient part 2

1  function Update() {
2
3      enter = playAmbientCollider.getEnter();
4
5      volumeDistance();
6
7      if (enter == false) {
8
9          whichChannel++;
10
11         if (whichChannel > 3) {
12
13             whichChannel = 0;
14         }
15     }
16
17     if (enter == true) { // very weird!
18
19         if (whichChannel == 0) {
20
21             audio.clip = channel1;
22         }
23         else if (whichChannel == 1) {
24
25             audio.clip = channel2;
26         }
27         else if (whichChannel == 2) {
28
29             audio.clip = channel3;
30         }
31         else if (whichChannel == 3) {
32
33             audio.clip = channel2;
34         }
35
36         if (audio.isPlaying == false) {
37
38             audio.Play();
39         }
40     }
41 }

Listing A.7 concerns the scaling of the volume. Lines 3-4 get both the player and the object position, which are used in line 6 to determine the distance between the two. Lines 8-19 handle the volume strength: line 8 first checks whether the distance is between the minimum and maximum distance. If so, the distance is normalized to a percentage and the result is stored in audio.volume. If the distance is smaller or greater than the allowed distances, the volume is set to zero.

Listing A.7: Play Ambient part 3

1  function volumeDistance() : void {
2
3      distPlayer = playerPosition.getPlayerPosition();
4      distObject = objectPosition.getObjectPosition();
5
6      dist = Vector3.Distance(distPlayer, distObject);
7
8      if (dist >= minDist && dist <= maxDist) {
9
10         var tempVolume = Mathf.Abs(dist - maxDist);
11
12         volume = tempVolume / maxDist;
13
14         audio.volume = volume;
15     }
16     else {
17
18         audio.volume = 0.0;
19     }
20 }

Door Sound

The Door Sound script handles the playback of the door sound effects when a door opens or closes. Listing A.8 shows the source code. It starts with the usual declarations of variables in lines 1-7, and initializes the relevant ones in the Start() method in lines 9-13. In lines 15-30, the Update() method decides which sound effect should be played, the opening or the closing sound. Lines 19-23 play the audio file for opening, while lines 25-29 play the audio file for closing, when the conditions of the respective if-statements are true.

Listing A.8: Play Door

1  var channel1 : AudioClip;
2  var channel2 : AudioClip;
3
4  var openableDoor : OpenableDoor;
5
6  var enter : boolean;
7  var open : boolean;
8
9  function Start() {
10
11     audio.clip = channel1;
12     open = false;
13 }
14
15 function Update() {
16
17     open = openableDoor.getOpen();
18
19     if ((open == true) && (audio.isPlaying == false) && (Input.GetKeyDown("e")) && enter == true) {
20
21         audio.clip = channel1;
22         audio.PlayOneShot(channel1);
23     }
24
25     if ((open == false) && (audio.isPlaying == false) && (Input.GetKeyDown("e")) && enter == true) {
26
27         audio.clip = channel2;
28         audio.PlayOneShot(channel2);
29     }
30 }

A.5 Opening and Closing Doors

The concept of opening and closing doors contributes to the dynamic interaction, and is accomplished by implementing transformable doors. In order for this to work properly, colliders were used both as door hinges and as the door body. Figure A.13 shows the front door with two bounding boxes: the smaller one acts as the hinge, while the larger one is the "door body", which has physics enabled, meaning collision is on while the door is closed. Due to usability issues, collision is deactivated while a door is opening and while it is open.

Figure A.13:A top view of the front door. Two colliders are attached to the front door model. One for the hinges, and the other for the door collision.

It is also important to determine how far away the character may stand and still be able to open and close the door. Therefore, an additional collider is needed to mark out where the character may open and close a door. Figure A.14 shows the collider used for the front door to mark out where it is possible to open and close it.

Listing A.9 shows the script for the door mechanics. In lines 2-8, the relevant variables are declared. Line 2 determines how "fast" the opening and closing will be.


Figure A.14: A top view of the front door. One collider checks whether the character is inside it, thus giving permission to open and close the door.

Line 3 is the rotation when the door is open, in this case an angle of 90 degrees, and line 4 is the angle when the door is closed. The Update() method in lines 11-37 consists of three smaller parts. Lines 15-20 handle the transformation of opening the door, while lines 22-27 handle the transformation of closing it. The last part, lines 29-36, checks whether the character is within the door collider and whether the player has pressed the open/close key, and then sets the open variable accordingly.

Listing A.9: Openable Door

1
2  var smooth : float = 2.0;
3  var DoorOpenAngle : float = 90.0;
4  var DoorCloseAngle : float = 0.0;
5  var open : boolean;
6
7  var _OpenableDoorCollider : OpenableDoorCollider;
8  var isCollision : boolean;
9
10 // Main function
11 function Update() {
12
13     isCollision = _OpenableDoorCollider.getCollision();
14
15     if (open == true) {
16         var target = Quaternion.Euler(0, DoorOpenAngle, 0);
17         // Dampen towards the target rotation
18         transform.localRotation = Quaternion.Slerp(transform.localRotation, target,
19             Time.deltaTime * smooth);
20     }
21
22     if (open == false) {
23         var target1 = Quaternion.Euler(0, DoorCloseAngle, 0);
24         // Dampen towards the target rotation
25         transform.localRotation = Quaternion.Slerp(transform.localRotation, target1,
26             Time.deltaTime * smooth);
27     }
28
29     if (isCollision == true) {
30         if (Input.GetKeyDown("e") && open == false) {
31             open = true;
32         }
33         else if (Input.GetKeyDown("e") && open == true) {
34             open = false;
35         }
36     }
37 }
38
39 function getOpen() : boolean {
40
41     return open;
42 }

Listing A.10 shows the script handling the door body collision. The implementation is quite trivial and simply checks whether the door is open or not. If the door is open (lines 13-16), collision is disabled; otherwise (lines 17-20) collision is enabled.

Listing A.10: No Door Collision

1  var _openableDoor : OpenableDoor;
2  var noCollider : boolean;
3
4  function Start() {
5
6      noCollider = false;
7  }
8
9  function Update() {
10
11     noCollider = _openableDoor.getOpen();
12
13     if (noCollider == true) {
14
15         collider.enabled = false;
16     }
17     else {
18
19         collider.enabled = true;
20     }
21 }

A.6 Glass Shader

A.6.1 Theory

Specular Highlights

The term specular describes light that is reflected perfectly from its source to the eye of the viewer. Smooth surfaces reflect light at an angle equal to the angle at which it arrives, called the angle of incidence. Such surfaces are known as specular surfaces and the reflection as specular reflection. Specular highlights are bright spots of light that appear on specular surfaces where the surface normal is oriented precisely between the view direction and the direction of the incoming light. (Brooker [2008]; John Daintith [2008])

Figure A.15: View direction (V), surface normal (N), light direction (L), and geometrically reflected direction (R). (Kraus [2011a])

The size of the highlight depends on how shiny the object is: the shinier the object, the smaller the highlight. An object which is perfectly shiny will only reflect light from the light source (L) in the geometrically reflected direction (R). For a non-perfectly shiny object, the light will be reflected in directions around the perfect reflection direction. See Figure A.15.

Based on this, the intensity of the specular highlight should be large when $V$ and $R$ are close together. This can be represented by the cosine of the angle between $R$ and $V$ raised to the $n_{\text{shininess}}$-th power, based on the Phong reflection model: $(R \cdot V)^{n_{\text{shininess}}}$.

The dot product has to be clamped so that all negative values are set to $0$, as negative values would mean that the light source is on the wrong side. The specular highlight can then be computed through the following equation:

$$I_{\text{specular}} = I_{\text{incoming}} \cdot k_{\text{specular}} \cdot \max(0,\, R \cdot V)^{n_{\text{shininess}}}$$

where $I_{\text{incoming}}$ is the intensity of the incoming light, $k_{\text{specular}}$ is the material color for the specular reflection, $V$ is the direction of the viewer, and $R$ is the normalized reflected direction. (Kraus [2011b])
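To give a feeling for the effect of the exponent (the numbers here are purely illustrative and not taken from the project): for a viewing direction $10°$ away from the reflected direction, $R \cdot V = \cos 10° \approx 0.985$. With $n_{\text{shininess}} = 10$ the specular term is roughly $0.985^{10} \approx 0.86$, whereas with $n_{\text{shininess}} = 100$ it drops to about $0.985^{100} \approx 0.22$; a higher exponent therefore concentrates the highlight into a much smaller, sharper spot.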

Cubemap Reflection

The Glass shader, whose implementation this chapter describes, is, as described earlier, based on the idea of using a specular highlight on one side of the window while the other side shows a reflection of the surrounding area. One way to implement the second part is to use a cubemap which is then reflected in the window.

A cubemap is basically a collection of six separate images, each of which is mapped to a face of an imaginary cube, hence the name cubemap; see Figure A.16. Cubemaps share some functionality with skyboxes, as they are often used to show objects or scenery far away in the background. Figure A.17 shows the images of a cubemap mapped onto a sphere. (Technologies [2010])

Figure A.16:A cubemap in Unity showing all six different images.

The idea behind this part of the shader is to depict parts of the cubemap on the shader surface as if it was a reflection. The general principle is that a view ray travels from the viewer to a point on the surface, where it is reflected. The reflected ray then intersects a point on the cubemap, and this point has a color value. This color value is transferred to the point on the surface of the object where the ray was reflected.

The mathematics for calculating this reflection is exactly the same as the mathematics for calculating the reflection of a light ray about a surface normal vector, as mentioned in the specular highlight section.
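For reference, this reflection computation (the one performed by the built-in reflect() functions used in the shader code later in this appendix) can be written as

$$R = D - 2\,(N \cdot D)\,N$$

where $D$ is the normalized incoming (view) direction, $N$ is the normalized surface normal, and $R$ is the reflected direction.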


Figure A.17:A cubemap in Unity shown with the images mapped to a sphere.

Figure A.18: This illustration shows the principle behind cubemapping. A view ray is sent from the viewer to a point on the surface of the object. From there the ray is reflected away from the object and intersects a point on the cubemap. This point has a pixel value, which is applied to the point on the object from where the ray originated. (TopherTG [2010])

However, this calculation makes a perfect reflection of the cubemap on the surface of the object, which will not look realistic in most cases. A glass window has some reflection of the outside world, but it is still possible to look through the glass. Furthermore, the level of reflectance is not linear across all angles.

According to a theory by the French engineer Augustin-Jean Fresnel, known as the Fresnel factor, the reflectance depends on the incidence angle between the viewer and the object:

Fresnel factor: a factor that describes how light is reflected from each smooth microfacet within a rough surface, as a function of incidence angle and wavelength. (John Daintith [2008])

This means that it is possible, using the Fresnel factor, to implement a shader in which the angle of incidence determines the intensity of the reflected cubemap. Using Schlick's approximation of the Fresnel factor, this theory can be used in the shader. Schlick's approximation is:

$$F_\lambda = f_\lambda + (1 - f_\lambda) \cdot (1 - H \cdot V)^5$$

where $V$ is the normalized direction of the viewer and $H$ is the normalized halfway vector. The halfway vector is defined as $H = (V + L)/|V + L|$, where $L$ is the normalized direction towards the light source. However, in this application of the approximation there is no light source involved; the reflected direction $R$ is used instead of the light direction. This makes it a little simpler, as the halfway vector between $V$ and $R$ is the normal vector, due to the way the reflection vector is computed: $H = (V + R)/|V + R| = N$.

$$F_\lambda = f_\lambda + (1 - f_\lambda) \cdot (1 - N \cdot V)^5$$

$f_\lambda$ describes the intensity when the viewer direction, reflection direction, and normal direction are all identical, i.e. $N \cdot V = 1$. Because this shader is meant to mimic a window, an alpha value can be used here to determine the level of transparency when looking straight at the window, which should not be $1$.
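As a quick sanity check of the behaviour this gives (values purely illustrative): with $f_\lambda = 0.2$, looking straight at the window gives $N \cdot V = 1$ and thus $F_\lambda = 0.2$, i.e. the glass is mostly transparent, while at a grazing angle $N \cdot V$ approaches $0$, the term $(1 - N \cdot V)^5$ approaches $1$, and $F_\lambda$ approaches $1$, i.e. the window becomes almost fully reflective.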

A.6.2 GLSL Shader

The following section describes how the shader just described was implemented in GLSL. GLSL stands for OpenGL Shading Language and is a high-level shading language based on the C programming language; it gives more direct control of the graphics pipeline without having to use lower-level languages.

OpenGL is based on parallel processing at different stages of a pipeline. These stages are: vertex data, vertex shader, primitive assembly, rasterization, fragment shader, per-fragment operations, and framebuffer. Figure A.19 shows the pipeline.

In the vertex data stage, the raw data of the scene is transferred to the pipeline. This could, for example, be triangle meshes provided by 3D models. This stage of the pipeline is not programmable but purely data. From the vertex data, different attributes (position, color, normal vector, and texture coordinates) are communicated to the vertex shader.

The vertex shader is a small program in GLSL that is applied to each vertex. This stage is programmable, which means that small programs, or shaders, written in GLSL are applied here.


Figure A.19: The OpenGL pipeline. Blue boxes represent data, gray boxes fixed-function stages, and green boxes programmable stages. (Krauss [2011b])

Vertex shaders can manipulate properties such as position, color, and texture coordinates, but cannot create new vertices. The purpose of the vertex shader is to transform each vertex's 3-dimensional position, provided by the vertex data stage, into the 2-dimensional coordinate at which it appears on the screen, along with its depth-buffer (z-buffer) value. There are five different transformations that have to be applied to the original coordinates to get the screen coordinates: modeling, viewing, projection, perspective, and viewport transformation. The first three of these are applied in the vertex shader, while the perspective and viewport transformations are applied automatically in the fixed-function stages after the vertex shader. Uniforms are initialized here; these are variables that have the same value for all shaders (vertex and fragment) invoked when rendering a particular primitive, but may differ for other primitives. Examples of uniforms are vertex transformations, details of light sources, etc. The output of the vertex shader is varyings, which are variables defined by the shaders per vertex. Examples of varyings are colors, normal vectors, etc.

After the vertex shader comes the primitive assembly stage, which sets up the primitives according to the data received from the vertex shader. This is a fixed-function stage which cannot be programmed.


In the rasterization stage the data is interpolated. This stage determines the pixels covered by a primitive and interpolates the output varyings and the depth from the vertex shader for each covered pixel. The interpolated varyings and depth are passed to the fragment shader, now per pixel. This is also a fixed-function stage.

The fragment shader, sometimes also referred to as a pixel shader, is the only other programmable stage. The fragment shader computes the color and other attributes for each pixel. The output is fragment color and depth, which are sent to the per-fragment operations stage.

The penultimate stage is the per-fragment operations. This stage consists of different tests and operations which, if enabled, can alter the fragment color output by the fragment shader. These alterations are then sent to the framebuffer.

The framebuffer is the last stage, where the array of pixels is assembled from the colors processed by the earlier stages. (Lipchak [2008]; Krauss [2011a])

A.6.3 GLSL Code Explained

The next section describes the most essential parts of the GLSL code implementing the shader. The first thing to notice is that the shader is written to be used on a plane. This is important because a plane has a backface and a frontface, and this code is written specifically to show different effects on the two sides. It would not work on a cube or a sphere, as the backface of those objects is the inside. With this clear, the first essential element is the declaration of the properties; see Listing A.11.

Listing A.11: Properties method

1 Properties {
2     _Color ("Diffuse Material Color", Color) = (1,1,1,1)
3     _SpecColor ("Specular Material Color", Color) = (1,1,1,1)
4     _Shininess ("Shininess", Float) = 10
5     _Cubemap_Intensity ("Cubemap Intensity", Float) = 5.0
6     _FresnelFactor_Pwr ("Fresnel Factor Power", Float) = 5.0
7     _Alpha ("Alpha", Range(0,1)) = 0.2
8     _Cube ("Reflection Map", Cube) = "" {}
9 }

The properties are variables that can be specified after the creation of the shader. The first property, _Color, changes the value of the diffuse color on the backface, which is the inside of the window. The next, _SpecColor, changes the color of the specular highlight on the same side, and _Shininess controls how big the specular reflection is. _Cubemap_Intensity is used to increase or decrease the intensity of the cubemap reflection on the frontface, and _FresnelFactor_Pwr can modify the Fresnel factor of the same cubemap reflection. _Alpha is the alpha value of the backface and _Cube is the cubemap. All of these properties except the cubemap are used to fine-tune the appearance of the shader without having to change the code. See Listing A.12.


Listing A.12: SubShader

1 SubShader {
2     Tags { "Queue" = "Transparent" } // draw after all opaque geometry has been drawn

Next is the start of the code. It starts with the declaration SubShader, which lets Unity choose the subshader that fits the hardware, and then Tags, which is set to Queue = Transparent and tells the compiler to draw after all opaque geometry has been drawn.

Passes

The program is split into two passes. See Listing A.13 and A.14.

Listing A.13: Pass 1

1 Pass {
2     Tags { "LightMode" = "ForwardBase" } // pass for ambient light and first light source
3     ZWrite Off // don't write to depth buffer in order not to occlude other objects
4     Blend One OneMinusSrcAlpha // use pre-multiplied alpha blending
5     Cull Off

The first pass renders the scene with the first light source and ambient light. The tag LightMode = ForwardBase is set to make sure the uniforms are set correctly. ZWrite Off makes sure that nothing is written to the depth buffer (z-buffer), in order not to occlude other objects. Cull Off makes sure that both back- and frontfaces are shown and not discarded. Lastly, the line Blend One OneMinusSrcAlpha makes this pass use pre-multiplied alpha blending.

Listing A.14: Pass 2

1 Pass {
2 Pass {
3     Tags { "LightMode" = "ForwardAdd" } // pass for additional light sources
4     Blend One One // additive blending
5     ZWrite Off // don't write to depth buffer in order not to occlude other objects
6     Cull Off

The second pass is for any additional light sources. LightMode = ForwardAdd makes sure the uniforms are set correctly for the second pass with the additional light sources. Blend One One sets the alpha blending to additive blending, and both ZWrite and Cull are off, as in the first pass. Each pass is divided into a vertex and a fragment shader, but before either of these, some variables have to be defined. The properties declared earlier have to be defined in each pass as well, along with some built-in and scripted uniforms that are used later (see the code for details).


Vertex Shader

The following code is part of the vertex shader for both pass 1 and pass 2. See Listings A.15, A.16, A.17, A.18 and A.19.

Listing A.15: VertexShader part 1

1  if (0.0 == _WorldSpaceLightPos0.w) // directional light?
2  {
3      attenuation = 1.0; // no attenuation
4      lightDirection = normalize(vec3(_WorldSpaceLightPos0));
5  }
6  else // point or spot light
7  {
8      vec3 vertexToLightSource = vec3(_WorldSpaceLightPos0 - modelMatrix * gl_Vertex);
9      float distance = length(vertexToLightSource);
10     attenuation = 1.0 / distance; // linear attenuation
11     lightDirection = normalize(vertexToLightSource);
12 }

The next essential part of the code is the distinction between different light sources. It is important to distinguish between point/spot lights and directional light, as both the attenuation and the light direction are essential for later calculations but are computed differently. The distinction is rather simple and can be done with an if-else construct. It is based on the fact that the Unity-specific uniform _WorldSpaceLightPos0 specifies the position of the light for a point/spot light, and the fourth coordinate of a point is 1, while for a directional light it specifies a direction, whose fourth coordinate is 0. So if 0.0 == _WorldSpaceLightPos0.w, it is a directional light; otherwise it is a point or spot light. Once the distinction has been made, the different properties can be computed. The main difference is that directional light has no attenuation while the other types have.

Listing A.16: VertexShader part 2

1 vec3 diffuseReflection = attenuation * vec3(_LightColor0) * vec3(_Color)
2     * max(0.0, dot(normalDirection, lightDirection));
3 }

The diffuse color is computed from the attenuation calculated above, as well as _LightColor0 and the user-specified _Color.

Listing A.17: VertexShader part 3

1  vec3 specularReflection;
2  if (dot(normalDirection, lightDirection) < 0.0) // light source on the wrong side?
3  {
4      specularReflection = vec3(0.0, 0.0, 0.0); // no specular reflection
5  }
6  else // light source on the right side
7  {
8      specularReflection = attenuation * vec3(_LightColor0) * vec3(_SpecColor)
9          * pow(max(0.0, dot(reflect(-lightDirection, normalDirection), viewDirection)), _Shininess);
10 }

The specular reflection depends on the light source being on the right side of the object. An if-else construct tests whether the dot product of the light direction and the normal direction is smaller than 0. If that is the case, the light is on the wrong side and the specular reflection is 0. If the light is on the right side, the specular reflection is calculated.

Listing A.18: VertexShader part 4

1 color = vec4(ambientLighting + diffuseReflection + specularReflection, 1.0);

The different reflection terms are combined and added to color, which is a varying. There is a slight difference between the two passes here: in the first pass, the ambient lighting, the diffuseReflection and the specularReflection are added together and an alpha value of one is assigned, while the ambient light is neglected in the second pass.

Listing A.19: VertexShader part 5

1 gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;

The last line to be explained in the vertex shader performs the vertex transformations described earlier in the chapter. gl_Vertex, a predefined attribute that contains the vertex data, is multiplied by gl_ModelViewProjectionMatrix, which is a combination of the model, view and projection transformations.

Fragment Shader

The fragment shader is much more compact than the vertex shader in this implementation. See Listing A.20.

Listing A.20: FragmentShader

1  #ifdef FRAGMENT
2
3  void main()
4  {
5      if (!gl_FrontFacing)
6      {
7          gl_FragColor = color;
8      }
9      else
10     {
11         vec3 reflectedDirection = reflect(viewDir, normalize(-normalDir));
12         float reflectivity = _Alpha + (1.0 - _Alpha) * pow(1.0 - abs(dot(normalize(viewDir), normalize(-normalDir))), _FresnelFactor_Pwr);
13         gl_FragColor = _Cubemap_Intensity * textureCube(_Cube, reflectedDirection) * reflectivity;
14
15         gl_FragColor.a = reflectivity;
16         // gl_FragColor = vec4(viewDir, 1.0);
17     }
18 }
19
20 #endif

The fragment shader in the first pass is based on an if-else construct that defines what happens on the frontface and on the backface. If the GLSL built-in gl_FrontFacing is not true (!gl_FrontFacing), the fragment belongs to the backface; otherwise it is the frontface. If it is the frontface, the Fresnel computation is done to assign the cubemap to the surface. If it is the backface, the varying color is assigned to gl_FragColor.

A.6.4 CG Code Explained

This section describes the surface shader written for the windows in Unity. Surface shaders are Unity's high-level shader code approach, which simplifies the shader through auto-generation of the repetitive code that would be necessary in normal vertex and pixel shaders. The shader is written in the Cg programming language. (Technologies [2011e]; Wittayabundit [2011])

The shader is a simplified version of the one described earlier, as there is no specular highlight on the backface of the plane but only the Fresnel-modified cubemap and transparency. As with the GLSL implementation, the CG implementation uses some properties that can be altered without changing the code. See Listings A.21, A.22, A.23, A.24 and A.25.

Listing A.21: CGShader part 1

1 Shader "CG Glass shader" {
2     Properties {
3         _Cubemap_Intensity ("Cubemap Intensity", Float) = 5.0
4         _FresnelFactor_Pwr ("Fresnel Factor Power", Float) = 5.0
5         _Alpha_Frontface ("Alpha for the frontface", Range(0,1)) = 0.2
6         _Alpha_Backface ("Alpha for the backface", Range(0,1)) = 0.2
7         _Cube ("Reflection Map", Cube) = "" {}


These properties are: _Cubemap_Intensity, which is used to increase or decrease the intensity of the cubemap reflection on the frontface; _FresnelFactor_Pwr, which can modify the Fresnel factor of the same cubemap reflection; _Alpha_Frontface and _Alpha_Backface, which alter the opacity of the shader on the front- and backface; and lastly _Cube, which is the cubemap.

Listing A.22: CGShader part 2

1 SubShader {
2     Tags { "Queue" = "Transparent" } // draw after all opaque geometry has been drawn
3     Cull Off
4
5     CGPROGRAM
6     #pragma surface surf Lambert alpha

After the properties have been defined, the SubShader command is called, which lets Unity choose the subshader that fits the hardware, and then Tags, which is set to Queue = Transparent and tells the compiler to draw after all opaque geometry has been drawn. Cull Off makes sure that both back- and frontfaces are shown and not discarded. The line CGPROGRAM just tells the compiler that the following code is a CG program. The #pragma surface line is the directive indicating that this is a surface shader: surf is the name of the main function, Lambert is the light model used, and alpha is an optional parameter that enables alpha blending mode.

Listing A.23: CGShader part 3

1 struct Input
2 {
3     float3 viewDir;
4     float3 worldNormal;
5     float3 worldRefl;
6 };

The next part of the code is a struct that defines all the input values used in the main function. Here it contains viewDir, which holds the view direction; worldNormal, which holds the world normal vector; and worldRefl, which holds the world reflection vector.

Listing A.24: CGShader part 4

1 samplerCUBE _Cube;
2 float _Alpha_Frontface;
3 float _Alpha_Backface;
4 float _Cubemap_Intensity;
5 float _FresnelFactor_Pwr;


Just after the struct, any additional variables have to be declared. Here all the properties are declared.

Listing A.25: CGShader part 5

1  void surf (Input IN, inout SurfaceOutput o)
2  {
3      if (dot(IN.worldNormal, IN.viewDir) < 0.0)
4      {
5          o.Alpha = _Alpha_Backface;
6      }
7      else
8      {
9
10         float reflectivity = _Alpha_Frontface + (1.0 - _Alpha_Frontface) * pow(1.0 - abs(dot(normalize(IN.viewDir), normalize(IN.worldNormal))), _FresnelFactor_Pwr);
11         o.Emission = _Cubemap_Intensity * texCUBE(_Cube, IN.worldRefl).rgb;
12         o.Alpha = reflectivity;
13     }
14 }

The main function is declared as void surf (Input IN, inout SurfaceOutput o). Its first task is to determine what is backface and what is frontface. This is done with an if-else construct that tests whether the dot product of the normal vector and the view direction is below zero. If it is below zero, the fragment is on the backface and the alpha value is set to the value specified in the properties. If it is the frontface, the reflectivity is first calculated using Schlick's approximation of the Fresnel factor. Then the output is set to use emission, outputting _Cubemap_Intensity multiplied with the cubemap, and finally the alpha level is set equal to the Fresnel reflectivity.

A.7 Line of Sight

Line of sight is a method to optimize performance and rendering time. An object is only rendered if the camera is facing towards it and it is visible, meaning no other object blocks it. Normally, a line of sight implementation is based on ray casting and back tracing, a technique Unity supports with runtime classes for raycasting and linecasting. However, the group chose a different approach for the sake of simplicity and because of time restrictions. Instead of raycasting, the technique is based on box colliders: the objects are only rendered when the character is within the relevant colliders. Figure A.20 shows three colliders for the bedroom objects. When the character is inside one of the colliders, the bedroom objects are rendered; otherwise they are not.
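For comparison, a raycast-based variant of the same idea could look roughly like the sketch below. This is not the approach used in the project; the script, the player reference, and the maxViewDistance parameter are purely illustrative, and the script would be attached to the object whose renderer should be toggled.

// Hedged sketch of a raycast-based line of sight check (not the project's implementation).
var player : Transform;               // assumed reference to the First Person Controller
var maxViewDistance : float = 30.0;   // illustrative cut-off distance

function Update() {
    var toObject = transform.position - player.position;
    var hit : RaycastHit;

    // Cast a ray from the player towards this object. If the first thing hit is
    // this object, it is considered visible; otherwise something blocks it.
    if (Physics.Raycast(player.position, toObject.normalized, hit, maxViewDistance)) {
        renderer.enabled = (hit.transform == transform);
    }
    else {
        renderer.enabled = false;
    }
}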

Listing A.26 shows the script related to line of sight; in this case, the script belongs to the bedroom objects. In lines 1-7, all the relevant variables for the colliders are declared.


Figure A.20:A top view of the house ground and environment. Three colliders are used to check when the bedroom should be rendered.

In this case, three colliders are used. Line 9 declares a variable for storing objects with the same tag in an array of type GameObject. This is useful for quickly finding objects with the same tag, since all objects in the house are tagged. Lines 11-14 form the Start() method, which simply stores the objects tagged "Bedroom" in the array objectsTag. Lines 16-33 form the Update() method, where the checking is handled, i.e. whether the character is inside any of the colliders. If that is true, all the objects in the objectsTag array are rendered. If none of the colliders are triggered, the renderers are deactivated.

Listing A.26: Line of Sight for Bedroom

1  var trigger1 : LOSCollider;
2  var trigger2 : LOSCollider;
3  var trigger3 : LOSCollider;
4
5  var _trigger1 : boolean;
6  var _trigger2 : boolean;
7  var _trigger3 : boolean;
8
9  var objectsTag : GameObject[];
10
11 function Start() {
12
13     objectsTag = GameObject.FindGameObjectsWithTag("Bedroom");
14 }
15
16 function Update() {
17
18     _trigger1 = trigger1.getEnter();
19     _trigger2 = trigger2.getEnter();
20     _trigger3 = trigger3.getEnter();
21
22     for (var objs : GameObject in objectsTag) {
23
24         if (_trigger1 == true || _trigger2 == true || _trigger3 == true) {
25
26             objs.renderer.enabled = true;
27         }
28
29         if (_trigger1 == false && _trigger2 == false && _trigger3 == false) {
30
31             objs.renderer.enabled = false;
32         }
33     }
34 }

A.8 Day and Night Cycle

Instead of having a static directional light as the sun, a day and night cycle with a moving directional light, a corresponding halo, and changing brightness was planned. The implementation is based on a Unity YouTube tutorial, BurgZergArcade [2010]. Due to the complexity of the implementation and time constraints, it never got to work properly. The rotation and the duration worked fine; the problem was setting the start placement of the sun, as well as changing the directional light's brightness to match. A secondary problem, not directly related to the day/night cycle script, was that the halo representing the sun did not properly react to the colliders in the house. This meant that the "sun" could sometimes be seen directly through the house's walls, both inside and outside.
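For reference, the rotation part that did work can be sketched roughly as below. This is only an illustration of the principle, assuming the script is attached to the directional light; the variable name and day length are made up and not taken from the tutorial or the project code.

// Illustrative sketch of a simple sun rotation (not the project's script).
var dayLengthInSeconds : float = 120.0; // assumed length of a full day/night cycle

function Update() {
    // One full revolution (360 degrees) per cycle, rotated around the x-axis.
    transform.Rotate(360.0 / dayLengthInSeconds * Time.deltaTime, 0, 0);
}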


Portfolio B

3DS MAX

B.1 The House

The original idea was to browse online modelling sites for free models of a complete house to use in the project. A house was found, downloaded and imported into 3DS Max. Contrary to expectations, the model was not made up of several smaller parts but was one big element. This made it impossible to manipulate the house the way it was necessary for the project, i.e. the roof would have to be surgically removed in order to access the interior. Other house models were downloaded and tested, but the problem remained the same. In the end it was decided to create a model of our own. A suitable floor plan of a house was found on the Internet (Figure B.1).

Figure B.1: Floor plan of the house model.

From this plan and its metric dimensions, walls were erected in 3DS Max and saved to the Unity pipeline. In Unity the simple building was placed in the world, and using the game engine we took a walk through the house. Exploring the house on foot, the dimensions of the rooms seemed unnaturally small. To remedy this, the building's length and width were scaled up by fifty percent while the height was kept at its original dimension (Figure B.2).

Figure B.2: Early house model from floor plan.

Modelling in 3DS Max was learning by doing, as the related Animation Techniques course had had its first lessons cancelled. We first tried constructing the walls with the Box standard primitive, but later found the Walls object type in AEC Extended easier to use. After the walls were erected, they were segmented, polygons were removed, and the borders were bridged to make space for doors and windows (Figure B.3).


A roof was modelled separately and merged with the house upon completion. The house structure was now complete and ready for texture application and interior decorating (Figure B.4).

Figure B.4: House model with roof.

B.2 Texturing Roof and Walls

Applying textures to the roof and walls revealed another problem. In shaping the walls and roof, vertices and segments had been moved and stretched. This caused the applied textures to scale with the segments, distorting the mapping (Figure B.5).

Figure B.5: Texture distortion because of segment stretching.

With a UVW Map modifier the stretching was corrected (Figure B.6).

Figure B.6: UVW Maps on both roof and walls.

Since a plane had been used to create the roof, the applied textures were laid like a carpet across it. This meant that the roof tiles were laid correctly only on the positive y axis. To fix this problem, the bitmap was copied into three other bitmaps, and each additional bitmap was turned 90 degrees to fit the parts of the roof aligned to the positive x axis and the negative y and x axes. The roof was split into four different IDs and the correct bitmap was applied to each ID.

Had a pyramid standard primitive been used instead of a plane for the roof, the textures would automatically have been aligned correctly, which would have saved a lot of work.

However, the method of using several UVW maps did not work properly when imported into Unity: Unity only read one of the UVW maps and applied it to all the textures, once again mapping them across the roof like a carpet. To fix this second obstacle, the Unwrap UVW modifier was used, which allows the use of the UV Editor. With the UV Editor, all the faces of the roof were rotated and scaled individually to reach the same level of realism that had previously been achieved with several UVW maps. This also meant that only the original bitmap was needed, and the other rotated bitmaps could be deleted (Figure B.7).

Applying textures to the walls presented a new problem: one wall object needed up to three different textures. This set it apart from the roof, which had several different faces but only one texture. Instead of managing every face of the wall with its proper texture in the UV Editor, a quicker but less precise option was used: a single UVW Map for the entire wall, scaled to fit all textures approximately. This method saved a lot of time.


Figure B.7: Using the UV Editor to apply roof textures.

B.3 Furniture

The original idea, which was pursued, was that the needed models could be downloaded from free 3D model websites. Objects for furniture and decoration were found for most of the parts needed to furnish the house model. A complete list of needed objects was compiled, and as an object was downloaded, tested in 3DS Max, and approved for use, it was crossed off the list. The objects were then textured as needed and exported as FBX files for use in Unity.

As more and more objects were opened in Unity, the performance of the program dropped quickly, which made us realize that nearly all of the objects acquired from free 3D model websites had a polygon count much too high to be used effectively in a real-time simulation.

The Internet was searched far and wide for low-polygon models, but such models proved very rare. Most available models had very high polygon counts and either had to be optimized greatly or a new model had to be made in their stead.

Several ways of optimizing existing models were discovered and developed in our quest for real-time performance. One simple but time-consuming way is to attach all elements of an object into one or a few large elements. This reduces the number of draw calls in Unity, improving performance, and was used on all objects before exporting them to FBX files for Unity. Another way of optimizing downloaded high-polygon models was to convert them from Editable Mesh to Editable Poly; this would sometimes reduce the polygon count substantially. It only works if the object has been created as a tri-mesh, which means that when it is converted to Editable Poly the triangles are fused into rectangles, effectively reducing the polygon count.

To further improve downloaded high-polygon models, 3DS Max has a modifier called Optimize which can be applied to objects to reduce the number of polygons. This modifier would sometimes produce good results, but at other times it would make the object too rough, and sometimes it would do nothing at all.

The last way of reducing the polygon count of downloaded high-polygon models was to manually remove faces, vertices, polygons, and elements from the models. This, of course, is very time consuming, and more often than not it was more effective to remodel the needed piece of furniture from scratch.

After the furniture had been modelled, appropriate textures were gathered from the Internet and applied, after which the models were exported once again and the texture files were placed in the correct folder for Unity to import everything.


Portfolio C

Test 1

C.1 Process and Decisions

While planning the test, a lot of decisions had to be made. These could potentially have a great impact on the data and therefore on the outcome of the test. Because of this, it was important to figure out the possible outcomes of the particular choices, especially when there was a 'dividing road' with distinctly opposite choices.

An example of this is the colouring of the house. On one hand, if the house has the same colour in both presentations, the test subject will likely recognize the two houses as being the same house. This will cause the subject to base the second size estimate on the information and perception from both viewings. On the other hand, if the houses have different colours and one is perceived as significantly larger than the other, the difference could instead be due to how colours affect spatial perception in the real world. The same concerns apply to using different interiors or furniture placement.

An example of a more technical decision regards how the fly-through is made. All of the individual objects are created and modelled in 3DS Max and afterwards imported and placed accordingly in Unity.

Having the house in both programs, each of which has a tool for making a fly-through film, results in another possible 'crossroad' where the data outcome may or may not be significantly altered. Using the 3DS Max tool gives more freedom with camera angles and movement, but is more time consuming. Using Unity for both presentations ensures almost the same output quality in the two, as well as the same render and camera settings, which eliminates another variable. As a note to this, ensuring that the aspect ratio and pixel size are the same is also a good idea.

The abstract thinking process is often difficult and cannot by itself guarantee that the test is well planned and free of subtextual misleading of the test subjects, though it can greatly reduce the risk of that happening. When in doubt, the answer was to choose whatever would give the test the fewest variables: the fewer variables a test has, the more likely its conclusion is to be correct.


C.2 Test 1 Material List

• Simple house in 3D, complete with some interior and basic exterior. No need for a high level of realism and detail. The house is 320 square metres.
• Four different showings of the 3D house:
  1. Interactive Unity program.
  2. Passive fly-through movie, also made in Unity.
  3. Field of View 40 house/room.
  4. Field of View 50 house/room.
• Interview manuscript.
• Test room.
• Powerful laptop.
• Mouse.

C.3 Questionnaire and Meaning

The first part of the questionnaire is devoted to basic demographic information, as well as some more specific information regarding the subjects' housing status and history.

The test subjects are introduced to the two presentations one after the other. After each presentation, they are asked to estimate the total size of the house and to rate their confidence in that estimate.

After having tried both presentations, they are asked to comment on them individually and compare them to each other. This is done as a dynamic interview with a highly active facilitator who interprets and writes down any comments. The questions regard the general experience with the product, what worked well and what did not, and what advantages one had compared to the other. Next, the subjects are asked to write down any suggestions for improvement.

After testing the two different FOV builds, the subjects are asked to estimate the size of the rooms. This is done with the same premise as the two presentations of the entire house. Lastly the subjects are asked which FOV build they preferred and why.

C.4 Test 1 Questionnaire


Questions

First demo (fly-through or Unity): __________

Participant information

1. Age: __________

2. Gender: __________

3. Housing status:

4. Number of homes lived in: __________

5. Education with an architectural or construction focus: __________

   a. If yes, what did it involve: ________________________________________

After the first showing

1. How many square metres would you guess the house in this showing is: __________

2. On a scale from 1 to 10, where 1 is very uncertain and 10 is very certain, how certain are you of your estimate: __________

After the second showing

1. How many square metres would you guess the house in this showing is: __________

2. On a scale from 1 to 10, where 1 is very uncertain and 10 is very certain, how certain are you of your estimate: __________
