NTID Center on Access Technology
Rochester Institute of Technology
National Technical Institute for the Deaf
www.ntid.rit.edu/cat
Research Supported by Cisco: http://www.rit.edu/ntid/cat/cisco

SIGNING AVATAR SYSTEM

February 2012

INTRODUCTION

This white paper will address the use of 3D virtual avatars for communication using sign language in a distributed setting. An avatar, for our purposes, is defined as a graphical representation of a person that is generated by a computer or an imaging device.

In order to better understand the concerns of deaf and hard-of-hearing individuals, the National Technical Institute for the Deaf (NTID) Center on Access Technology has been commissioned by Cisco Systems, Inc. to investigate access solutions for deaf and hard-of-hearing individuals related to 9-1-1 telephone response systems.

The Center on Access Technology is located in the National Technical Institute for the Deaf (NTID), one of the eight colleges of Rochester Institute of Technology (RIT). NTID, an internationally recognized leader in postsecondary education, provides technical and professional programs to more than 1,300 mainstreamed deaf and hard-of-hearing students. In addition, approximately 100 deaf faculty and staff members are employed at NTID/RIT. The Center is therefore in a unique position to study and research the needs of deaf and hard-of-hearing individuals first-hand.

IDEALIZED SYSTEM OVERVIEW

The figure below illustrates an overview of an idealized Signing Avatar system that takes spoken audio as input and produces the sign language equivalent as performed by an avatar in a virtual space. The individual processes of the system are described in more detail below.
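The stages of such a pipeline can also be sketched in code. The outline below is only an illustrative decomposition of the idealized system described above; every function name is hypothetical, and each stage (speech recognition, translation to a sign-language gloss, avatar animation) would be a substantial subsystem in its own right.

```python
# Illustrative sketch of the idealized Signing Avatar pipeline: spoken audio in,
# an avatar animation out. All names are hypothetical placeholders, not an
# existing API; each stage would be a major subsystem in a real implementation.
from dataclasses import dataclass
from typing import List


@dataclass
class AvatarAnimation:
    """A sequence of sign identifiers for a rendering engine to perform."""
    signs: List[str]


def recognize_speech(audio: bytes) -> str:
    """Stage 1 (assumed): automatic speech recognition producing English text."""
    raise NotImplementedError("plug in an ASR engine here")


def translate_to_gloss(english_text: str) -> List[str]:
    """Stage 2 (assumed): translate English text into a sign-language gloss
    sequence; sign-language grammar and word order differ from English."""
    raise NotImplementedError("plug in a translation component here")


def animate_avatar(gloss: List[str]) -> AvatarAnimation:
    """Stage 3 (assumed): map each gloss item to stored motion data and blend
    transitions so the avatar signs fluidly in the virtual space."""
    return AvatarAnimation(signs=gloss)


def audio_to_signing_avatar(audio: bytes) -> AvatarAnimation:
    """End-to-end flow: audio -> English text -> gloss -> avatar animation."""
    text = recognize_speech(audio)
    gloss = translate_to_gloss(text)
    return animate_avatar(gloss)
```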

NTID Center on Access Technology
Rochester Institute of Technology
National Technical Institute for the Deaf
www.ntid.rit.edu/cat

TelePresence Technologies with Professional Sign Language Interpreting Services: Face-to-Face and Remote Communication for Deaf and Hard-of-Hearing Users – Phase II

March 2014

Research Supported by Cisco Systems, Inc.


http://www.ntid.rit.edu/cat/projects/cisco-grant/work-documents

Project Team:

Christine Monikowski
Project Co-Leader; Professor, Department of ASL and Interpreting Education, NTID/RIT
National Technical Institute for the Deaf, Rochester Institute of Technology
52 Lomb Memorial Drive, Rochester, NY 14623

E. William Clymer
Project Co-Leader; Associate Professor, NTID/RIT; Associate Director, Center on Access Technology
National Technical Institute for the Deaf, Rochester Institute of Technology
52 Lomb Memorial Drive, Rochester, NY 14623

Gary Behm
Assistant Professor of Engineering Studies, NTID/RIT; Director, Center on Access Technology Innovation Laboratory
National Technical Institute for the Deaf, Rochester Institute of Technology
52 Lomb Memorial Drive, Rochester, NY 14623

Kelly Masters
Research Consultant, Masters Consulting
50 Random Knolls Drive, Penfield, NY 14526


Table of Contents

Introduction
Research Methodology
  Physical Environment
  Subjects
  Interpreters
  Evaluation Instrument
  Data Collection and Analysis
Summary of Findings
  Research Scenario 1
  Research Scenario 2
  Research Scenario 3
  Research Scenario 4
  Research Scenario 5
  Research Scenario 6
  Research Scenario 7
Results
Overall Recommendations for Continued Research
References


TELEPRESENCE TECHNOLOGIES WITH PROFESSIONAL SIGN LANGUAGE INTERPRETING SERVICES: FACE-TO-FACE AND REMOTE COMMUNICATIONS FOR DEAF AND HARD-OF-HEARING USERS -- PHASE II

March 2014

INTRODUCTION

The goal of this project was to refine “best practices” when interpreters are used to support Cisco’s TelePresence (TP) for deaf or hard-of-hearing (D/HH) participants using a slightly different approach – working remotely and focusing on a Deaf person as the primary source of information.

TP is a large-display, web-based videoconferencing system. During the summer of 2012, the initial phase of the project was completed, focusing on face-to-face interpreting and adding a few simple remote scenarios. This project took that work to the next level.

Work was conducted over two days: four scenarios on July 24 and three scenarios on July 25, 2013, for a total of seven scenarios. The scenarios on the first day used either TP alone or TP with a remote site for interpreters. All of the scenarios on the second day were conducted in concert with a remote site, the Clarke School for the Deaf¹ in Northampton, Massachusetts. We included deaf subjects, hearing subjects, and interpreters. Researchers used feedback from all participants as well as their own observations to refine the "best practices" for working with interpreters in such settings. If Deaf students are to work successfully with innovative technology, we need to continue to assess how this can be accomplished. The scenarios all featured a Deaf person as the focus of communication, with a variety of audiences, and all were designed with common university activities in mind. For example, three scenarios were created to model job interviews (either for co-op placements or for post-graduation positions) with a potential employer (Deaf), a representative from the company's Human Resources office (hearing), and a Deaf student as job applicant. There was an interpreter for each interview. In addition, a professional development/outreach experience for professionals was developed.

¹ Many thanks to Ms. Claire Troiano and colleagues at the Clarke Schools for their contribution to this project.


The first scenario had the interpreter in the same room as the applicant (using the CTS3210), while the employer and HR representative were together in a separate room (using the CTS1300). The second scenario separated all three in different locations: remote interpreter (Google+), applicant (CTS3210), and employer/HR (CTS1300). The third scenario again separated all three, with a slight modification: remote interpreter (Google+), applicant (Google+), and employer/HR (CTS1300).

The key evaluation element for all the scenarios was the level of satisfaction and effectiveness of the professional interpreter for all the D/HH participants. This was measured with a feedback questionnaire and guided discussions at the end of each session, during which detailed notes were recorded.

The level of interaction for deaf participants varied among scenarios, each modeling one of two types of communication: 1) two-way or interactive communication, identified as interviews in our scenarios, or 2) full participation within a meeting and its related multimedia resources (addressed on day two with participants from the Clarke School).


RESEARCH METHODOLOGY

PHYSICAL ENVIRONMENT

The following table illustrates the different research scenarios used to determine best practices for Cisco TelePresence for D/HH individuals. Google+ was introduced in several of the scenarios for the purpose of testing remote interpreting.

TelePresence and Google+ Research Scenarios

Scenario 1: Job Interview (Interactive two-way)
  Site: Cisco CTS1300 (1 screen / 3 cameras); Interpreter at site: None; Participants: Deaf Presenter, Hearing HR Manager
  Site: Cisco CTS3210 (3 screens / 3 cameras); Interpreter at site: 2 Interpreters (Team); Participants: 4 Deaf Students

Scenario 2: Job Interview (Interactive two-way)
  Site: Cisco CTS1300 (1 screen / 3 cameras), view interpreter via Google+ on laptop; Interpreter at site: None; Participants: Deaf Presenter, Hearing HR Manager
  Site: Cisco CTS3210 (3 screens / 3 cameras), view interpreter via Google+ on laptop; Interpreter at site: None; Participants: 5 Deaf Students
  Site: Remote, Google+ on desktop; Interpreter at site: 2 Interpreters (Team); Participants: None

Scenario 3: Job Interview (Interactive two-way)
  Google+ used as primary at all 3 sites; all laptops and videoconferencing technology muted; cell or conference room phone used for audio
  Site: Cisco CTS1300, Google+ on laptop, projection on TelePresence screen; Interpreter at site: None; Participants: Deaf Presenter, Hearing HR Manager
  Site: Cisco CTS3210, Google+ on laptops, projection on TelePresence screen; Interpreter at site: None; Participants: 5 Deaf Students
  Site: Remote, Google+ on desktop; Interpreter at site: 2 Interpreters (Team); Participants: None

Scenario 4: Shared Discussion (Full participation)
  Google+ used as primary at all 3 sites; all laptops and videoconferencing technology muted; cell or conference room phone used for audio
  Site: Cisco CTS1300, Google+ on laptops, projection on TelePresence screen; Interpreter at site: None; Participants: Deaf Presenter, Hearing HR Manager, 2 Deaf Students
  Site: Cisco CTS3200, Google+ on laptop, projection on TelePresence screen; Interpreter at site: None; Participants: 2 Deaf Students
  Site: Remote, Google+ on desktop; Interpreter at site: 2 Interpreters (Team); Participants: None

Scenario 5: Professional Development (Full participation)
  Site: Cisco CTS1300 at NTID (1 screen / 3 cameras); Interpreter at site: 2 Interpreters (Team); Participants: Deaf Presenter
  Site: Videoconferencing technology at Clarke School (1 screen / 1 camera); Interpreter at site: None; Participants: 3 Deaf Participants, 3 Hearing Participants

Scenario 6: Professional Development (Full participation)
  All laptops and videoconferencing technology muted; cell or conference room phone used for audio
  Site: Cisco CTS1300 at NTID (1 screen / 3 cameras), view interpreter via Google+ on laptop; Interpreter at site: None; Participants: Deaf Presenter
  Site: Videoconferencing technology at Clarke School (1 screen / 1 camera), view interpreter via Google+ on laptops; Interpreter at site: None; Participants: 3 Deaf Participants, 4 Hearing Participants
  Site: Remote, Google+ on desktop; Interpreter at site: 2 Interpreters (Team); Participants: None

Scenario 7: Professional Development (Full participation)
  Google+ used as primary at all 3 sites; all laptops and videoconferencing technology muted; cell or conference room phone used for audio
  Site: Cisco CTS1300 at NTID, Google+ on laptop, projection on TelePresence screen; Interpreter at site: None; Participants: Deaf Presenter
  Site: Videoconferencing technology at Clarke School, Google+ on laptops, projection on videoconferencing screen; Interpreter at site: None; Participants: 3 Deaf Participants, 3 Hearing Participants
  Site: Remote, Google+ on desktop; Interpreter at site: 2 Interpreters (Team); Participants: None


SUBJECTS

The deaf subjects for the investigations that took place on July 24, 2013 included a variety of undergraduate students and staff recruited to participate. Faculty and staff members served as the presenter and human resources representative for the scenarios that involved job interviewing and recruitment. The deaf and hearing subjects for the investigations that took place on July 25 were recruited from the Clarke School. Those scenarios simulated professional development workshops.

The research plan was reviewed and approved by the RIT Institutional Review Board (IRB). Each subject, including the interpreters, was given an informed consent form that provided specific information about the study. The nature of the project and subject involvement was fully explained, and questions were answered by project staff. All individuals who participated in the scenarios signed the informed consent form along with a permission form allowing photographic and video images to be used as part of the reporting and dissemination process. All signed forms are on file in the project office.

INTERPRETERS

Three local freelance interpreters were contracted for the scenarios. Their backgrounds varied: one graduated from NTID's BS program in English/ASL Interpretation in 2008, one graduated from that same program in 2009, and one has over 30 years of experience; however, none has experience working in the video relay setting and none is nationally certified. We wanted a variety of backgrounds because one of our questions was how complicated or challenging the technology would be, given how much technology affects our daily lives. These interpreters deal with technology in business, in higher education, and elsewhere; these scenarios would be a new experience for them.

EVALUATION INSTRUMENT

The evaluation instrument consisted of 19 questions, changing only slightly from the first phase of this research project conducted in August 2012. Two questions were added regarding experience and comfort level using Google+.

The types of questions included rating-scale, open-ended, and classification questions. Rating-scale questions were based on a 4-point scale ranging from "excellent" to "poor" or a 5-point scale ranging from "extremely comfortable" to "not at all comfortable," "extremely successful" to "not at all successful," and "very natural" to "not at all natural."

Participants were asked, in open-ended format, to explain why the meeting was or was not successful, their preference for the interpreter to be in the same room or on the screen, what went well during the session, and what recommendations they had to make TelePresence sessions more successful in the future. In addition, participants were given the opportunity to write in additional comments. A copy of the evaluation instrument is provided in Appendix A.

DATA COLLECTION AND ANALYSIS

The evaluation was conducted using a self-administered methodology. After each scenario, evaluation forms were completed on-site by all participants. After the evaluation forms were collected, a brief discussion was conducted to gather additional feedback.

Data obtained from the evaluation forms were tabulated by scenario and broken down by classification type (presenter, interpreter, student/participant) and site (Cisco CTS3210, Cisco CTS1300, Clarke School, Interpreter Remote via Google+).

Most of the findings are presented as percentages. For all rating questions, the total number responding to the question was used as the percentage base. For all other types of questions, the total sample within each scenario was used to compute percentages. The percentages for individual response categories do not always add up to 100%; this results from rounding, a small percentage of non-responses, or multiple responses provided by participants. In addition, all open-ended questions were coded in an effort to quantify responses.
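As a minimal illustration of the tabulation rule just described (respondents as the base for rating questions, the total scenario sample otherwise), the sketch below uses hypothetical field names rather than the project's actual coding scheme.

```python
# Minimal sketch of the tabulation rule described above. Field names
# ("scenario", "success", etc.) are illustrative, not the project's coding scheme.
from collections import Counter


def tabulate(responses, question, scenario, rating_question=True):
    """Percentage breakdown for one question within one scenario."""
    in_scenario = [r for r in responses if r["scenario"] == scenario]
    answers = [r[question] for r in in_scenario if r.get(question) is not None]

    # Rating questions use everyone who answered as the base;
    # other question types use the total sample within the scenario.
    base = len(answers) if rating_question else len(in_scenario)
    if base == 0:
        return {}
    return {choice: round(100 * count / base) for choice, count in Counter(answers).items()}


# Hypothetical example: success ratings for one scenario, with one non-response.
sample = [
    {"scenario": 1, "role": "Interpreter", "success": "Successful"},
    {"scenario": 1, "role": "Deaf Student", "success": "Not Successful"},
    {"scenario": 1, "role": "Presenter", "success": None},
]
print(tabulate(sample, "success", scenario=1))  # {'Successful': 50, 'Not Successful': 50}
```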


SUMMARY OF FINDINGS

RESEARCH SCENARIO 1

DESCRIPTION

Scenario 1: Job Interview (Interactive two-way)
  Site: Cisco CTS1300 (1 screen / 3 cameras); Interpreter at site: None; Participants: Deaf Presenter, Hearing HR Manager
  Site: Cisco CTS3210 (3 screens / 3 cameras); Interpreter at site: 2 Interpreters (Team); Participants: 4 Deaf Students

Research scenario 1 consisted of a deaf presenter and a hearing human resources manager, located at a 1-screen/3-camera site (Cisco CTS1300), interviewing prospective job applicants. The second site, with 3 screens and 3 cameras (Cisco CTS3210), included four deaf students and two interpreters teaming. PowerPoint (PPT) media was displayed, originating from the Cisco CTS1300 site.

Photo 1 (Site 1): The 1-screen/3-camera site (Cisco CTS1300) included one deaf presenter and one hearing human resources manager interviewing prospective job applicants utilizing PPT media.

Photo 2 (Site 2): The 3-screen/3-camera site (Cisco CTS3210) included four deaf students and two interpreters teaming.


SCENARIO 1 FINDINGS

Two of the seven participants (29%) from scenario 1 rated this meeting successful. All (100%) of the participants mentioned significant problems when students attempted to ask the presenter a question. As explained, the students and interpreters were unable to easily capture the presenter's attention because the technology (Cisco CTS1300) does not allow the presenter to view all of the participants at the same time. Further, once a student or interpreter captured the presenter's attention and the student started to sign a question, the voice-activated camera switched away from the student to the voicing interpreter. The voicing interpreter attempted to resolve the problem by moving behind each student as they signed their question; although the camera then remained on the signing student, this workaround was very distracting for all involved. In addition, the signing interpreter said she was positioned too close to the TelePresence screen, which distorted her view of the participants located at the other site and obstructed her view of the PPT. Participants also mentioned tracking problems on the supporting TelePresence monitors (Cisco CTS3200) showing the PPT (originating from the Cisco CTS1300).

Nearly half (43%) of the participants rated the position of the interpreter as good. The position of the interpreter, in the same room and located between two screens, was satisfactory for three out of the four deaf students, but not ideal for the presenter or interpreters. The primary concerns were the close proximity of the signing interpreter to the screen, the interpreters' inability to view the PPT easily, and the voicing interpreter having to stay mobile to prevent camera changes.

Both the interpreters and the deaf students said they prefer to be in the same room. The interpreters mentioned that it is easier to interpret because they can see all of the students at the same time and gauge their level of understanding of the discussion/material. Deaf students said it is easier to capture the interpreter's attention and establish a connection when the interpreters are located in the same room.

"I found it difficult and confusing to do my job. As students asked questions, I would naturally voice, but when I used my voice the camera focused on me and not the signer. I tried holding back my voicing until the student was finished, but it was awkward and not smooth at all."
~ Interpreter

Overall, how successful was your meeting? (Scenario 1)
                        Presenter (n=1)   Interpreters (n=2)   Deaf Students (n=4)
Extremely Successful    0%                0%                   0%
Successful              0%                50%                  25%
Neither                 0%                50%                  0%
Not Successful (Net)    100%              0%                   75%

What went well during this session?

71% Video quality of TelePresence / Able to view signs clearly

14% Presenter signed for himself / Able to look at one screen

14% Ability to view PPT on separate screen

14% Clear audio

What recommendations do you have to make TelePresence more successful in the future?

86% Ability to manually adjust the active camera (e.g., button, clicker, image recognition to capture signer)

43% Provide picture-in-picture on screen (to view what others are viewing)

29% Provide presenter and interpreters/remote interpreters with ability to see all participants and PPT at same time

14% Additional screen/monitor for interpreter (whose back is toward screen)

"The presenter can see video if only someone is speaking. It's a problem for deaf and hard of hearing who use only sign."
~ Deaf Student

Almost all (71%) of the participants mentioned the exceptional video quality of the TelePresence system and being able to read signs very easily and clearly. One of the students mentioned that he liked having to look at only one screen because the presenter was able to sign for himself. The presenter said it was very helpful to be able to view the PPT on the supporting TelePresence screen, and the signing interpreter appreciated the clear audio.

Almost all of the participants (86%) suggested including a feature in the TelePresence system to allow the voice-activated cameras to be manually adjusted by way of a button or clicker, or by identifying the speaker through image recognition software. Participants mentioned several other suggestions to ensure successful communication flow, including incorporating picture-in-picture on the screen (43%), providing the presenter and interpreters with the ability to view the PPT and all of the participants at the same time (29%), and including an additional monitor for the signing interpreter positioned directly in her line of vision (14%).

"A button for the deaf participants is needed in order to be able to click the camera on them and hold it on them so the voicing interpreter doesn't pull the camera away."
~ Interpreter


RESEARCH SCENARIO 2

DESCRIPTION

Scenario 2: Job Interview (Interactive two-way)
  Site: Cisco CTS1300 (1 screen / 3 cameras), view interpreter via Google+ on laptop; Interpreter at site: None; Participants: Deaf Presenter, Hearing HR Manager
  Site: Cisco CTS3210 (3 screens / 3 cameras), view interpreter via Google+ on laptop; Interpreter at site: None; Participants: 5 Deaf Students
  Site: Remote, Google+ on desktop; Interpreter at site: 2 Interpreters (Team); Participants: None

Research scenario 2 involved three locations. The first site, 1-screen/3-camera site (Cisco CTS1300) included one deaf presenter and one hearing human resources manager interviewing prospective job applicants. The second site, 3-screens/3-cameras (Cisco CTS3210), consisted of five deaf students/applicants. The third site included two interpreters teaming at a remote location utilizing Google+ on a desktop computer. Participants at sites one and two were able to view the remote interpreter on their laptops via Google+ while interacting with each other through TelePresence (TP). The interpreter was projected on all of the extra TelePresence screens in lieu of PowerPoint (PPT) Media.

Photo 3 (Site 1): The 1-screen/3-camera site (Cisco CTS1300) included a deaf presenter and a hearing HR manager interviewing potential job applicants. The remote interpreter, using Google+, is projected on the extra TP screen.

Photo 4 (Site 2): The 3-screen/3-camera site (Cisco CTS3210) consisted of five deaf students/applicants. The remote interpreter, using Google+, is projected on the extra TP screens.

Site 3: Two interpreters teaming at a remote location utilizing Google+ on a desktop computer.


SCENARIO 2 FINDINGS

Nearly half (44%) of the participants rated scenario 2 successful. Three out of the five deaf students (60%) agreed that the meeting was a success, mentioning that the meeting had good communication flow. The presenter said he liked having the interpreter at one location, but described the video quality of Google+ as only adequate. Most of the participants (78%) said they have used Google+ before; however, only 57% mentioned being comfortable (very comfortable/comfortable net) using the program. Participants who have used, and are comfortable using, Google+ rated the meeting more successful (extremely successful/successful net) than participants who have never used or are not comfortable using Google+ (75% versus 20%, respectively). The presenters and interpreters mentioned experiencing significant audio feedback and echoing problems while using Google+ with the TelePresence system. They also mentioned problems associated with managing turn taking and switching between speakers. The presenter said he had difficulty focusing on his presentation material, the interpreter, and the participants all at the same time. The back-up interpreter experienced ergonomic difficulties: she said that, while using Google+, she had to lean over in a hunched position in order to click on the current speaker. Similar to the previous scenario, participants mentioned experiencing difficulty capturing the attention of the presenter in order to comment or ask a question. Students also said that the remote interpreter's signs were sometimes blurry and fell below the frame, and that there was too much lag time through Google+.

"It is nice to have one location for the interpreter, but the quality of video for Google+ was just okay. Also, this scenario made it difficult to focus on the interpreter and participants in addition to my materials."
~ Presenter

Overall, how successful was your meeting? (Scenario 2)
                        Presenter (n=2)   Interpreters (n=2)   Deaf Students (n=5)
Extremely Successful    0%                0%                   0%
Successful              50%               0%                   60%
Neither                 50%               100%                 20%
Not Successful (Net)    0%                0%                   20%

"I was the back-up remote interpreter. It was easier to team interpret in this scenario, but ergonomically it was very uncomfortable. I had to stay in a hunched position with my hand on the mouse, ready to click on the current speaker. It was difficult to be back-up and control Google+."

Almost two-thirds (63%) of the participants rated the effectiveness of the interpreter on the screen as either excellent (25%) or good (38%). Similarly, students who said they are comfortable (extremely comfortable/comfortable net) with Google+ rated the on-screen interpreter more effective than students who said they are not comfortable (not comfortable/not at all comfortable net) with Google+ (100% excellent rating versus 0%, respectively).

What went well during this session?

44% Remote location of interpreters / One focus / No camera change

33% Smooth flow of communication / Easily controlled and managed

22% Video quality of TelePresence / Able to view signs clearly

What recommendations do you have to make TelePresence more successful in the future?

33% Mute laptops and videoconferencing speakers to eliminate audio feedback when using Google+ with TelePresence / Use cell phone or conference phone for audio communication

33% Provide presenter and interpreters/remote interpreters with ability to see all participants and PPT at same time / Additional monitor

22% Establish guidelines for turn taking at beginning of each session

22% Eliminate pixel problem on supporting TelePresence screens

22% Decrease tracking/shadow problem and lag time on Google+

11% Provide picture-in-picture on screen (to view what others are viewing)

11% LED light to indicate active camera

11% Include a separate TelePresence screen for on-site or remote interpreter as part of TelePresence system

11% Incorporate touch screen option for Google+ (for switching the current speaker more quickly and ergonomically)

"Eliminate the voice and echo problem by muting the systems. Integrate a center location for the interpreter into Cisco System without having to use Google+."
~ Presenter

Forty-four percent (44%) of the participants mentioned that they liked the remote location of the interpreters because there was only one central focus and the camera did not change when the interpreter voiced the students' questions. Other participants mentioned the meeting being easily managed and having good communication flow (33%) and excellent video quality (22%). Participants mentioned several areas of opportunity to make TelePresence more successful in the future. One-third (33%) of participants suggested utilizing a cell phone or conference phone for audio communication when using Google+ with TelePresence or videoconferencing systems; as explained, muting the volume on laptops/desktop computers and videoconferencing speakers eliminates echoing and audio feedback problems. Similarly, 33% suggested providing the presenters and interpreters with the ability to view the PPT/media and all of the participants at all times. Other participants suggested establishing guidelines for speaking/turn taking prior to each meeting (22%), eliminating the pixel problem on supporting TelePresence screens (22%), offering a picture-in-picture feature (11%), and providing an LED light to indicate the active camera (11%).

The presenter suggested integrating a separate TelePresence screen for interpreting so that there is not a need to use Google+ (11%). Recommendations specific to Google+ included decreasing tracking/shadow problems when signing and lag time (22%) and incorporating a touch screen option (11%).


RESEARCH SCENARIO 3

DESCRIPTION

Scenario 3: Job Interview (Interactive two-way)
  Google+ used as primary at all 3 sites; all laptops and videoconferencing technology muted; cell or conference room phone used for audio
  Site: Cisco CTS1300, Google+ on laptop, projection on TelePresence screen; Interpreter at site: None; Participants: Deaf Presenter, Hearing HR Manager
  Site: Cisco CTS3210, Google+ on laptops, projection on TelePresence screen; Interpreter at site: None; Participants: 5 Deaf Students
  Site: Remote, Google+ on desktop; Interpreter at site: 2 Interpreters (Team); Participants: None

Research scenario 3 involved three locations all utilizing Google+ on shared laptops or desktop computers. The first site (Cisco CTS1300) included one deaf presenter and one hearing human resources manager interviewing prospective job applicants. The second site (Cisco CTS3210) consisted of five deaf students/applicants. The third site included two interpreters teaming at a remote location. The active Google+ frame was projected on the TelePresence screens. To combat audio feedback problems experienced during scenario 2, all laptops and Cisco TelePresence speakers were muted. Sites utilized either a cell or conference room phone at volume 6+ for audio communication.

Photo 6 (Site 1): One deaf presenter and one hearing HR manager interviewing potential job applicants utilizing Google+. The Google+ projection is shown on the TP screen.

Photo 7 (Site 2): Five deaf students/applicants participate in the interview utilizing Google+ on a laptop. The Google+ projection is shown on the TP screens.

Site 3: Two interpreters teaming at a remote location utilizing Google+.


SCENARIO 3 FINDINGS

Thirty-eight percent (38%) of participants from scenario 3 rated the meeting successful. The other participants (62%) rated the meeting as neither successful nor unsuccessful. Presenters and interpreters mentioned communication flow (e.g., speaking and turn taking) as the biggest obstacle when using Google+. Other participants mentioned the difficulty of seeing participants' signs while they appear in the small frame, before having the opportunity to click the signer into the one large frame. One student said he noticed that the interpreter often had to ask participants to repeat their questions and/or comments.

Nearly two-thirds (63%) of the participants rated the effectiveness of the on-screen interpreter as either excellent (13%) or good (50%). The remaining participants rated the effectiveness of the on-screen interpreter as fair (38%). There was no correlation between participants' level of comfort using Google+ and their effectiveness rating.

What went well during this session?

25% Smooth flow of communication / Easily controlled and managed

25% Google+ good video quality for laptop

13% Remote location of interpreters / One focus

"All participants must allow time for everyone to figure out who is speaking next. Turn taking seems to be the biggest issue."
~ Interpreter

"Google+ has only one big screen on the window, so I have to force myself to select interpreter or presenter in order to see the signing clearly."
~ Deaf Student

Overall, how successful was your meeting? (Scenario 3)
                        Presenter (n=2)   Interpreters (n=2)   Deaf Students (n=4)
Extremely Successful    0%                0%                   0%
Successful              0%                50%                  50%
Neither                 100%              50%                  50%
Not Successful (Net)    0%                0%                   0%

The student participants mentioned the flow of communication being much better than in the previous scenarios (25%) and expressed satisfaction with the video quality on Google+ (25%). The presenter said he liked having one frame dedicated exclusively to the interpreter (13%).

What recommendations do you have to make TelePresence more successful in the future?

38% Google+ option to provide split screens of equal size

22% Eliminate audio feedback by muting laptops and videoconferencing speakers

22% Establish guidelines for turn taking at beginning of each session

13% Google+ option to lock in participant of choice on large screen (e.g., presenter, interpreter, PPT)

13% Include a split or separate TelePresence screen for on-site or remote interpreter as part of TelePresence system

"Need to establish a signal to tell participants when to switch speaker to large frame, like say their name. Provide option for split screen."
~ Interpreter

Several participants (38%) recommended providing a split screen option on Google+ that allows two frames of equal size and proportion to be shown at the same time. The interpreters suggested eliminating audio feedback by muting technology (22%) and establishing speaking/turn taking guidelines at the beginning of each meeting (22%). As explained, there needs to be a pause between speakers to give all participants the opportunity to switch the current speaker to the large frame on Google+. One of the presenters suggested providing the option to lock in the participant of choice (e.g., presenter, interpreter) on the large screen within Google+ (13%); the constant switching and clicking to make the speaker large can be distracting, taking focus away from the discussion. Lastly, one of the students suggested that TelePresence offer a feature comparable to Google+, but with a split or dedicated screen for the interpreter (13%).

"Determine who is the most important person in the meeting to see and lock that person in on the large frame. It's too hard to watch the switching."
~ Presenter

"Be more like Google+, but include a split or dedicated screen for the interpreter."


RESEARCH SCENARIO 4

DESCRIPTION

Scenario 4: Shared Discussion (Full participation)
  Google+ used as primary at all 3 sites; all laptops and videoconferencing technology muted; cell or conference room phone used for audio
  Site: Cisco CTS1300, Google+ on laptops, projection on TelePresence screen; Interpreter at site: None; Participants: Deaf Presenter, Hearing HR Manager, 2 Deaf Students
  Site: Cisco CTS3200, Google+ on laptop, projection on TelePresence screen; Interpreter at site: None; Participants: 2 Deaf Students
  Site: Remote, Google+ on desktop; Interpreter at site: 2 Interpreters (Team); Participants: None

Research scenario 4 involved three locations all utilizing Google+ on laptops or desktop computers. The first site (Cisco CTS1300) included one deaf presenter, one hearing human resources manager, and two deaf students having an interactive two-way shared discussion. The second site (Cisco CTS3210) consisted of two deaf students. The third site included two interpreters teaming at a remote location. The active Google+ frame was projected on the TelePresence screens. Similar to the previous scenario, audio feedback was combated by muting the laptops and Cisco TP speakers and utilizing a cell or conference phone at volume 6+.

Photo 9 (Site 1): One deaf presenter, one hearing HR manager, and two deaf students having a shared discussion utilizing Google+. The Google+ projection is shown on the TP screen.

Photo 10 (Site 2): Two deaf students participate in the shared discussion utilizing Google+ on a shared laptop. The Google+ projection is shown on the TP screens.

Site 3: Two interpreters teaming at a remote location utilizing Google+.


SCENARIO 4 FINDINGS

Three-quarters (75%) of all participants from scenario 4 rated this meeting successful, including three out of the four deaf students. Almost all of the participants attributed the meeting's success to the speaking and turn taking guidelines that were established and communicated at the beginning of the meeting. The two participants who did not rate the meeting successful mentioned technology-related problems, including audio feedback and poor video quality on one of the supporting TelePresence screens (Cisco CTS3200). Google+ was projected onto the supporting TelePresence screens for clarity purposes in both of the TelePresence rooms (Cisco CTS3200 and Cisco CTS1300).

Eighty-six percent (86%) of the participants rated the effectiveness of the on-screen interpreter as either excellent (29%) or good (57%). Participants' level of comfort with remote/on-screen interpreting increased with each scenario. Participants were less likely to prefer the interpreter to be in the same room in scenario 4 than in scenario 1 (13% versus 71%, respectively). In scenario 4, most (75%) of the participants said they do not have a preference on whether the interpreter is in the same room or on screen. Students commented that they are fine with either option. The interpreters said they prefer on screen while using TelePresence because it resolves problems related to the voice-activated camera.

"Communication during the meeting was much improved once we were instructed to wait and let the interpreter designate the next speaker."
~ Deaf Student

"Slower pace – pausing between speakers – works better for interpreter and the deaf participants."
~ Presenter

Overall, how successful was your meeting? (Scenario 4)
                        Presenter (n=2)   Interpreters (n=2)   Deaf Students (n=4)
Extremely Successful    0%                0%                   0%
Successful              100%              50%                  75%
Neither                 0%                50%                  25%
Not Successful (Net)    0%                0%                   0%

"By identifying the speaker prior to speaking or signing allowed us to click on the person to put in large screen to see signing better."

What went well during this session?

75% Smooth flow of communication / Easily controlled and managed

25% Google+ good video quality for laptop

13% Audio quality improved / No audio feedback

What recommendations do you have to make TelePresence more successful in the future?

50% Establish guidelines for turn taking at beginning of each session

38% Develop a more natural method for asking questions

25% Establish minimal standards of connectivity, hardware, resolution, etc. to ensure successful experience

13% Designate one meeting participant as the Google+ large screen controller for all participants

"Make sure that once someone has finished speaking the next responder waits a few seconds before speaking (for the interpreter to catch up). It's still difficult to stop people and ask for clarification."
~ Interpreter

Three-quarters (75%) of the participants mentioned that the session presented a smooth flow of communication between presenters, interpreters, and students. The presenters and interpreters said the session was much easier to control and manage. Two of the participants (25%) commented that the video quality of Google+ on a laptop was satisfactory. Another participant appreciated the adjustments made to the audio that eliminated the feedback problems experienced in earlier scenarios (13%).

Half (50%) of the participants reiterated the importance of establishing speaking/turn taking guidelines at the start of each meeting; however, participants continued to experience difficulty capturing the presenter's attention to comment or ask a question. Several participants (38%) agreed that there is a need to develop a more natural method for interrupting the discussion in order to ask a question or clarify a response. Other participants suggested ensuring success by establishing minimal technical standards (e.g., connectivity speed, hardware, image resolution, etc.) (25%) and designating one meeting participant as the screen controller for all participants when utilizing Google+ (13%).

"There was a lag, maybe due to connectivity problems. Need to develop standards to ensure successful communications."


RESEARCH SCENARIO 5

DESCRIPTION

Scenario 5: Professional Development (Full participation)
  Site: Cisco CTS1300 at NTID (1 screen / 3 cameras); Interpreter at site: 2 Interpreters (Team); Participants: Deaf Presenter
  Site: Videoconferencing technology at Clarke School (1 screen / 1 camera); Interpreter at site: None; Participants: 3 Deaf Participants, 3 Hearing Participants

Research scenario 5 consisted of one deaf presenter and two interpreters teaming at a 1-screen/3-camera site (Cisco CTS1300) located at NTID. The deaf presenter was conducting a professional development workshop with individuals from Clarke School. The second site located at Clarke School, utilizing videoconferencing technology, included three deaf and three hearing participants.

Photo 12 (Site 1): View of the NTID site (Cisco CTS1300) from Clarke School. Two interpreters (teaming) at NTID; one interpreter positioned next to the presenter on the same screen.

Photo 13 (Site 2): The 1-screen/1-camera site (videoconferencing technology) consisted of six participants representing Clarke School, three deaf and three hearing.


SCENARIO 5 FINDINGS

Eighty-five percent (85%) of all the participants from scenario 5 rated this meeting either extremely successful (14%) or successful (71%). One participant rated the meeting neither successful nor unsuccessful, mentioning that the voicing interpreter was difficult to follow because she was voicing off screen. Other participants made positive comments about the flow of communication; however, they were not completely satisfied due to window glare at the Clarke site and audio problems at the start of the meeting. In addition, the signing interpreter experienced ergonomic problems because she had to lean in closer to the built-in microphone in order to be better heard.

Two-thirds (66%) of the participants rated the position of the interpreter as either excellent (22%) or good (44%). The voicing interpreter was originally positioned off screen but moved on screen, next to the presenter, because the hearing participants at Clarke School requested a visual. One participant mentioned that deaf and hard-of-hearing people who are not ASL users may be uncomfortable with an interpreter being in the same room and may prefer the interpreter to be on the screen. All (100%) rated the effectiveness of the interpreter on screen as either excellent (80%) or good (20%).

"My body position was extremely uncomfortable. I had to lean way over the table so that my mouth was closer to the built-in table microphone. I had to speak louder than normal and still, from time to time, Clarke participants could not hear me well. My neck hurts."
~ Interpreter

"Much easier to follow when interpreter and presenter are both visible."
~ Hearing Participant

Overall, how successful was your meeting? (Scenario 5)
                        Presenter (n=1)   Interpreters (n=2)   Participants (n=4)
Extremely Successful    0%                0%                   25%
Successful              100%              100%                 50%
Neither                 0%                0%                   25%
Not Successful (Net)    0%                0%                   0%

"Deaf and hard-of-hearing students may feel uncomfortable with an interpreter's presence, therefore on screen is very confidence boosting."

What went well during this session?

44% Video quality / Able to view signs clearly

33% Smooth flow of communication / Easily controlled and managed

22% Position of interpreter next to presenter on same screen

What recommendations do you have to make TelePresence more successful in the future?

67% Show presenter and interpreter on screen at all times / Side-by-side or split screen / Coming into frame when needed is awkward

33% Establish guidelines/standards on how to conduct a meeting using TelePresence (e.g., systems training, technical standards, environmental standards, etc.)

33% Provide an additional microphone for interpreter that is not tied into the videoconferencing system

22% Provide a larger screen/camera angle when interpreter shares screen / Need space to sign

"Additional microphones needed for the interpreters that are not connected to the video cameras."
~ Presenter

Nearly half (44%) of the participants mentioned the video quality of the TelePresence and videoconferencing systems as being exceptional. Participants explained that they were able to see signs very clearly. Other participants made positive comments about the flow of communication (33%) and the position of the interpreter (on screen and next to the presenter) (22%).

Many participants (67%) suggested the need to show both the presenter and the interpreter on the screen at all times, either side-by-side or split screen. As explained, having the interpreter come onto the screen only when she was needed was awkward and distracting. Participants said they feel a visual of the interpreter is essential in order to follow the discussion easily.

Several participants (33%) recommended establishing training guidelines and standards on how best to conduct a meeting using TelePresence. Participants specifically mentioned the need to conduct an audio check before each meeting, assess environmental factors such as background lighting/window glare and white noise, and identify minimal technical standards related to connectivity and image resolution.

"When sharing, a slightly larger screen would be ideal. When you sign you need elbow room. I ended up sitting behind the presenter a bit but then he could not see when I finished voicing, so the other interpreter started voicing over the top of me."
~ Interpreter

Other participants (33%) suggested providing the interpreters with portable microphones that are not tied into the TelePresence/videoconferencing system. Portable microphones for the interpreters would resolve audio, ergonomic, and voice-activated camera issues. The interpreters also suggested increasing the size of the screen or camera angle in order to have enough space to sign when sharing a screen (22%).

"Provide training in TelePresence. I have had more success with FaceTime and Skype."


RESEARCH SCENARIO 6

DESCRIPTION

Scenario 6: Professional Development (Full participation)
  All laptops and videoconferencing technology muted; cell or conference room phone used for audio
  Site: Cisco CTS1300 at NTID (1 screen / 3 cameras), view interpreter via Google+ on laptop; Interpreter at site: None; Participants: Deaf Presenter
  Site: Videoconferencing technology at Clarke School (1 screen / 1 camera), view interpreter via Google+ on laptops; Interpreter at site: None; Participants: 3 Deaf Participants, 3 Hearing Participants
  Site: Remote, Google+ on desktop; Interpreter at site: 2 Interpreters (Team); Participants: None

Research scenario 6 involved three locations. The first site, a 1-screen/3-camera site (Cisco CTS1300) at NTID, included one deaf presenter conducting a professional development workshop. The second site, with 1 screen and 1 camera (videoconferencing technology), consisted of six participants (three deaf and three hearing) representing the Clarke School. The third site included two interpreters teaming at a remote location utilizing Google+ on a desktop computer. Participants at sites one and two were able to view the remote interpreter on their laptops via Google+ while interacting with each other through videoconferencing technology. To minimize audio feedback, all laptops and speakers were muted, and cell or conference phones were used for audio.

Photo 14 (Site 1): The 1-screen/3-camera site (Cisco CTS1300) included one deaf presenter conducting a professional development workshop. The remote interpreter is shown on a laptop via Google+.

Photo 15 (Site 2): The 1-screen/1-camera site (videoconferencing technology) consisted of six participants from Clarke School (3 deaf and 3 hearing). The remote interpreter is shown on a laptop via Google+.

Site 3: Two interpreters teaming at a remote location utilizing Google+.


SCENARIO 6 FINDINGS

More than three-quarters (78%) of participants from scenario 6 rated the overall success of the meeting as either extremely successful (11%) or successful (67%). Several participants from Clarke School said they liked having control over their screen and being able to change the view on their laptops while using Google+. This scenario was rated very highly considering that less than half (44%) of the participants said they had experience using Google+, and only two of these participants mentioned being comfortable with the program. Those who were not completely satisfied mentioned technical difficulties relating to audio feedback and the computer freezing momentarily while using Google+. The presenter commented that there is a lot of effort involved when using Google+ with the TelePresence system. Other participants mentioned too much lag time on Google+, difficulty interrupting the discussion, and the session not being long enough to properly evaluate. All (100%) of the participants rated the effectiveness of the interpreter on screen as either excellent (56%) or good (44%). Participants said they liked having one visual focus for the interpreter and being able to control and manage their own view on Google+.

"Very helpful to have control over screens in Google+. It felt more personal, like the speaker was in the room talking directly to me."
~ Hearing Participant

Overall, how successful was your meeting? (Scenario 6)
                        Presenter (n=1)   Interpreters (n=2)   Participants (n=6)
Extremely Successful    0%                0%                   17%
Successful              0%                50%                  66%
Neither                 100%              50%                  17%
Not Successful (Net)    0%                0%                   0%

"Some technical issues like the screen freezing. I also found it hard to interrupt to ask for camera adjustments or sound adjustments."
~ Interpreter

"The signing on Google+ had no issues. If the audio feedback is addressed, then it would be excellent. There was feedback on microphone through computer speakers."

What went well during this session?

44% Separate screen for interpreter / Central focus

33% Google+ good video quality for laptop

22% Smooth flow of communication / Easily controlled and managed

22% Quality interpreters

11% Clear audio

What recommendations do you have to make TelePresence more successful in the future?

33% Establish guidelines for turn taking at beginning of each session

33% Establish minimal standards of connectivity, hardware, resolution, etc. to ensure successful experience

33% Eliminate lag time/tracking pixels while using Google+ / One second delay max

11% Provide training on TelePresence/Videoconferencing/Google+

11% Eliminate audio feedback by muting laptops and TelePresence/Videoconferencing speakers

11% Show presenter and interpreter on screen at all times / Side-by-side or split screen

11% Ask participants about interpreter preference (e.g., oral, sign, combo)

"The interpreter being on screen was more personal and direct, but still very subtle and less intrusive. However, everyone needs to be using the same technology at once. Everyone needs to be able to see the PPT."
~ Hearing Participant

Several participants (44%) said they liked the interpreters having their own separate screen because it provided participants with one central focus. One-third (33%) of participants mentioned the video quality of Google+ being satisfactory for a laptop. Other participants mentioned the smooth flow of communication (22%), quality interpreters (22%), and clear audio (11%).

Almost all of the participants offered a recommendation relating to the need to develop guidelines/best practices on how best to use TelePresence, both on its own and with Google+. Specific suggestions included developing speaking/turn taking guidelines (33%), establishing minimal technological standards (33%), providing training on TelePresence and Google+ (11%), and explaining how to eliminate the audio feedback problems when using Google+ with TelePresence/videoconferencing systems (11%). One-third (33%) of participants mentioned recommendations specific to Google+; as explained, there is a need to reduce the lag time and eliminate the tracking pixel problem, with a maximum one-second delay mentioned as the goal/standard. Other participants suggested showing the presenter and interpreter on screen at all times, either side-by-side or split screen (11%), and identifying participants' interpreter needs/preferences in advance (11%).

"Need to get better on using Google+. Need to have no more than one second delay on screen as well as voice too. Some pixel problems occasionally, need to fix that."
~ Deaf Participant


RESEARCH SCENARIO 7

DESCRIPTION

Scenario 7: Professional Development (Full participation)
  Google+ used as primary at all 3 sites; all laptops and videoconferencing technology muted; cell or conference room phone used for audio
  Site: Cisco CTS1300 at NTID, Google+ on laptop, projection on TelePresence screen; Interpreter at site: None; Participants: Deaf Presenter
  Site: Videoconferencing technology at Clarke School, Google+ on laptops, projection on videoconferencing screen; Interpreter at site: None; Participants: 3 Deaf Participants, 3 Hearing Participants
  Site: Remote, Google+ on desktop; Interpreter at site: 2 Interpreters (Team); Participants: None

Research scenario 7 involved three locations all utilizing Google+ on shared laptops or desktop computers. The first site (Cisco CTS1300) included one deaf presenter conducting a professional development workshop. The second site (videoconferencing technology) consisted of six participants from Clarke School (three deaf and three hearing). The third site included two interpreters teaming at a remote location. The active Google+ frame was projected on the videoconferencing screens at sites one and two. To combat audio feedback, laptops and videoconferencing technology were muted. Either a cell or conference room phone was used for audio.

Photo 17 (Site 1): One deaf presenter conducting a professional development workshop utilizing Google+ on a laptop. The Google+ projection is shown on the TP screen.

Photo 18 (Site 2): Six individuals from Clarke School (three deaf and three hearing) participate in the professional development workshop utilizing Google+ on laptops and cell phones.

Site 3: Two interpreters teaming at a remote location utilizing Google+.


SCENARIO 7 FINDINGS

Overall, 44% of participants rated scenario 7 successful (extremely successful/successful net). Four out of the six participants from Clarke School rated this meeting either extremely successful (33%) or successful (33%); the two remaining Clarke School participants rated it unsuccessful (33%). The participants who rated this meeting successful mentioned the smooth flow of communication and having the ability to control and change the view on their laptop through Google+ to the person of their choice (e.g., presenter, interpreter). Participants who were not completely satisfied mentioned problems with turn taking and difficulty capturing the presenter's attention to comment or ask a question. The presenter mentioned experiencing difficulty following the many screens while maintaining focus on the interpreter. Participants commented that it is difficult to see signs when the participant signing is on the small screen, adding that sometimes the signs were blurred. Other participants mentioned experiencing a few technical difficulties at the beginning of the session related to logging onto Google+, the computer freezing, and audio feedback.

More than three-quarters (77%) of the participants rated the effectiveness of the interpreter as either excellent (33%) or good (44%). Again, participants said they liked having control of their own screen and one focus because it is easier to follow.

"Turn taking was a bit of a problem. While still voicing for our presenter, I would see a deaf participant responding on the small screens but I hadn't finished and the small screen was too small to see before it was clicked to enlarge."
~ Interpreter

"Excellent clarity and ability to view either speaker or interpreter and switch back and forth."
~ Hearing Participant

Overall, how successful was your meeting? (Scenario 7)
                        Presenter (n=1)   Interpreters (n=2)   Participants (n=6)
Extremely Successful    0%                0%                   33%
Successful              0%                0%                   33%
Neither                 100%              100%                 0%
Not Successful (Net)    0%                0%                   33%

"It was a challenge for me to follow many screens and maintain focus on the interpreter."

What went well during this session?

44% Smooth flow of communication / Easily controlled and managed / Better once turn taking guidelines were established

11% Google+ good video quality for laptop

11% Remote location of interpreters / Separate screen

What recommendations do you have to make TelePresence more successful in the future?

56% Google+ option to provide split screens of equal size

44% Establish guidelines for turn taking at beginning of each session

33% Google+ on laptop ideal for meetings that take no longer than one hour

11% Designate one meeting participant as the Google+ large screen controller for all participants

11% Eliminate lag time/tracking pixels while using Google+ / One second delay max

11% Google+ captioning option

Find a way to have a split screen share, so you can see a PPT, and a large image of a person or two people, but be given that personal choice to change.

~ Hearing Participant

Forty-four percent (44%) of the participants mentioned a smooth flow of communication once the speaking/turn taking guidelines were established. Other participants mentioned that the video quality on Google+ was satisfactory (11%) and that they liked having the interpreter on her own screen (11%).

However, more than half (56%) of the participants suggested incorporating a split-screen option in Google+ that would allow two screens of equal size and proportion to be open at the same time. Forty-four percent (44%) of participants reiterated the importance of establishing guidelines for speaking/turn taking at the beginning of every meeting. One participant suggested designating one meeting participant as the controller for all participants. Other participants offered specific recommendations for Google+, including a maximum one-second lag time (11%) and an option for captioning (11%). Several of the participants (33%) agreed that Google+ on laptops would be ideal for meetings that take one hour or less.

I would like to be able to have a method for controlling turn taking. Watching the presenter, it seems he was a wee bit laggy/blurry trying to keep up. This process wouldn’t work for hours using just a laptop.

~ Interpreter

The challenge is to get the speaker on the big screen, which caused a delay by moving the cursor to get the speaker on screen. Have a moderator control the big screen.

RESULTS

Overall, the participants were impressed with all of the iterations we experimented with, and they all saw value in working with this kind of technology. The quality of the video in TP to TP (scenario 1) and TP to videoconferencing (scenario 5) was outstanding, and although not as good, the quality of Google+ on a laptop was seen as more than adequate. All participants agreed that once the initial learning curve was mastered, there was a good flow of communication; this included clarifying how to take turns, who was in control, the placement of the interpreter, and so on. Once these challenges were addressed, everyone felt the communication events were quite successful. Connected to this, the feedback from the earlier scenarios indicated that the interpreters needed to be in the same room rather than at a remote site. Once all participants had experienced remote interpreters and guidelines were clearly established, the problem was resolved. As a result, the feedback shifted to preferring the interpreter and the presenter on the same screen to better focus on the message (much more reflective of “live” situations). And lastly, all participants seemed to agree that success was dependent on the style of interaction, i.e., a more agenda-driven meeting rather than a typical interactive classroom experience. One concern was that “natural” interactions may have to be adjusted to take advantage of new technology.

There was one over-arching theme that arose from the feedback: the deaf participants (students and presenter) and the interpreters wanted to be able to see everyone involved, to see the additional media (PowerPoint slides), and to see themselves as well. As in our previous study (October 2012), it is important to understand the VRS system that is so commonly used by members of the D/HH community. When a VRS call is made, the D/HH person sees him/herself in a small tile on the screen, which helps ensure that the visual ASL can be clearly seen. This is not an option when using TP.

Another theme that arose was the need for guidelines and/or standards to help the participants navigate this experience. Participants specifically mentioned the need to conduct an audio check before each meeting, assess environmental factors (background lighting, visual noise, etc.), and identify minimal technical standards related to connectivity and image resolution. Several participants had experience with FaceTime and Skype, so they understood the basic concept of TP; however, given the goals of our scenarios (i.e., more formal educational experiences), they recommended that guidelines and/or standards be shared among all participants prior to the event. In addition, because of the number of participants involved, it was recommended that guidelines for turn taking be established. This would definitely help the interpreters anticipate who was going to “take the floor.”

And, although all of the deaf participants have had experience working with interpreters in a variety of settings (e.g., classrooms, business meetings, etc.), the use of this technology often did not allow them to see both the interpreter and the presenter at the same time. In a face-to-face situation, both Deaf people and interpreters expect to see each other; if there is any question as to the success of the communication, everyone has the option to “check” and ask for clarification or repetition. Several times, the feedback indicated a desire to have something akin to a “picture in a picture” configuration to support the visual needs of both the Deaf people and the interpreters. Related to this, interpreters were sometimes at a loss when they needed to ask for a camera (visual) or audio adjustment. Since they were not on screen to everyone, they sometimes felt disconnected and unable to monitor the flow of communication, which is what they do when working face-to-face. Again, this relates to turn taking and the larger issue of being able to see everyone.

RECOMMENDATIONS

Guidelines/Training – It is important to establish speaking/turn taking guidelines before every meeting and to share this information among all participants. This is true for conducting successful face-to-face meetings when deaf and hearing participants are working with interpreters; working with Google+ adds an additional layer of complexity. Time is required to pause, to see who is participating, and to ensure the interpreter is comfortable with the flow of communication. It would also be helpful if everyone involved had some training with TP and Google+, including simple audio checks; a review of environmental issues would help to ensure a smooth and successful interaction. This training could also include minimal standards regarding hardware, video quality, etc., reducing the interruptions that occur during the interactions.
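As a rough illustration of the kind of audio check such training could cover, the sketch below polls the local microphone for a few seconds in a browser-based client and reports the peak level before a participant joins a session. This is a minimal sketch assuming a WebRTC-capable browser; the checkMicrophoneLevel helper and the warning threshold are illustrative and not part of any TP or Google+ product.

```typescript
// Hypothetical pre-meeting audio check (not part of the study's setup):
// capture the local microphone briefly and report the peak level so a
// participant can confirm their audio works before joining a session.
async function checkMicrophoneLevel(durationMs = 3000): Promise<number> {
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: { echoCancellation: true, noiseSuppression: true },
  });

  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;
  ctx.createMediaStreamSource(stream).connect(analyser);

  const samples = new Float32Array(analyser.fftSize);
  let peak = 0;
  const start = performance.now();

  // Poll the time-domain signal and track the peak amplitude (0..1).
  while (performance.now() - start < durationMs) {
    analyser.getFloatTimeDomainData(samples);
    for (const s of samples) peak = Math.max(peak, Math.abs(s));
    await new Promise((resolve) => setTimeout(resolve, 100));
  }

  // Release the microphone so the capture indicator turns off.
  stream.getTracks().forEach((track) => track.stop());
  await ctx.close();
  return peak;
}

// Usage: warn the participant if the level is too low to be heard.
checkMicrophoneLevel().then((peak) =>
  console.log(peak < 0.05 ? "Microphone level too low; check your setup." : "Audio check passed."),
);
```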

Seeing Everyone – The presenter and the interpreters need to be shown at all times in split or shared screens. If the interpreter is sharing a screen with a presenter, a larger screen is needed to allow for his/her signing space (otherwise there is not enough “elbow room” to sign).

Echo/Feedback – Turn off audio on laptops and videoconferencing systems when using Google+ in order to combat echo and audio feedback problems.
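For browser-based clients such as Google+ on a laptop, this recommendation can be approximated in software. The following is a minimal sketch, assuming a WebRTC-style page where remote audio plays through audio or video elements and the local microphone is a MediaStreamTrack; the function names are illustrative, not part of the study's configuration.

```typescript
// A minimal sketch (not the configuration used in the study) of muting a
// browser-based client so a single conference phone carries the audio.

function muteLocalMicrophone(localStream: MediaStream): void {
  // Disable (rather than stop) the track so it can be re-enabled later.
  localStream.getAudioTracks().forEach((track) => {
    track.enabled = false;
  });
}

function muteRemotePlayback(): void {
  // Silence every media element so laptop speakers cannot feed back into
  // the room's conference phone.
  document.querySelectorAll<HTMLMediaElement>("audio, video").forEach((el) => {
    el.muted = true;
  });
}
```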

Team of Interpreters – For most events with more than one D/HH consumer, two interpreters are required. As happened in our previous work (October 2012), interpreters prefer working together at the same site and indicated that being able to see not only each other but all the participants (deaf and hearing) is vital to the success of the process. Interpreters need to see both consumers to monitor the success of the communication. Sign language requires eye contact; it helps regulate the conversation and ensures comprehension. When interpreters and D/HH people do not have direct access to each other’s faces, there is always doubt about the success of the interaction.

Within TelePresence Systems

Provide Picture-in-Picture Feature on TP – Rather than devoting a separate display within the TP environment, a movable and scalable picture-in-picture feature would greatly improve the integration of a professional interpreter within the primary videoconference space.
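To make the intent concrete, here is a hypothetical sketch of a movable, scalable picture-in-picture overlay for an interpreter's video feed in a browser-based client. No TP product exposes such a web API; attachInterpreterPip, the sizing defaults, and the interaction choices are assumptions for illustration only.

```typescript
// Hypothetical movable, scalable picture-in-picture overlay for an
// interpreter's video feed (illustrative only).
function attachInterpreterPip(interpreterStream: MediaStream): HTMLVideoElement {
  const pip = document.createElement("video");
  pip.srcObject = interpreterStream;
  pip.autoplay = true;
  pip.muted = true; // the interpreter's voicing is carried on the main audio path

  // Start in the lower-right corner at a quarter of the viewport width.
  Object.assign(pip.style, {
    position: "fixed",
    right: "16px",
    bottom: "16px",
    width: "25vw",
    cursor: "move",
    zIndex: "1000",
  });

  // Dragging moves the overlay anywhere on screen.
  pip.addEventListener("pointerdown", (down) => {
    const offsetX = down.clientX - pip.offsetLeft;
    const offsetY = down.clientY - pip.offsetTop;
    const move = (e: PointerEvent) => {
      Object.assign(pip.style, {
        left: `${e.clientX - offsetX}px`,
        top: `${e.clientY - offsetY}px`,
        right: "auto",
        bottom: "auto",
      });
    };
    const stop = () => {
      document.removeEventListener("pointermove", move);
      document.removeEventListener("pointerup", stop);
    };
    document.addEventListener("pointermove", move);
    document.addEventListener("pointerup", stop);
  });

  // The scroll wheel scales the window so the signing space stays readable.
  pip.addEventListener(
    "wheel",
    (e) => {
      e.preventDefault();
      const width = parseFloat(pip.style.width); // current width in vw units
      pip.style.width = `${Math.min(60, Math.max(10, width + (e.deltaY < 0 ? 2 : -2)))}vw`;
    },
    { passive: false },
  );

  document.body.appendChild(pip);
  return pip;
}
```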

Active Cameras and Microphones – An LED light on all video cameras would help identify the active camera. Perhaps incorporate a button or switch to manually select voice-activated cameras. Because voice-activated cameras can be triggered by interpreters speaking, provide voicing interpreters with portable microphones so that their voice is heard across the system without pulling the active camera away from a signing participant.
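A hedged sketch of this switching logic is shown below: camera selection is driven by per-participant audio levels, but participants flagged as interpreters are excluded so their voicing never steals the active camera. The ParticipantAudio shape, the threshold, and the participant IDs are assumptions for illustration; no actual TP switching API is used.

```typescript
// Voice-activated camera selection that ignores interpreters' microphones
// (illustrative sketch only).
interface ParticipantAudio {
  participantId: string;
  level: number; // 0.0 (silence) to 1.0 (loudest), e.g. from periodic metering
}

function selectActiveCamera(
  audioLevels: ParticipantAudio[],
  interpreterIds: Set<string>,
  speechThreshold = 0.2,
): string | null {
  // Consider only non-interpreter participants who are actually speaking.
  const candidates = audioLevels.filter(
    (p) => !interpreterIds.has(p.participantId) && p.level >= speechThreshold,
  );
  if (candidates.length === 0) {
    return null; // keep the current camera; only interpreters (or nobody) are speaking
  }
  // Switch to the loudest eligible speaker.
  return candidates.reduce((a, b) => (a.level >= b.level ? a : b)).participantId;
}

// Usage: the interpreters' portable microphones are still mixed into the audio,
// but their IDs are excluded from camera switching.
const active = selectActiveCamera(
  [
    { participantId: "presenter", level: 0.05 },
    { participantId: "interpreter-1", level: 0.6 },
    { participantId: "student-3", level: 0.35 },
  ],
  new Set(["interpreter-1", "interpreter-2"]),
);
console.log(active); // "student-3"
```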

Build Access Features into TP – TP systems should always include a picture-in-picture capability for interpreters as well as movable and scalable caption displays.

Ergonomic Solutions for Interpreters – When interpreters are within a TP environment and voicing for a signing participant, they have to lean into the “live” microphone in order to be heard by participants. Again, remote microphones that can be used across the system without activating a camera would be very helpful.


Within Google+ Hangouts

Speaker and Interpreter Equal Size in Display Window – An option to split the primary screen into equal sections would dramatically improve the usability of the system for all users when interpreting is needed.
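As one way to picture the requested layout, the sketch below lays two video elements out in equal-width panes using a CSS grid, so the interpreter is never reduced to a thumbnail. Google+ Hangouts offered no such option; splitScreenEqually is a hypothetical helper under that assumption.

```typescript
// Hypothetical equal-size split: the speaker's and interpreter's video share
// the primary window in two panes of identical size.
function splitScreenEqually(speaker: HTMLVideoElement, interpreter: HTMLVideoElement): HTMLDivElement {
  const container = document.createElement("div");

  // Two equal columns; neither feed is shrunk relative to the other.
  Object.assign(container.style, {
    display: "grid",
    gridTemplateColumns: "1fr 1fr",
    gap: "8px",
    width: "100%",
    height: "100%",
  });

  for (const video of [speaker, interpreter]) {
    video.style.width = "100%";
    video.style.height = "100%";
    video.style.objectFit = "contain"; // keep the full signing space visible
    container.appendChild(video);
  }

  document.body.appendChild(container);
  return container;
}
```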

Short Meetings – It became obvious to all participants in our study that the shorter the Google+ meetings, the better. Google+ simply cannot provide the transmission quality, video quality, and user interface typical of TP systems, and it is tiring to participate on smaller, less sophisticated systems.

“Meeting Manager” Capability – An option within Hangouts to permit one user to manage the various displays, controlling which individuals, documents, interpreters, or captions are shown, would facilitate sessions with more than three or four participants. However, this suggestion needs to be balanced with giving each participant the ability to manage and control their own screen display options.
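One possible shape for such a capability, balancing a moderator's control with local override, is sketched below: the designated manager broadcasts a layout message and each client applies it unless the local user has pinned something else. The message format, the LayoutController class, and the transport are assumptions for illustration, not a Hangouts feature.

```typescript
// Hypothetical "meeting manager" layout protocol (illustrative only).
type PaneContent =
  | { kind: "participant"; participantId: string }
  | { kind: "document"; documentId: string }
  | { kind: "interpreter"; interpreterId: string };

interface LayoutMessage {
  type: "layout-update";
  managerId: string;
  mainPane: PaneContent;
}

class LayoutController {
  private localOverride: PaneContent | null = null;
  private current: PaneContent | null = null;

  // Called when the meeting manager's broadcast arrives (e.g. over a data channel).
  onManagerUpdate(message: LayoutMessage): void {
    // Respect the participant's own choice if they have taken manual control.
    this.current = this.localOverride ?? message.mainPane;
    this.render();
  }

  // Called when the local user clicks a feed to pin it (null clears the pin).
  setLocalOverride(content: PaneContent | null): void {
    this.localOverride = content;
    if (content) {
      this.current = content;
      this.render();
    }
  }

  private render(): void {
    // Placeholder: swap the main pane's video or document source here.
    console.log("main pane now shows:", this.current);
  }
}
```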

Workgroup Document Display with Videoconference Windows – When using Hangouts, it was not possible for participants to maintain the images of the individuals in the session alongside presentation displays such as PowerPoint slides or word processing documents.

OVERALL RECOMMENDATIONS FOR CONTINUED RESEARCH

The 12 months between our first systematic investigation of interpreting within videoconference environments and our second effort saw a dramatic increase in the availability and wider acceptance of videoconference systems for both computer and tablet platforms. While there are more technical options and individuals are more comfortable using videoconference systems, the provision of professional interpreting within more formal settings still requires planning on the part of meeting organizers, professional interpreters, and technical support staff to ensure a successful event.

As was mentioned above, D/HH people are becoming more and more comfortable with advanced technology (FaceTime, Google+, Skype); the use of VRS interpreting services also continues to grow. At this writing, there are no statistics to indicate an increase in the use of VRS services.
