As Jameson (2013) has noted, “Heuristic testing concerns what experts think; usability testing concerns what users do” (p. 398). Although heuristic testing is certainly valuable, it does not measure how users interact with an object of study; usability testing, on the other hand, explores how real people actually use something for a particular purpose (Barnum, 2011). Nielsen (1993), Nielsen and Mack (1994), Rubin, Chisnell and Spool (2008), Mayhew (1999), Dumas and Redish (1999), Rogers, Sharp and Preece (2011), and Goodman, Kuniavsky and Moed (2012) have established principles of usability testing that have become standard in the engineering as well as the computer and software design industries. After the proliferation of the World Wide Web in the 1990s, these principles were used to determine how people read, navigated, and used web pages (Garrett, 2010; Krug, 2014). However, usability testing and user-centered design are being increasingly applied to other industries and situations, including the way health care workers use electronic health records (Lowry et al., 2012), the way students read electronic textbooks (Lim, Song and Lee, 2012), and the way children collaborate on wikis (Hadjerrouit, 2012). Miller-Cochran and Rodrigo (2006) used usability testing to determine how well students could navigate their online composition courses, and Panthee (2014) conducted a usability study of the common learning management system (LMS) Blackboard to determine how well it worked in fostering student agency and invention in the writing process. Jameson (2013) advocated usability testing in the business communication classroom as a way to enhance learning and build empathy for user experiences. Tang, Winoto and Leung (2014) used usability testing to determine whether groupware truly fostered the collaborative learning environment designers intended.
None of these studies or scholars have examined GAFE specifically, nor the use of cloud computing in the composition classroom. Miller-Cochran and Rodrigo (2010) noted that design must “anticipat[e] users’ needs and expectations” (p. 1), and Eyman (2009) stated that usability must always be “coupled” with design (p. 223). Discussions and descriptions of online and digital writing pedagogies, including Brabazon (2002), Cook (2005), the Conference on College Composition and Communication (2013), and DePew (2015), have reminded teachers that the pedagogy and not the technology should drive the tool’s implementation. Faculty are not in the position to redesign the
5. The respondents were critical of the encryption policies. Not all of the consumer-based, free VoIP systems investigated in this usability study indicated the use of encryption within their systems. Respondents had to search for the information in the systems’ policies; once it was found, it was difficult to tell whether the video stream was encrypted. Personal information, email, and other data seemed to be encrypted, but some of the policies did not specify methods of encryption for video streams.
Near Field Communication (NFC) is becoming more prevalent in smartphones. NFC tags enable additional information to be attached to physical objects and accessed via a mobile app, which could be of particular use in tourist environments where users have limited knowledge of the area. This paper presents an innovative use of NFC tags within static tourist maps with the aim of making the maps easier to use and providing additional information without obscuring the map itself. A prototype map and app system was developed and tested with the aim of assessing user interaction with the technology. The results showed that the concept was of interest to the users, but that improvements in usability are required.
There have been markedly fewer usability tests on tutorials and other modular content than there have been on library sites in general. Evaluation of these tutorials tends to be more focused on their instructional effectiveness than their usability (Germain, Jacobson and Kaczor; Holman; Nichols, Shaffer, and Shockey). Bury and Oud assert that the goals of a tutorial are different from the goals of a library Website. The goal of this type of application is learning, rather than finding information or accessing services (Bury and Oud 58). The authors summarize the major difference between a tutorial and a library site, remarking, “[u]sers need to approach a tutorial with patience and attention, and a tutorial is typically less finite and task oriented than a library web site” (58). In order to deal with this distinction, the authors designed a different kind of usability test for the tutorial, focusing on navigation, design, layout and presentation of information, interactivity, content, use of language, and tests (60-61). The goal of this study was not only to find out what users did, as is usually the case in task-oriented usability tests, but also to find out what they said about the tutorial, and what their general impressions were (Bury and Oud 59). While the focus on qualitative data and user-centered design found in that line of research does inform this study, Bury and Oud’s particular technique for testing the usability of a library learning module is more appropriate here. The purpose of this study was to find out generally how usable and useful the learning objects are for undergraduate students, rather than measuring how quickly or effectively these students complete a series of tasks.
Usability issues regarding problems moving between pages and lessons were categorized here. With a single exception, no one claimed to have any problems with navigation. Participant 4 did express frustration when trying to go back into the tutorial to answer the first PubMed question, having trouble getting to the page he was looking for. Rather than use the contents page, he used the “previous page” arrow found at the bottom of all pages. In fact, all users used the arrows at the bottom of the page for navigating. Participant 4 later explained that he was confused about the side-links, thinking that they were for getting between lessons, rather than a listing of the individual pages within the current lesson. “Sometimes there was navigation issues, with the side-links….at first I thought these were all the pages that I’d already viewed. I didn’t realize it was just for one lesson.” Because this also relates to how the lessons are structured, the issue was tallied under “Structure of Lessons” as well.
This usability test was conducted in February 2012 as a first step in understanding biologists’ preferences and interactions with the web interface of the Dryad repository of biological data. These interactions could be reduced to three distinct workflows: (1) learning about Dryad, (2) finding and downloading data, and (3) uploading data. With the results of a card sorting procedure, a dendrogram was created that will be useful in organizing information about Dryad itself in accordance with user expectations. When asked to find information about Dryad, biologists reacted to the interface similarly to non-biologists. The users preferred standard web practices like categorizing content into clearly defined groups marked by location and color over less-standard practices like green link highlighting. When asked to find and download data, however, the biologists differed from typical web users; the biologists initially preferred to search for most information rather than to browse. Data uploading was ultimately successful, but fraught with difficulties. Though information regarding what should go into each entry field was present, it was not acknowledged or recognized by most subjects, so they were uncertain about what to input into each field.
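The card-sorting-to-dendrogram step described above rests on a simple idea: count how often participants place each pair of cards in the same group, treat those counts as similarities, and cluster. The sketch below illustrates that idea with single-linkage grouping at a cutoff; the card names and participant data are hypothetical, not the actual Dryad card set, and the real study would have used full hierarchical clustering to produce the dendrogram.

```python
from itertools import combinations

# Hypothetical card-sort data: each participant's grouping of five
# interface cards (names are illustrative only).
sorts = [
    [{"About Dryad", "FAQ"}, {"Search data", "Browse data"}, {"Submit data"}],
    [{"About Dryad", "FAQ", "Submit data"}, {"Search data", "Browse data"}],
    [{"About Dryad", "FAQ"}, {"Search data", "Browse data", "Submit data"}],
]
cards = sorted({c for s in sorts for g in s for c in g})

def similarity(a, b):
    """Fraction of participants who placed cards a and b in the same group."""
    same = sum(any(a in g and b in g for g in s) for s in sorts)
    return same / len(sorts)

# Union-find: merge any two cards co-grouped by at least `cutoff`
# of participants (single-linkage agglomeration at one threshold).
parent = {c: c for c in cards}
def find(c):
    while parent[c] != c:
        parent[c] = parent[parent[c]]
        c = parent[c]
    return c

cutoff = 0.6
for a, b in combinations(cards, 2):
    if similarity(a, b) >= cutoff:
        parent[find(a)] = find(b)

groups = {}
for c in cards:
    groups.setdefault(find(c), []).append(c)
print(list(groups.values()))
```

Sweeping the cutoff from 1.0 down to 0.0 and recording each merge is what yields the dendrogram: tightly co-grouped cards join near the top of the similarity scale, loosely related ones near the bottom.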
Because the research on the usability of library tutorials is limited, the authors performed their own, testing the usability of an online information literacy tutorial at Wilfrid Laurier University. While the authors had already gathered student feedback, they wished to find specific problems in areas of navigation, design, layout, interactivity, use of language, and content. However, a task-oriented usability test was not appropriate for their goals. Instead, a subject pool of students was recruited and asked to read through the tutorial and answer a series of questions on a handout. The test concluded with a brief verbal interview. The researchers sought to gain “students’ impressions” of the tutorial, rather than “their level of efficiency or success in carrying out specified tasks” (59). This methodology revealed areas that needed improvement: use of jargon, broken links, text-heavy pages, and overly simple content (61). As a result of the test, the library re-
In the fall of 2011, the Duke University East Campus Libraries Web Assessment Group (ECL WAG) was formed to assess the use and performance of the Lilly Library and the Music Library websites. Among the initiatives approved by ECL WAG was a task-based usability test. After analyzing usage of the sites using Google Analytics and completing an environmental scan of comparable institutions’ library websites and a literature review, members of ECL WAG determined what the core functions of each site were and completed a cognitive walkthrough of these tasks. Through this process, the committee identified potential points of failure in the ability of the system to aid in the completion of the tasks deemed to be essential to the sites’ intended function. The committee also appointed the author of this paper to design and implement the testing of the Music Library site. Thus, this paper presents the findings of the task-based usability testing of the Duke University Music Library. The data presented here will be used to redesign the site and will hopefully contribute to the understanding of the current state and usage of library websites and music library websites.
According to Nielsen’s findings in 1993, 15 users are needed in order to find all usability issues of a system. However, findings from Nielsen’s research show that the number of new usability issues found by adding evaluators begins to diminish at around ten evaluators. For this reason, in a real-world setting where the design process is iterative and operates under a limited budget and resources, five subjects are normally considered sufficient to find the major usability issues. Nielsen suggests conducting usability tests with five subjects iteratively, instead of testing only one time with fifteen subjects (Nielsen, 1993).
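The diminishing returns described above follow from a simple geometric model often attributed to Nielsen and Landauer's 1993 work: if each test user independently exposes any given problem with probability L, then n users uncover an expected fraction 1 − (1 − L)^n of the problems. The sketch below uses L ≈ 0.31 as reported in that line of research; treat the value as an assumption, since it varies by product and user group.

```python
def fraction_found(n, problem_rate=0.31):
    """Expected fraction of usability problems uncovered by n test users.

    Geometric model: Found(n) = 1 - (1 - L)**n, where L is the average
    probability that a single user exposes any given problem.
    """
    return 1 - (1 - problem_rate) ** n

# Diminishing returns: each additional user finds a smaller share
# of the remaining problems.
for n in (1, 5, 10, 15):
    print(n, round(fraction_found(n), 2))
```

Under this model, five users already uncover roughly 84% of problems, which is why three iterative five-user tests tend to be a better investment than a single fifteen-user test: each iteration also catches new problems introduced by the redesign.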
This study provides empirical evidence that validates Redish’s observation that “Ease of use—what we typically focus on in usability testing—is critical but not sufficient for any product” (2007, p. 104). The ease of use indicators in this study, both quantitative and qualitative, suggested that our users found the prototype handbook and its visuals to be an “easy-to-use solution” to their perceived needs. And our client would very likely have been entirely satisfied with the ease of use data we provided. However, had the study’s design focused solely on what the client told us they wanted, we would have failed both our client and the users. The true complexity of the tasks would not have been revealed, the lack of “utility” in the handbooks would not have manifested itself (Redish 2007), and our clients would never have seen the need to “think outside the box” of traditional handbook design. They would have lacked the cognitive dissonance that motivated at least some of the creative thinking that led to the successful handbook they ultimately published.
Several groups, including IBM (Karat, Brodie, Karat, Vergo, & Alpert, 2003) and the Memorial University of Newfoundland Libraries (McGillis & Toms, 2001), performed their own usability studies for the redesign of their websites. IBM Corporation performed a heuristic evaluation together with usability testing to understand the value of personalizing a website to users’ needs (Karat, Brodie, Karat, Vergo, & Alpert, 2003). The usability testing suggested that users only care for personalization to the degree needed to complete the task at hand. At the library website for the Memorial University of Newfoundland, usability testing consisting of participants performing six tasks, follow-up questions, and a website usability survey resulted in the researchers learning about the problems users faced with the website (McGillis & Toms, 2001). All of the different types of usability testing give researchers insight into specific usability problems as well as ideas for usability enhancement in general. Although there has been extensive usability testing of the Internet, and general usability guidelines have been developed for it, there is a gap in the literature in identifying usability issues and guidelines for specific types of websites such as travel sales websites. Therefore, a usability study was performed to identify the usability problems as well as recommendations for improvement of three travel sales websites as
This paper reports on a usability study testing the HIVE Vocabulary Server (HIVE). The overall research goal was to identify usability issues needing attention in order to improve the effectiveness of the HIVE system. This usability study compared results for two targeted user groups: librarians and scientists. The study employed a formal usability testing approach and formative evaluation. Usability test results for the first build of HIVE were positive. Librarians indicated that HIVE is a useful tool and could assist their work. Scientists responded positively about HIVE’s support for identifying a greater selection of relevant terms describing their data. The main usability problems identified were related to the absence of adequate information via the HIVE interface. These include the absence of descriptive information about individual HIVE-supported vocabularies; lack of evidence about the status of vocabularies during an
You get much more out of a usability study if you can compare design alternatives. While this may not sound like a way to be more cost-effective, I think it is in the long run. Too many design teams get locked into one basic design solution early on, then just fine-tune it, perhaps through iterative usability testing. This is what Bill Buxton (2007) calls “getting the design right,” which he contrasts with “getting the right design.” By starting down one design path too early, you may very well miss other significantly different designs that are much better. But many design teams will resist pursuing alternatives, perhaps because of the pressure of schedules and resources. So you might have to be the “evangelist” for comparing alternatives.
According to Angello (2010) and Msagati (2014), the majority of undergraduate students in academic institutions in Kenya are not aware of the general and scholarly databases subscribed to by their respective universities, and this greatly affects the usability of those databases. This supports Schmidt’s (2007) argument that students’ over-reliance on internet search engines, and Google in particular, is mainly due to a lack of awareness of the existence of general and scholarly databases. To reverse this trend, Naqvi (2012) opines that libraries should devise strategies for creating awareness among users of the general and scholarly databases available in the library through active publicity programmes, and should also develop a strategic plan to improve the usability of electronic databases by students. In addition, university libraries should exploit other avenues for creating awareness of general and scholarly electronic databases, such as social media and e-mail alert systems. Because such awareness should ultimately improve the usability of general and scholarly databases in institutions of higher learning, thus supporting students’ educational and research needs, this study sets out to establish the sources of awareness and the awareness levels of general and scholarly databases among undergraduate students at Moi University in Kenya.
Usability is an important factor in measuring the quality of a software system. It not only saves money but fulfils user expectations as well. There are many benefits of usability from the user’s point of view: users are satisfied with the product’s quality, its efficiency, and its performance, and in this way they also develop confidence and trust in the product. From the provider’s point of view there are also benefits of usability, such as reduced training time and costs, reduced errors, reduced support costs, and reduced development time and costs. The Institute of Electrical and Electronics Engineers proposes as a definition for usability “the ease with which a user can learn to operate, prepare inputs for and interpret outputs of a system or a component”. Shackel defined usability as “the artifact's capability, in human functional terms, to be used easily, effectively and satisfactorily by specific users, performing specific tasks, in specific environments”. Nielsen defines usability as “the measure of quality of experience of user while using the product”. Usability is defined in ISO 9241-11 as “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use”. Subsequently, ISO/IEC 9126-1 classified usability as one of the components representing internal and external software quality, defining it as “the capability of the software product to be understood, learned, used and attractive to the user, when used under specified conditions”.
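The ISO 9241-11 definition decomposes usability into three measurable components: effectiveness (task success), efficiency (resources expended, typically time), and satisfaction (subjective rating). A minimal sketch of computing all three from per-participant task records follows; the field names and data are hypothetical, and real studies use instruments such as standardized satisfaction questionnaires rather than a single rating.

```python
# Hypothetical per-participant task records (field names are assumed).
tasks = [
    {"completed": True,  "seconds": 42, "satisfaction": 4},  # rating on a 1-5 scale
    {"completed": True,  "seconds": 55, "satisfaction": 5},
    {"completed": False, "seconds": 90, "satisfaction": 2},
    {"completed": True,  "seconds": 38, "satisfaction": 4},
]

# Effectiveness: proportion of participants who completed the task.
effectiveness = sum(t["completed"] for t in tasks) / len(tasks)

# Efficiency: mean time on task among successful completions.
done = [t["seconds"] for t in tasks if t["completed"]]
efficiency = sum(done) / len(done)

# Satisfaction: mean subjective rating across all participants.
satisfaction = sum(t["satisfaction"] for t in tasks) / len(tasks)

print(effectiveness, efficiency, satisfaction)
```

Reporting the three measures separately matters: a product can score well on satisfaction while effectiveness is poor, which is exactly the kind of gap the definitions above are designed to expose.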
A desktop GUI enables better interactions for testbed users and provides higher usability. Testbed users perform experimental configurations and are able to conduct the experiments with ease compared to a CLI. However, users need to perform some installation processes on the testbed client terminal before its desktop GUI based testbed interface can be used. Furthermore, if any changes occur when upgrades or improvements to the testbed management system are made, the desktop GUI software needs to be redeveloped and realigned to the changes in the testbed management system. At the same time, users need to reinstall the software to get new updates and features. Other challenges exist, particularly when the developed desktop GUI software is not multi-platform in nature, which limits the type of client platform that can be utilised when using the testbed facilities.
Among the many rankings, the Webometrics ranking was chosen because it is one of the most popular choices for measuring HEI website rank. Its methodology uses the design and weighting of indicators, and the four indicators considered are size, visibility, rich files, and scholar. This research focuses on the usability of HEI websites, and the methodology used by Webometrics does not reflect website usability performance. Therefore, the research-based usability measurement index (Chrispin, 2010) is needed to provide a ranking that presents the usability performance of HEI websites. It will be valuable to HEIs all around the world because they can use this rank to determine the usability of their websites.
operating in this field” (p. 395). The social scientists in Ellis’s (1989) study achieved this behaviour by seeking out people who knew key references and authors in the area, by reading reviews of materials, by consulting bibliographies, abstracts, indexes and library catalogues, and by using previously collected or newly recommended starter references. Many of the physicists interviewed by Ellis et al. (1993) were provided with starter references by their PhD supervisors. Librarian intermediaries generally carried out surveying for the engineers and research scientists in Ellis and Haugan’s (1997) study, who surveyed the literature not only to find background information for new parts of an R&D project, but also to generate ideas for new projects or to carry out pre-studies prior to a project. In addition, one-third of Meho and Tibbo’s social scientists started their research from their own personal collection first, which would include both primary and secondary sources (see Meho and Tibbo, 2003). Ellis (1989) highlights that ‘starting’ behaviour may be supported in the design of electronic resources by alerting users to key ideas or documents and providing them with overviews of research areas to facilitate later chaining. Ellis also states that this behaviour can be supported by helping users to identify review-type or heavily-cited materials and by providing an indication of the sources that publish material in the required area. These design recommendations are sometimes, but not often, supported in current electronic resources. Chaining - “Following chains of citations or other forms of referential connections between material”