In order to compare the language of the end-user UT participants with the language of the evaluator's reports, I conducted a comparative discourse analysis on the transcripts created from the video recordings of the three usability tests and from the audio recording of the group meeting in which the oral usability reports of all three sessions were discussed. The transcripts were segmented first into conversational turns and then into clauses. A conversational turn begins "when one speaker starts to speak, and ends when he or she stops speaking" (Johnstone, 2002, p. 73). Clauses are considered "the smallest unit of language that makes a claim," and this granularity is useful in the analysis because the speakers often presented multiple ideas per turn (Geisler, 2004, p. 32). Two raters with significant experience in usability testing and heuristic evaluation were asked to compare the transcript from each usability test to the transcript of the oral usability reports; additionally, the raters were allowed to refer to the audio and video recordings. Assessing the transcripts took each rater about 1.25 to 1.55 times as long as the recordings themselves. The raters typically read the short oral report first and listed its findings. They then read the transcript from the UT and listed the findings from the session. Finally, they read through the transcripts an additional time and referred to the recordings if they wished (each rater referred to a video recording of a usability test twice, but neither referred to the audio recordings of the oral reports). In this comparison, the raters were asked to identify items mentioned in both the usability test and the oral report, items mentioned in the usability test but not in the oral report, and items mentioned in the oral report but not in the usability test.
In a way, this classification of usability findings is similar to Gray and Salzman's (1998) hit, miss, false alarm, and correct rejection taxonomy; however, in the current study, the goal is not to determine the true accuracy of the reported finding (which, according to Gray and Salzman, would be nearly impossible to ascertain), but its accuracy as compared to the utterances of the usability testing participants. After an initial training period, the two raters assessed the data individually. Ultimately, across all three categories, the raters had a percent agreement of 85% and a Cohen's kappa of 0.73, which, according to Landis and Koch (1977), demonstrates substantial agreement.
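For reference, both agreement measures follow directly from the two raters' codings. The sketch below computes percent agreement and Cohen's kappa for two raters; the category labels and codings are hypothetical illustrations, not the study's actual data:

```python
from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Fraction of items on which the two raters assign the same category."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement.

    kappa = (p_o - p_e) / (1 - p_e), where p_e is the agreement expected
    from each rater's marginal category frequencies.
    """
    n = len(rater_a)
    p_o = percent_agreement(rater_a, rater_b)
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(count_a[c] * count_b[c] for c in set(rater_a) | set(rater_b)) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings: 'both' = in test and report, 'test' = test only,
# 'report' = report only.
a = ['both', 'both', 'test', 'report', 'both', 'test', 'both',   'test']
b = ['both', 'both', 'test', 'both',   'both', 'test', 'report', 'test']
print(percent_agreement(a, b))            # 0.75
print(round(cohens_kappa(a, b), 3))       # 0.579
```

Because kappa discounts the agreement two raters would reach by chance, it is always lower than raw percent agreement, which is why the study reports both figures.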
Search-dominant users normally go directly to the search engine when entering a website. They are task-focused and not inclined to browse around the site. This may explain why the younger participants used the search engine more often, since young people are known to be less patient and more task-focused. In contrast, link-dominant users prefer to navigate the website using the available links; only when they get off track do they resort to the search engine. Mixed-behaviour users shift between link-dominant and search-dominant behaviour. Given that half of all website users are search-dominant, it is worthwhile to ensure that the search engine on a website is helpful. Nielsen (1997) suggests that a search bar should be available on every page of the site, so that search-dominant users do not have to hunt for the search engine, which can provoke annoyance. Killoran (2013) and Redish (2012a) also acknowledge the importance of a search engine in enhancing navigation within a website. Nielsen (1997) notes that it is critical to make search systems more usable by incorporating spelling checks and synonym expansion, and by showing results relative to the structure and importance of the site. This advice was confirmed by the participants in the usability tests of this research: multiple participants were annoyed that the search bar on the GBTwente website had no spelling check or synonym expansion, and that the results were not structured logically. To make search engines even more customer-oriented, it is necessary to analyse and learn from what users search for, so as to write with the words that site visitors use.
To be more specific, two kinds of prototypes were made based on our AR design concept: a physical prototype and a digital prototype. The physical prototype is analogous to a paper prototype for screen-based products; the digital prototype is analogous to the interactive prototypes built with prototyping tools such as Sketch, InVision, or even a simple PowerPoint slide show. To be clearer, the aim of this study is to learn and explore whether interaction designers or psychology students, especially those without AR or 3D programming skills, can easily make low-fidelity prototypes for conducting AR usability tests. The reason for making these two kinds of prototype for augmented reality is twofold. The first is the fidelity of the real environment: with the physical prototype, testing takes place in reality, much as users would use AR in a real environment, whereas with the digital prototype we lose the real environment because we simulate it on a flat screen. The other reason is the fidelity of digital interaction. With the physical prototype, since everything is low-fidelity, testers must manipulate the interactions step by step manually; with the digital prototype, the interactions can be built with interactive prototyping tools. For screen-based products, designers commonly use interactive tools for mid- or high-fidelity prototypes, in contrast to paper prototypes. In our case it is hard to say which prototype has the higher fidelity, since each has advantages and disadvantages along each dimension. The results would provide initial insights for future recommendations on conducting AR usability tests with low-fidelity prototypes.
Software development organizations consist of marketing, design, project management, development, and quality assurance teams. It is important for the many teams within an organization to understand the benefits and limitations of incorporating various usability testing methods into the software development life cycle. Some causes of poor usability include effort-prioritization conflicts among the development, project management, and design teams. The usability engineer's role is to act as the heuristic judge and to ensure that development and design efforts are based on usability principles while still adhering to the project schedule. Two approaches to usability inspection are user experience testing and expert review, more commonly known as Heuristic Evaluation (HE). This paper focuses on understanding the strength of HE as a methodology for defect detection. The results show the strength of HE as a usability testing methodology for capturing defects and prioritizing design and development efforts. The results also reinforce the need to integrate traditional heuristics with modified heuristics customized to the domain or field of the project being tested, such as E-Government. An innovative methodology is described for usability tests of the IEEE PCS Web site that combines heuristic evaluation and task-based testing. Tests conducted on the PCS Web site evaluated whether the site facilitated members' ability to find information and participate in discussions, as well as developers' ability to find, contribute, and manage administrative information on the site. The distinctive social characteristics of Communities of Practice (CoPs) provide context for tailoring design heuristics for informational Web sites that serve the needs and interests of CoP members.
The discussion emphasizes technical communication principles that apply not only to evaluating the effectiveness of the PCS Web site design but also to all centralised technical communication products and media that increasingly demand user participation.
BIM has helped remold the way buildings are designed, analysed, constructed, and managed. Many theories about its potential, together with various tools for its application, are currently available. It could be the answer to many problems faced by construction managers. In practice, however, its promise can be misleading: it frustrates users and owners to the extent that they do not wish to use it again (Hardin, 1980). Although the technical advancement of BIM in the field of construction management is important, capturing lessons learned and best practices is critical to improving the usefulness of this technology. BIM cannot be refined unless users get involved and share their experiences with others. Therefore, non-technical barriers must be tackled to bridge the gap between the technology, its end-users, and their processes. This can be approached by conducting usability tests: by involving users, better decisions relating to usability can generally be obtained (Hayat et al., 2015).
Usability factors. Although a separate procedure was used to detect usability issues (see "Approach 2: Usability Tests" below), participants in the structured interview also reported that either they or their colleagues encountered issues with the app's usability. These mostly concerned the reporting app's drop-down menus. These menus, which are used for classifying incidents, were described as containing too many options to be used quickly, including some options which were ambiguous or seemed irrelevant to the particular category of incident being reported. However, issues with the specifics of the drop-down menus were not simply a matter of there being "too many" options. One participant reported that certain types of common incidents were left off of the menus, and suggested that the app could be used much more quickly if additional incident categories were added. One unit manager mentioned an ongoing problem that impacted the usability of the submitted incident reports. In some incidents involving equipment, reports did not specify the situation or manner in which equipment was malfunctioning, making it impossible to know what diagnostics or repairs are needed if the problem is not obvious upon seeing the equipment. The participant attributed this to a belief among the staff of the unit that submitted the report that the equipment was not actually broken but had just been used incorrectly. Finally, the analyst who took part in the unstructured interview identified one issue with the usability of the aggregate interface: selecting the follow-ups made by unit managers does not consistently include those follow-ups in the generated aggregate reports. It was not clear whether this issue impeded analysis or was merely irritating.
Keep others informed about what's going on. Some of the things you might want to let others know about include upcoming usability tests, highlights of recent usability tests, new lessons learned, and pointers to relevant usability findings and news from the outside world. The appropriate mechanism for conveying this kind of information will largely depend on the culture of your company. Printed newsletters have fallen out of favor in many companies because of the perceived or actual cost. They've largely been replaced by email newsletters and postings on the intranet. At some companies (including ours), the intranet homepage can be personalized by each employee by adding or deleting "bricklets" of information. For example, we do a weekly update of a "Usability News" bricklet that over 1,500 employees have chosen to put on their homepage.
The data from the usability tests supported the hypothesis and revealed areas of confusion when people navigated the interface of each website to find information about the local chapter of national non-profit organizations. This section examines a few hurdles that multiple participants faced during the usability tests. First, all 15 participants meandered through at least one of the websites they tested before identifying the appropriate path to the desired information or content. Six of the 15 participants (40%) voiced annoyance about intrusive requests for personal information. Another six (40%) commented on the different map visualizations. Finally, 4 of the 15 participants (26.7%) expressed irritation at the frequent requests for monetary donations on the sites.
In this paper we describe a set of techniques we found suitable for building multi-modal search applications for automotive environments. As these applications often search across different topical domains, such as maps, weather, or Wikipedia, we discuss the problem of switching focus between different domains. We also propose techniques useful for minimizing the response time of the search system in a mobile environment. We evaluate some of the proposed techniques by means of usability tests with 10 novice test subjects who drove a simulated lane change test on a driving simulator. We report results describing the induced driving distraction and user acceptance.
Software verification and validation is one of the most difficult activities related to software development; it can represent more than 60% of total costs, particularly when agile methodologies are embraced. Usability tests are critical software-testing activities because they are strongly influenced by participants' emotions. Moreover, the financial cost of usability tests is proportional to the number of participants involved. The software industry thus spends billions of dollars on software testing annually, yet failures can still easily be found, perhaps because the testing process is incorrect or because participants' usability reports are not validated before statistical analysis. In other words, outliers are neither detected nor removed.
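One common way to validate such reports before statistical analysis is to screen measurements for outliers. The sketch below applies Tukey's inter-quartile-range fences, a standard screening rule; the task-completion times are hypothetical, and the source does not specify which detection method it has in mind:

```python
def iqr_outliers(values, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's fences)."""
    xs = sorted(values)
    n = len(xs)

    def quantile(q):
        # Linear interpolation between the closest ranks.
        pos = q * (n - 1)
        lo, hi = int(pos), min(int(pos) + 1, n - 1)
        frac = pos - lo
        return xs[lo] * (1 - frac) + xs[hi] * frac

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < low or v > high]

# Hypothetical task-completion times (seconds) from ten test participants;
# one participant was interrupted mid-task.
times = [42, 38, 45, 40, 44, 41, 39, 180, 43, 37]
print(iqr_outliers(times))  # [180]
```

Dropping (or at least inspecting) such extreme observations before computing means or running significance tests prevents a single distracted participant from dominating the analysis.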
Problems with the size of the problems to investigate

The parallel tracks allowed the User Experience Team to iterate designs before they were implemented. However, we still had to deal with the reality of cycles that were only two to four weeks long. We could complete small designs in this timeframe, but complex designs required more than four weeks to finish. We needed to figure out how to do usability investigations for designs spanning more than one Agile cycle. Furthermore, the overall speed of Agile UCD was much faster than when we were doing waterfall UCD. We had to move much more quickly toward design solutions with fewer usability tests within a release.

Design chunking: Breaking designs apart into
and satisfaction.” The five factors discussed in the definition above are based on Nielsen's (1993) five attributes. Learnability refers to the ease with which users learn the system so that they can quickly begin working on their tasks. Efficiency looks at how productive a user can be once the software has been learned. The memorability attribute refers to the user's ability to recall how to use the system even after a period of time has elapsed; for example, an individual who goes on vacation for three weeks should be able to remember immediately upon return how to use the system. Usability testing also examines the number of errors that users of the system make and, once an error is made, whether the user can easily recover from it. The last attribute tested is the level of satisfaction users derive from interacting with the system; satisfaction primarily concerns how pleasant the system is to use.
With the trends of economic globalization and service localization, employees are distributed across different regions. Old enterprise information systems built for a closed environment can no longer support all the business of an enterprise, nor can they meet the need for information sharing between upstream and downstream enterprises and partners in the supply chain. The new business model requires companies to have information systems with distributed operation, remote access, and other such characteristics. VPNs (virtual private networks) are costly and inflexible, whereas a Web-services-based information system can achieve low-cost, real-time collection, processing, and sharing of distributed information, making it the ideal model for an enterprise information system. However, there is currently a large usability gap between Web services and old desktop applications. This paper combines the usage patterns and business needs of enterprise information systems with the technical characteristics of Web services, and proposes usability requirements for enterprise information systems based on Web services from the different viewpoints of internal users, external customers, and strategic partners.
After compiling a list of HTML elements, we also found elements that are contextual rather than elementary. For example, a page is treated as a document: a page is composed of several components, yet the page itself has events that may be triggered by a basic element within it, and the context of the page matters most while the page is changing state. Events such as onbeforeunload, onload, onpageshow, onpagehide, onresize, onscroll, and onunload fire when the page changes its internal state. These kinds of states become more crucial when we consider the context of usability. During design, the orientation of elements is sometimes subject to the page and display size. Page preference is also an attribute of the page, but it contains contextual data or user settings, which we assume play a great role in usability. Components such as the navigation menu use hierarchy and categorization to present content to the user, which is directly related to usability. Presenting a hierarchy of related data in a usable way has always been challenging, and the author of  has worked on the same scope in the different context of file systems. Navigation menu categorization has also been a considerable concern in navigation design, as stated in . The data grid is a complex presentation of data, normally offering a view of categorized data with sorting, filtering, and paging options. Another context is notification, which normally refers to communicating to users the response to an event, action, or processing request initiated over a period of time.
The Google search engine has become predominant among undergraduate students because of its friendly user interface, its free-text searching of the content of public web pages, and its superior coverage and accessibility. Library systems, on the other hand, are superior in the quality of their results, while accuracy is similar in both systems. The usability of general and scholarly electronic databases in university libraries is crucial in establishing whether the databases have the potential to satisfy students' research and educational needs. General and scholarly databases are known for good subject coverage and unique features, and because the systems are accessed through the internet, improved information sharing among undergraduate students is achieved (Fialkoff, 2005; Brophy & Bawden, 2005). However, for effective usability of the databases, university librarians should intensify their awareness campaigns concerning the availability of general and scholarly electronic databases; e-mail alert systems, workshops, and seminars should be adopted; and awards should be considered, as a form of promotion, for those who use general and scholarly electronic databases for their information needs (Naqvi, 2012).
1 INTRODUCTION

In the last few years, rapid growth in mobile subscriptions has led to a high demand for mobile applications (referred to as mobile apps). Around 7.9 billion subscriptions were reported in 2018, and 8.9 billion are predicted by 2023 . This growth confronts software engineers with the challenge of developing highly usable applications to improve their attractiveness and competitiveness in the new market. In this context, several research works have been devoted to evaluating the usability of mobile applications , . The evaluation is usually performed once the system has been implemented, using traditional methods such as laboratory experiments and field studies. These methods involve activities that require a large amount of resources (e.g. participant users, recording systems, a usability lab, etc.). Besides, a lot of rework is usually required to go back to the design and make changes, which is not always trivial considering the cost and complexity involved. Recently, a new paradigm in the software engineering domain was adopted and accepted as a promising technique to alleviate the cost and complexity of changes: Model-Driven Engineering (MDE). In such an approach, interest is focused on conceptual models that represent the system abstractly. These models undergo a series of transformations to obtain the final application, in particular the user interface. This transformation process establishes an intrinsic traceability mechanism between the conceptual models and the final application; based on this mechanism, changes made in the conceptual models are reflected in the final application. Some research works have demonstrated that analyzing these models to improve their usability is likely to preserve that usability in the final application, at least to some extent , , and . The objects of these studies are traditional desktop and web applications. However, mobile device features (e.g.
small screen size, data entry methods, limited capacity, and processing power) introduce new challenges that must be considered by mobile application engineers. The most important one is, without a doubt, to introduce a usability evaluation method that suits mobile applications and considers their features. The present paper addresses this issue and proposes to incorporate usability engineering into a mobile application development process that follows MDE principles. The aim is to evaluate the usability of mobile applications early in an MDE development process (from the
The objective of this paper is to identify all the major factors of usability for CBSS and to integrate them into a proposed usability model suitable for CBSS. Usability is a quality attribute that is observed at run time , which indicates that usability is a real-world issue. This makes soft computing techniques ideal for estimating the usability of CBSS, as soft computing deals with many uncertainties . Different soft computing techniques have been used by various researchers for different purposes [5, 6, 7]. In this paper, the proposed model is used to measure usability with two different fuzzy techniques, the centroid and bisector methods, in MATLAB. Ten different input values are taken to measure usability and to identify which technique (centroid or bisector) is more stable. Experimental results are also validated using the Center of Gravity (COG) and Mean-Max methods. The main contribution of this paper is to enable developers to evaluate the usability of CBSS, so that any usability flaw in the system can be removed.
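The two defuzzification rules the paper compares are easy to state on a discretized fuzzy output: the centroid is the membership-weighted average of the output axis, and the bisector is the point that splits the area under the membership function in half. The sketch below illustrates both on a hypothetical membership function for a "usability" output variable (not the paper's actual model or MATLAB code):

```python
def centroid(xs, mu):
    """Centroid (center-of-gravity) defuzzification: sum(x*mu) / sum(mu)."""
    return sum(x * m for x, m in zip(xs, mu)) / sum(mu)

def bisector(xs, mu):
    """Bisector defuzzification: the x at which the cumulative membership
    first reaches half of the total area under the curve."""
    total = sum(mu)
    running = 0.0
    for x, m in zip(xs, mu):
        running += m
        if running >= total / 2:
            return x
    return xs[-1]

# Hypothetical discretized membership function for a 'usability' output
# variable on a 0-10 scale, peaking at 5.
xs = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
mu = [0.0, 0.1, 0.3, 0.6, 0.9, 1.0, 0.9, 0.6, 0.3, 0.1, 0.0]

print(centroid(xs, mu))   # ≈ 5.0 (the membership function is symmetric about 5)
print(bisector(xs, mu))
```

For a symmetric output set both methods coincide; they diverge on skewed membership functions, which is why comparing their stability over different inputs, as the paper does, is informative.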
factors with the largest impact on usability”. Regarding this notion of usability, it is useful to note the recent survey of usability professionals by Hertzum and Clemmensen (2012). Their survey reveals that, in general, usability professionals' notion of usability is mainly goal-related and focused on the individual level rather than on the organizational and environmental levels. This finding sheds further light on usability as it is understood by usability professionals. Undoubtedly, usability is well accepted by software product designers as vital for the commercial success of a software product. Dix et al. (1998) also identify a set of principles to support usability, grouped into three categories: learnability, flexibility, and robustness; in doing so, these writers manifest a rather objective, engineering view of usability.