An Exploration of Multi-touch Interaction Techniques

example, the tone curve consists of four parameters that can be adjusted: highlights, lights, darks and shadows. The value of each parameter feeds into a complex set of mathematical calculations that alter the pixel color values. Even though photographers may not fully comprehend the details of the implementation, they understand the resultant visual change. As their expertise with the software grows, so does their knowledge of the relationships among the groups of parameters. However, this does not imply that, given a photograph, an expert photographer can uniquely identify the desired parameter values without interaction. There is no correct answer to these operations. The user interface exposes a set of knobs that feed into complicated algorithms that ultimately change pixels. Details are abstracted away from the user to simplify the experience, while delivering as much power from the algorithms as possible. Providing more knobs would overwhelm the user, whereas providing fewer might take away control. The software attempts to find a balance between the two.
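The knobs-into-algorithm relationship can be sketched as a simple per-pixel mapping. The four-band weighting below is purely illustrative and not the actual curve model of any editing package; the function name and weight shapes are assumptions for the sake of the example.

```python
def adjust_tone(value, highlights=0.0, lights=0.0, darks=0.0, shadows=0.0):
    """Adjust a normalized pixel value (0.0-1.0) with four tone-curve knobs.

    Each knob shifts pixels in its luminance band; the triangular weights
    below are illustrative, not any real editor's curve model.
    """
    # Triangular weights centred on each band's region of influence.
    w_shadows = max(0.0, 1.0 - value * 4.0)             # strongest near 0.0
    w_darks = max(0.0, 1.0 - abs(value - 0.25) * 4.0)   # strongest near 0.25
    w_lights = max(0.0, 1.0 - abs(value - 0.75) * 4.0)  # strongest near 0.75
    w_highlights = max(0.0, 1.0 - (1.0 - value) * 4.0)  # strongest near 1.0
    adjusted = (value
                + 0.25 * (shadows * w_shadows + darks * w_darks
                          + lights * w_lights + highlights * w_highlights))
    # Clamp back into the displayable range.
    return min(1.0, max(0.0, adjusted))
```

Raising the `shadows` knob lifts only the darkest pixels while leaving the highlights untouched, which is exactly the kind of abstracted, many-knobs-few-pixels relationship the passage describes.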

Investigation of the use of Multi-Touch Gestures in Music Interaction


of exploration of musical structures centred on traditional harmonic and rhythmic theory, timbre is explored instead. Granular synthesis originated with two publications by Dennis Gabor, “Theory of Communication” and “Acoustic Quanta and the Theory of Hearing” (Gabor, 1944 & 1947). In these papers, Gabor proposes that any sound can be described by a granular, or quantum, representation. It would therefore be possible to synthesize both sampled sounds and digital waveforms in terms of granular properties. Each sample is divided into small ‘grains of sound’ that can be manipulated in real-time. As the threshold of human pitch and amplitude recognition has been estimated at roughly 50 milliseconds, grain durations are generally between 10 and 60 milliseconds (Lee, 2000). A pictorial representation of a grain of sound is shown in Figure 4.7.
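The grain-based resynthesis described above can be sketched in a few lines: chop a sample buffer into short windowed grains and overlap-add them at jittered source positions. This is a minimal offline sketch, not a real-time engine; grain length and density values follow the 10-60 ms range cited in the text, and the parameter names are assumptions.

```python
import math
import random

def granulate(samples, sr=44100, grain_ms=30, density=2.0, jitter_ms=10, seed=0):
    """Naive granular synthesis: chop `samples` into short Hann-windowed
    grains and overlap-add them at randomly jittered source positions.

    A minimal sketch; real granular engines add pitch shifting, per-grain
    envelopes and real-time parameter control.
    """
    rng = random.Random(seed)
    grain_len = int(sr * grain_ms / 1000)        # 30 ms grain, in samples
    hop = int(grain_len / density)               # overlap between grains
    # Hann window smooths each grain's onset and offset to avoid clicks.
    window = [0.5 - 0.5 * math.cos(2 * math.pi * n / (grain_len - 1))
              for n in range(grain_len)]
    out = [0.0] * len(samples)
    pos = 0
    while pos + grain_len < len(samples):
        # Pick the grain source near the output position, with some jitter.
        max_j = int(sr * jitter_ms / 1000)
        src = min(max(pos + rng.randint(-max_j, max_j), 0),
                  len(samples) - grain_len)
        for n in range(grain_len):
            out[pos + n] += samples[src + n] * window[n]
        pos += hop
    return out
```

Varying `grain_ms`, `density` and `jitter_ms` in real time is what gives granular instruments their characteristic timbral control.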

Designing multi-touch tabletop interaction techniques to support co-located Group Information Management


An interaction technique was defined as the interaction between a user and the system to complete a task. The task could be as simple as clicking Back in a web browser. A generic interaction technique was described as a gesture invoked on an object to support a task. A list of standard gestures was identified that users can employ to accomplish tasks efficiently. These gestures, including resize, tap, flick and rotate, were implemented in the three systems discussed in this chapter. It was identified that compound gestures, which combine simple gestures to perform a task, could be used to support certain GIM tasks. Three existing multi-touch systems were reviewed to gain insight into the advantages and limitations of each. Results showed that the multi-touch tabletop systems supported collaboration and allowed effective group work to be conducted. A limitation was the on-screen keyboard used for data input: it provided no tactile feedback to notify the user that a button had been pressed.
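The standard gesture set mentioned above (tap, flick, resize, rotate) can be distinguished with a very small amount of geometry. The sketch below classifies a gesture from its start and end touch points only; the thresholds are assumptions for illustration, and a production recognizer would track the full movement path over time.

```python
import math

def classify_gesture(start_points, end_points, duration_s):
    """Classify a simple multi-touch gesture from start/end touch points.

    Points are (x, y) tuples in pixels, one per finger. Thresholds are
    illustrative; a real recognizer samples the whole touch trajectory.
    """
    if len(start_points) == 1:
        (x0, y0), (x1, y1) = start_points[0], end_points[0]
        dist = math.hypot(x1 - x0, y1 - y0)
        if dist < 10:                       # barely moved: a tap
            return "tap"
        return "flick" if dist / duration_s > 500 else "drag"
    if len(start_points) == 2:
        def span(pts):   # distance between the two fingers
            return math.hypot(pts[0][0] - pts[1][0], pts[0][1] - pts[1][1])
        def angle(pts):  # orientation of the line joining the fingers
            return math.atan2(pts[1][1] - pts[0][1], pts[1][0] - pts[0][0])
        scale = span(end_points) / span(start_points)
        twist = abs(angle(end_points) - angle(start_points))
        if abs(scale - 1.0) > 0.2:          # fingers spread or pinched
            return "resize"
        if twist > math.radians(15):        # finger pair turned
            return "rotate"
    return "unknown"
```

A compound gesture, as described in the text, would then be a short sequence of these primitives (e.g. tap then flick) dispatched to one higher-level command.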

Comparing Tangible and Multi-touch Interaction for Interactive Data Visualization Tasks


Other research has explored visualization systems where tangible objects did not only act as controls, but also physically visualized the data, or where data were projected virtually onto the objects. Tangible views used lightweight objects associated with a tabletop to explore visualizations by moving the objects on or above the tabletop’s surface [21]. Tangible models of biological molecules enhanced with augmented reality have been developed and found to help in improving and communicating understanding [8]. Emerge is a physical dynamic bar chart developed to support analysis techniques. Initial insights from a user study evaluating 14 analysis-based interaction techniques found interacting with the physical system to be intuitive and informative [22].

From Dance to Touch: Movement Qualities for Interaction Design


Introduction While large-scale multi-touch displays are becoming increasingly popular, the prevailing approach to designing their interfaces is mainly driven by HCI: touch is taken as a pointing input (mouse metaphor) and research focuses on new interaction techniques in terms of performance and usability for a given task. Moreover, while some works try to extend conventional mouse metaphors to large displays [6] [20], they usually try to adapt human gestures to the interface (technology-driven design) instead of the opposite.


Abstract of Principles and Applications of Multi-touch Interaction by Tomer Moscovich, Ph.D., Brown University, May 2007.


Bimanual interaction methods can be categorized as techniques where the hands are used symmetrically, such as steering a bicycle, and techniques where they are used asymmetrically, such as peeling a potato. Guiard puts forward an influential model of cooperative, asymmetric bimanual interaction [53] which attempts to explain the advantage of manual specialization. According to the model, the hands are coupled through the arms and body to form a kinematic chain, with the non-dominant hand as the base link. The model predicts many properties observed in asymmetric bimanual interaction. The first is that the non-dominant hand serves to set a dynamic reference frame for the dominant hand’s operation. Handwriting, where the non-dominant hand keeps the paper in the dominant hand’s most effective work area, is a good example of this. The second is a scale difference in motion: the dominant hand acts on a finer scale, both spatially and temporally, than the non-dominant hand. The third is non-dominant hand precedence in action, as dominant hand action is not sensible before its reference frame is set. Hinckley et al. [59] confirm the reference-frame roles of the hands in cooperative action. The model is widely used as a guideline for designing bimanual interaction (for example, Kurtenbach et al.’s T3 system [70]), and also explains why the benefits of two-handed interaction do not extend to tasks that fail to meet Guiard’s description. For example, a study by Dillon et al. [36] found only a nominal benefit in using two mice for distinct tasks.

Multi-contact tactile exploration and interaction with unknown objects


tactile data gives more precise shape information. Continuous exploration Finally, we are particularly interested in continuous exploration with several fingers, as it is better suited to reconstructing the shape of an object thanks to the flexibility of the multiple degrees of freedom available. Humans do not release and re-grasp an object several times in order to recognize it by touch; rather, they follow its surface with their fingers. Indeed, iterative touches take more time, and the object’s position may be lost when contact is broken. Early work focused on the reconstruction of parametric models of objects: already in 1990, Allen, inspired by the exploratory procedures of Lederman and Klatzky (1987), explored objects modeled by superquadrics with a contour-following method that used the model’s parameters to compute a trajectory (Allen and Michelman, 1990; Stansfield, 1991). In Heidemann and Schopfer (2004), a tactile sensor array is moved around the surface of a convex-shaped object while passively rotating to follow the slope. The time series of 2D pressure profiles are fed to several neural networks for classification, after local PCA for feature extraction and dimensionality reduction. Vision can also be coupled with tactile information to reduce the data to lower dimensionality using a multimodal dimensionality reduction technique (Kroemer et al., 2011) and help the classification of textures. Time series of tactile data are very high-dimensional; the dimensionality is decreased by using synchronized visual features with a multi-modal mapping method, finding lower-dimensional representations where the classification performance is improved. In another application, continuous probing is used to identify surfaces by mobile robots (Giguere and Dudek, 2011). A probe with an accelerometer attached near its tip, in contact with the ground, collects data describing the surface on which the robot is moving.
Classification is done by analyzing selected features of the data from fixed time windows. In Maekawa et al. (1995) and especially Okamura et al. (1997), tactile sensor arrays on the fingertips and palm are used to gather data while rolling and sliding fingers on an object during haptic exploration. During the exploration, some fingers are responsible for grasping while the others explore the surface by rolling or sliding on it. Since the object’s position and orientation are tracked by assuming pure rolling during the phase when the object is being moved, features detected by the tactile sensors can be added to a model of the object.
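The PCA-then-classify pipeline described for tactile pressure profiles can be sketched as follows. This is a minimal illustration of the idea only: the cited work uses local PCA followed by neural networks, whereas this sketch uses global PCA and a nearest-centroid stand-in for the classifier.

```python
import numpy as np

def pca_features(X, k=2):
    """Project pressure-profile vectors X (n_samples x n_dims) onto the
    top-k principal components (global PCA; the cited work uses local PCA).
    """
    Xc = X - X.mean(axis=0)
    # Eigendecomposition of the covariance matrix; columns are components.
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]  # k largest eigenvalues
    return Xc @ top

def nearest_centroid_classify(train_feats, train_labels, test_feat):
    """Stand-in for the neural-network classifier: assign the label of the
    closest class centroid in the reduced feature space."""
    labels = sorted(set(train_labels))
    centroids = {l: train_feats[np.array(train_labels) == l].mean(axis=0)
                 for l in labels}
    return min(labels, key=lambda l: np.linalg.norm(test_feat - centroids[l]))
```

The dimensionality reduction step is what makes the very high-dimensional tactile time series tractable for the downstream classifier, as the passage notes.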

Multi Touch: An Optical Approach (Comparison of various techniques)


Abstract- This paper explains how multi-touch technology presents a wide range of new opportunities for interaction with graphical user interfaces, allowing expressive gestural control and fluid multi-user collaboration through relatively simple and inexpensive hardware and software configurations. As the developers of a low-cost multi-touch table, we draw on our experience to provide the practical knowledge needed to build and deploy applications on a multi-touch surface. This includes the hardware and software requirements, a comparison of various optical techniques and the implementation of the multi-touch surface.

Designing Intuitive Multi-touch 3D Navigation Techniques


6 Conclusion and future work In this paper, we proposed an original methodology based on user-centered practices and optical flow analysis to address the problem of designing intuitive multi-touch navigation techniques for 3D environments. User-centered practices allow the navigation commands to be defined from the user's perspective, while the optical flow analysis provides guidelines for defining intuitive multi-touch gestures to perform these commands. We instantiated this methodology for tasks articulated around the review of interior designs, which led to the design of a new interaction technique, Move&Look. The comparison of this technique to state-of-the-art techniques in a controlled experiment showed its overall superiority and revealed usability problems with the others. These results provide a first validation of the proposed design methodology. The methodology should be applied in other navigation contexts in order to further assess its effectiveness. The robustness of the proposed RST classifier should be formally evaluated, and it can certainly be improved. Even if participants did not complain about it, we observed them flattening their rotation gestures for the circle-around command, probably because they unconsciously followed the corresponding optical flow. Our classifier could be modified to better take this oval shape into account, for example.

The Haptic Touch toolkit : enabling exploration of haptic interactions


The second category of haptic devices is multi-dimensional (usually > 3 DOF) closed-loop devices such as Immersion’s PHANToM [15] or Novint’s Falcon [18]. Programming these interfaces can require significant expertise [16], and this difficulty has been identified as an obstacle to the popularization of this type of interface [21,23]. Multiple APIs exist that enable higher-level programming of these haptic devices. Many are low-level and provide access to specific haptic devices (e.g., Novint’s HDAL [19]), although there are also ongoing efforts to provide generalized haptic libraries that work on several devices (e.g., H3D [24], ProtoHaptic [6], and OpenHaptics [9]; also see the survey in [11]). Some introduced drag-and-drop interfaces for allowing non-programmers to customize haptic systems (e.g., [23]). In general, these APIs assume a model of interaction based on the simulation of physical objects and surfaces, often directly linked to 3D virtual reality models.

Comparing Tangible and Multi-touch Interaction for Interactive Data Visualization Tasks


Epistemic actions are used to change the physical world and simplify tasks, rather than to move towards a goal [10]. Findings from previous studies suggest that tangible interaction and TUIs use epistemic actions and thus encourage more effective and efficient motor-cognitive strategies to solve tasks [1, 7]. We hypothesized (H2) that the adoption of epistemic actions would result in more efficient exploration during data visualization when interaction took place with tangible objects rather than multi-touch. The results supported H2, as participants explored combinations more efficiently. Repetitions and unnecessary explorations were reduced and more effective strategies were adopted. This in turn reduced the time spent on a task for tangible, compared with multi-touch, interaction. As video recordings were not used, the full Artifact, Tool and Body (ATB) framework [6] could not be adopted for data analysis. However, it proved helpful in identifying the types of epistemic actions that participants adopted, based on a combination of screenshots of participants’ results and the investigator’s observations. While epistemic actions were noted with the tabletop TUI for the majority of participants, this was not the case for the multi-touch interface. The rest of this section discusses the epistemic actions performed with the tabletop TUI.

Using multi-touch interaction techniques to support Collaborative Information Retrieval


6.3.2 Practical The primary practical contribution of this research was the development of Co-IMBRA, a functional CIR prototype that can be used by a team to collaboratively search the Internet or browse document collections. The user evaluation resulted in very positive feedback, both in the questionnaires and in the comments made by the participants. Several of the participants said they had a lot of fun using Co-IMBRA and some expressed a desire to use the system again in the future. The prototype itself is therefore a practical contribution. A reusable library of user controls, the InformationControlLibrary, was also developed. The InformationControlLibrary extends user controls to support a variety of information types such as HTML documents, images, text and multimedia. Utility controls for searching and logging are also available in the library, for searching the Internet and logging actions taken during the use of a collaborative system. The library is written in C# and is easily extensible to support other information types by extending the base InformationControl class. Developers may wish to use, adapt or extend this library in future multi-touch CIR systems, representing a secondary practical contribution.
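The extension mechanism described above (a base control class, subclassed once per information type) follows a common pattern that can be sketched as below. The actual library is written in C# and its member names are not given in the text, so the class structure and method names here are assumptions; only `InformationControl` and the supported information types come from the source.

```python
class InformationControl:
    """Base class for displayable information items: a sketch of the
    extension pattern described above, not the actual C# library."""
    def __init__(self, title):
        self.title = title

    def render(self):
        raise NotImplementedError  # each information type renders itself

class TextControl(InformationControl):
    def __init__(self, title, body):
        super().__init__(title)
        self.body = body

    def render(self):
        return f"{self.title}: {self.body}"

class ImageControl(InformationControl):
    def __init__(self, title, path):
        super().__init__(title)
        self.path = path

    def render(self):
        return f"[image {self.title}: {self.path}]"

# Supporting a new information type only requires one new subclass,
# which is what makes the library "easily extensible".
class AudioControl(InformationControl):
    def __init__(self, title, url):
        super().__init__(title)
        self.url = url

    def render(self):
        return f"[audio {self.title}: {self.url}]"
```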

Multi-Touch Collaboration for Software Exploration


Within the CocoViz project, we build on explorative software visualizations such as polymetric views [11] and aim for a better understanding of, and interaction with, the software through cognitive glyphs [1]. We worked on reducing the effort needed to do software visualization with automated comprehension tasks [4] and offered concepts to find relevant aspects in a dataset, such as audio [3, 5]. Until recently we considered only individual engineers exploring the software. This assumption is reasonable for situations where the engineer is not familiar with the software and wants to learn about the general components in the system. In a maintenance context, where we want to keep track of changes or find out about the impact of certain changes, we end up tagging the found issues in the visualization and occasionally contacting the engineer owning that code over and over again. The engineers involved with a code fragment know about earlier design decisions, current bugs, or a planned change to their code.

A survey on haptic interaction techniques in the exploration of large and scientific data sets


As a result of recent advances in computer simulation, in data storage and in measuring processes, scientific and industrial applications are now increasingly generating large multi-dimensional matrices representing many scientific properties of the studied phenomenon. In these scientific and abstract data sets, since most of the time the target is not well defined or requires knowledge that cannot be formalized, automatic exploration techniques are not sufficient for extracting meaningful information. Thus in medical data segmentation, a field where many automatic segmentation methods can be counted, Vidholm and Agmund [37] note that the problem remains unsolved since the methods are not general enough. However, human-centred data mining techniques prove useful for extracting meaningful patterns from these resources [29]. The capabilities of Virtual Reality (VR) immersive technologies offer an adequate environment for these approaches. In such systems, multi-sensory feedback (visual, haptic, auditory…) and multimodal input (speech, gesture, tracking…) may be exploited in order to provide the most intuitive interaction [9] [28], and thus allow the user to fully take advantage of all his perceptual abilities in the

A Collaborative Multi-Touch UML Design Tool


We propose a system to overcome the disadvantages by creating a collaborative multi-touch design tool for UML diagrams. Multi-touch is a novel interaction technique that enables the manipulation of graphical entities with several fingers at the same time. This technique makes direct interaction and collaboration of multiple users much easier compared to WIMP interfaces (window, icon, menu, pointing device). The main advantage over traditional UML modeling tools is the collaborative approach: Using large multi-touch screens, multiple users can work with complex diagrams simultaneously. Even if users are not familiar with UML design tools, they can use intuitive gestures to draw their part of the diagram. To facilitate the design process, we have added advanced techniques, e.g., the generation of Java code skeletons from the UML diagram or the automatic layout of the diagram using graph drawing algorithms. The key advantages of such a system, compared to a classical whiteboard, are evident: The tool features a virtually unlimited canvas on which the items are drawn, it allows classes to be moved and modified without the need to re-create the whole class, and it supports panning to move to a new area of the canvas to continue drawing. When moving classes around, the relationship edges are automatically adjusted. In the end, the generated diagram can be exported in digital form to allow further distribution and automatic processing.
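The skeleton-generation feature mentioned above amounts to a template walk over the diagram model. The sketch below turns a minimal class description into a Java skeleton; the tool's actual output format is not specified in the text, so this layout, the function name and the tuple-based model are all assumptions.

```python
def java_skeleton(class_name, fields, methods, superclass=None):
    """Generate a Java code skeleton from a minimal UML class description.

    fields:  list of (type, name) tuples
    methods: list of (return_type, name) tuples
    Illustrative only; real generators handle visibility, parameters,
    interfaces and associations.
    """
    extends = f" extends {superclass}" if superclass else ""
    lines = [f"public class {class_name}{extends} {{"]
    for ftype, fname in fields:
        lines.append(f"    private {ftype} {fname};")
    for rtype, mname in methods:
        # Non-void methods get a placeholder return so the skeleton compiles.
        body = "" if rtype == "void" else " return null;"
        lines.append(f"    public {rtype} {mname}() {{{body} }}")
    lines.append("}")
    return "\n".join(lines)
```

Because the generator reads the same model that the layout algorithm draws, the exported diagram and the code skeletons stay consistent by construction.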

Multi user interaction with molecular visualizations on a multi touch table


Wu and Balakrishnan [48] developed RoomPlanner, an interior planning tool operated by a wide range of gestures. This is one of the first studies to include hand gestures, not just interaction with fingers. In 2006, a large part of these gestures was included by Wu et al. [49] in a study about gesture registration, relaxation and reuse. These three techniques are studied to provide an easier, more relaxed and uniform way for users to learn and perform gestures. Wobbrock et al. [47] performed a study to find out which gestures users make when confronted with a task, such as copying or deleting an item. They not only collected a large number of gestures, but also classified them to find out which kinds of gestures are performed most often. This resulted in a set of gestures that can be used to control an operating system, with actions such as move, select, drag, minimize/maximize or copy/paste. Hancock, Carpendale and Cockburn [10] studied gesture interaction with shallow-depth 3D objects. Shallow-depth means that there is no depth in the environment apart from the depth of the 3D object itself. They studied gestures to control a small cube with one, two and three fingers. This is one of the few studies to examine 2D gesture interaction with 3D objects. In his PhD thesis [27], Tomer Moscovich also proposes a scheme for interaction with 3D objects, using three fingers to control rotation in all three dimensions. User studies have not been reported, however.

My AppCorner : personal interaction areas on public multi-touch displays in a smart campus environment


In this thesis, we introduce a new way of interacting with public multi-touch displays named My AppCorner, with the goals of evaluating the efficiency and practicality of the system and of giving users a sense of ownership. The system gives users the option to create their own space on the screen, move it around and decide what application suits them best. The system takes advantage of two main interaction techniques: “drag and drop” and “hold”. User-centered design was used to design and implement the interface in order to meet the needs of a smart university public space. System testing was conducted with the main target users being university students, staff and visitors. The main applications used were related to a university setting, including a campus map, a bus timetable and a university newsfeed. Results showed that participants liked the idea of creating everything dynamically, having their own window and interacting with other users. Overall, the system is an appropriate solution for multi-user interaction in a smart campus setting.

A deeper look into multi-touch gaming


The first survey asked which game genre is most popular for multi-touch technology; 112 invitations were sent and 30 participants finished the survey. Of these, 11 were pro-gamers, 3 were developers and 13 were casual-gamers; 5 were female, 2 of whom were gamers. The survey has 11 questions; the first asks the users their name or handle (their online nickname). The reason behind this question is to have the users take the survey more seriously and put more thought into their answers. The survey continues with questions about the users' experience with multi-touch technology, and how much and what type of games they play. These questions are important in interpreting the individual answers, as a user with great insight into multi-touch technology and dedication to games has a better understanding of how a game can be designed. A response from a user who does not like multi-touch or games can bring new ideas or spot innovative elements that a hardcore gamer would overlook, and their responses can be used in fine-tuning how a game can be played. The 4th question asks what type of games the user mostly plays in their free time. It is a multiple-choice question with predefined genres popular on PC and consoles.

Multi-touch interaction principles for collaborative real-time music activities: towards a pattern language


A significant number of multi-touch applications for real-time music currently exist, thanks to three main factors: firstly, the current popularity of personal and shared multi-touch devices (e.g. smartphones, tablets, interactive tabletops, among others); secondly, the existence of facilities to develop creative applications on them; and lastly, consumer interest in these creative products. Some of these musical applications are designed for collaborative activities, although we generally find more support for collaboration in software designed for other fields—e.g. multiplayer games or collaborative productivity tools—where a variety of appropriate interaction principles are used. We propose that a study of the most prominent of these principles could be used to provide better support for collaboration by improving the interaction design of real-time

Multi-objective aerodynamic design exploration using multi-fidelity methods and pareto set identification techniques


An efficient methodology for aerodynamic design exploration has been presented. The Pareto front, which represents the best possible trade-offs between conflicting objectives, is identified in a point-by-point manner using fast multi-fidelity aerodynamic models. The proposed approach has a number of features that distinguish it from other surrogate-assisted aerodynamic design exploration methods available in the literature: (1) multi-fidelity models are utilized to quickly locate one point on the Pareto front, (2) the Pareto front is determined point-by-point in the vicinity of the initial point, (3) the algorithm does not rely on population-based metaheuristics, (4) design space confinement is not required, (5) objective function aggregation is not required, and (6) objective function gradients are not needed. Comprehensive numerical studies involving both analytical test cases and aerodynamic shape optimization problems demonstrate the computational efficiency and robustness of the proposed approach. Furthermore, the algorithm has been shown to have good (here, close to linear) scalability with respect to the dimensionality of the design space. Future work will investigate extensions of the technique to handle more complex problems in terms of a larger number of design variables as well as the degrees of freedom of the simulation model.
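The Pareto front that the method identifies point-by-point is defined by the dominance relation between objective vectors. The brute-force filter below only illustrates that definition (for minimization); it is not the cited multi-fidelity, point-by-point algorithm.

```python
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (minimization):
    no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors.

    Brute force, O(n^2): a point belongs to the front iff no other point
    dominates it. This is what "best possible trade-offs" means above.
    """
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

The cited approach avoids enumerating candidates like this by moving along the front from an initial point, which is why it needs neither population-based metaheuristics nor objective aggregation.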
