Designing for Gesture and Tangible Interaction - Mary Lou Maher


CHAPTER 2

Tangible Interaction Design

2.1 WHAT IS TANGIBLE INTERACTION?

Tangible User Interfaces (TUIs) have emerged as an interface and interaction style that links the digital and physical worlds (Ullmer and Ishii, 2000; Shaer and Hornecker, 2010). An early definition of tangible interaction was introduced by Ishii and Ullmer (1997) as an extension of the idea of graspable user interfaces (UIs): they argued that tangible interaction allows users to grasp and manipulate bits by coupling digital information with physical objects and architectural surfaces.


Figure 2.1: Graspable object. Based on Fitzmaurice (1996, p. 4).

TUIs employ physical objects with a direct correlation to digital objects as an alternative to traditional computer input and output devices for control (e.g., mouse) and display (e.g., screen) (Fishkin, 2004). A person uses their hands to manipulate one or more physical objects via gestures and actions such as pointing, clicking, holding, and grasping. A computer system detects the movement, changes its state, and provides feedback (Petridis et al., 2006). TUIs are designed to build on our experience and skills from interacting with the non-digital world (Ishii and Ullmer, 1997; Shaer and Jacob, 2009). TUIs offer the possibility of natural interfaces that are intuitive and enjoyable to use as well as easy to learn (Shaer, 2008). TUIs have the potential to enhance learning and problem solving by changing the way people interact with and leverage digital information (Shaer and Jacob, 2009). Current research in tangible interaction includes understanding the design and cognitive implications of TUIs, developing new technologies that further bridge the digital and the physical, and guiding TUI design with knowledge gained from empirical studies.

The goal of this chapter is to provide an overview and general framework for the design of tangible interaction, including consideration of the role of gesture and the impact on cognition. We believe that TUIs have an impact on cognition because they provide affordances that encourage and facilitate specific gestures and actions, making some cognitive activities easier. TUIs change the way we interact with digital information via physical affordances that are distinctly different from pointing and keyboard/mouse interaction. This chapter explores physical manipulation as an interaction design space. TUIs trigger various gestures and have the potential to support exploring information through novel forms of interacting and discovering. The chapter presents the concepts and design issues of TUIs through two examples: the Tangible Keyboard and Tangible Models. They exemplify two approaches to integrating TUIs with traditional interaction design: the Tangible Keyboard design examines the metaphor of a keyboard where each key is a tangible object; the Tangible Models design examines physical interaction with 3D objects as proxies for 3D digital models.

2.1.1 TANGIBLE KEYBOARD

Figure 2.2: Pattern Maker application on the Tangible Keyboard. Tangible Keyboard video available at: https://drive.google.com/file/d/0B4S5ptYjjjuGbFEyX2ljUk90LVU/view?usp=sharing.

Tangible Keyboard is a tangible computing platform that adopts a keyboard metaphor in developing tangible devices for touch screen tablets. The tangible interaction design focuses on supporting composition tasks and on the potential for enhancing creative cognition through spatial arrangement. The Tangible Keyboard design provides separate interaction spaces for composition tasks: the whole composition is displayed on the tablet, and the individual pieces of the composition are manipulated on tangible interactive objects. Individual elements are displayed on tangible interactive objects (inspired by Sifteo cubes™), and these smaller displays are manipulated to create a composition on a larger touch display tablet (Merrill et al., 2012). A variety of gestures and actions on the tangible objects serve as the basis for the interaction design of the Tangible Keyboard. The larger display on a tablet provides visual feedback for compositions, and the touch screen allows users to interact with on-screen content. The affordances of the Tangible Keyboard build on the idea of creating keys, similar to the keys on a keyboard, where the symbols on the keys are interactive and the keys can be rearranged to create a variety of creative patterns. Figure 2.2 illustrates the Tangible Keyboard design with the Pattern Maker application.

2.1.2 TANGIBLE MODELS

Figure 2.3: Tangible Models interaction design for CAD modeling.

Tangible Models is a tangible computing platform that combines a touchscreen tabletop system with augmented reality, integrating tangible objects on a horizontal display to support 3D configuration design tasks (Kim and Maher, 2008). This tabletop system provides a physical and digital environment for co-located design collaboration. The tabletop system runs a computer-aided design (CAD) program to display a plan view of a 3D design, with physical augmented reality blocks representing objects and their placement on the plan view. The Tangible Models interaction design uses 3D blocks with markers that reference 3D models in the ARToolKit (https://artoolkit.org/). Using ArchiCAD (http://www.graphisoft.com/archicad/), Tangible Models allows the user to arrange 3D models from a library, such as walls, doors, and furniture. The ArchiCAD library provides pre-designed 3D objects that can be selected, adapted, and placed in the new design. The Tangible Models interaction design comprises selection and rearrangement actions on blocks to explore alternative configuration designs. By rearranging 3D models as physical actions on blocks, the affordances of this UI reduce cognitive load by providing direct manipulability and an intuitive understanding of the spatial relationships among the components of the design. Figure 2.3 illustrates the Tangible Models platform using 3D models of furniture from the ArchiCAD library.

2.2 WHY IS TANGIBLE INTERACTION INTERESTING?

TUIs represent a departure from conventional computing by connecting digital information with graspable objects in the physical world (Fishkin, 2004). Fitzmaurice (1996) defines five core properties as the major differences between tangible interaction devices and mouse/keyboard interaction devices:

1. space-multiplexing of both input and output;

2. concurrent access and manipulation of interface components;

3. strong specific devices;

4. spatially-aware computational devices; and

5. spatial re-configurability of devices.

A hallmark of TUIs is specialized physical/digital devices that provide concurrent access to multiple input devices that can control interface widgets as well as afford physical manipulation and spatial arrangement of digital information and models (Fitzmaurice, 1996; Fitzmaurice and Buxton, 1997; Shaer and Hornecker, 2010). These characteristics affect the way tangible interaction is designed. In addition, tangible interaction is contextual: the design is strongly affected by the context of use. The Tangible Keyboard is designed for composition of elements that do not have a corresponding 3D physical object, such as words, numbers, or 2D shapes. The Tangible Models platform is designed for the composition of elements that have a 3D physical counterpart. We explore these five properties and their characteristics to better understand design principles for TUIs in the context of the Tangible Keyboard and Tangible Models.

2.2.1 SPACE-MULTIPLEXED INPUT AND OUTPUT

Space-multiplexed input and output involves having multiple physical objects, each specific to a function and independently accessible (Ullmer and Ishii, 1997). Time-multiplexed input and output occurs when only one input device is available (for example, the mouse): the user has to repeatedly select and deselect objects and functions (Shaer and Hornecker, 2010). For example, the mouse is used to control different interaction functions such as menu selection, scrolling windows, pointing, and clicking buttons in a time-sequential manner (Jacko, 2012). TUIs are space-multiplexed because they typically provide multiple input devices that are spatially aware or whose location can be sensed by the system. As a result, input and output devices are distributed over space, enabling the user to select a digital object or function by grasping a physical object (Shaer and Hornecker, 2010; Patten and Ishii, 2007).
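The distinction can be sketched in code. The following minimal Python model is an illustration of ours, not part of any TUI toolkit: a time-multiplexed device must be re-bound to each function in turn, while space-multiplexed devices are each permanently dedicated to one function and all remain available.

```python
class TimeMultiplexedInput:
    """One generic device (e.g., a mouse): functions are acquired in sequence."""

    def __init__(self):
        self.active_function = None

    def select(self, function):
        # Acquiring a new function implicitly releases the previous one.
        self.active_function = function


class SpaceMultiplexedInput:
    """One dedicated physical device per function, all accessible at once."""

    def __init__(self):
        self.devices = {}  # device id -> the single function it controls

    def bind(self, device_id, function):
        self.devices[device_id] = function

    def active_functions(self):
        # Every bound device remains persistently accessible in space.
        return set(self.devices.values())
```

In the time-multiplexed case, selecting "point" after "scroll" leaves only "point" active; in the space-multiplexed case, binding three cubes to shape, color, and scale keeps all three functions simultaneously available.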

1) Tangible Keyboard

Tangible Keyboard has space-multiplexed input/output devices, which enable graspable rearrangement of the elements of a composition. This design supports an approach to composition that the traditional keyboard and mouse do not: the user can manually rearrange subsets of a composition and control the content of the subset being manipulated while referring to the whole composition on a larger display. With space-multiplexed input, each function to be controlled has a dedicated transducer, each occupying its own space (Fitzmaurice and Buxton, 1997). For example, in the Pattern Maker application on the Tangible Keyboard, each cube can be used to manipulate a shape, a color, or a scaling function. While the input devices are used to manipulate and input the composition, they also provide a visualization of subsets of the composition that can be repeated or rearranged as input.

2) Tangible Models

Tangible Models also has space-multiplexed input/output devices. The individual input/output blocks are each associated with a 3D digital model that is visible on the vertical display. The 3D models are rearranged on the tabletop in reference to a plan view of the composition, with visual feedback of the 3D scene on the vertical display. These multiple 3D blocks allow direct control of digital objects as space-multiplexed input devices, each specific to a function and independently accessible. The application of Tangible Models to the configuration design of rooms on a floor plan layout allows the user to assign 3D models such as walls and furniture from a library to each block. The user can rearrange the blocks to explore various design configurations by reaching for and moving the block as a handle for the 3D model.

2.2.2 CONCURRENCY

A core property of TUIs is space-multiplexed input and output, which allows for simultaneous but independent, and potentially persistent, selection of objects. TUIs have multiple devices available and interactions that allow for concurrent access and manipulation of interface components (Fitzmaurice, 1996). In a traditional graphical user interface (GUI), only one selection can be active at a time, and making a new selection undoes the prior one. Time-multiplexed input devices have no physical contextual awareness and lack the efficiency of specialized tools. The ability to use a single device for several tasks is a major benefit of the GUI, but because only one person can edit the model at a time, the GUI environment may limit interactivity in collaborative design (Magerkurth and Peter, 2002). TUIs, in contrast, can eliminate many redundant selection actions and make selections easier. In terms of collaborative interaction, the TUI environment enables designers to collaborate more interactively in handling the physical objects by allowing concurrent access with multiple points of control (Maher and Kim, 2005).
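The difference in selection behavior can be made concrete with a small sketch (an illustration under our own naming, not drawn from any toolkit): in the GUI model a new selection replaces the old one, while in the TUI model each grasped object is its own persistent selection, so several collaborators can hold independent selections at once.

```python
class GuiSelection:
    """GUI-style selection: at most one object is selected at a time."""

    def __init__(self):
        self.selected = None

    def select(self, obj_id):
        # A new selection implicitly deselects the prior one.
        self.selected = obj_id


class TuiWorkspace:
    """TUI-style selection: each grasped object is a persistent selection."""

    def __init__(self):
        self.grasped = {}  # object id -> user currently holding it

    def grasp(self, obj_id, user):
        # Selections coexist: grasping one object does not release another.
        self.grasped[obj_id] = user

    def release(self, obj_id):
        self.grasped.pop(obj_id, None)
```

Two designers can each grasp a different block at the same time, giving the workspace two concurrent points of control, whereas the GUI retains only the most recent selection.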

1) Tangible Keyboard

Tangible Keyboard focuses on the user experience during a creative task in which the user manipulates multiple tangible objects to compose and synthesize the elements of a new design. Multiple tangible objects offer greater flexibility, allowing each input device to display a different type of function. In the Pattern Maker application, each input device displays a single shape, color, or scale. Shape cubes can be manipulated independently but can also be combined with color or scale cubes to create new design patterns. Concurrency is achieved through simultaneous access to multiple physical devices, each displaying its own shape, color, or scale.

2) Tangible Models

Tangible Models provides a similar experience, but with each tangible object assuming the geometric properties of a 3D object. The spatial rearrangement of the 3D models is directly correlated with the 3D composition. Concurrency is achieved through simultaneous access to multiple 3D models, each on a separate physical object. A protocol study of designers using Tangible Models, described in Kim and Maher (2008), showed that users were more focused on the spatial and functional relationships among the individual 3D objects than on the properties of each object when compared to an interaction design that was time-multiplexed (keyboard and mouse). With the direct, naïve manipulability of physical objects and rapid visualization, designers in the TUI environment produced more cognitive actions and completed the design tasks faster.

2.2.3 STRONG SPECIFIC DEVICES

TUIs provide strong specific devices for interacting with the system. These offer greater efficiency because the tangible objects are specialized and tailored to a given task, increasing the directness and intuitiveness of interaction (Le Goc et al., 2015; Hornecker, 2005). The design of appropriate physical representations is a very important aspect of tangible interface design (Ullmer and Ishii, 2000). The most common approach to creating strong specific devices is to use existing objects into which position sensors or ID tags are inserted. Alternatively, strong specific devices are achieved with Augmented Reality (AR), where each physical device is associated with a virtual object. The user interacts with a virtual object by manipulating the corresponding physical object (Waldner et al., 2006). Seeing virtual imagery superimposed on physical objects, the user perceives interaction with the digital object. These specialized interactive objects may lack generality and therefore may not be effective for some tasks. This loss of generality may be overcome by the advantages provided by task-specific physical tools (Fitzmaurice, 1996). Tangible interaction with physical objects that have a specialized form and appearance offers affordances normally associated only with the physical object.

1) Tangible Keyboard

In the case of the Tangible Keyboard, the form is constant (cube-like objects with a display) and the appearance (display) is variable. The image on the display is designed to fit the context of the tasks supported by the application. The affordances of these specific devices are those associated with the shape of the object and the content on the display. In the Pattern Maker application, shape cubes are rearranged to form patterns and color cubes are tilted to pour a new color on a shape. These strong specific devices do not have the generality of the mouse for selecting any function, but provide strong feedback on the functions enabled by the application.

2) Tangible Models

With Tangible Models, simple 3D blocks as tangible devices are rearranged on a tabletop system with a vertical display of the 3D scene. Each block is associated with a single 3D model, providing a strong specific device for creating a composition of a scene or spatial design. With static mappings and multiple input objects, 3D blocks as tangible input elements can be expressive and provide affordances specific to the object they represent. The visualization of each 3D block directly indicates its meaning or function while the user is moving the pieces and making a composition.

2.2.4 SPATIALLY AWARE COMPUTATIONAL DEVICES

Spatially aware computational devices and spatial configurations are important concepts in embodied interaction. A physical object in a TUI is typically aware of its spatial position and is registered with a central processing unit. Both position and orientation are critical pieces of information to be sensed (Fitzmaurice, 1996; Valdes et al., 2014). Proximity information is obtained through communication with a central processing unit or through independent sensors on each device. Applications that are more graphic than alphanumeric benefit from having spatially aware input devices, as graphical tasks are inherently spatial in nature (Fitzmaurice, 1996). Spatially aware computational devices allow users to interact with complex information spaces in a physical way by changing the spatial position and orientation of tangible devices.

1) Tangible Keyboard

Tangible Keyboard is built on the hardware/software platform of Sifteo cubes™ and the sensors include adjacency awareness and accelerometers. These sensors allow the cubes to be aware of other cubes and the movements of each cube, such as shaking, tilting, and turning over. By communicating with a central processor, the rearrangement and movements of the cubes can be mapped onto input events related to the composition task. For the Pattern Maker application, spatial awareness of the individual devices allows the user to form compositions and to modify the shape, color, and scale of elements of the composition.
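The mapping from raw sensor readings to gestures such as tilting, shaking, and turning over can be sketched as follows. This is an illustrative classifier, not the Sifteo SDK; the thresholds and axis conventions are assumptions chosen for demonstration.

```python
import math

GRAVITY = 9.81  # m/s^2

def classify_motion(ax, ay, az):
    """Map one accelerometer sample (m/s^2, z pointing up from the cube face)
    to a coarse gesture label: shake, turn_over, tilt, or rest."""
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    if magnitude > 2.5 * GRAVITY:
        return "shake"        # large transient force: the cube is being shaken
    if az < -0.8 * GRAVITY:
        return "turn_over"    # gravity points along -z: the cube is face down
    if abs(ax) > 0.4 * GRAVITY or abs(ay) > 0.4 * GRAVITY:
        return "tilt"         # gravity leans into the x/y plane: the cube is tilted
    return "rest"
```

A central processor would poll each cube for samples like these, then map the resulting gesture labels, together with adjacency events, onto input events for the composition task.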

2) Tangible Models

Tangible Models is built on the software platform of the ARToolKit, in which spatial awareness is achieved by a camera that senses the location of predefined markers (Kato et al., 2001). The assignment of each marker to a 3D model allows the visualization of the 3D model to be superimposed on the block. When the camera identifies a marker in physical space, the movement of the block is tracked and visualized in the digital space, displayed on the tabletop as the plan view and on the vertical screen as the perspective view.
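The marker-to-model pipeline can be summarized in a short sketch. This is not the ARToolKit API; the library table and the detected-marker tuples stand in for the per-frame marker detection and pose estimation that the toolkit performs.

```python
# Hypothetical registry: each predefined marker id is assigned a 3D model.
MODEL_LIBRARY = {
    "marker_01": "chair",
    "marker_02": "table",
    "marker_03": "wall_segment",
}

def render_frame(detected):
    """detected: list of (marker_id, x, y, rotation) tuples reported by the
    camera for one frame. Returns the scene update: which 3D model to draw
    at which pose, for both the plan view and the perspective view."""
    scene = []
    for marker_id, x, y, rotation in detected:
        model = MODEL_LIBRARY.get(marker_id)
        if model is None:
            continue  # unregistered marker: nothing to superimpose
        scene.append({"model": model, "x": x, "y": y, "rotation": rotation})
    return scene
```

Each time a block moves, the camera reports the new marker pose and the scene update repositions the associated model on both displays.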

2.2.5 SPATIAL RECONFIGURABILITY OF DEVICES

Tangible objects are discrete, spatially reconfigurable physical objects that represent and control digital information. Tangible objects enable reconfiguration, which gives users the feeling that they are actually holding and rearranging the information itself. According to Fitzmaurice (1996), spatial reconfigurations of physical elements such as placement, removal, orientation, and translation are the modes of interaction with tangible interfaces. These physical controls generally communicate with the surrounding environment and contribute to its overall function and use. The value of discrete, spatially reconfigurable interactive devices goes beyond grasping and rearranging: the physicality of the device serves as a cognitive aid by providing an external cue for a particular function or data item. Users can rapidly reconfigure and rearrange the devices in a workspace, customizing their space to suit their own needs in task workflows and task switching (Fitzmaurice, 1996). While this can be achieved in some WIMP interfaces, tangible devices make this reconfiguration as simple as holding a device and moving it to a new location.

1) Tangible Keyboard

Tangible Keyboard enables configuration of elements within an application or across applications. In the Pattern Maker application, each cube can represent a shape, color, or scale. Within that application, the display and meaning of a specific cube can be changed from a shape to a color to a scale. Across applications, for example when moving from the Pattern Maker application to the Silly Poems application, the display on a cube can be changed from a shape to a word. Seeing the Tangible Keyboard, even when it is not being used, conveys an interaction design of physical manipulation and spatial configuration of its physical elements.

2) Tangible Models

Tangible Models enables configuration of models within or across applications. Within an application, the 3D model associated with each block can be changed by selecting another object from the library to assign to a specific block. Across applications, the 3D models available to be selected for each block can be changed by selecting a different library of models. If configuring a structural engineering design, the library would comprise beams, columns, and floor panels instead of furniture and walls. Seeing a Tangible Models device, even when it is not being used, conveys an interaction design of physical manipulation due to the physical presence of blocks on a tabletop system.
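Rebinding across applications can be sketched as swapping the library from which blocks draw their models. The session class and library names below are illustrative assumptions, not part of the Tangible Models implementation.

```python
# Hypothetical per-application model libraries.
LIBRARIES = {
    "interior_design": ["wall", "door", "sofa", "table"],
    "structural_engineering": ["beam", "column", "floor_panel"],
}

class TangibleModelsSession:
    """The same physical blocks stay on the table; only the active library
    and the block-to-model bindings change between applications."""

    def __init__(self, application):
        self.library = LIBRARIES[application]
        self.bindings = {}  # block id -> assigned 3D model

    def assign(self, block_id, model):
        if model not in self.library:
            raise ValueError(f"{model!r} is not in the active library")
        self.bindings[block_id] = model
```

Switching from an interior design session to a structural engineering session changes which models a block may be assigned, while the physical interaction, reaching for and moving blocks, stays the same.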

2.3 WHAT ARE KEY DESIGN ISSUES FOR TANGIBLE USER INTERFACES?

TUIs show promise to significantly enhance computer-mediated support for a variety of application domains, including learning, problem solving, and entertainment. TUIs also offer the possibility of interfaces that are easy to learn and use. However, TUIs are currently considered challenging to design and build owing to a lack of existing software applications that can take advantage of continuous and parallel interactions, the lack of standard interaction models and abstractions for TUIs, and the need to cross disciplinary boundaries to effectively link the physical and digital worlds (Shaer et al., 2004; Shaer and Hornecker, 2010). This section discusses four design issues of TUIs based on Shaer and Jacob (2009):

1. designing interplay of virtual and physical;

2. selecting from multiple gestures and actions;

3. crossing disciplinary boundaries; and

4. the lack of standard input and output interaction models.

Explorations of these design issues provide us with an increasingly clear picture of the strengths and limitations of TUIs. Good design aims to bring out the strengths and alleviate the weaknesses. In this section, we discuss some of the design issues of TUIs. However, it is important to note that TUI research is a growing and rapidly evolving field, and our understanding of the implications of TUI design requires further investigation. Building a TUI is a complex process that encompasses multidisciplinary knowledge, including computer science, design, and the cognitive sciences. Many researchers and interaction designers have introduced a variety of techniques for designing and building novel TUIs. However, TUIs are not yet widely used commercially. Yet TUIs provide physical interfaces with great potential to reduce cognitive load and offer intuitive interaction to support activities such as learning, problem solving, and design.

2.3.1 DESIGNING INTERPLAY OF VIRTUAL AND PHYSICAL

TUIs can be considered a specific implementation of the original notion of ubiquitous computing, which aimed at allowing users to remain situated in the real world while retaining the primacy of the physical world (Shaer and Hornecker, 2010; Wellner et al., 1993; Leigh et al., 2015). Since TUIs provide physical objects with which to interact with the virtual environment, they rely on metaphors that give physical form to digital information. The TUI designer determines which information is best represented digitally and which is best represented physically (Shaer and Jacob, 2009; Bakker et al., 2012; Want et al., 1999). Tangible Models is a good example, because this platform uses augmented reality in which digital images are superimposed on tangible blocks, blending reality with virtuality. The ARToolKit is used to rapidly develop augmented reality applications. Spatially manipulated tangible blocks sit and operate on a large horizontal display. When designers manipulate multiple blocks, each block allows direct control of a virtual object by communicating digital information visually to the user. Through manipulating 3D tangible blocks, the designers also gain tactile feedback from their interaction (Abdelmohsen and Do, 2007; Anderson et al., 2000). TUI developers consider design issues such as physical syntax (Ullmer, 2002), the dual feedback loop (digital and physical), perceived coupling (the extent to which the link between user action and system response is clear) (Hornecker and Buur, 2006), and observability (the extent to which the physical state of the system indicates its internal state) to make physical interaction devices understandable (Shaer and Jacob, 2009). It remains a challenge to develop frameworks that provide the vocabulary for developing TUIs that link the virtual and physical.
Therefore, the discussion, comparison, and refinement of designs with respect to these issues is often performed in an ad-hoc way that does not facilitate generalization (Shaer and Jacob, 2009).


Figure 2.4: (a) BUILD-IT bricks. Used with permission (Fjeld et al., 1997); (b) Interior Design application paddle. Used with permission (Kato et al., 2001); (c) ARTHUR wand. Used with permission (Nielsen et al., 2003).

1) Tangible Keyboard
