
3 Multisensory Haptic Interactions: Understanding the Sense and Designing for It

Karon E. MacLean, Oliver S. Schneider, Hasti Seifi

3.1 Introduction

Our haptic sense comprises both taction, or cutaneous information obtained through receptors in the skin, and kinesthetic awareness of body forces and motions. Broadly speaking, haptic interfaces to computing systems are anything a user touches or is touched by, to control, experience, or receive information from something with a computer in it. Keyboard and mouse, a physical button on a kitchen blender, and the glass touchscreen on your smartphone are energetically passive haptic interfaces: no external energy is pumped into the user's body from a powered actuator. Most readers will have encountered energetically active haptic feedback as a vibrotactile (VT) buzz or forces in a gaming joystick, a force feedback device in a research lab, or a physically interactive robot. Much more is possible.

When we bring touch into an interaction, we invoke characteristics that are unique or accentuated relative to other modalities. Like most powerful design resources, these traits also impose constraints. The job of a haptic designer is to understand these “superpowers” and their costs and limits, and then to deploy them for an optimally enriched experience.

Both jobs are relatively uncharted, even though engineers have been building devices with the explicit intention of haptic display for over 25 years, and psychophysicists have been studying this rich, complex sense for many decades. What makes it so difficult? Our haptic sense is really many different senses, neurally integrated; meanwhile, the technology of haptic display is anything but stable, with engineering challenges of a different nature than those for graphics and sound. In the last few years, technological advances from materials to robotics have opened new possibilities for the use of energetically active haptics in user interfaces, our primary focus here. Needs are exposed at a large scale by newly ubiquitous technology like “touch” screens crying out for physical feedback, and high-fidelity virtual reality visuals that are stalled in effectiveness without force display.

Glossary

Active [human sensing]: On the human side, active sensing entails deliberate and attentionally focused seeking of information through the haptic sense, usually combined with motor movement. People use different exploratory procedures to examine properties of objects (e.g., weight, texture, shape) [Klatzky et al. 2013, Lederman and Klatzky 1987].

Ambient interfaces. Information displays that operate in the user’s attentional periphery [Weiser and Brown 1996], only moving into awareness either when they increase in salience because of urgency, or when the user chooses to focus on them.

Crowdsourcing. The leveraging of large communities of users to perform computation, generate ideas, or provide feedback on media [Kittur et al. 2008]. For example, many researchers and UX designers use online tools such as Amazon’s Mechanical Turk (http://www.mturk.com) to quickly gather feedback on designs or questions that can be shared visually.

Cutaneous sensations come from the skin and can include vibration, touch, pressure, temperature, and texture [Lederman and Klatzky 1987].

Design activity. A collection of related tasks performed during media design; a useful unit for thinking about the design process. We suggest browse, sketch, refine, and share as distinct activities or stages of haptic making.

Energetically active [haptic display]: In contrast to energetically passive displays, an energetically active display can be nonconservative, depending on its control law, and has the capacity to pump more energy (sourced from a wall plug or battery) into the interaction than it takes out. This can manifest as instability such as jitter and growing oscillations.

Energetically passive [haptic display]: On the machine side, a display is energetically passive if it is “conservative”—i.e., it puts no more energy into the interaction than it takes out [Colgate and Brown 1994]. A trivial example is a device without access to external or long-term stored power: for example, when you compress a spring, the device stores only the energy you place into it, and when you release the spring, this simple interface restores the same energy back to your hand that you put into it. Such a device will not be unstable or jittery; and thus, to say that a haptic display feels “passive” is usually a positive. A brake is one kind of (potentially) powered haptic display that cannot, by design, ever be active: it can only remove energy from the interaction, never add to it, and thus while it is limited in what it can do, it usually feels steady and stable.

Facet. A set of related properties describing an aspect of an object [Fagan 2010]. In haptics, multiple facets can be used in combination to capture different cognitive schemas that people unconsciously use to describe and make sense of haptic signals. For example, to describe vibrations, people might at different moments choose to use physical, sensory, or emotional words, metaphors, or usage examples.

Force feedback usually involves displays that move and can push or pull on part of the body. They generally need to be grounded against something—a table, a car chassis, or another part of the user’s body—to provide this resistance or motive force.

Haptic is a term referring to both cutaneous sensations gained from the skin, also referred to as tactile feedback, and the kinesthetic sense, which involves internal signals sent from the muscles and tendons about the position and movement of a limb [Goldstein 1999, Lederman and Klatzky 1997].

Haptic feedback comprises devices that display to the kinesthetic and/or cutaneous senses.

Haptic icons are structured abstract messages (tactile or force) that encode information [MacLean and Enriquez 2003, Brewster and Brown 2004]. More specific terms (e.g., tactile icon, tacton) refer to such encoding in particular haptic submodalities.

Haptic interfaces are devices that display force feedback or tactile feedback in the course of an interaction.

Haptic phonemes. See haptic icon.

Haptic vocabulary. A set of haptic signals paired with their meanings, which as a group convey a set of application-related information elements to users. To be usable and learnable, a haptic vocabulary will have some kind of structure or naturally apparent meaning that a user can scaffold from, quickly learning more elements once the first few have been understood [MacLean 2008b].

Individual differences. Variation among users in sensing, interpreting and valuing a haptic signal.

Kinesthetic signals are sent from muscles and tendons. They include force production, body position, limb direction, and joint angle [Goldstein 1999, Lederman and Klatzky 1997].

Passive [human sensing]: In contrast to active human sensing, in passive sensing the recipient feels a touch that has not been sought and may not be anticipated. Its interpretation is thus not framed by intent or an exploratory purpose, and may be experienced and interpreted differently. In design terms, active touch may yield better information transfer, but requires both a higher level of cognitive engagement and access to the display with a body element that can explore, such as a finger.

Schema. An existing mental structure or set of ideas that can be used to make sense of, interpret, or frame design for a haptic sensation, e.g., recognizing two pulses as a heartbeat [Fagan 2010, Seifi et al. 2015].

Tactile icon. See haptic icon.

Tactile feedback comprises devices that render a percept of the cutaneous sense: for example, using vibration, temperature, texture, or other material properties to encode information. This term is often used interchangeably with more specific types of tactile feedback, e.g., vibrotactile feedback and thermal feedback.

Tacton. See haptic icon.

Thermal feedback specifically refers to the use of temperature to encode information.

Vibrotactile feedback specifically refers to the use of vibration to encode information.

3.1.1 Chapter Scope and Coverage

This chapter outlines what makes haptic design different. As an aid to comprehension, readers are referred to this chapter’s Focus Questions and to the Glossary for a definition of terminology.

We start with how our haptic sense is different (Section 3.1). Then, in four stages we distill insights from 20 years of designing haptic experiences ourselves and from studying skilled and novice designers as they work, often while using tools we have crafted for them. First, we establish why and when we should bother, by going through potential haptic contributions within a multimodal interaction (Section 3.2). Section 3.3 is about morphology: What can it consist of? At a high level, designing in a haptic medium is similar to any other; it’s the details that differ. Therefore, we will examine how they differ by traveling through a conventional user experience (UX) design process (Section 3.4). We conclude by overviewing a few frontiers where we believe that accelerating innovation will soon pay off in solving many of the design obstacles we have identified (Section 3.5).

3.1.2 Nature of the Haptic Sense

A number of attributes together give a specific suitability profile to haptic media: simple messages graded in salience and nature, with availability corresponding to the user’s ability to physically access them.

The haptic sense is distributed and multi-parametered. A complex diversity of skin and muscle mechanoreceptors permits the broad range of what we can physically feel: temperature, texture, forces, motion; a brush of fur, a breeze, a droplet of cold water, a swat or bump, road vibrations, a subtle weight shift of a heavy object we’re carrying. Sensory density and distribution change across the body, and different receptors command differential response speed and specificity [Choi and Kuchenbecker 2013, IJsselsteijn 2003, Lederman and Klatzky 2009, Klatzky et al. 2013]. Imagine a machine that could sense—and make sense of—so many different things. It would require a lot of different sensors, plus compute power and sophisticated neural learning to integrate their diverse input. It is the same for living organisms.

Haptics can be involved in bidirectional active sensorimotor exploration or query, or passive sensory reception of touches applied to one’s body by another person or thing. Much of our touching is in support of manipulation, and it is through manipulation that we can physically explore and sense environments. A designer must consider which information directly pertains to a manipulation, and whether this can be displayed to the body during a manipulation in a manner consistent with expectations drawn from real-world experience.

Active and passive touch have different relationships to attention [Sarter 2013], providing different affordances and requirements for design. Passively experienced sensations may serve as an ambient, background source of information [MacLean 2009], reaching conscious attention only if there’s room; if salient, they’ll capture attention. Active exploration is usually in the attentional foreground; when a toucher is seeking something, they will probably notice it if it’s there.

Experienced on the body, haptic sensations are personal, private, and challenging to share [MacLean 2008a]. They involve social norms for interpersonal touching, as well as the appropriateness and safety or hygiene of touching other people and their belongings. Constant availability requires constant contact; otherwise, the user must know when to reach for a display. Haptics-suitable applications will have a built-in contact opportunity (a car seat, an object the user is already holding, or a wearable device); or can be designed holistically into a larger scene.

Compared to visual and auditory channels, people tend to use touch for low-density information transfer. That said, the degree to which visually impaired individuals are able to extract greater density suggests this may indicate more about learning and communication norms than fundamental potential. With today’s technology, haptic media is usually displayed at lower information density than vision and even audition, but, conversely, it can be more convenient, immediate, and appropriately intrusive. Well-situated and timed signals can be extremely helpful to users as notifications, progress monitors, and manipulation-relevant details presented directly to the hand.

Meanwhile, the ability of haptic display to ambiently convey more qualitative information is relatively untapped.

3.1.3 Novelty of Haptic Media to Humans

Our haptic vocabulary for physical sensations is relatively impoverished, impacting users’ ability to describe, communicate, and possibly even to perceive distinctions. While there have been and will be many efforts to create haptic lexicons, both in terms of abstract properties and their perceptibility [MacLean and Enriquez 2003, Ternes and MacLean 2008, Guest et al. 2011, Seifi et al. 2015] and for specific applications [Chan et al. 2008, Tam et al. 2013, Cauchard et al. 2016], it may be equally important to develop users’ ability to describe what they can feel and thus develop their appreciation of nuance—similarly to how novice wine lovers’ learning of olfactory and gustatory discernment is scaffolded by sommelier vocabulary [Obrist et al. 2013, Hwang et al. 2011, Lawless 1984].

Beyond the question of vocabulary, most users are not accustomed to processing synthetically encoded haptic meaning. It is not a skill learned slowly since early childhood, like visual reading. Even for relatively simple communications, other modalities employ a sensory design language whose cultural foundations have developed and been imparted over years: westerners have learned to associate a graphical recycle bin icon with file deletion. For now, haptic applications may thus be limited to very easily acquired vocabularies, but the skill shown with longer training [Swerdfeger 2009] promises greater sophistication as the medium becomes more widespread.

3.1.4 How People Differ in Their Experience of Haptic Media

Variations among individuals in their experience of haptic sensations mean that specific design elements may not work for everyone. There are at least three levels at which such individual differences appear, each with its own design significance.

In haptic perception, individuals’ mechanoreceptors register signals with varying resolutions (analogously to visual color-blindness), evident in nonuniform tactile threshold and difference detection abilities [Lo et al. 1984], and typically investigated with psychophysical studies which exclude subjective components. For subtle sensations such as programmable friction, differences among people become more prominent [Levesque et al. 2012]. Tactile acuity also declines with age, suggesting this channel is not ideally targeted for seniors [Stevens 1992, Stevens and Choo 1996]. There is empirical evidence that the perceptual space of sensations is impacted by these differences; for example, people varied in categorizing natural textures according to a 2D vs. a 3D perceptual space [Hollins et al. 2000].

At the level of haptic processing and memory, numerous studies on human ability to identify and parse tactile patterns exemplify differences in ability to process and learn haptic stimuli, with tactile the most frequently studied, e.g., [Epstein et al. 1989]. In particular, an early study by Craig [1977] suggests two groups—learners and non-learners—in a spatio-temporal pattern matching task with the Optacon. A more recent study on a variable friction display reports notable differences in users’ recognition of friction patterns and their spatial density [Levesque et al. 2012]. People also differ in the degree to which they rely on touch for hedonic or information gathering purposes, suggesting modality-specific processing needs and abilities [Peck and Childers 2003]. Haptic processing abilities can be improved with practice: visually impaired individuals often develop exceptional tactile processing abilities independently of their degree of childhood vision, demonstrating substantial brain plasticity [Goldreich and Kanics 2003].

Because synthetic tactile feedback tends to be abstract, meanings must be mapped to signals. In the absence of a shared understanding of what these stimuli signify, meaning-mapping is driven by personal experience [Schneider and MacLean 2014, Alter and Oppenheimer 2009]. Individual differences in describing and preferring haptic sensations are thus dominated by personal schemas of interpretation and sense-making [Seifi and MacLean 2013, Seifi et al. 2015, Levesque et al. 2012].

3.1.5 Designing for Differences

How can design practices accommodate and leverage such extensive differences in perception and interpretation?

Haptic researchers have been looking for common themes in users’ perception from the start, and many do exist. Shared interpretations can be translated into guidelines for designing sensations that are distinguishable and expressive for at least a significant group of users. For example, most individuals agree that urgency is well represented by higher vibrotactile energy and frequency values. Common cultural connotations can also be transferred from other modalities. Audition contributes an understanding of rhythm [Brown et al. 2006a], and auditory icons can be mimicked to achieve a comparable shared perception in haptic counterparts, whether as a direct translation or by exploiting underlying design principles and parameters. For example, van Erp and Spapé [2003] transformed 59 short music pieces into vibrotactile patterns, while Ternes and MacLean [2008] built a large set of vibration icons using rhythm and pitch (frequency).

Large individual differences in haptic perception necessitate evaluating designs at scale, with a larger participant pool. Crowdsourcing evaluation of haptic designs is an enabling new direction (Section 3.5.2).

While guidelines enable haptic design for users in the large, support for customization is key to design effectiveness for individuals [Seifi et al. 2014, Seifi et al. 2015, Ledo et al. 2012]. Applications should enable individual haptic meaning-mapping by allowing users to choose desired settings or mappings for a piece of information. The ability to tune pre-designed sensations or create new ones can further support users in tweaking a signal to their specific usage context and preferences.

It will often be necessary to provide non-haptic backup modalities. Some individuals will be unable (e.g., for reasons of sensory, cognitive, or situational constraints) or unwilling to utilize haptic feedback, ever or in some situations. Interaction designers must allow users to mute or switch to other modalities when needed. When a haptic element is the primary form of information display, as discussed in Section 3.2.3, this may require automatic translation between haptics and other modalities like audio and visual [Hoggan and Brewster 2007].

3.1.6 Designing for Current Haptic Technologies

Several factors make haptic design challenging from a technical standpoint today. Hardware elements are typically able to render just one perceptual haptic submodality: vibration or force, shape, texture, shear, or temperature. These hardware elements are difficult to integrate, resulting in sensations very different from touching in the real world. Hardware also differs greatly in expressive nature and degree, even for a given submodality, and there is a large impact of hardware configuration (weight, materials, etc.) on the resulting sensations.

As a consequence, haptic effects generally must be designed for a specific hardware element, and cannot easily be transferred to another actuator of a different mechanism, manner of being worn, or performance. Moreover, there is a general dearth of tools and expertise for haptic design in industry, and shortage of examples and accepted practices to draw on. Tool development is a priority for the field, and we will offer a perspective of the space that tools do, and must, jointly cover in Sections 3.4.3 and 3.5.3.

3.2 Interaction Models for Multimodal Applications

The touch sense is routinely used in a close partnership with other modalities, which must be considered at design time. Here we examine multimodal interaction holistically by analyzing several scenarios in terms of their interactive goals and features (Sections 3.2.1 and 3.2.2); zoom in to look at the roles haptic sensations take with other modalities (Section 3.2.3); and examine the contribution of haptics to those interactions (Section 3.2.4).

We begin by considering how a multimodal interaction can be structured in terms of goals and design element parameters. We will use the scenarios laid out in Table 3.1 to show how their interactive goals and features define interaction requirements; then further build on these examples for the rest of the chapter. These structures are generally not orthogonal or mutually exclusive; they might appear alone or in combination.

3.2.1 Goals of a Multimodal Interaction

A holistic interaction is often dominated by a particular information display objective. For example, it might provide, notify, and/or guide, deploying a variety of sensory modalities as appropriate. The interaction goal can shift according to the user’s momentary need, and a display can reconfigure its utilities. To illustrate, a common current approach for a navigation interface on a mobile or wearable device is to guide with “push” auditory directives and/or vibrotactile feedback about an upcoming turn; when the user needs more detail, the map is provided on a graphical screen (scenarios [S1] and [S2] in Table 3.1).

Table 3.1 A set of scenarios is used throughout to illustrate some possible multimodal interaction goals (Section 3.2.1), and roles that a haptic component might take within it (Section 3.2.3).


When provided or offered, information is continuously available. It can be accessed at the user’s will, or offered as an ambient stream where the user may consume or ignore it. It might be functional, e.g., indicating the time remaining on a clock or progress toward a goal on a wearable display [S1], or adding dexterity-enabling sensory layers to a remote surgery context [S4]. It could enrich an experience (watching a haptically augmented movie [S3]). An interface might escalate an ambient information display channel to notify level (transitioning to a higher salience and discrete medium) when it becomes crucial.

In notify, information is pushed to the user when it becomes of interest, or ready. Notifications can vary in salience, including sub-attentional; but a conceptual differentiation from provided information is that it is event-based, rather than continually available.

A guiding display supports user movement and action, in real or virtual space or processes. Guiding can be continuous, e.g., steering assistance [Forsyth and MacLean 2006]; or periodic or occasional, e.g., when a wearable exercise device gives pace feedback [Karuei and MacLean 2014] [S1]. There are many other types of guiding interfaces, such as software wizards that take a user through steps of a complex configuration task, but these may not be as well suited to haptic participation. Guidance can be attentionally dominant or backgrounded, especially once well learned, as when the view of the road and traffic ahead nonconsciously influences one’s speed control of a car.

3.2.2 Parameters of a Multimodal Interaction

The larger goals of a multimodal interaction expose design parameters that will define how an interaction can play out, and are a step toward setting its requirements. All modalities can potentially be called upon for these design elements; some may work better than others in a given situation, and redundancy may be called for. We detail some of the interaction parameters that may need to be resolved.

The manner of access may be push (the user is notified that information is available) or pull (the user queries for the information). Query can entail degrees of information availability: waiting (already displayed—just needs to be looked at or touched); ready to display upon request; or in need of fetching, with some time lag—perhaps even with a notification when it does arrive.

The interaction’s information origin may be endogenous or exogenous. Origin is key to a user’s conceptual understanding of what a display element means, and relevant to how it should be portrayed to the user. Data to be conveyed might be sourced endogenously from the primary user, whether voluntarily or through sensing (e.g., current running pace or effort; one’s personal emotive state, which you wish to share with someone else; time elapsed since you last stood up). Or it might come from outside, exogenously: time for a meeting to start, a target that has been met, a notification of an externally derived event, a feature available in media being felt in a virtual environment or identified by an automatic algorithm in media that is being perused.

A signal in any modality may occur once, recur occasionally or periodically, or appear continuously—from an information stream or channel being monitored, or from discrete events. Consider how an interface could provide, notify, or guide with differing degrees and types of recurrence.

Information may be supplied in the user’s attentional foreground or background. Interface attentional demand is a spectrum, from signals that target a user’s full attention to ambient presentation [Weiser and Brown 1996, MacLean 2009], and a focus of considerable current study [Roda 2011]. As with other modalities, haptic sensations can be designed to fall almost anywhere in that spectrum. Information parameters that justify varying salience include urgency (time criticality), importance, or the user’s context. Guiding information (often continuous) may be designed for conscious or non-conscious use, or both. Mechanisms to modulate a user’s attentional demand include perceptual salience of a given signal element, and recruiting additional perceptual modalities to reinforce (amplify) a percept.

3.2.3 Roles of a Haptic Signal within a Multimodal Team

The haptic signal can play several roles as part of a “team” of sensory channels involved in a multimodal design (Figure 3.1).

A haptic signal can work with other senses to provide reinforcing information about the same percept, or complementary information about a separate one. An example of a reinforcing multimodal display is when an automobile driver is informed of an upcoming turn with a visual map display showing the turn approaching, an auditory voice (“In one kilometer, turn left on Elm Drive”), and vibration of the left side of the seat or steering wheel as the turn approaches.

Alternatively, a visual map might show a bird’s-eye overview, while the vibration gives graded information about how far away the turn is. In this case, the visual and haptic information complement one another—even though they are both related to the navigation task, each gives a different part of the overall picture.

When modalities give redundant (reinforcing) information, one may be primary—the one which users will be lost without, even if they are aided by a secondary one. Often the benefit of the secondary modality is to differ in quality or timing of information display. In the previous driving example, the driver might have a general sense of location in mind, and need just a little nudge to distinguish which of several choices is the correct turn—here, the low-detail, easy-to-absorb haptic tap is just right, and having to look away at a detailed map is overkill.

Figure 3.1 Roles a haptic element can take in a multimodal interaction.

A given modality’s information must be coordinated in temporality and sequencing with respect to others. Information in a multimodal display can be presented at varying levels of detail at different times depending on need. For instance, the holistic display system can be considered as a state machine, or alternatively as detail that fades in and out. In these different states (or levels of detail zoom), modalities may play different roles.

One approach is frontline notification to backup detail. A haptic signal can present an easily processed initial notification with low information density; then the user can follow up to query a visual modality for more detail at a better time. In [S1], a smartwatch vibrates to notify the user of a message (haptics as frontline modality); then audio/visual information is displayed when the user looks at the watch, and possibly queries it for additional detail.

Alternatively, action can be followed by confirmation. A user’s interaction with a device can be actively confirmed, either immediately (button feedback) or later (message sent). In [S2], a user presses a button (visual interaction first), and receives a followup vibration for confirmation.

3.2.4 What a Haptic Component can Contribute Within an Interaction

We have outlined some possible structures of interaction models, general types of roles that haptic elements might play within them, and some design parameters that must be resolved. Here we will look at how the haptic element might work, using examples that researchers have already studied.

Guidance Targets and Constraints

We spoke earlier of guiding as an interactive goal that can benefit from multimodal coordination. Even for this specific goal, the haptic channel can take many forms. We’ll give examples that vary on the recurrence/continuity parameter, spanning both kinesthetic (via force feedback) and tactile varieties.

Haptics can provide virtual constraints and fields. In virtual space (3D virtual environment, driving game or 2D graphical user interface), with a force feedback device it is possible to render force fields that can assist the user in traversing the space or accomplishing a task. These “virtual fixtures” were first described as perceptual overlays: concrete physical abstractions (walls, bead-on-wire) superposed on a rendered environment [Rosenberg 1993], which can be understood as a metaphor for a real-world fixture such as using a ruler to assist in drawing a straight line. This concept is a fertile means of constructing haptic assistance [Bowyer et al. 2014], which has been used repeatedly in areas such as teleoperated surgical assistance [Lin and Taylor 2004], and efficient implementations devised, e.g., for hand-steadying [Abbott et al. 2003].

Haptics can predict user goals. To provide guidance without getting in the way, the designer must know something of what the user will want to do; but if the user’s goal was fully known, the motion could be automated and guidance not needed. In dynamic environments like driving, a fixture can be exploited as a means of sharing control between driver and automation system. The road ahead is a potential fixture basis, and a constraint system can draw the vehicle toward the road while leaving actual control up to the driver [Forsyth and MacLean 2006].

Haptics can layer guidance onto graphical user interfaces (GUIs), or alternatively be built from scratch into visuo-haptic interfaces. Researchers have often sought to add guiding haptic feedback to GUIs, essentially layering a haptic abstraction on top of one designed for visual use. This has been tricky to get right. Some argue the need to start from scratch. Smyth and Kirkpatrick [2006] developed a bimanual system whereby one hand uses a force feedback device to set parameters in a complex drawing program while the mouse hand independently draws—an example of complementary roles of the two modalities. Some guidelines emerged: design for rehearsal; use vision for controlling novel tasks and haptics for routine tasks; and use haptic constraints to compensate for the inaccuracies of proprioception.

Haptics can provide discrete cues. That most familiar of haptic mediums, the vibrotactile buzz, has been well studied for guidance cueing of spatial direction [Gray et al. 2013], walking speed [Karuei and MacLean 2014], timing awareness [Tam et al. 2013], and posture [Tan et al. 2003, Zheng et al. 2013]. In Section 3.3.3, we discuss vocabulary development for more informative discrete communicative elements.

Haptics can provide spatial marking. Highly relevant to guiding interactions, the addition of spatially informative sensations to touched surfaces and screens is becoming possible through several emerging technologies, whether the surface is co-located with a graphic display (touchscreen) or mapped to it (as with a trackpad accessed through fingertip or stylus, or a haptically enabled mouse). Most basically, a vibrotactile actuator can jolt an entire touched surface when a finger crosses a boundary; our brain attributes the “bump” to the touched point rather than the entire screen. Variable friction can render textures that mark regions of a surface [Levesque et al. 2011], but because the whole surface has the same coefficient of friction at a given instant, state changes are salient but not felt as edges under the finger.
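A minimal sketch of the whole-surface “bump” idea follows, in Python; the touch-event handler, region table, and pulse callback are hypothetical placeholders for whatever touch and actuator APIs a given platform provides.

```python
# Hypothetical handler: pulse the whole surface when the finger crosses a region edge.
REGIONS = {"button_a": (0, 0, 100, 40), "button_b": (0, 50, 100, 90)}  # x0, y0, x1, y1

def region_at(x: float, y: float):
    """Return the name of the region under the finger, or None."""
    for name, (x0, y0, x1, y1) in REGIONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

last_region = None

def on_finger_move(x: float, y: float, pulse):
    """Call pulse(duration_s, amplitude) whenever the touched region changes.

    Because the brain attributes the whole-surface vibration to the point under
    the finger, a single actuator suffices to mark boundaries.
    """
    global last_region
    region = region_at(x, y)
    if region != last_region:
        pulse(0.01, 1.0)  # brief, strong tick at the boundary crossing
        last_region = region
```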

Marking traceable edges requires the capacity to independently display different haptic states to skin that touches a surface at different points, through multiple fingers, different parts of the hand, or adjacent points on one finger. Present efforts have not yet simultaneously achieved high resolution, high refresh rate, and optical transparency, nor low cost. Recent advances in shape displays, using technologies ranging from shape memory polymers (http://www.blindpad.eu) to mechanical structures [Jang et al. 2016], are promising.

Improving Specific Performance and General Quality

Quantifiable performance improvements are always easier to value than more qualitative ones, whether they benefit safety, efficiency or some other monetizable parameter. As for many interface innovations, however, performance improvement often manifests as a fluidity or reduction in effort that lessens fatigue over a period of time where the user is doing many different things, and can be difficult to isolate in causality or to measure precisely.

Exceptions may be when haptic feedback is applied to error suppression in situations where users are known to be particularly error-prone. For example, drivers often have difficulty with verbal left/right direction commands, whereas spatially delivered haptic cues are likely to improve performance without diverting visual or auditory attention from a driving task. Haptic feedback can also increase dexterity in surgical simulations and teleoperated environments, and facilitate simple pointing tasks on GUIs or touchscreens [Poupyrev and Maruyama 2003, Levesque et al. 2011]. These are all changes that can be measured, at least in controlled laboratory settings, with some transfer to real environments inferred.

More broadly, haptic feedback is often found to contribute to the user’s sense of immersion through addition of a sensory modality, for gaming environments, virtual reality, and teleoperated or minimally invasive surgery. Immersion is generally accepted as beneficial, enabling secondary performance improvements by dint of focus and clarity, or greater engagement and enjoyment and thus product success.

Affect or Emotion Display

Haptic elements, both input and output, can be used for affective coloring of an interactive experience, as an overt user expression (as in “conviction widgets” [Chu et al. 2009]), or as deliberate conveyance of emotion to another person [Smith and MacLean 2007]. Incoming to the user, attention to affective haptic design can influence how signals are interpreted [Swindells et al. 2007], make them more understandable and memorable [Klatzky and Peck 2012, Seifi and MacLean 2013], and contribute to a sense of delight in the interaction [Levesque et al. 2011].

Sometimes the primary purpose of a person-to-person communication is affective in nature. Haptics can contribute to such enrichment. Therapeutically, touch-centric mediums such as haptic social robots can act both socially and physiologically on a human to change emotional state [Inoue et al. 2012, Sefidgar et al. 2015].

3.3 Physical Design Space of Haptic Media

Designers of effective haptic sensations within a multimodal interaction must understand what properties of haptic signals are manipulable, how they are perceived, and schemas for mapping meaning to them.

3.3.1 The Sensation

Delivered through a heterogeneous set of technologies, haptic sensations target different human mechanoreceptors, and further vary in energetic state and expressive properties.

A sensation can be kinesthetic or tactile. The most common type of proprioceptively targeted haptic display is force feedback, in which the device exerts a force on the user’s body (often a hand) while the user moves the device through space (e.g., handshaking with a robot or teleoperated surgery). Vibrotactile actuators, alone or in arrays, produce the most well known of tactile sensations. Others include programmable friction [Winfield et al. 2007, Levesque et al. 2011], ultrasonic sensations [Carter et al. 2013], and thermal feedback [Ho and Jones 2007].

A sensation’s salience can vary, from intrusive to ambient. Haptic sensations can be designed to instantly capture the user’s attention (e.g., vibrotactile (VT) notifications) or to remain in the attentional background, referred to by the user when needed (Section 3.2.3, [MacLean 2009]). The latter presents information in an ambient manner while the former can interrupt the user’s current state or action to convey the information. Interesting designs are possible by moving between these two ends. For example, a posture-correcting chair provides awareness of the user’s posture with ambient pressure sensations at the back of the seat, which can gradually move into the user’s attentional foreground when necessary [Zheng and Morrell 2012].

A designer can engineer the properties of an individual stimulus to create different sensations. In addition to signal amplitude, haptic signals commonly use temporal and/or spatial parameters. For example, vibrotactile signals have several temporal parameters including frequency, rhythm, and pulse envelope (specified by attack, decay, sustain, and release parameters) [MacLean 2008b, Ternes and MacLean 2008, Choi and Kuchenbecker 2013], as well as spatial parameters such as location (x, y) and direction when several actuators are combined over a surface (e.g., a haptic seatpad) [Schneider and MacLean 2014, Schneider et al. 2015b]. Variable friction and force feedback devices can provide different signals over space and time depending on the user’s interactions [Levesque et al. 2011, Levesque et al. 2012].
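As a concrete, purely illustrative sketch of this parameter space, the design parameters of a single vibrotactile stimulus might be captured as a small data structure; the class and field names below are hypothetical, not taken from any cited toolkit.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class VibrotactileStimulus:
    """Hypothetical description of one vibrotactile signal's design parameters."""
    frequency_hz: float = 250.0        # carrier frequency, near peak skin sensitivity
    amplitude: float = 0.8             # normalized drive amplitude, 0..1
    rhythm: List[Tuple[float, float]] = field(
        default_factory=lambda: [(0.1, 0.1), (0.1, 0.3)]
    )                                  # (pulse_on_s, pause_s) pairs defining the rhythm
    envelope_adsr: Tuple[float, float, float, float] = (0.02, 0.05, 0.7, 0.1)
                                       # attack (s), decay (s), sustain level (0..1), release (s)
    location_xy: Tuple[float, float] = (0.5, 0.5)  # position on a multi-actuator surface
    direction_deg: float = 0.0         # direction of apparent motion across actuators
```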

3.3.2 The Sensation-Human Connection

In devising effective interactions, designers must consider a device’s connection to the user’s body and the range of haptic sensations perceptible in a given context.

Physical Connection

A haptic device’s connection to its user’s body varies with technology and use case, and impacts perception.

Contact mode can vary. Location, surface area, and tightness are part of the body-device connection; prototypes for the wrist, belts, jackets, shoe insoles, or handheld devices vary these parameters. The contact can be persistent (e.g., a wristband) or occasional and on-demand (e.g., a haptic keypad on an ATM or a haptic door knob) [Karuei et al. 2011, MacLean and Roderick 1999].

Bodily distance can vary. Haptic signals can be felt through an internal mechanism (such as vibrating tattoos [Radivojevic et al. 2014]), an external but contacting device (smartwatches, game controllers), or an external, noncontacting device (ultrahaptic devices [Carter et al. 2013]). The current norm is to feel the sensations through an external and contacting device.

On the human side, contact can be active or passive. Human-active touching is generally done for a reason. Whether a user’s touch is active or passive is influenced (afforded) by device and interaction design. For example, sensations rendered by today’s variable friction technology can only be felt with active (sliding) finger movement. Conversely, users commonly receive vibrotactile sensations passively as event-based notifications; finger movement yields no additional information.

Effective Size of the Sensation Space (Signal Set Size)

The number of sensations that humans can perceptually distinguish is a function of hardware, bodily connection, perceptual capability, and context of use.

Hardware specifications such as actuator frequency range determine the rendering limitations and provide an upper bound for the number of perceptually distinct stimuli. These specifications can be used to compare expressive capability among hardware elements (e.g., VT actuators).

Connection characteristics—body location, prototype assembly and materials, and contact mode (orientation, grip, tightness)—impact sensation distinguishability [Gallace et al. 2007]. Karuei et al. [2011] report differences in vibration detection thresholds across 13 body locations and 2 bodily states (e.g., walking vs. sitting).

Differences in perceptual and processing capabilities due to age, visual acuity, profession, and simply genetics (Section 3.1.4) impact signal distinguishability [Goldreich and Kanics 2003]. Stevens and Choo [1996] report that the decline in tactile acuity with age affects all body locations, but has a larger impact on fingers and toes compared to more central body locations such as lips and tongue.

Context of use can impact haptic perception and processing capabilities, through parameters such as environment, body state (running vs. resting), and sensory and cognitive load and involvement (listening to music, driving) [Karuei et al. 2011, Blum et al. 2015]. This in turn determines the effective set size for distinguishable stimuli. For example, the number of different vibration notifications an individual can discern while driving a car (with its environmental vibrations and high sensory and cognitive involvement) is smaller than when seated at an office desk.

3.3.3 The Meaning

Sometimes haptic signals are able to directly represent a meaning, e.g., through adequately high fidelity representation of a real physical sensation. More often, abstraction is required: perhaps the sensation being represented is beyond the capacity of the haptic device to display, or the information itself is abstract (“speed up”). Mapping haptic sensations to intended meaning—encoding the information—is a crucial design task that needs to be done in a consistent and compatible way across the full vocabulary used in an application, and sometimes more broadly [MacLean 2008b].

In this section, we discuss users’ cognitive meaning-mapping frameworks, then present encoding and vocabulary-development approaches that have been used by haptic designers.

Interpretive Schemas and Facets

To interpret haptic signals, people employ a number of conceptual or translational schemas, often combining them. We might compare a haptic sensation to a natural one (“This is like a cat purring”), to emotions and feelings (“This is boring”), or consider its potential usage (when a quickening tactile pulse sequence is described as a “speed up”). The meaning someone chooses is typically influenced by the sensation itself but also by the context of use and the user’s background and past experiences [Seifi et al. 2015, Schneider and MacLean 2014, Obrist et al. 2013].

The concept of facets originates in library and information retrieval and nicely captures the multiplicity and flexibility of users’ sense-making schemas for haptic sensations. A facet is a set of related properties or labels that describe an aspect of an object [Fagan 2010]. Five descriptive facets have been proposed and examined for vibrotactile stimuli (Figure 3.2, [Seifi et al. 2015]):

Figure 3.2 People use a variety of cognitive frameworks to make sense of haptic signals. Bottom left image (from Schneider et al. [2016]). Bottom right image courtesy of Anton Håkanson.

Physical properties that can be measured—such as duration, energy

Sensory properties—roughness, softness

Emotional connotations—pleasantness, urgency

Metaphors or familiar examples to describe a vibration’s feel—drumbeat, cat purring

Usage examples or types of events where a vibration fits—speed up, time’s up
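As an illustration, a single vibration might carry annotations on all five facets at once, in the style of a faceted library such as VibViz; the tags below are hypothetical.

```python
# Hypothetical faceted annotation of one vibrotactile icon.
vibration_facets = {
    "physical":  {"duration_s": 1.2, "energy": "low"},
    "sensory":   ["soft", "smooth"],
    "emotional": ["calm", "pleasant"],
    "metaphor":  ["cat purring"],
    "usage":     ["message from a friend", "gentle reminder"],
}
```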

If a designer neglects a consistent consideration of these meaning-assignment facets, the result is likely to be confusion and a bad user experience. Leveraged properly, facet-driven mappings can lead to more intuitive, consistent results and highlight pathways to work around individual differences, for example through tools that allow users to efficiently customize their interfaces (Section 3.5.1).

Stimulus Complexity and Vocabulary Composition

Interpretive facets for haptics are not as developed as for other modalities, either culturally or in research. Other modalities enjoy a relative wealth of immediately reliable idioms, e.g., a graphical stop-sign icon; haptic designers instead typically need to devise custom vocabularies [MacLean 2008b]. These vary by application requirements, which dictate the size and complexity of the required set as well as the context of use and the hardware that will deliver it.

We can start with simple signals. Simple vocabularies are composed of just two to three haptic-meaning pairs—binary and ternary sets, common in current mobile and wearable notification systems and easy to learn and adopt. The binary case can indicate the on/off state of a parameter (e.g., a message has/has not arrived). A ternary vocabulary can distinguish three states (such as below/within/above a target zone, three levels of a volume being monitored, or three categories of notification types).

Next, we have complex signals (a.k.a. icon design). More detailed encodings/vocabularies are possible when the hardware and context allow a larger set of distinct stimuli and the user can learn and process a larger mapping [MacLean 2008b]. One design approach is to map information elements to design and engineering parameters of the haptic sensation [Brewster and Brown 2004, Enriquez et al. 2006, Ternes and MacLean 2008]. For example, vibrotactile technologies allow control of frequency, amplitude, and waveform, plus temporal sequencing such as rhythm. In a vibrotactile message notification, amplitude can be mapped to urgency while rhythm can encode the sender group (family/friends vs. work). This approach has the hierarchical structure of a natural language (e.g., letters, words, sentences) [Enriquez et al. 2006].
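A minimal sketch of this parameter-mapping approach, assuming the amplitude-to-urgency and rhythm-to-sender-group assignments described above (all names and values are illustrative):

```python
# Hypothetical parameter-mapping encoder: urgency -> amplitude, sender group -> rhythm.
URGENCY_TO_AMPLITUDE = {"low": 0.3, "medium": 0.6, "high": 1.0}

GROUP_TO_RHYTHM = {
    "family": [(0.30, 0.40)],                # one long, relaxed pulse
    "work":   [(0.08, 0.08), (0.08, 0.30)],  # two short pulses
}

def encode_notification(urgency: str, group: str) -> dict:
    """Compose a vibrotactile icon by combining independent parameter mappings."""
    return {
        "amplitude": URGENCY_TO_AMPLITUDE[urgency],
        "rhythm": GROUP_TO_RHYTHM[group],    # (pulse_on_s, pause_s) pairs
        "frequency_hz": 250.0,               # held constant across the vocabulary
    }

icon = encode_notification("high", "work")
```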

An alternative approach uses metaphors for designing individual signals and sets of them in a haptic vocabulary. Here, the whole signal has a meaning but its individual components may not encode information, instead exploiting users’ interpretive frameworks for designing more intuitive vocabularies. In [Chan et al. 2008], a heartbeat indicates that the remote connection is live/on, using a metaphor framework.

In both approaches, designers can use perceptual techniques such as Multi-Dimensional Scaling (MDS) or psychophysical studies to prune and refine an initial stimulus set for salience and maximum recognizability [Maclean and Enriquez 2003, Lederman and Klatzky 2009], both prior to encoding and to adjust the final set to optimize distinguishability [Chan et al. 2008].
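For instance, pairwise dissimilarity ratings collected from participants can be embedded with MDS to see which candidate stimuli crowd together perceptually; the sketch below assumes scikit-learn is available and uses made-up ratings for four candidate icons.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical pairwise dissimilarity ratings (0 = identical, 1 = maximally different),
# averaged over participants, for four candidate vibrotactile icons.
dissimilarity = np.array([
    [0.0, 0.2, 0.8, 0.9],
    [0.2, 0.0, 0.7, 0.8],
    [0.8, 0.7, 0.0, 0.3],
    [0.9, 0.8, 0.3, 0.0],
])

# Embed the stimuli in a 2D perceptual space; stimuli that land close together
# (here, icons 0/1 and 2/3) are candidates for merging or removal.
embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(dissimilarity)
print(embedding)
```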

More complex vocabularies must be learned. Haptic-meaning pairs composed into vocabularies can utilize users’ interpretive frameworks or rely on learning through practice and memory. In the former case, the user should be able to recognize the associated meaning with no or minimal practice (e.g., an accelerating pulse sequence signifies “speed up”) whereas in the latter, sensations are arbitrarily assigned, necessitating prior exposure and memorization. In Figure 3.3, directions can be presented with two types of patterns, spatial and temporal: this particular spatial arrangement has a direct and recognizable perceptual association to the meaning, while the second pattern is arbitrary and will have to be learned.
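To make the contrast concrete, the same four directions might be encoded spatially or temporally as in the sketch below; the 3x3 actuator grid and the particular rhythms are assumptions for illustration, not drawn from the figure's study.

```python
# Hypothetical 3x3 actuator grid on a seat pan, indexed (row, col).
# Spatial (intuitive) encoding: the direction is where the actuators fire.
SPATIAL_DIRECTIONS = {
    "forward": [(0, 1)],  # front-center actuator
    "back":    [(2, 1)],
    "left":    [(1, 0)],
    "right":   [(1, 2)],
}

# Temporal (abstract) encoding: one actuator plays a per-direction rhythm,
# which has no inherent spatial meaning and must be memorized.
TEMPORAL_DIRECTIONS = {
    "forward": [(0.05, 0.05)],
    "back":    [(0.05, 0.05)] * 2,
    "left":    [(0.05, 0.05)] * 3,
    "right":   [(0.20, 0.10)],
}
```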

Past studies suggest that users can learn large abstract vocabularies (56 pairs) with practice but the learning rate and performance can vary considerably across individuals [Swerdfeger 2009]. Users’ performance on large vocabularies with intuitive meaning assignment is yet to be fully studied, in part because of the difficulty of designing them.

Figure 3.3 Intuitive vs. abstract encoding of a direction vocabulary for a vibrotactile seat. (a) User easily interprets intuitive encoding of direction with spatial parameters. (b) User learns abstract encoding of direction through temporal parameters.

3.4 Making Haptic Media

How do we translate knowledge of the physical and semantic haptic design space into compelling, coherent, and learnable haptic media, given the many and particular challenges it presents? The answer is a robust and flexible process. We draw upon a design thinking approach, often described as a funnel of idea candidates wherein the designer iteratively generates, refines, and narrows down multiple ideas in parallel until a final, well-developed, and trusted design concept remains (Figure 3.4).

We look now at how generic forms of design thinking must be adapted when applied to haptics, and offer several different schemas for approaching haptic design (including those introduced earlier for the user’s view of haptic sensations; see Section 3.3.3). We close with an inventory of current haptic design tools and techniques.

Figure 3.4 Incorporating haptics into the design process. We adapt the classic design funnel, where multiple initial ideas are iteratively developed, then add four design activities we have found useful when supporting design: browsing, sketching, refining and sharing. (Based on Buxton [2007])

3.4.1 Design Process

Understanding how best to support design and creativity has long been an important research topic. There is increasing evidence that designers’ environment and tools shape their output, especially their exposure to previous designs, flexible and precise tools, and collaborators [Herring et al. 2009, Schneider and MacLean 2014, Kulkarni et al. 2012, Dow et al. 2011]. We will look at how four design activities—browsing, sketching, refining, and sharing—look in the context of a principled haptic media design process; and where these activities differ from designing in other modalities.

Alongside these activities, designers are constantly engaged in other tasks such as devising effective haptic-meaning mappings (encoding, Section 3.3.3) and evaluating designs (often with rating scales or qualitative feedback) against criteria described in Sections 3.1.5 and 3.2.4. These tasks sequence and bind design activities in specific ways that help accomplish a design goal.

Browse

No idea is born in isolation. Individual designers have a repertoire of previous experiences they have encountered while learning or through practice [Schön 1982]. In addition, design often starts with a “gather” step [Warr and O’Neill 2005]: viewing examples for inspiration and problem definition. Gathering often occurs explicitly at the start of a design process, and can reoccur during iteration. Tangible examples are corkboards and mood boards, which allow ideas to “bake in” to the background [Buxton 2007]. Software tools like d.tour [Ritchie et al. 2011] and Bricolage [Kumar et al. 2011] recommend websites for inspiration and can automatically generate new ideas by combining sites. Haptic designers, however, encounter modality-specific barriers when gathering, managing, and searching for examples.

First, we require a way to represent sensations, singly and in collections. How do we store, view, and organize haptic experiences? Haptic technologies are often inherently interactive, part of a multimodal experience with visual and audio feedback, and can take a variety of physical forms depending on the output (and input) device. This last point is particularly bothersome should the user not have access to the original device type—imagine trying to browse force-feedback sensations on your phone!

Then we need means of classifying and organizing collections. Haptic language and cultures of meaning are still in active development. Without a commonly shared lexicon, organization dimensions, or even adjectives, it is difficult to curate collections. Compare this to sound: most musical terms have a long tradition with a clearly defined lexicon (e.g., crescendo, staccato); non-musical sound effects generally “sound like” something, and are often literal. With vision, one does not have to be a graphic designer or artist to instinctively understand “warm” and “cool” colors; the color wheel is introduced to us in grade school.

Overviews allow us to skim collections. Visual or physical collections of examples are often displayed spatially for ambient reference or to enable quick scanning. When you cannot feel multiple things at once, it can be hard to get the big picture or swiftly peruse a collection. Both designers and end-users need to find similar or different vibrations in a collection, requiring a low barrier to entry for any overview technique.

Given the importance of browsing, it is no surprise that the haptics community has made some progress. Libraries such as the Haptic Touch toolkit [Ledo et al. 2012, HapticTouch Toolkit 2016] or the Penn Haptic Texture toolkit [Culbertson et al. 2014, Penn Haptic Texture Toolkit 2016] are available to the community. The Haptic Camera allowed for easy capturing of door knob dynamics that can be stored and recreated later [MacLean 1996], inspiring similar camera-like devices like a portable texture recording device [Burka et al. 2016]. VibViz [Seifi et al. 2015, VibViz 2016] is an online, visualized collection of vibrotactile icons that explicitly tackles these issues, providing multiple classification schemes (facets) and visualizations to rapidly skim and find vibrations. Visualization techniques are still early, but they help [Seifi et al. 2015], and careful design can improve representation of perceptual qualities [Schneider et al. 2016].

Sketch

Sketching allows people to form abstracted, partial views of a problem or design, iterate very rapidly, and explore concepts. It is most heavily used early in design, and plays a role in collaboration (discussed more under “Share” below). Of course, such a central technique is used as a key way of thinking about experience design [Buxton 2007]; some even consider sketching to be the primary language of design, equivalent to mathematics as a language for natural sciences [Cross 2006]. With haptic technology, there is no immediate way to handle two essential features of sketching: abstraction and ambiguity, and rapid iteration (addressed more fully in Section 3.4.3).

With respect to abstractability, we note that haptics suffers from a dearth of notation. Sketching of physical devices or interfaces is well supported, with paper and pencil and innumerable software assists. Sketching motion, and in particular showing what is or might be felt in, say, a vibrotactile experience, is trickier. While we can sketch a visual interface and look at it, it is much harder to sketch a haptic sensation and imagine it without feeling it.

Creative approaches are emerging. Most directly, Moussette and Banks [2011] teach Haptic Sketching [DesignThroughMaking 2016] with physical scraps and materials, combined with manual actuation and tools like Arduino, to build effective interactive haptic prototypes physically and programmatically in minutes or hours. Simple display-only sensations (e.g., VT icons) can be sketched using interactive design tools [Schneider and MacLean 2014, Hong et al. 2013].
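For instance, a vibration rhythm can be felt, tweaked, and re-felt within seconds by driving a motor from a short script; the sketch below assumes an Arduino with a vibration motor that listens for single-byte on/off commands over serial, and the port name and protocol are placeholders.

```python
import time
import serial  # pyserial

def play_rhythm(port: str, rhythm):
    """Drive a vibration motor via an Arduino: b"1" = motor on, b"0" = motor off."""
    with serial.Serial(port, 9600, timeout=1) as link:
        time.sleep(2)  # give the Arduino time to reset after the port opens
        for on_s, off_s in rhythm:
            link.write(b"1")
            time.sleep(on_s)
            link.write(b"0")
            time.sleep(off_s)

# Two quick pulses followed by a long one.
play_rhythm("/dev/ttyUSB0", [(0.1, 0.1), (0.1, 0.1), (0.4, 0.2)])
```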

Refine

Clearly apparent in Figure 3.4, design requires iteration to refine an initial set of ideas into a single well-developed one through concept generation followed by iterative revision, problem-solving and evaluation, until only small tweaks are necessary. This long view of the design process is necessary to see designs through to the end; furthermore, tweaking final designs is a valuable way to accommodate individual differences.

Incorporating haptic technology into a design is an extremely vertical process, dependent on specifics of hardware, firmware, software, application, and multimodal context (Section 3.1.6). With the complexity of these many components, there can be a significant initial cost to set up a first haptic experience; then, adding this complexity to the time needed to program, recompile, or download to a microcontroller means iteration cycles have the potential to be slow and painful. Thus, increasing refinement fluidity is ripe for innovation. For example:

Pipelines now connect initial design seamlessly through to final refinement [Schneider et al. 2015b, Schneider and MacLean 2016]. Continuity in future tools will provide fluid, transparent (rather than cumbersome, many-staged) connection between hardware and software tools at different design stages.

Evaluation is as crucial as for any human-centered refinement cycle. While it will often require some form of sharing (coming up next), here we simply point out that the full spectrum of evaluative mechanisms and supports found in user experience development can be gainfully applied to haptic design, from lab-based comparative performance studies to qualitative examination of how usage strategies change when a physical dimension is deployed (e.g., [Minaker et al. 2016]).

Customization tools are appearing at least at the level of prototyping and requirements generation [Schneider et al. 2015a, Seifi et al. 2014]. Force-feedback virtual environments support iteration and refinement through code once the initial environment is set up. Software platforms like Unity [Unity Game Engine 2016] offer immediate control of variables in the UI itself.

Tool support for context (calibration, customization, and sensing) will help final haptic designs remain effective across user activity (e.g., running impairs vibration sensitivity), individual differences, and other contextual concerns; a minimal sketch of such adjustment follows this list.
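As that minimal sketch, the fragment below scales vibration intensity with a sensed activity level (0 for still, 1 for vigorous movement, e.g., derived from an accelerometer). Both the mapping and the constants are hypothetical, intended only to illustrate the idea of context-aware adjustment rather than any of the tools cited above.

// Scale vibration intensity with sensed activity so notifications stay
// perceptible while the user is moving (hypothetical mapping and constants).
const int MOTOR_PIN = 9;
const int BASE_INTENSITY = 120;   // resting intensity chosen by the designer (0-255)
const int MAX_BOOST = 100;        // extra intensity allowed under motion

// activity: 0.0 (still) to 1.0 (vigorous), e.g., from accelerometer energy.
int adjustedIntensity(float activity) {
  int value = BASE_INTENSITY + (int)(MAX_BOOST * activity);
  return value > 255 ? 255 : value;
}

void playNotification(float activity) {
  analogWrite(MOTOR_PIN, adjustedIntensity(activity));
  delay(150);
  analogWrite(MOTOR_PIN, 0);
}

void setup() {
  pinMode(MOTOR_PIN, OUTPUT);
}

void loop() {
  playNotification(0.0);   // as if the user were sitting still
  delay(2000);
  playNotification(1.0);   // as if the user were running
  delay(2000);
}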

Share

Sharing designs is valuable at different stages of the design process [Kulkarni et al. 2012], whether for informal feedback from friends and colleagues, formal evaluation when refining designs, or distributing to the target audience for use and community for re-use [Shneiderman 2007].

Because haptic experiences must be felt, sharing currently works best when collaborators are collocated in small numbers, whether working in the same lab or experiencing a final design in a physical demo. Ideas can be generated remotely, but physical devices must then be shipped back and forth, and it is difficult to troubleshoot and confirm that configuration and physical setup are exactly the same at each site. Feedback likewise tends to require collocation, through in-lab studies or shipping devices between collaborators. Visual and audio design, by contrast, support easy capture of ideas with smartphone cameras and microphones, which can be browsed and shared later; haptics has no such convenient capture channel.

So far, haptic broadcasting, analogous to broadcasting radio or television (e.g., Touch TV [Modhrain and Oakley 2001]), has been envisioned and explored. Follow-up work has added haptics to YouTube [Abdur Rahman et al. 2010] and movies [Kim et al. 2009]. Low-cost devices like the HapKit [Orta Martinez et al. 2016, Hapkit 2016] and Haply [Gallacher et al. 2016, Haply 2016] make haptics more ubiquitous, but remain troublesome to calibrate. To share ideas remotely, proxies such as visualizations or other haptic channels (e.g., phone vibrations) can stand in for the original sensation [Schneider et al. 2016, HapTurk 2016]. Features like automatic calibration, proxies for use in online evaluation, and online communities more generally are still in development.

3.4.2 Schemas for Design

Because haptic design is such a young field, there are many ways to approach it. One is to consider analogies to other fields, for example to draw on existing expertise in making sounds and multimedia. Another is to focus on the language of haptics, affect, and descriptive aspects of sensations, as laid out in Section 3.3.3. These approaches can productively be combined. In the following, we start with some general perspectives and techniques useful for haptic design, then delve into several specific schemas that haptic designers have made use of: sources of inspiration and conceptual scaffolding of what the finished design may be.

General Methodological Perspectives

Some higher-level perspectives offer useful outcome targets, collections of methods, and design attitudes to guide haptic practitioners in their process. DIY (do-it-yourself) haptics catalogs feedback styles and design principles [Hayward and MacLean 2007, MacLean and Hayward 2008]. Ambience is proposed as one target for a haptic experience, where information moves calmly from a person’s periphery to their focused attention [MacLean 2009]. Haptic illusions can serve as concise ways to explore the sense of touch, explain concepts to novices, and inspire interfaces [Hayward 2008]. “Simple Haptics” [Simple Haptics 2016], epitomized by haptic sketching, emphasizes rapid, hands-on exploration of a creative space [Moussette 2010, Moussette and Banks 2011] and has been enabled by recent and radical advances in mechatronic rapid prototyping technology. The notion of distributed cognition [Hutchins 1995] has particular relevance for haptic design, suggesting that people situate their thinking both in their bodies and in the environment. Finally, haptics courses are extremely helpful collections of skills and techniques, with foci including perception, control, and design [Okamura et al. 2012, Jones 2014]. Each of these perspectives can help haptic designers think about how to design haptics more generally, and can augment schemas inspired by other fields.

Design Schemas Inspired by Audio, Video and Multimedia

Haptic designers have often appropriated design elements used in other fields. Haptic Icons [MacLean and Enriquez 2003], tactons [Brewster and Brown 2004], and haptic phonemes [Enriquez et al. 2006] are small, compositional, iconic representations of haptic ideas, inspired by comparable elements from graphical and sound design [Gaver 1986]. Touch TV [Modhrain and Oakley 2001], tactile movies [Kim et al. 2009], haptic broadcasting [Cha et al. 2009], and Feel Effects [Israr et al. 2014] aim to add haptics to existing media types, especially video.

Music analogies and metaphors have frequently inspired haptic design tools, especially for VT sensations. The Vibrotactile Score, a graphical editing tool representing vibration patterns as musical notes, is a major example [Lee and Choi 2012, Lee et al. 2009]. Other musical metaphors include the use of rhythm, often represented by musical notes and rests [Ternes and MacLean 2008, Brown et al. 2005, Chan et al. 2008, Brown et al. 2006b]. Earcons and tactons are represented with musical notes [Brewster et al. 1993, Brewster and Brown 2004], complete with tactile analogues of crescendos and sforzandos [Brown et al. 2006a]. The concept of a VT concert found relevant tactile analogues to musical pitch, rhythm, and timbre for artistic purposes [Gunther et al. 2002]. In the reverse direction, tactile dimensions have also been used to describe musical ideas [Eitan and Rothschild 2010].
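To make the rhythm metaphor concrete, the sketch below represents a vibrotactile pattern as a sequence of notes and rests at a fixed tempo and plays it on an actuator. It is a generic Arduino-style illustration, not the notation or implementation of any of the cited tools.

// Render a rhythmic vibrotactile pattern written as notes and rests.
// Durations are in beats at a fixed, arbitrarily chosen tempo.
const int MOTOR_PIN = 9;
const int MS_PER_BEAT = 250;

struct Note {
  float beats;     // length in beats
  int intensity;   // 0 = rest, otherwise a PWM level 0-255
};

// A short "score": strong beat, rest, two soft beats, long rest.
Note score[] = { {1.0, 220}, {0.5, 0}, {0.5, 120}, {0.5, 120}, {1.5, 0} };
const int SCORE_LEN = sizeof(score) / sizeof(score[0]);

void setup() {
  pinMode(MOTOR_PIN, OUTPUT);
}

void loop() {
  for (int i = 0; i < SCORE_LEN; i++) {
    analogWrite(MOTOR_PIN, score[i].intensity);
    delay((int)(score[i].beats * MS_PER_BEAT));
  }
  analogWrite(MOTOR_PIN, 0);
  delay(1000);  // silence before the score repeats
}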

Language of Touch

The language of tactile perception, especially its affective (emotional) terms, is an obvious possibility for framing haptic design. Language is a promising way to capture user experience, both more generally and for haptics in particular [Obrist et al. 2013], and can reveal useful parameters, e.g., how pressure influences affect [Zheng and Morrell 2012]. In Section 3.1.4, we noted how individuals differ in their experience of haptic stimuli, and this certainly has implications for the generation of stable, broadly understandable design languages in this modality. Reiterating those points: relatively (although not perfectly) consistent sensory dimensions have been established with psychophysical studies for both synthetic haptics and real-world materials, but for meaning-mapping, agreement becomes highly variable. Touch clearly communicates strongly to individuals, but it is difficult to describe, and there is less evidence for the existence of a general tactile language on which all individuals would agree [Jansson-Boyd 2011]. The importance of learning and familiarity for cultural agreement on meaning has barely been examined [Swerdfeger 2009].

More research is clearly needed. Our own view is that some tactile elements can be consistently understood, but far more will be personally interpreted. The beauty and power of active haptic interfaces is that individualized approaches are possible, and solutions that allow and support users in easily creating, assembling or discovering their own tactile language for their personal tools are the most promising. To this end, tools for customization by end-users, rather than expert designers, are another way to both understand perceptual dimensions [Seifi et al. 2014, Seifi et al. 2015] and move toward assisting users in “rolling their own.”

Facets

We introduced the notion of facets and schemas in Section 3.3.3 as a way of conceptually organizing, browsing and curating haptic sensations more generally. Five validated haptic facets elaborated there are physical, sensory, emotional, usage, and metaphors [Seifi et al. 2015] (Figure 3.2).

Here, we look at facet-based design as a language-grounded approach that deliberately builds on multiple sense-making schemas in users’ minds. Specifically, faceted interfaces use this multiplicity of schemas to facilitate comprehension of interface concepts, as well as navigation and search for items according to their various properties [Fagan 2010]. For example, VibViz, built around the five abovementioned vibrotactile facets, is an interactive visualization of a library of 120 vibrations. Without any haptic background, users can quickly navigate the library by flexibly moving between vibration descriptions in various facets [Seifi et al. 2015].
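To illustrate the mechanics of faceted navigation, the sketch below tags a handful of vibrations under two facets and filters them by one tag per facet. The data model and tags are hypothetical and far simpler than VibViz’s actual implementation.

// Faceted filtering of a small vibration library (hypothetical data model).
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

struct Vibration {
  std::string name;
  // facet name -> tags assigned to this vibration under that facet
  std::map<std::string, std::set<std::string>> facets;
};

// Keep vibrations that carry the requested tag in every queried facet.
std::vector<Vibration> filter(const std::vector<Vibration>& library,
                              const std::map<std::string, std::string>& query) {
  std::vector<Vibration> result;
  for (const auto& v : library) {
    bool match = true;
    for (const auto& q : query) {
      auto it = v.facets.find(q.first);
      if (it == v.facets.end() || it->second.count(q.second) == 0) {
        match = false;
        break;
      }
    }
    if (match) result.push_back(v);
  }
  return result;
}

int main() {
  std::vector<Vibration> library = {
    {"heartbeat", {{"emotional", {"calm"}},   {"usage", {"notification"}}}},
    {"alarm",     {{"emotional", {"urgent"}}, {"usage", {"alert"}}}},
    {"ripple",    {{"emotional", {"calm"}},   {"usage", {"ambient"}}}}
  };
  // Find calm vibrations intended for ambient use.
  for (const auto& v : filter(library, {{"emotional", "calm"}, {"usage", "ambient"}}))
    std::cout << v.name << "\n";   // prints: ripple
  return 0;
}

A real tool adds visualization and fluid movement between facets, but the underlying operation, intersecting tag sets across facets, is this simple.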

3.4.3 Tools

The range of tools available to haptic makers spans the software and hardware domains, supporting browsing, prototyping, authoring, and evaluation.

Content Collections

Libraries of effects were the first kind of software tool to achieve any kind of broad dissemination, coordinated with hardware platforms that became available for more widespread development. These software collections support developers by providing examples to browse and by making programming and customization faster and easier for sketching and refining. The UPenn Texture Toolkit contains 100 texture models created from recorded data, rendered through VT actuators and impedance-type force feedback devices [Culbertson et al. 2014]. The HapticTouch Toolkit [Ledo et al. 2012] and Feel Effect library [Israr et al. 2014] control sensations using semantic parameters, like “softness” or “heartbeat intensity,” respectively. Vibrotactile libraries like Immersion’s Haptic SDK [Immersion 2016] connect to mobile applications, augmenting Android’s native vibration library. VibViz [Seifi et al. 2015] structures 120 vibrations using a multi-faceted organization. Force feedback devices have software platforms like CHAI3D [CHAI3D 2016], H3D [H3DAPI 2016], and OpenHaptics [Geomagic 2016].
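A semantic parameter of this kind can be thought of as a thin mapping layer from a designer-facing term to low-level rendering parameters. The sketch below illustrates the idea for a hypothetical “softness” control; it is not the actual API of the HapticTouch Toolkit or the Feel Effect library.

// Hypothetical semantic-parameter layer: "softness" (0.0 hard .. 1.0 soft)
// mapped to low-level vibrotactile rendering parameters.
#include <iostream>

struct RenderParams {
  float amplitude;    // fraction of actuator maximum, 0.0 - 1.0
  float frequencyHz;  // drive frequency
  float attackMs;     // onset ramp time
};

// Softer sensations: lower amplitude, lower frequency, gentler onset.
RenderParams fromSoftness(float softness) {
  RenderParams p;
  p.amplitude   = 1.0f - 0.6f * softness;
  p.frequencyHz = 250.0f - 150.0f * softness;
  p.attackMs    = 5.0f + 45.0f * softness;
  return p;
}

int main() {
  RenderParams soft = fromSoftness(0.9f);
  std::cout << "amp " << soft.amplitude
            << ", freq " << soft.frequencyHz << " Hz"
            << ", attack " << soft.attackMs << " ms\n";
  return 0;
}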

Hardware Platforms

Haptic hardware prototyping used to be really hard. Even products like Phidgets [Phidgets 2016], which lowered barriers by sourcing physical interaction widgets and giving access to them from standard computing platforms [Greenberg and Fitchett 2001], did not help force feedback designers, because force feedback demands fast, low-latency control loops and high-quality hardware. Similar problems applied to making vibrotactile displays do more than emit annoying buzzes. Actuators capable of displaying more diverse sensations were the exclusive province of expert engineers.

The world has changed. Emergent mechatronic prototyping platforms, together with the takeoff of a “Maker” mentality and a new ease of quick-turnaround hardware component outsourcing, have radically altered the landscape for hardware rapid prototyping and sketching over the last several years. Perhaps the most impactful advance has been open-source microcontroller and development platforms, led by Arduino [Arduino 2016]. These have freed the designer from the painful choice between slow, irregular control updates from a general-purpose computer and tedious development cycles using embedded controllers, by making embedded control easy and inexpensive: performance and fast iteration at the same time. Expressive actuators like the Haptuator [Yao and Hayward 2010] can be ordered by hobbyists [Tactile Labs 2016] and controlled with audio. Even more recently, WoodenHaptics [WoodenHaptics 2016] gives open-source access to fast laser-cutting techniques for force feedback development [Forsslund et al. 2015]. Soon we can expect a marketplace of haptic designs and techniques, as already exists for other physical things, further spurring the haptic sharing economy. The benefit to haptic design is incalculable: not only is design democratized, but the ability to quickly explore large design spaces is expanding the gene pool of solution approaches.

However, we can do much better: these platforms require programming, hardware, and haptics expertise, and include inherent time costs like compilation, uploading, and debugging. As we will discuss later, outreach and online communities may help to support hardware platforms.

Browsing and Authoring Tools

As long as designers have considered haptic effects for entertainment media, they have needed compositional tools [Gunther et al. 2002]. A great deal of previous work has focused on how to prototype or author haptic phenomena using nonprogramming methods.

Many user-friendly interfaces help designers create haptic sensations, especially with vibrotactile devices. These tools often resemble familiar audio editors. The Hapticon Editor [Enriquez and MacLean 2003], Haptic Icon Prototyper [Swindells et al. 2006], and posVibEditor [Ryu and Choi 2008] use graphical mathematical representations to edit either waveforms or profiles of dynamic parameters (torque, frequency) over time. The Vibrotactile Score [Lee et al. 2009] is built around a musical schema and was shown to be generally preferable to programming in C and XML, but required familiarity with musical notation [Lee and Choi 2012]. Commercially, Immersion provides two tools: TouchSense Engage is a software solution for developers, while Touch Effects Studio lets users enhance a video from a library of tactile icons supplied on a mobile platform. Vivitouch Studio allows for haptic prototyping of different effects alongside video (screen captures from video games) and audio, and supports features like A/B testing [Swindells et al. 2014], a small-scale version of sharing. Macaron [Macaron 2016], an open-source, online editor [Schneider and MacLean 2016], moves browsing directly into the interface with an example window, facilitating remixes of existing vibrations, and was shown to directly support browsing, sketching, and refining.
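At the core of most of these editors is some form of keyframe interpolation: the designer places control points on a parameter-versus-time canvas, and the tool fills in the samples sent to the actuator. The sketch below shows that operation in its simplest form, linearly interpolating amplitude keyframes into a sampled envelope; it is a generic illustration, not the internals of any particular tool.

// Expand amplitude keyframes (time in ms, amplitude 0-1) into a sampled
// envelope, as a vibrotactile editor might before driving an actuator.
#include <iostream>
#include <vector>

struct Keyframe { float timeMs; float amplitude; };

std::vector<float> sampleEnvelope(const std::vector<Keyframe>& keys, float stepMs) {
  std::vector<float> samples;
  if (keys.size() < 2) return samples;
  for (float t = keys.front().timeMs; t <= keys.back().timeMs; t += stepMs) {
    // Find the segment containing t and interpolate linearly within it.
    size_t i = 1;
    while (i < keys.size() - 1 && keys[i].timeMs < t) ++i;
    const Keyframe& a = keys[i - 1];
    const Keyframe& b = keys[i];
    float alpha = (t - a.timeMs) / (b.timeMs - a.timeMs);
    samples.push_back(a.amplitude + alpha * (b.amplitude - a.amplitude));
  }
  return samples;
}

int main() {
  // Sharp attack, brief hold, slow fade-out.
  std::vector<Keyframe> keys = {{0, 0.0f}, {50, 1.0f}, {150, 1.0f}, {600, 0.0f}};
  for (float s : sampleEnvelope(keys, 100.0f))  // coarse step for readability
    std::cout << s << " ";
  std::cout << "\n";
  return 0;
}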
