Designing for Touch, Part 2: Josh Clark’s workshop at An Event Apart

In the second part of his workshop, Josh delved into designing for larger touch screens: tablets and touch-enabled laptops/hybrids. He also brought it around to using gestural interactions to optimize the experience on any size touch device.

He finished the day by showing how to help users discover invisible gestures, and then looked at the future of interaction inputs that go beyond touch.

If you missed my notes on part one of his workshop, be sure to check it out as well.

Larger touch screens

  • People hold tablets in different ways based on context.
  • Sitting at a table, you lean over the tablet, balancing it on the table. On the couch, you lean back, holding it freely. In bed, you lie back, resting it on your belly or chest.
  • In general, we hold tablets two-handed, on the middle of the sides.
  • Controls across the bottom are bad for tablets. We rarely hold a tablet at the bottom (too floppy), and it puts the controls out of easy reach of thumbs. It’s also out of your sightline, since it’s farther to move your eyes down than on a smartphone.
  • Center top isn’t a great solution either. When you go to tap it, you’re covering your content, and it’s not easy to reach with thumbs.
  • It makes more sense to place controls at the top corners of the screen (see slide in the photo at top of this post). It’s within easiest reach of thumbs and reaching for them won’t obscure content.
  • Controls along the sides also work, since that’s where thumbs naturally rest.

Touch on laptops and “hybrids”

  • People use touch on these devices to supplement the trackpad, keyboard, and mouse.
  • There are things the mouse is best for, but touch gets used whenever it’s simply the easier way to do the task.
  • The behavior with these is to rest your arms on the table and hold the base of the screen.
  • The thumb zone, then, is a sort of arc from the bottom and up the sides (see photo below).
  • Many people continue to use their right hand for other inputs, but keep their left hand on the base of the screen and use the left thumb for touch.

Josh Clark shows the optimal placement zone for controls on a hybrid touch laptop

How do we build for this?

  • We’ve been doing it wrong.
  • Screen size is a terrible way to detect a touchscreen.
  • But, echoing LukeW earlier, there’s currently no reliable way to detect touch.
  • There’s a new hope for CSS4 media queries (how far out is this, I wonder?)
    • @media (pointer:coarse) would detect that the primary input is imprecise, like a finger. Then you can design for the lowest common denominator (see the sketch just below).
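
While we wait for the CSS side to land, the same media feature is already reachable from JavaScript via window.matchMedia in browsers that support it. Here’s a minimal sketch, assuming only that support; the class name is a hypothetical placeholder:

    // Feature-detect a coarse primary pointer (i.e. a finger).
    // Support for (pointer: coarse) varies, so treat a non-match as
    // "unknown," not as proof that there's no touchscreen.
    var coarse = window.matchMedia('(pointer: coarse)');

    if (coarse.matches) {
      // Design for the lowest common denominator: bigger targets,
      // nothing that depends on hover.
      document.documentElement.classList.add('coarse-pointer');
    }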

New desktop design guidelines

  • All desktop designs have to be touch-friendly.
  • Hover is an enhancement. Sayonara, hover events! That doesn’t mean we can’t use hover, but it needs to be a progressive enhancement for those who use a hover-capable input (see the sketch after this list).
  • Move thumb controls to the left side.
  • Big touch targets to ensure fingers/thumbs can hit easily.
  • 44px rhythm to create a consistent design system based on our touch-target size.
  • Progressive disclosure. Clarity is better than density, even on desktop.
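
To make “hover is an enhancement” concrete, here’s a rough sketch that keeps tap/click as the baseline and layers hover on top only where the (hover: hover) media feature matches. The element and class names are hypothetical placeholders:

    // Baseline: tap/click toggles the menu, and works for every input type.
    var nav = document.querySelector('.site-nav');

    nav.addEventListener('click', function () {
      nav.classList.toggle('is-open');
    });

    // Enhancement: wire up hover only where a hover-capable input exists.
    if (window.matchMedia('(hover: hover)').matches) {
      nav.addEventListener('mouseenter', function () {
        nav.classList.add('is-open');
      });
      nav.addEventListener('mouseleave', function () {
        nav.classList.remove('is-open');
      });
    }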

Gestures

  • There’s an efficiency with touch—gestures can be a super fast way to take care of a task.
  • We have the opportunity to get away from the abstraction of UI elements like buttons and navigation items.
  • Instead, the user can interact directly with the content to move through their experience.
    • Gestural controls to move from one part to another.
    • Tap on a piece of content to get more details.
  • There are drawbacks, especially discoverability.
  • “I hate the iPad back button!” – Josh
    • Even though it’s the same as the iPhone button, it takes more effort to hit. You have to move across a much larger screen to peck at a tiny tap area.
    • What if there was a gesture like five-finger touch to get back instead?
  • Gestures are the keyboard shortcuts of touch. Even if you want to keep the visual affordance for an interaction, you can still provide the gesture for faster use (see the sketch after this list).
  • Let people be lazy. They’re going to be lazy anyway, so design in a way that it’s easy for them to use without a lot of effort.
  • “Semantic zoom” is becoming a standard affordance: pinch to move between levels of detail, not just to magnify.
  • Big screens invite big gestures.
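
To make the “keyboard shortcuts of touch” idea concrete, here’s a minimal sketch: the visible back button stays for discoverability, while a rightward swipe triggers the same action for faster use. The element id and the 80-pixel threshold are arbitrary choices for illustration:

    // The visible affordance stays for people who need to see it...
    function goBack() {
      window.history.back();
    }
    document.getElementById('back-button').addEventListener('click', goBack);

    // ...while a rightward swipe is the fast path for the same task.
    var startX = null;
    document.addEventListener('touchstart', function (e) {
      startX = e.touches[0].clientX;
    });
    document.addEventListener('touchend', function (e) {
      if (startX !== null && e.changedTouches[0].clientX - startX > 80) {
        goBack();
      }
      startX = null;
    });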

Imagine your data objects as physical objects.

  • This is about thinking through an object’s behavior: how it would act if someone could interact with it directly.
  • It’s not about what it looks like (i.e. skeuomorphic design).
  • Example: Clear app. You can “move” your items up and down the list, swipe to move them off your list, and pinch to “make room” for more content.
  • Example: Facebook’s experiment with their Paper app. All the content and navigation are objects that you can manipulate to view and move through.
  • Buttons are a hack.
    • Buttons take cognitive & physical effort.
    • They are an abstraction of a direct interaction with an object/content (true both in the real world and digital).
    • When we started designing digitally, we just copied the metaphors of the physical controls we already understood.
    • They’re still often necessary (e.g. light switch allows you to control the light that’s on the ceiling).
    • But we need to think before just plopping in another button (or other UI element). We need to decide if there’s a better way of doing it where the user can directly interact with content.
  • Example: TouchUp app. Instead of using a slider or brush picker to get a smaller brush, just zoom in on the whole photo. Your finger stays the same size, but when you zoom out, the line it drew is smaller.

Implementing touch on the web

  • It’s hard to code gestures in JavaScript. We only have touchstart, touchmove, and touchend right now.
  • hammer.js is a great library that can help detect more of these gestures (see the sketch after this list).
  • The browser itself limits you.
    • It has its own built-in gestures (pinch is reserved for zooming; tap-and-hold on a link is reserved for opening further options).
    • Tap and swipe are really the only reliable gestures available on web.
  • For now, we’re really still limited in this arena.
  • We need to keep pushing the browser makers and folks who work to codify the web to make code and rendering more friendly to touch.
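
As a sketch of what a library buys you over the raw events, here’s roughly what gesture handling looks like with hammer.js, assuming its v2-style API; the element id is a hypothetical placeholder:

    // hammer.js turns raw touch events into named gestures.
    var mc = new Hammer(document.getElementById('gallery'));

    // Pinch is disabled by default (it fights the browser's own zoom);
    // enable it explicitly if you really want it.
    mc.get('pinch').set({ enable: true });

    mc.on('swipeleft swiperight', function (ev) {
      // next/previous: the standard meaning of a swipe
    });

    mc.on('press', function (ev) {
      // tap-and-hold: contextual actions, touch's "right-click"
    });

    mc.on('pinchin pinchout', function (ev) {
      // ev.scale reports how far apart the fingers have moved
    });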

Standard gestures and uses are emerging

  • Tap: Used to activate, show more.
  • Swipe: Almost always used for next/previous motion. That means scrolling or flipping between cards and screens.
  • Double-tap: Mostly used for zooming in and out.
  • Pinch and spread: the more playful way of zooming in and out.
  • Tap and hold: Usually brings up contextual actions; it acts as a “right-click” for touch screens. This one isn’t discovered by many users and is used mostly by power users.

Discoverability

  • How do we find what we can’t see?
  • Unlike buttons or menus or icons, gestures aren’t visible to the user. We need to design how users will discover them.
  • People discover things by trying to interact in ways they’re familiar with from past experience.
  • Otherwise, we need to learn an interface.
  • “The message is the medium.” Let people interact with the things they would naturally want to interact with directly. They will find the gesture just by trying what comes naturally.
  • Teaching users:
    • Written instructions or videos are too long, and users won’t use them.
    • Show. Don’t tell. Show illustrations or animations of the gesture that users can understand quickly.
    • Do. Don’t read. 
    • Tutorial: Have users actually perform the gestures in a demo mode so they get the idea and reinforce it with muscle memory. Example: Mailbox app.
    • Walk-through: Users learn by performing the gestures through the actual app, with real content.
    • Bad example: Microsoft Office’s Clippy. He popped up at annoying moments. It wasn’t the concept that was bad; it was that he never learned what you already knew, so his timing was never helpful.
  • Nature doesn’t have instructions.
  • Apple’s skeuomorphic design in earlier versions of iOS and Mac OS X was purposely meant to look like physical objects to help users understand how they were meant to be used.
  • Much of the push toward “flat design” has more to do with an aesthetic shift than usability. It’s fine if you want to go flat, but make sure it behaves like a natural object, and that users understand that somehow.
  • If you do use skeuomorphic design, make sure it behaves like what it looks like. Bad example: the iPad contacts app looked like a book, but the original app didn’t let you swipe through pages.
  • Embrace the physical metaphor you’ve chosen, but marry it to the capabilities of the device and OS you’re using.
  • Kids are much better at touch interfaces because they aren’t holding on to old metaphors like mouse and keyboard.

Play more video games.

  • It’s research!
  • Games are great at teaching unfamiliar controls.
  • You start games without knowing anything. But game designers have gotten great at subtly guiding you through new controls, step by step adding to your understanding.
  • Three ways that games teach you:
    • Coaching: Simple demonstrations and prompts show you what to do. It’s not just telling you how to do it; it’s coaching you through interactions while you do them. As soon as users have shown they understand how it works, stop showing the tutorials.
    • Leveling up: Ease players into the app. Show them one interaction at a time. Show them a new interaction when it comes up in context of use. Force them to do the action before they can continue to the next step.
    • Power-ups: These are the keyboard shortcuts of video games. They’re usable by anyone, but most efficient in the hands of experts. Learning to use them gives the user a sense of accomplishment at being more advanced. Gamification is mostly fluff, but the real reward of learning a new skill is valuable.

Phooey to touch. Let’s look at the future.

  • The big shift in our new era of computing is the plethora of new sensors we have to use.
  • Touch, GPS, camera, microphone, light detector, proximity detector, accelerometer, compass, gyroscope.
  • So many of the first attempts at using these technologies have come across as mostly gimmicky.
  • But there are some cool examples, too.
  • Augmented reality:
    • Skinvasion game. Recognizes your face with the front camera and you play a game killing aliens on your face.
    • Ikea app. Scan items you like in the catalog, then view them on your screen, placed in your home environment via the camera.
    • Word Lens. Use camera to point at text on a sign, and it translates it to your language, but shows it in context of the graphical sign image (hard to explain, here’s a video).
    • Table Drum. With app open, it can use mic to listen for sounds as you tap on any surface to play back instrument sounds. You can even teach it that tapping on a glass means play the cymbal sound, for example.
  • Design for sensors, not just for screens. (Wait, weren’t we designing for people?? I get what he’s saying, though.) See the sketch after this list.
  • Custom sensors:
    • Asthmapolis: a sensor is placed on your inhaler. It talks to your phone via Bluetooth to let it know you’ve taken a puff. The phone adds GPS info and relays time and location to the server. It helps you track whether your asthma is under control, but also gives insight into conditions that may be causing issues for you.
    • Bluetooth sensors communicating back can quickly become creepy. Just like Mike Monteiro said, we need to be consciously deciding what we should be working on to make the world better.
    • Disney Research came up with Botanicus Interacticus. Putting a metal sensor into the soil lets you touch the plant and use it as a touch controller for your computer. (Holy crap, this is cool!)
    • In Switzerland, farmers have been trying out sensors in cows. When the cow is in heat, it sends a text to the farmer.
  • Digital is becoming physical (wearables), but the physical is also becoming digital with sensors.
  • Mirroring allows us to make personal device content social by placing it up on larger screens.
  • Everyone is trying to make “smart” appliances. But they’re often missing the point. I don’t want to read tweets and facebook on a refrigerator’s screen. I want it to be a better refrigerator with real, valuable improvements for its primary tasks.
  • Remote control is quickly becoming one of the most prevalent ways of using devices together.
    • iPhone as remote or game controller for TV, control security and lights at your home with your phone, etc.
    • It’s nice to maintain sanity by limiting the number of device platforms we use, so being able to use one device (the smartphone) for multiple purposes is awesome.
  • Migrating interfaces. Shared control among devices, with primary role shifting between them depending on your context.
  • Mind the gap. Start designing interactions in the spaces between devices. Here he’s referring to his talk earlier in the conference.
  • More great examples:
    • Great concepting using gesture prototypes.
    • A really cool device called Sifteo, where a PC runs the game but the inputs are little block devices you put together in different ways to play.
    • Knock to Unlock (ignore the creep in the demo video; I actually use this product, and it’s really cool and useful).
    • Nest thermostat and smoke detector. They can detect with proximity when you’re home, and they communicate with your apps and website. And of course, they have their primary sensors for temperature, smoke, etc.
  • Devices are likely to get dumber. That’s a good thing. Let the dumb stuff do the work, and talk to a handful of key smart devices we actually interact with.
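
For a sense of how much of that sensor array is already reachable from the web stack, here’s a minimal browser sketch using the geolocation, device motion, and device orientation APIs (availability, permissions, and accuracy vary widely by device):

    // Location: a single reading from GPS (or whatever the device has).
    navigator.geolocation.getCurrentPosition(function (pos) {
      console.log('position:', pos.coords.latitude, pos.coords.longitude);
    });

    // Accelerometer: continuous motion readings.
    window.addEventListener('devicemotion', function (e) {
      var a = e.accelerationIncludingGravity;
      if (a) { console.log('acceleration:', a.x, a.y, a.z); }
    });

    // Compass/gyroscope: device orientation in degrees.
    window.addEventListener('deviceorientation', function (e) {
      console.log('heading:', e.alpha, 'tilt:', e.beta, e.gamma);
    });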

The cloud

  • Part of it is the cloud as we know it today (services).
  • But it’s also sensors, mirroring, remote control, migrating control, passive interfaces.
  • It’s all the devices that are part of doing all these things.
  • Your API is the application. It carries the service/data across devices and platforms.
  • Devices are just the container. We need to start thinking in terms of designing services.
  • Likewise, the application you’re designing today is just one window into your service. It will be obsolete in a short time, and you’ll need to create new windows to keep the service alive and accessible.

Whew! I’m tired. But I got a ton out of this workshop, and out of the entire AEA conference. Once again, thank you to all the speakers and organizers for another awesome experience!