All object-oriented UIs have the same fundamental events: navigate to an object (usually represented by a graphic), select it, and then do something to it. How does the user navigate and select? Prior to windowing interfaces, we did it via the keyboard. Now we more often do it using a mouse, a touch pad, a touch-sensitive screen, or a pen interface like the Bamboo tablet. In the future we will probably do it by waving our hands and feet (à la Kinect).
Each of these methods maps a set of gestures to a set of actions, with lots of overlap. We get used to one device and then try to transfer those skills to another with widely varying success. I purchased a Bamboo tablet and pen recently to see what it can do and how it can give me additional functionality. I’m still trying to learn how to use the darn thing.
The tablet can be used like a touch pad or with the special pen. I am not a big fan of touchpads and routinely connect both a keyboard and a mouse to my laptop. For this and another reason I describe below, I turn off the touch pad functionality. The pen looks like a ballpoint pen (a basic UI affordance) with a tip and something that looks like an eraser, which, in fact, it is (albeit digitally).
The tablet area maps directly to the screen. That is, the top left corner of the tablet is the top left corner of the screen. The tablet's active area has the classic 4:3 screen ratio, but I have a widescreen monitor, so the aspect ratio of the screen is different from the aspect ratio of the tablet. Thus, there isn't a 1-to-1 mapping between distance traversed on the tablet and distance traversed on the screen when moving left and right.
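To see why the mapping distorts, here is a minimal sketch of absolute tablet-to-screen mapping. The dimensions are illustrative assumptions (a 4:3 tablet coordinate space onto a 16:9 display), not the Bamboo's actual resolution:

```python
# Illustrative dimensions: a 4:3 tablet area mapped onto a 16:9 screen.
TABLET_W, TABLET_H = 800, 600      # tablet coordinate space (4:3)
SCREEN_W, SCREEN_H = 1920, 1080    # screen pixels (16:9)

def tablet_to_screen(tx, ty):
    """Map a tablet position to a screen position (absolute mode)."""
    sx = tx * SCREEN_W / TABLET_W
    sy = ty * SCREEN_H / TABLET_H
    return sx, sy

# The horizontal and vertical scale factors differ, so equal pen
# movements on the tablet cover different distances on the screen:
print(SCREEN_W / TABLET_W)  # 2.4 screen pixels per tablet unit, horizontally
print(SCREEN_H / TABLET_H)  # 1.8 screen pixels per tablet unit, vertically
```

Because the two scale factors are unequal, a diagonal stroke on the tablet comes out at a different angle on the screen, which is part of what makes the device feel strange at first.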
To move the cursor on the screen, I can't touch the pen to the tablet; rather, I have to hover over it. Hmmm, why is this? It's because drawing on the tablet is a different gesture than moving the mouse around on the screen. Briefly touching or tapping the tablet == a mouse click at the pointer location, while drawing on the tablet == mouse down and drag from the current pointer position on the screen.
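That gesture mapping can be sketched as a tiny classifier. This is a hypothetical model of what a pen driver does, not Wacom's actual logic, and the tap threshold is invented:

```python
TAP_MAX_SECONDS = 0.2   # assumed threshold: shorter contact counts as a tap

def interpret(contact_duration, moved_while_down):
    """Classify one pen contact with the tablet.

    contact_duration: seconds the tip was touching the tablet
    moved_while_down: whether the tip moved while in contact
    """
    if not moved_while_down and contact_duration <= TAP_MAX_SECONDS:
        return "click"   # brief tap == mouse click at the pointer
    return "drag"        # touch and move == mouse down + drag

print(interpret(0.1, False))  # click
print(interpret(1.5, True))   # drag
```

Hovering, by contrast, generates only cursor-move events, which is why the pen must stay off the surface just to reposition the pointer.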
When you hover, you have to keep the pen tip fairly close to the tablet to be recognized, roughly half an inch or less. I have a hard time moving around without resting my hand on the tablet, which is also how I generally write. (Thank goodness I am right-handed in a left-to-right language locale or I would be smearing all over what I had just written.) This is the other reason I have to turn off the touch pad: the pen and the pad compete over their respective signals and the cursor jumps all over the place. Since the pen tip has to be really close to the tablet but not touching it, I have to keep reminding myself that it's easier to lift the pen up and then lower it when I think it's at the right location. But the tablet is not the screen, so I keep looking back and forth between the two to figure out where the pointer is. If only I had a touch-sensitive screen that recognized the pen.
So why use the pen? It's probably best used in a drawing and painting environment like Photoshop, Illustrator, or Fireworks for freehand drawing and touch-up; it is not a general replacement for the mouse. In fact, the tablet comes with a free version of Photoshop Elements and a special add-in in support of the pen. I've got to start playing with Elements to see what it can do in that app environment.
Note that the mouse and pen are compatible with each other, so they can be used together. Now if only I had two dominant hands to use them simultaneously.