Point and Touch

Touchscreens have made a difference. Until fairly recently, the assumption could normally be made that people would provide input to their machines using a keyboard or a mouse. But not now.

The people have spoken: we want touchscreens. Touchscreens more than justify the disruption they have caused. We need calm too, though, and that’s still a work in progress. This posting reviews some recent developments on the software standardization front concerning “pointer events” and “touch events”.


Touchscreen technology has actually been around since the 1960s. A great review of the history is available in Bill Buxton, “Multi-Touch Systems that I Have Known and Loved” (19 March 2013).

Displaying a keyboard on a touchscreen is not the most disruptive change. Indeed, that’s more or less the tactile equivalent of speech-processing software operating at the spoken-alphabet level. It’s the pinching, flicking and so on that’s new.

Making things work

To understand how such things work, particularly in connection with websites, we would normally start with the HTML5 documentation, and in particular its section 6.1.6: Events. That section lists the event handlers. These include oninput, onkeypress, onkeydown, and onkeyup. They include onmouseup, onmousedown, onmouseout, onmouseover, onmousemove, onclick, ondblclick, ondrag, ondragstart, ondragover, and so on. What we don’t find are the touchscreen gestures.
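The gap is easy to demonstrate. Here is a small ES5 sketch: the handler names on the first list come from the HTML5 event-handler section just mentioned, while the gesture names are hypothetical, which is exactly the point. Nothing like them appears in the specification.

```javascript
// Handler names from the HTML5 event-handler list (section 6.1.6).
var html5Handlers = [
  "oninput", "onkeypress", "onkeydown", "onkeyup",
  "onmouseup", "onmousedown", "onmouseout", "onmouseover", "onmousemove",
  "onclick", "ondblclick", "ondrag", "ondragstart", "ondragover"
];

// Gesture handlers a touchscreen developer might look for.
// These names are invented for illustration: HTML5 defines none of them.
var wishedFor = ["onpinch", "onflick", "onswipe"];

var missing = wishedFor.filter(function (name) {
  return html5Handlers.indexOf(name) === -1;
});

console.log(missing); // -> ["onpinch", "onflick", "onswipe"]
```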

There’s nothing about them in the DOM documentation or the JavaScript documentation either.

The current version of the “Document Object Model” specification is DOM4; we say we’re currently at level 4. The older levels are described in “Document Object Model (DOM) Technical Reports”. Events are discussed in section 4. It’s a pretty high-level discussion, not obviously device-specific.

JavaScript (ECMAScript) is similarly not device-specific in any obvious way. The current standard is ECMA-262, 5.1 Edition, and the word “event” only appears in the rather generic description of web scripting in section 4.1.


So where’s the documentation for gestures? Well, in the absence of standards, we have to look to the device manufacturers and the browser people. The four big browser families, according to Wikipedia, are “WebKit” (Wikipedia), used in Safari; “Blink (layout engine)” (Wikipedia), a 2013 fork of WebKit used in the newest versions of Chrome and Opera; “Gecko (layout engine)” (Wikipedia), used in Firefox; and “Trident (layout engine)” (Wikipedia), used in Internet Explorer.

In the WebKit world (which at that point included both Apple and Android), the big event was “Added interfaces for touch event support in JavaScript” (11 Dec 2009). Now, there’s also the “Safari DOM Additions Reference”, particularly in the following two sections: “TouchEvent Class Reference” and “GestureEvent Class Reference”. As noted in the first of these, “TouchEvent objects are combined together to form high-level GestureEvent objects that are also sent during a multi-touch sequence.”
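To make the two-layer model concrete, here is an ES5 sketch. The gesturechange event and its scale and rotation properties come from the GestureEvent Class Reference; the describeGesture helper and its thresholds are purely illustrative, not anything Apple specifies.

```javascript
// Classify a gesture from GestureEvent-style scale/rotation values.
// The thresholds (1.1, 0.9, 15 degrees) are illustrative choices.
function describeGesture(scale, rotation) {
  if (scale > 1.1) return "pinch open";
  if (scale < 0.9) return "pinch close";
  if (Math.abs(rotation) > 15) return "rotate";
  return "indeterminate";
}

if (typeof document !== "undefined") {
  // In Safari, gesturestart/gesturechange/gestureend fire during a
  // multi-touch sequence, alongside the low-level touch events.
  document.addEventListener("gesturechange", function (e) {
    e.preventDefault(); // suppress the browser's default pinch-zoom
    console.log(describeGesture(e.scale, e.rotation));
  }, false);
}
```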

There is also a helpful discussion of how to put it all together in the “Safari Web Content Guide”, particularly in its “Handling Events” section.


As noted, Android was part of the WebKit world until the very recent fork to Blink. JavaScript support in WebKit was noted above under the “Apple” heading.

In other respects, the normal starting point for developers would be to download the Android software development kit (SDK) and to check out the Android Open Source Project website. An additional interesting perspective on the developer world-view is provided by “Using Samsung Emulators for Android Application Development” (24 May 2011).

There are gentler introductions available elsewhere for beginners. Check out the Android Developers blog and the Samsung “Technical Docs”. With respect to touchscreens, I recommend starting with “Touch Mode” (December 2008). This explains how touch mode contrasts with trackball mode, navigation mode and keyboard navigation. Then, “On-screen Input Methods” (21 April 2009) explains how the Input Method Framework (IMF) can be used for such things as software keyboards. “Creating an Input Method” (27 April 2009) illustrates how the SoftKeyboard sample code in the SDK can be changed to create other input methods. Another helpful article on this topic is “Implementing a custom input method” (23 January 2013).

Finally, “Gestures on Android 1.6” (5 October 2009) introduced the Gestures Builder. As noted above, the WebKit-JavaScript interface followed soon after. A more recent introduction has also been provided in “Gestures in Android” (4 April 2012).

JQM, Microsoft and the W3C

Still pretty complicated, right? As I noted in a previous posting, jQuery can help to fill the gap. In particular, there’s jQuery Mobile: Touch-Optimized Web Framework for Smartphones & Tablets. The list of “jQuery Mobile Supported Platforms” doesn’t seem to leave much out. The “Events” demo lists touch events (tap, taphold, swipe, swipeleft, and swiperight); virtual mouse events; orientation change events; scroll events; page load events; page change events; page transition events; page initialization events; page remove events; layout events; and animation events.
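In practice, binding to these synthesized events looks much like binding to ordinary DOM events. A sketch: the event names (tap, taphold, swipeleft, swiperight) are from the Events demo above, but the classifySwipe helper and its distance thresholds are my own illustration, not jQuery Mobile’s actual internals or defaults.

```javascript
// Illustrative swipe classification: mostly-horizontal movement of at
// least 30px counts, too much vertical drift (>75px) disqualifies.
// These thresholds are illustrative, not the library's real defaults.
function classifySwipe(dx, dy) {
  if (Math.abs(dy) > 75) return null;  // too much vertical movement
  if (dx <= -30) return "swipeleft";
  if (dx >= 30) return "swiperight";
  return null;                         // too short to count as a swipe
}

if (typeof jQuery !== "undefined") {
  // jQuery Mobile synthesizes these events from raw touch input.
  jQuery(document).on("tap taphold swipeleft swiperight", function (e) {
    console.log("jQuery Mobile event: " + e.type);
  });
}
```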

The W3C very recently gave us a standard, “Touch Events version 1” (9 May 2013). It lists the following touch events: touchstart, touchend, touchmove and touchcancel. It has been implemented in Firefox; see “Touch events”.
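Handling the four standardized events might look like the following ES5 sketch. The event names and the touches list (each touch point carrying clientX/clientY coordinates) are from the specification; the touchCentroid helper is a hypothetical example of tracking multiple simultaneous touch points.

```javascript
// Average position of all active touch points, in the style of
// TouchEvent.touches: an array-like of objects with clientX/clientY.
// This helper is illustrative, not part of the specification.
function touchCentroid(touches) {
  var x = 0, y = 0;
  for (var i = 0; i < touches.length; i++) {
    x += touches[i].clientX;
    y += touches[i].clientY;
  }
  return { x: x / touches.length, y: y / touches.length };
}

if (typeof document !== "undefined") {
  var types = ["touchstart", "touchmove", "touchend", "touchcancel"];
  for (var i = 0; i < types.length; i++) {
    document.addEventListener(types[i], function (e) {
      if (e.touches.length > 0) {
        console.log(e.type, touchCentroid(e.touches));
      }
    }, false);
  }
}
```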

One should note that the “Touch Screen” specification features prominently on the “Web Events Working Group Patent Policy Status” page, where a number of patent claims by Apple are noted. As one might imagine, Apple’s actions have been commented upon. Haavard (whose views “do not necessarily represent those of Opera Software”) was early off the mark, with “Apple using patents to undermine open standards again” (9 December 2011).

Meanwhile, Microsoft was working on a different model, described in some detail in Scott Gilbertson, “Give the Web the Finger With Microsoft’s Proposed ‘Pointer Events’” (February 2013). On the same day that “Touch Events version 1” became version 1, the W3C also published Microsoft’s proposal as a “Pointer Events” Candidate Recommendation (9 May 2013). According to Gilbertson, the goal is “to provide a unified model for dealing with all the various input devices on today’s web, namely, the mouse, the stylus and the finger.”
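The unification is visible in the API itself: one set of registrations covers every device, and the event’s pointerType property says which one fired. The event names and pointerType values below are from the Candidate Recommendation; the describePointer helper is a hypothetical illustration.

```javascript
// Map the Pointer Events pointerType values ("mouse", "pen", "touch")
// to human-readable labels. This helper is illustrative only.
function describePointer(pointerType) {
  if (pointerType === "mouse") return "a mouse";
  if (pointerType === "pen") return "a stylus";
  if (pointerType === "touch") return "a finger";
  return "an unknown device";
}

if (typeof document !== "undefined") {
  var types = ["pointerdown", "pointermove", "pointerup"];
  for (var i = 0; i < types.length; i++) {
    // One registration handles mouse, pen and touch alike.
    document.addEventListener(types[i], function (e) {
      console.log(e.type + " from " + describePointer(e.pointerType));
    }, false);
  }
}
```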

Earlier, the jQuery blog had featured a very thorough posting by sgonzalez, “Getting Touchy About Patents” (10 April 2012). (It will be obvious how much I have relied upon this posting in preparing this note.) The posting makes the following statement, apparently on behalf of jQuery: “Regardless of which model the W3C chooses to pursue, jQuery is dedicated to filling in the gaps, just like we do for other events such as submit and change. We think the pointer event model is easier to use and more future-proof, and we hope that it can be standardized, even if Touch Events are standardized as well. … We would like to publicly call upon Microsoft to submit a proposal to the W3C for Pointer Events.”

And Microsoft did exactly that. Jacob Rossi, “W3C Transitions Pointer Events to Candidate Recommendation” (9 May 2013), writes: “This fast 5-month progression from First Public Working Draft to Candidate Recommendation is a mark of the effective collaboration between Microsoft, Google, Mozilla, Opera, Nokia, jQuery, and others to help sites take advantage of new interactive hardware on the Web.”

It will be interesting to see how all of this continues to develop in the months and years ahead.
