Imagine this: You are a busy lawyer with a multi-jurisdictional practice, and you frequently find yourself in different courtrooms or offices in various counties, states, provinces, etc. At each of these locations, you need access to relevant, location-specific information, such as local rules of the court. Now let’s assume you carry a networked mobile device with one or more “apps” giving you access to primary and secondary source material. The portal, while very modern, is still dumb, and by that I mean it requires you to navigate—whether by search, facets, tables, or indices—to the place where the relevant jurisdictional materials reside (assuming primary source information is what you’re after) or, worse, to slog through secondary source content that explains specific jurisdictional requirements.
But what if all of that information was reactive, meaning it changed dynamically based on your location, with the content shifting with each crossing of a county line? What if we took that even further and said the content changed as you walked into each courtroom, with local rules and general orders being pushed to the home screen of the app and important secondary practice guides morphing to reflect cases decided by that judge? What if, instead of your navigating to find the content, which already exists, it simply presented itself to you?
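To make the idea concrete, here is a minimal sketch of the detection half of that scenario: a standard ray-casting point-in-polygon test that maps a device coordinate to the county (and thus the rule set) it falls inside. The county names and rectangular boundaries are invented placeholders, not real geographic data; a real system would use published boundary files and a geospatial library.

```python
# Hypothetical sketch: resolve a device's (lon, lat) fix to a jurisdiction
# using the classic ray-casting point-in-polygon test. Boundaries below are
# invented toy rectangles, not real county lines.

def point_in_polygon(lon, lat, polygon):
    """Return True if (lon, lat) lies inside polygon, a list of (lon, lat) vertices."""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Count edge crossings of a ray cast horizontally from the point.
        if (yi > lat) != (yj > lat):
            x_intersect = (xj - xi) * (lat - yi) / (yj - yi) + xi
            if lon < x_intersect:
                inside = not inside
        j = i
    return inside

# Placeholder boundaries: two adjacent square "counties".
COUNTY_BOUNDARIES = {
    "Alpha County": [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)],
    "Beta County": [(10.0, 0.0), (20.0, 0.0), (20.0, 10.0), (10.0, 10.0)],
}

def jurisdiction_for(lon, lat):
    """Return the name of the county containing the point, or None."""
    for county, boundary in COUNTY_BOUNDARIES.items():
        if point_in_polygon(lon, lat, boundary):
            return county
    return None
```

With a lookup like this running against the phone's location sensor, crossing from one polygon into another is exactly the "county line" event that could trigger a content refresh.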
It has occurred to me that in an age of connected, location-aware mobile devices, we have a unique opportunity to improve the “findability” of legal information. And while it may represent an incremental step, it is an enormous leap towards the idea of smart data, that is, data that changes as sensor information touches it.
My thoughts about the marriage between sensors in mobile devices (e.g., phones, tablets) and primary and secondary legal source content (e.g., cases, rules, statutes, treatises, manuals) recently sprang from three posts. First was The Economist’s special report titled “It’s a smart world”:
[I]t is the smartphone and its ‘apps’ … that is speeding up the convergence of the physical and the digital worlds. Smartphones are packed with sensors, measuring everything from the user’s location to the ambient light. Much of that information is then pumped back into the network. Apps, for their part, are miniature versions of smart systems that allow users to do a great variety of things, from tracking their friends to controlling appliances in their homes. [¶] These and other services are bound to grow together into what Jan Rabaey, a computer scientist at the University of California at Berkeley, grandly calls ‘societal information-technology systems’, or SIS. [¶] More processing power and better connectivity [will] allow the construction of computing systems capable of storing and crunching the huge amounts of data that will be produced by these sensors and other devices. All over the world companies are putting together networks of data centres packed with thousands of servers, known as ‘computing clouds’. These not only store data but sift through them, for instance to allow a smart system to react instantly to changes in its environment. [Fn. *]
Second was Joe Esposito’s terrific piece on sensor publishing at Scholarly Kitchen last December, titled “The Ambient Authorship and Subtle Potential of Sensor Publishing”:
For some sensors, the phone’s owner may direct the inputs, as when someone uses a phone as a bar-code reader, but other sensors will operate autonomously — How fast is this car traveling? What is the humidity? What patterns can be detected (and what do they mean) when a phone’s ringer is turned on and off? All the data that is collected is then sent over the network to a database, where it is (or will be) analyzed in many ways. The output of that analysis could be a publishing product. [¶] I first stumbled across the role of sensors several years ago, when an entrepreneur pitched his new company to me. His idea was to affix sensors to fleets of trucks and collect data from these mobile sources. I will decline to say what he planned to do with the data (yes, it was an eye-popper), but it was a publishing idea — high-end analytic data delivered to a specialized market.
From these two pieces, I began thinking more about the inverse of Esposito’s proposition, namely that location and other sensors influence what digital primary and secondary source material you read rather than actually creating it. But my thoughts really didn’t come together until I read this third piece by the always insightful Craig Mod, titled “A Simpler Page”:
Tablets are in many ways just like physical books—the screen has well defined boundaries and the optimal number of words per line doesn’t suddenly change on the screen. But in other ways, tablets are nothing like physical books—the text can extend in every direction, the type can change size. So how do we reconcile these similarities and differences? Where is the baseline for designers looking to produce beautiful, readable text on a tablet? [¶] If the axis of symmetry for a book is the spine, where is it on an iPad? On one hand, designers can approach tablets as if they were a single sheet of ‘paper,’ letting the physicality of the object define the central axis of symmetry—straight down the middle. [¶] On the other hand, the physicality of these devices doesn’t represent the full potential of content space. The screen becomes a small portal to an infinite content plane, or ‘infinite canvas,’ as so well illustrated by Scott McCloud. [¶] Regarding iPad book design, designers are left with a fundamental question they must answer before approaching this device: Do we embrace the physicality of the device—a spineless page with a central axis of symmetry? Or do we embrace the device’s virtual physicality—an invisible spine defined by every edge of the device, signaling the potential of additional content just a swipe away?
It occurred to me then (and after studying Mod’s image below) that many publishers simply regard the content presented on an iPad and other mobile devices as “on rails.” We search, tap, swipe, swipe, and move through content going up or down, from one side to another and back again. But even if done elegantly, it’s still dumb as far as legal content (and presumably others) is concerned, and it doesn’t need to be.
When an author is creating, for example, a practice guide for civil trial lawyers, she is often faced with the prospect of addressing different jurisdictional requirements. The product’s taxonomy may reflect those different requirements through a rigid structure or an informal notation system (e.g., “circuit split” notation). My experience with many national treatises and manuals is that authors tend to reflect the rules and preferences of the largest jurisdictions while ignoring the smaller ones (e.g., they might say “consult your local rules” as guidance). In a print-only, non-mobile world, this practice makes complete sense. An author cannot (and a publisher will not) dedicate space to addressing every possible local permutation or supplementation of a rule. So the default is simply to inform the lawyer to be prepared because things might be different where she lives.
In the modern age of mobile devices though, we have an opportunity to extend this content because the information is born digitally and sensors can provide the location data to make it immediately observable. It is an infinite canvas of legal information that doesn’t require us to search and swipe our way through it. It simply goes to the right place.
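One way to picture the content side of this is a national baseline text plus per-jurisdiction overlays, keyed so that a detected jurisdiction selects the local variant and everywhere else falls back to the print-era "consult your local rules" default. The rule text and jurisdiction names below are invented placeholders for illustration only:

```python
# Hypothetical sketch: a practice-guide section stored as a national baseline
# plus per-jurisdiction overlays. The jurisdiction string would come from a
# location lookup; all content strings here are invented placeholders.

NATIONAL_BASELINE = "File the motion and supporting brief with the clerk."

LOCAL_OVERLAYS = {
    "Alpha County": "Alpha County Local Rule 7.1: 15-page limit; courtesy copy to chambers.",
    "Beta County": "Beta County General Order 3: e-filing mandatory; no courtesy copies.",
}

def render_section(detected_jurisdiction):
    """Compose the section text: baseline plus the local overlay, if any.

    Where no overlay exists, fall back to the generic guidance a print
    treatise would give.
    """
    local = LOCAL_OVERLAYS.get(detected_jurisdiction, "Consult your local rules.")
    return f"{NATIONAL_BASELINE}\n{local}"
```

The author still writes one baseline section; the overlays are what the app swaps in as the sensor reports a new jurisdiction, so the reader never has to navigate to them.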
[Fn. *] For an interesting law-related example, see Greg Lambert’s recent post on 3 Geeks concerning case law, legal history, and GPS.