Curating the Legal Web?
Much to the chagrin of the museum crowd, the last few years have seen a steady degradation of the term “curate.” A recent New York Times piece noted that the term “has become a fashionable code word among the aesthetically minded, who seem to paste it onto any activity that involves culling and selecting.” In this sense, perhaps everyone is a curator.
Now, as stimulating as an etymological debate on the word “curate” undoubtedly would be (e.g., Florida still uses the phrase “probate curator”), I’m not really interested in having it here. I raise the issue because I am attracted to the term’s current mutation as it relates to Web 2.0 (as opposed, say, to music or fashion), and more specifically to how it is and can be applied to analytical content.* And in this vein, one commentator has offered an observation:
We’ve just recently latched onto the idea of curation as though it were something new. The need for curation in the old media world wasn’t as obvious as in the internet world because, on the web, ‘everything carries the same weight’ and the average user has difficulty discerning good content from bad. … The buzz word ‘curation’ does carry with it some logic: As the sheer amount of information and content grows, consumers seek help parsing the good from the bad. And that’s where curation comes in. The amount of content available to consumers—much of it free of charge, but scattered across thousands of websites—is growing exponentially every day. At the same time, consumers are increasingly doing independent research and attempting on their own to source important information to support their increasingly complicated lives. Questions or information relating to healthcare, finances, education and leisure activities represent a small sample of the range of topics on which consumers look for accuracy and relevance, yet encounter an immense sea of specious or outdated content. In many ways, the web—in its entirety—is the new dictionary, directory or reference encyclopedia, but users with specific interests are increasingly beginning to understand they need to spend as much time validating what they find as they do consuming their research. In the old days, it was as simple as pulling the volume off the shelf and, while the web offers a depth and accuracy of content that far outstrips any from the old days, finding content of similar veracity can be a challenge.
In its broadest sense, there is plenty of legal-content curation going on. Slaw is a curator, with experts creating original content, publishing links, and editorializing on specific topics, cases, legislation, and the like. And while this is important and useful, it is not what I consider a challenge to traditional legal publishing, the kind of challenge Slaw contributor Jordan Furlong suggested nearly four years ago that blogs might mount:
Legal publishers need to understand that the number of competitors is not going to shrink—it’s going to multiply tenfold. And these competitors won’t have overhead, distribution, payroll or marketing costs to deal with—they’ll write when they want to, promote themselves by word of mouth, sell as much focused advertising as they like, and establish themselves as individual brand-name forces. Seth Godin is right: blogs are going to create thousands of expert media outlets with a total staff complement of one. It’s already started.
And indeed, since 2006 we have seen a rapid growth in legal media outlets, although I don’t think we could characterize all of them as “expert.” Regardless, thousands of lawyers and legal professionals are creating content, and more specifically, analytical material. Little of that content, however, is curated (i.e., evaluated, authenticated, and categorized). And if digital outlets are going to compete against traditional publishing companies, their collective analytical content—which is fast becoming substantial—will need to be managed.
Curating this growing body of analytical content will be difficult. It suggests a person-machine process of locating good content, separating it from the bad, and then categorizing, verifying, authenticating, and editorializing it. It will undoubtedly require a rich taxonomy to organize and manage the content for later discovery, clean metadata, and a good search engine, and it raises issues ranging from data permanence to copyright to brand dilution. It’s a mess. But a worthy one, I think.
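To make that person-machine process a little more concrete, here is a minimal sketch in Python of what a single curated record might look like, with taxonomy terms, a verification flag, and an editorial note. The field names, topic labels, and sample items are all hypothetical, not drawn from any existing service; the point is only that once content carries this kind of metadata, filtering and later discovery become tractable.

from dataclasses import dataclass, field
from datetime import date

# A hypothetical record for one piece of curated analytical content.
@dataclass
class CuratedItem:
    url: str
    title: str
    author: str
    published: date
    topics: list = field(default_factory=list)  # taxonomy terms, e.g. "IP > Copyright"
    verified: bool = False                      # has a human curator authenticated the source?
    editorial_note: str = ""                    # the curator's annotation

def filter_by_topic(items, topic):
    """Return only the verified items tagged with the given taxonomy term."""
    return [i for i in items if i.verified and topic in i.topics]

# Example: two items, only one of which has been vetted by a curator.
items = [
    CuratedItem("http://example.com/post-1", "Fair Dealing After CCH", "A. Blogger",
                date(2010, 5, 1), topics=["IP > Copyright"], verified=True,
                editorial_note="Clear summary; cites the leading cases."),
    CuratedItem("http://example.com/post-2", "Copyright Myths", "B. Blogger",
                date(2010, 5, 2), topics=["IP > Copyright"]),
]
print(filter_by_topic(items, "IP > Copyright"))  # only the verified item is returned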
Last year, Seth Godin wrote a post on when the writer becomes the publisher. He concluded it with the following comment:
Mark this down as another job for the new economy: someone who can collate, amplify and leverage the work of writers and turn it into cash. I don’t believe that there’s one solution, not this time. But I’m confident that around the edges and deep into niches, there’s money being made.
I think if someone wants this badly enough, they will find a way to make it happen and monetize it. When that occurs, we’ll have a real challenge to the status quo. In the meantime, let’s hope the duopoly doesn’t get to it first.
______________________
* I would add that the theft of the word here is not, as the museum crowd might have you believe, an act of self-aggrandizement. If it were, I would have opted for something like “connoisseuring” the legal web instead.
Jason: Thanks for this very interesting post. Our colleagues at ITTIG, Enrico Francesconi & Ginevra Peruginelli, have been working on an OAI-PMH-based curation system for legal analytical materials, described in their 2009 article, Integrated Access to Legal Literature Through Automated Semantic Classification, http://j.mp/9u3p4e . Also, a number of legal knowledge representation resources that might be relevant to your project appear here: http://j.mp/9WpCSI ; many of those available as Linked Data are listed here: http://j.mp/bzKTdg ; and some legal metadata resources that may be relevant are listed here: http://j.mp/cuzSU2 .
Robert,
Thanks for the information. By way of update as well, industry vet Michael Cairns notes that Elsevier appears to be “widening their content silos” by allowing reciprocal linking of scientific data provided by third parties. His post is here. The move might suggest what the future could hold for legal content on the web generally.
You’re on the mark, Jason. The ABA Journal has been curating legal news for three years, posting 25-50 stories a day that summarize and link to the most relevant and timely legal news stories from across the web.
And we’re categorizing each story, so readers can quickly find the news that is relevant to them. Check out today’s coverage here: http://www.ABAJournal.com. And see the dozens of topics we cover here: http://www.abajournal.com/topics/.
Our traffic has grown by 500 percent. That suggests, we think, an enormous need among lawyers for highly targeted content.
Ed,
I read the ABA Journal, and the interesting thing is that I have always felt it was more of a “news organization” than an “analytical destination.” That’s not a judgment on the quality of material available through the site; I think it is just my perception of what the ABA does.
You are right, the ABA is curating (and creating) a great deal of news and posts from throughout the legal web. You’re picking out good stories, articles, posts, etc. and putting them on the site. You have search, but I’m assuming the taxonomy used to keep it all organized is either relatively uncomplicated or, if it is complex, not something we can access easily through the current UI (e.g., via facets). But as a user, I want greater findability and a better understanding of that content.
I suppose the point of my post is whether we can wrap a wiki-like structure and interface around the legal web and make it a destination for learning about both general topics and specific issues, rather than just a portal for all results that match search terms. And that’s where the curation, metadata (taxonomy), and search are so important.
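To illustrate the kind of faceted view I have in mind, here is a minimal sketch in Python that tallies curated records by taxonomy term, the sort of count a faceted interface would display alongside search results. The records, field names, and topic labels are hypothetical, offered only as a sketch of the idea rather than any site’s actual data model.

from collections import Counter

# Hypothetical curated records, each tagged with one or more taxonomy terms.
records = [
    {"title": "Fair Dealing After CCH", "topics": ["IP > Copyright", "Litigation"]},
    {"title": "Patent Trolls 101", "topics": ["IP > Patents"]},
    {"title": "Copyright Myths", "topics": ["IP > Copyright"]},
]

def facet_counts(records, facet="topics"):
    """Count how many records carry each taxonomy term, for display as facets."""
    return Counter(term for r in records for term in r.get(facet, []))

print(facet_counts(records))
# Counter({'IP > Copyright': 2, 'Litigation': 1, 'IP > Patents': 1})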
As a note, the ever-entertaining Scott Greenfield has commented on this post on his blog Simple Justice, with a post of his own.