
A linked data web of a million easy pieces

(The following article is an op-ed piece and does not reflect the views of EnAKTing as a whole.)

The community of developers and researchers working on the Web of Linked Data is an extraordinary group of talented hackers. But anybody who joins the community quickly runs into the same problem: the number of tools at our disposal is minuscule compared to the massive quantity of core software tools and development frameworks available for “Web 2.0”. As a result, we often resort to building things from scratch – over and over again.

This becomes painfully clear when attempting to teach a new generation of software architects (e.g., university undergraduates) how to build Linked Data systems. “Is <tool> (e.g., rdflib) really the only triple store for Python?” Well, no, I reply, there are half a dozen others, but they are mostly long abandoned, woefully incomplete, unstable, buggy, or over-engineered and too complicated to use – or perhaps there are others still, but they are insufficiently advertised and unknown. How many server frameworks are there for Web 2.0 sites, by comparison? Nearly 300, and growing. How do you find them? Under the article entitled “Web frameworks” on Wikipedia. What is their adoption rate? Massive. How good are they? The best power web sites like Twitter and WordPress: good software.

It is thus no wonder that hundreds of Web 2.0 developers are born every second, while the number of new Linked Data systems grows by only dozens each year. But not for long: we are on the brink of a phase change, one that involves the Linked Data community adopting Web 2.0 culture as much as it involves the Web 2.0 community learning about rich data representations.

The Linked Data community is picking up one important lesson from Web 2.0: simplicity. The single most important factor in the success of a Web 2.0 framework or toolkit is how easy it is to understand and use. Simplicity begets understandability – a tool designed to do only one thing is easy to understand. The second lesson is robustness – nobody wants to rely on a system that is buggy or incomplete. These two priorities have pushed the best tools of Web 2.0 (e.g., server frameworks such as Django or client-side APIs such as jQuery) to become the most widely disseminated and re-used code on the planet. Out of these reusable bricks have grown the thousands of Web 2.0-style social networking and sharing sites we have today.

The tradition the Linked Data community is starting to leave behind, meanwhile, is that of building massive, opaque, integrated systems. Without wanting to name any here explicitly, one can fairly easily point to massive Linked Data systems that never gained adoption by a single real user because they simply tried to do too much at once. The lack of ready-made robust tools means that most of these systems started by re-inventing the platform: re-implementing triple stores, RDF parsers and APIs, and DL reasoners before implementing the application or desired user interface. In Web 2.0, the rough equivalent would be designing a web site by first writing one's own HTTP server, web framework, or templating language from the ground up – an obviously time-consuming exercise requiring substantial software development experience.

For our community to move towards the model of building Linked Data applications out of easy pieces, as Web 2.0 does, we need to encourage the development of tools and services that are 1) useful, 2) simple to use, and 3) reliable. There are so many difficult integration and representation challenges in Linked Data development that the first requirement (finding a need) is trivial. The latter two, on the other hand, require considerable thought, design, and (as with any tool for real human users) testing with real developers, iteration, and feedback.

One of the core tenets of EnAKTing has been to develop simple, essential software and services for the Linked Data web that are easy for all Linked Data developers to use – from casual to expert. While we are far from perfecting these tools, we have already seen considerable demand for several of these services – for example, our coreference (“sameAs”) service, which can be used to find whether two concepts are equivalent, and our Backlink service, which can be used to find incoming links to concepts on the distributed web of data. Our javascript-SPARQL proxy makes it possible for client-side code to query SPARQL endpoints directly, getting around difficulties such as the browser's same-origin restrictions. We have a host of other new services coming down the pipeline* that, together, we hope will help usher in a new generation of innovative Linked Data applications and developers.

But we can’t do it alone. What tools are you developing? What would you like to see? Let us know.

* Please see The enAKting Services for a list of services we are currently developing.

Max Van Kleek is a Senior Research Fellow and can be contacted at emax <at> ecs dot soton dot ac dot uk

One Comment

  1. Tim Harris wrote:

    When you say that there may well be useful tools around, but they are insufficiently advertised and hence unknown – is there not an abundance of technical software download sites that would host these programs?

    Thanks, T. Harris

    Tuesday, October 19, 2010 at 6:38 am

