I don’t like the linked data API, not because it’s a bad design, but because it produces scary, confusing HTML pages which people land on by accident, with no clue as to what to do next.
It’s my strong opinion that if you build a system which mass-produces HTML pages containing raw data dumps, you will alienate the people who arrive on them by mistake from search engines.
Currently we don’t have a viewer for individual payments, so you can see how our data site defaults to showing a raw data object:
http://data.southampton.ac.uk/payments/201106/payment-50350344.html
“This page shows raw data about this resource. We have not yet had the time to build a pretty viewer for this type of item. Don’t worry if it seems a bit arcane — you are looking under the hood of our service into the data inside! You can download this data in the following formats: rdf, ttl, json, xml.”
I think just telling people that this page isn’t the droids they are looking for will help them move along without distress, and should be considered good practice.
ps. The Linked Data API does one thing I have a real issue with: it can only be configured to follow “forward” relationships from a resource. So you can show a person with their name & phone number, follow the link to their address, and follow the link from there to their city. What you can’t do is follow an inverse link like “foaf:member”, which links from a group to a person, not from a person to a group. It could easily be fixed by a tiny tweak where you define an inverse relationship for the property and give that a label, and the system will look for both it and its reverse. This is much cleaner (and more semantic) than having people create inverse triples for every membership in their database just to appease the whims of the Linked Data API. I know I should join the community and argue for it there, but I don’t have time. Sorry.
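To make the forward/inverse distinction concrete, here is a minimal sketch in Python with rdflib (not the Linked Data API itself); all the URIs and names are invented for illustration. The person’s name and phone number are forward triples, but the foaf:member link only turns up if you also look at triples where the person is the object:

# Minimal sketch of the forward/inverse problem, in Python with rdflib.
# Everything here (URIs, names, phone number) is made up for illustration.
from rdflib import Graph, Namespace, URIRef, Literal

FOAF = Namespace("http://xmlns.com/foaf/0.1/")
EX = Namespace("http://example.org/")

g = Graph()
person = EX["alice"]
group = EX["research-group"]

# Forward triples: the person is the subject.
g.add((person, FOAF.name, Literal("Alice")))
g.add((person, FOAF.phone, URIRef("tel:+442380590000")))

# Inverse link: foaf:member points from the group to the person,
# so the person is the *object* of this triple.
g.add((group, FOAF.member, person))

# A viewer that only follows forward relationships sees just this:
print("Forward triples about Alice:")
for p, o in g.predicate_objects(subject=person):
    print(" ", p, o)

# To show the group membership you also have to look the other way:
print("Inverse triples pointing at Alice:")
for s, p in g.subject_predicates(object=person):
    print(" ", s, p)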
“you define an inverse relationship for the property and give that a label, and the system will look for both it and its reverse” – that would be nice. Ditto for transitive relationships and also super properties…and then potentially more complex OWL axioms.
Chris,
I’d strongly encourage you to provide tailored web pages for the HTML views of each endpoint rather than use the generic example.
The HTML view that I created for the linked data API is a demonstration of what you can do. It suits what we needed to create for data.gov.uk (with very little time and a huge variety of data) and is therefore fairly generic.
What I suggest is that you replace the XSLT that creates that HTML to give a view that is less scary and more tailored to the particular data that you want to present. You can create a different view for each endpoint, tailored to emphasise particular fields and hide others: whatever you want. The XML isn’t hard to process (it’s not RDF/XML).
Jeni
@jeni, I already do this, but I process the raw RDF document outside the linked data API. I’m unable to retrieve the triples I require using the LDAPI, as I often want ‘back triples’; for example, for a building I need both the rooms that are within it AND the site it is within. I’m not willing to double up by adding the inverse predicates to my data just to accommodate a quirk of the LDAPI.
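For anyone wondering what I mean by needing both directions, here is a rough sketch in Python with rdflib (my real code is PHP, and the ‘within’ predicate and all the URIs below are placeholders, not the real data.southampton.ac.uk vocabulary) of a single query that gathers both the rooms pointing at a building and the site it points at:

# Rough sketch of the "both directions" lookup, in Python with rdflib.
# The 'within' predicate and all URIs are placeholders for illustration.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
g = Graph()

g.add((EX["room-3077"], EX.within, EX["building-32"]))         # back triple: room -> building
g.add((EX["room-3058"], EX.within, EX["building-32"]))         # back triple: room -> building
g.add((EX["building-32"], EX.within, EX["highfield-campus"]))  # forward triple: building -> site

# One query that follows links in both directions around the building:
results = g.query("""
    SELECT ?related ?direction WHERE {
        { ?related ?p <http://example.org/building-32> . BIND("inverse" AS ?direction) }
        UNION
        { <http://example.org/building-32> ?p ?related . BIND("forward" AS ?direction) }
    }
""")
for related, direction in results:
    print(direction, related)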
Producing HTML from XML+XSLT is a painful process and I much prefer producing HTML using RDF+PHP.
You’ll notice almost all our other pages ARE processed to be pretty:
http://data.southampton.ac.uk/syllabus/feature/RSC-_DISABLED_ACCESS_-_AUDIENCE_%28A%29_Ground_Floor_Room_Fully_Accessible.html
http://data.southampton.ac.uk/programme-theme/0029/2010-2011.html
http://data.southampton.ac.uk/org/KX.html
etc.
I spent ages finding one that wasn’t, and that was unlikely to be done any time soon, so it would remain an example!