Southampton Open Data Blog

Good Data Practice

June 7, 2013
by Ash Smith

While building the University’s Open Data, we’ve seen many different types of data. Much of the information is exported from Oracle and MySQL databases, or from enterprise systems such as SharePoint, but the vast majority of what we work with is in a tabular format such as a spreadsheet.

Spreadsheets are actually a really good way of producing linked open data without any technical knowledge. A technical person just needs to write a single program or script that converts a spreadsheet into a computer-readable format, and anyone can then modify the spreadsheet to their heart’s content; the script just needs to be run again afterwards. But this approach makes it easy to fall into a very common trap caused by bad spreadsheet discipline.

Spreadsheets are generally designed for human use. Most modern spreadsheet packages, such as Excel, allow the user to include headings, cell colours and lines, and even to import images and other files. There are also no strict rules about data type, so you can type a list of numbers in a column, enter “N/A” or “see below” as part of the list, and the spreadsheet will not complain. This is fine for spreadsheets that only need to be read by people. However, when generating information that might one day be read by a computer, there is one very important 1975 Doctor Who quote you should remember: “the trouble with computers is that they’re very sophisticated idiots”. They can only handle what they’re programmed to handle. So if I were to write a program that converts a spreadsheet into linked open data, and someone were then to update a cell using the word ‘None’ rather than the number zero, the computer running my program would get confused and behave unexpectedly. This is why good data practice is essential when generating or updating data that may one day become linked open data.
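
To give a feel for what that means in practice, here’s a tiny sketch (in Python, and not our actual conversion code) of the kind of defensive checking a script ends up doing when a column mixes numbers with words like ‘None’ or ‘N/A’:

    def parse_count(cell):
        """Turn a spreadsheet cell into an integer, or None if it can't be trusted."""
        text = str(cell).strip().lower()
        if text in ("", "n/a", "none", "see below"):
            return None      # no usable value; flag for a human rather than guess
        try:
            return int(text)
        except ValueError:
            return None      # anything unexpected is flagged, not silently zeroed

    print(parse_count("3"))     # 3
    print(parse_count("None"))  # None, instead of crashing the converter

Every special case like that is one more thing the programmer has to anticipate, which is exactly why the rules below matter.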

So how can we avoid this? Well, one way is to employ super hackers who can pre-empt every possible anomaly in the data, but in a world of time and financial constraints this isn’t always an option! Joking aside, it’s a really quick and cheap fix to make sure that if you’re designing or editing a spreadsheet, you keep it as computer-friendly as possible. To this end, we’ve come up with what we consider to be the four most important rules for making your spreadsheet ‘linked-data-friendly’ (there’s a small sketch of the kind of checks these rules enable after the list).

  1. Standardise your data format
    Values should be numerical or a simple yes/no as far as possible. For example, if you were producing a list of food, rather than put ‘not suitable for vegetarians’ in a general comment field, add an extra column labelled ‘vegetarian’ and restrict the possible values to ‘yes’ or ‘no’. If this isn’t possible, keep to a small set of possible values and don’t deviate from these. ‘Red’, ‘Yellow’ and ‘Green’ is better than ‘Red’, ‘Burgundy’, ‘Yellow’, ‘Lime’, ‘Emerald’ and ‘Jade’, unless the exact shade of green is critically important.
  2. Keep free text to a minimum
    There is always room for a comments column. Sometimes we need to express something that can’t be represented as mere numbers. However, try not to put this in the actual data: the data should be as accurate as possible, and clarified by the comment field. So, for example, if you are maintaining a list of water coolers and their locations, you might have a ‘room’ column. If a cooler is in a corridor rather than a room, there are several ways you can represent this in a spreadsheet: you could leave the room empty and put ‘outside 2065’ in the comments, you could put ‘outside 2065’ as the room number, or you could put ‘2065’ as the room number and write ‘outside’ in the comments. The third way is the linked data way! We still have consistent, numerical data to represent the room, but the comment clarifies to a human reader that the cooler is actually outside the room rather than within it. The computer may not be able to make sense of the ‘outside’ comment, but at least it can get the closest room correct.
  3. Consistent, unambiguous identifiers
    Computer scientists often refer to ‘primary keys’, and information architects talk about ‘controlled vocabularies’, but at the end of the day we’re all talking about the same thing: a way of identifying a specific thing unambiguously. A good example of this is buildings on the University estate. Some buildings have names, some more than one, but all buildings have a number, so if you have a ‘building’ column in your data, make sure you use the number rather than the name. The same applies to rooms. A computer doesn’t understand ‘level 4 coffee room’ (and indeed many buildings may have a level 4 coffee room), but it does understand ‘32/4032’ (for example).
  4. Style is nothing to a computer
    Although you may like to use headers, coloured cells and so on, don’t rely on them for meaning. When you export a spreadsheet to its raw data form, all the styling is lost, so making the vegetarian options in a menu green is not a good way to identify them. If it’s important, it should have a column. By all means, make your spreadsheet as pretty as you like – just be aware that it’s not going to look like that to a computer.
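
Here’s the small sketch promised above: a rough illustration (in Python, with invented column names such as ‘vegetarian’ and ‘building’, and an invented filename) of the kind of sanity check that rules 1 to 3 make possible once a spreadsheet sticks to controlled values and numeric identifiers:

    import csv

    # Allowed values for controlled columns; the column names are made up for this example.
    ALLOWED = {"vegetarian": {"yes", "no"}}

    def check_rows(path):
        problems = []
        with open(path, newline="") as f:
            for n, row in enumerate(csv.DictReader(f), start=2):   # row 1 is the header
                for column, allowed in ALLOWED.items():
                    if row.get(column, "").strip().lower() not in allowed:
                        problems.append("row %d: '%s' should be yes or no" % (n, column))
                if not row.get("building", "").strip().isdigit():
                    problems.append("row %d: 'building' should be a building number" % n)
        return problems

    for problem in check_rows("menu.csv"):
        print(problem)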

There are other things, but these are the most important. Next time you start a spreadsheet, keep to these rules and it will be trivial to convert and add to the open data service. Once it’s in, it’s really easy for us to add lots of value to your data. That added value makes your data more visible, more accessible and more useful: people will use it to make their lives easier, which reflects positively on you and boosts your reputation.

How open data helps change management

June 6, 2013
by Christopher Gutteridge

Last year we built a system which aggregates event RSS feeds and makes a nice events calendar for the university. I was recently quite surprised to discover a way in which the information was being used.

“Change Management within iSolutions uses the Events Calendar open data in conjunction with other confidential data sources to build a rich understanding of critical University activities throughout the year. This enables iSolutions to schedule maintenance work to minimise service disruption to our users, and it also assists in understanding impacts to users when an unplanned service outage occurs.” – D.J. Hampton, IT Service Management & QA Team Manager

Open Data Hack-around day

May 22, 2013
by Christopher Gutteridge

When: June 26th, 10am-5pm
Where: Access Grid room, Level 3, Building 32 (and probably also the Level 4 coffee room for less formal stuff)
The data.southampton team will be hosting a hackaround day where we’ll give demos, take ideas, help you use the data and build neat things. The exact format of the day will be very loose, but anybody interested is welcome to drop in and have a chat, watch a demo or meet other interested people to start developing ideas and new uses.

If you’re definitely/possibly coming, you can indicate it on the Facebook page for the event.

Got some requests or ideas already? Leave them in the comments.

The Vacancies Dataset

April 11, 2013
by Ash Smith

Just recently I’ve been looking for data we can publish as RDF with minimal effort, without requiring access to restricted services or taking up people’s time. I came across the University’s jobs site. It uses a pretty cool system which exports all the vacancies as easily parsable RSS feeds, grouped into sensible categories. We have a feed for each campus, and a feed for each organisational unit of the University, so if a job appears in, for example, the feed for Highfield Campus as well as the feed for Finance, it’s a finance-based job on Highfield Campus. Because of this, it’s trivial to write a script that parses all the RSS feeds on the jobs site and produces RDF. So that’s what I did, and you can see the results in our new Vacancies dataset.
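
The conversion really is about as simple as it sounds. Here’s an illustrative sketch rather than the actual dataset script: it reads one vacancies RSS feed and prints a minimal Turtle triple per item. The feed URL is a placeholder, and I’ve used plain rdfs:label for the job title rather than the vocabulary the dataset actually uses:

    import urllib.request
    import xml.etree.ElementTree as ET

    # Placeholder feed URL; the real feeds live on the University's jobs site.
    FEED = "https://jobs.example.ac.uk/feeds/highfield.rss"

    with urllib.request.urlopen(FEED) as response:
        tree = ET.parse(response)

    # One minimal Turtle triple per vacancy, using rdfs:label for the job title.
    for item in tree.iterfind(".//item"):
        link = item.findtext("link", default="").strip()
        title = item.findtext("title", default="").strip().replace('"', '\\"')
        if link and title:
            print('<%s> <> "%s" .' % (link, title))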

Normally when I produce a new dataset I like to provide a clever web tool or search engine to make use of the data, but this time I haven’t, because the jobs site already does this very well. So why republish the data at all? There are two reasons. Firstly, our colleague at Oxford University, Alexander Dutton, has already done this with Oxford’s vacancies. If we do the same, using the same data format, we’ve effectively got a standard, and if other organisations begin to do the same thing, suddenly the magic of linked open data can happen. The second reason is that SPARQL queries are now possible. They’re a bit advanced for the layman, but if you were looking, for example, for a job at Southampton General Hospital paying £25K or more, you could write a SPARQL query that does all the hard work for you, and the same query would work with Oxford’s data, although obviously you’d need to replace the location URI with one of theirs.
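
To give a flavour of what that looks like from a script, here’s a hedged sketch of running such a query over HTTP. The endpoint path is assumed rather than documented here, and the label and salary predicates are placeholders, so check the dataset pages for the real terms before relying on this:

    import json
    import urllib.parse
    import urllib.request

    ENDPOINT = ""   # assumed endpoint path
    QUERY = """
    SELECT ?vacancy ?label ?salary WHERE {
        ?vacancy <> ?label ;
                 <> ?salary .   # placeholder predicate
        FILTER(?salary >= 25000)
    }
    """

    request = urllib.request.Request(
        ENDPOINT + "?" + urllib.parse.urlencode({"query": QUERY}),
        headers={"Accept": "application/sparql-results+json"},
    )
    with urllib.request.urlopen(request) as response:
        results = json.load(response)

    for row in results["results"]["bindings"]:
        print(row["label"]["value"], row["salary"]["value"])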

Feel free to have a poke around at the data and, as always, if you manage to come up with a cool use for this data – even just an idea – then please let me know.


March 20, 2013
by Christopher Gutteridge launched today. It will provide a hub for linked data and open data services, and aggregate open data from UK academia. It’s been set up by the data.southampton team, but it’s owned by the community of open data services.

Our equipment dataset is now aggregated by and there is a nifty search.

New Feature! The Room Finder

March 19, 2013
by Ash Smith

We now have a tool that allows anyone in the University to find a suitable room for their event. We call it the Room Finder and I for one am rather proud of it. The tool pulls data from the places dataset, the room features dataset and the new room bookings dataset, and is a really simple way of finding a room at the University of Southampton. Let’s say, for example, that you need a room for a lunchtime meeting on Friday somewhere on Highfield Campus – and by the way, the room must contain a data projector and a piano. Using the Room Finder, you can check to see if such a room is available at the time you need and, if so, click through to the room description pages to find out more. The tool doesn’t currently allow you to actually book the room, but it’s hoped that many phone calls to Estates and/or the central booking service can now be avoided as we continue our ongoing mission to get all the University’s useful data onto the web.

The Room Finder is still under development, so things will change in the coming days. Specifically, I’m not completely happy with the way it displays the features list; it’s still a little more technical than it needs to be. We’re also hoping to get a mobile version out soon, as it’s a bit fiddly to use on a small screen. But as with everything on this site, I hope it shows just how useful open data can be. If you do find a problem with it, have a request for an additional feature, or just find it useful and want to let me know, then feel free to drop me an email at

Room Features updated

January 24, 2013
by Ash Smith

Over the last few weeks, Patrick has been exploring the university’s central data store looking for information on rooms and the features they contain. We’ve always had room features on, but they were all generated from a single XML file given to Chris some years ago, and things change over time. So thanks to Pat’s fearless efforts investigating the central Oracle database, we now have a couple of scripts to pull not only room features, but booking information as well. A quick RDF generation script from me later, and we now have a method of ensuring the open data is as up to date as the university’s central database.

This is quite a big deal in my opinion – anyone planning a lecture or event can now view room information from the web and work out which rooms are suitable and available at the required time without having to phone Estates or walk across campus in the rain. Also, updating our data after such a long time is interesting for noting how things change over time; if nothing else, audio/visual technology is improving while chalk blackboards are definitely getting rarer!

FOI Man Visit

January 18, 2013
by Christopher Gutteridge

Last week we had a visit from Paul Gibbons aka “FOI Man”. He works at SOAS and came down to Southampton to see what we’ve been up to with open data.

He’s written it up in his blog.

At Southampton the FOI-handling stuff and open data have only a nod-in-the-corridor relationship, but there are some obvious wins in working together.

In other news, we’ve got more data in the pipes and are writing importers for it over the next few days. We’ve also had a meeting about moving some core critical parts of the open data service into “BAU” (business as usual), so that there are people outside our team who know how to maintain it, and so that the core is (change) managed more formally. This is essential if we want open data to be part of the long-term IT strategy and not a glued-on bit at the edge.

I’m also thinking about the fact that we have very spotty data on research group building occupation, and so forth. By rights this data probably belongs to the “Faculty Operating Office”, but they are busy and don’t answer my questions very often. A cunning plan has entered my mind… make a ‘report’ URL for each faculty which provides a spreadsheet with what we know about their faculty, and let them download it and send it back to us. I think they could ‘colour in’ the missing information in a few minutes, and a spreadsheet with blank cells in it will express the problem to the management/administrator mindset far better. To me, it’s just data, but then I’m a data nerd, and we’re learning that you have to let the data owner work with data in a way that makes sense to them.

Getting Real

December 18, 2012
by Christopher Gutteridge

Up until now the open data service has been run with a pretty much seat-of-our-pants approach. We’re actually at the point where one of our services, the events calendar, really needs to graduate into a normal university service. It requires a little regular TLC to deal with broken feeds; there are 74 feeds, so some break now and then. They were always breaking, but now at least someone notices. I (Chris) recently attended a course on the University “change management” process (which is basically getting sign-off to modify live services, to reduce the impact and manage risk). I was pleasantly surprised to hear that the change management team actually use the events calendar to check whether a change to live IT services might cause extra issues (e.g. don’t mess with the wifi the weekend we’re hosting an international conference).

I always said that the success criterion for was that it becomes too important to trust me with (tongue in cheek, but not actually a joke). And, lo and behold, management has asked me to start looking at how to begin the (long) journey to it becoming a normal university service.

I feel some fear, but not panic.

I’ve been trying to think about how to divide the service into logical sections and consider them separately.

I’ve discussed the workflow for the system before, but here’s a quick overview again.

Publishing System: This downloads source data from various sources and turns it into RDF, publishes it to a web-enabled directory, then tells the SPARQL database to re-import it. It has just been entirely rewritten by Ash Smith in command-line PHP. An odd choice, you might think, but it’s a language which many people in the university web systems team can deal with, so it beats Perl/Python/Ruby on those grounds. We’ve put it on GitHub. The working title is Hedgehog (I forget why), but we’ve decided that each dataset workflow is a quill, which sounds nice.
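
To make the shape of a quill concrete, here’s a very rough sketch of the workflow for a single dataset. It’s in Python rather than PHP, everything in it (the URLs, paths, converter script and the store’s PUT interface) is invented for illustration, and Hedgehog itself does considerably more:

    import subprocess
    import urllib.request

    # Hypothetical locations; a real quill gets these from Hedgehog's configuration.
    SOURCE = ""
    DOCUMENT = "/var/www/data/example.ttl"
    GRAPH_PUT = ""   # assumed store PUT interface

    def run_quill():
        raw, _ = urllib.request.urlretrieve(SOURCE)              # 1. download the source data
        subprocess.run(["./convert-example", raw, DOCUMENT],     # 2. turn it into RDF
                       check=True)                               #    (made-up converter script)
        with open(DOCUMENT, "rb") as f:                          # 3. publish the document and ask
            put = urllib.request.Request(                        #    the store to re-import it
                GRAPH_PUT, data=f.read(), method="PUT",
                headers={"Content-Type": "text/turtle"})
            urllib.request.urlopen(put)

    run_quill()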

SPARQL Database: This is 4store. It effectively just runs as a cache of the RDF documents the publishing system spits out; it contains nothing that can’t be recreated from those files.

SPARQL Front End: This is a hacked version of ARC2’s SPARQL interface, but it dispatches the requests to the 4store. It’s much friendlier than the blunt, minimal 4store interface, and it also lets us provide some formats that 4store doesn’t, such as CSV.

URI Resolver: This is pretty minimal. It does little more than look at the URI and redirect you to the same path on data.soton. It currently does some content negotiation (deciding whether /building/23 should go to /building/23.rdf or /building/23.html), but we’re thinking of making that a separate step. Yeah, it’s a bit more bandwidth, but meh.
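
The negotiation itself is nothing clever. As a toy illustration (simplified, and not the resolver’s actual code), it boils down to something like this:

    def negotiate(path, accept_header):
        """Pick a redirect target for a URI like /building/23 from the Accept header."""
        rdf_types = ("application/rdf+xml", "text/turtle", "text/n3")
        wants_rdf = any(t in accept_header for t in rdf_types)
        return path + (".rdf" if wants_rdf else ".html")

    print(negotiate("/building/23", "text/html,application/xhtml+xml"))   # /building/23.html
    print(negotiate("/building/23", "application/rdf+xml"))               # /building/23.rdf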

Resource Viewers: A bunch of PHP scripts which handle all the different types of resource, such as buildings, products and bus stops. These are a bit hacky, and the Apache configuration under them isn’t something I’m proud of. Each viewer handles all the formats a resource can be presented in (RDF, HTML, KML etc.).

Website: The rest of the website is just PHP pages, some of which run a SPARQL query to get information.


So here’s what I’m thinking of doing to get some of this managed appropriately by business processes.

As a first step, create a clone of the publishing system on a university server and move some of the most stable and core datasets there. Specifically the organisation structure: codes, names, and parent groups in the org-chart, and also the buildings data — just the name, number and what site they are on. These are simple but critical. They also happen to be the two datasets that the events calendar depends on and so would have to be properly managed dependencies before the calendar could follow the same route.

The idea of this second data service (let’s call it is that it would only provide documents for each dataset; all the fun stuff would stay (for now) on the dev server, and I really don’t want iSolutions monkeying around with SPARQL until they’ve got at least a little comfortable with RDF. The Hedgehog instance on would still trigger the normal “beta” SPARQL endpoint to re-import the data documents when they change.

We could make sure that the schema for these documents was very well documented, and that changes were properly managed and could be tested prior to execution. I’m not sure how, but maybe university members could register an interest so that they could be notified of plans to change these documents. That would be getting real value out of the process. For the buildings dataset, which is updated a few times a year, maybe even the republishing should come with prior warning.

The next step would be to move the events calendar into change management, and ensure that it only depends on the ‘reliable’ documents. The service is pretty static now in terms of functionality; although we’ve got some ideas for enhancements, these could be minor tweaks to the site, with the heavy lifting done on the ‘un-managed’ main data server.

Don’t get me wrong, I don’t love all this bureaucracy, but if open data services are to succeed they need to be embedded in business processes, not run as quick hacks.

Apps wanted!

December 13, 2012
by Ash Smith

Southampton’s Open Data is really gathering momentum now, and is being used for many things. Personally, I like the “cool stuff” approach, as it allows people to see what’s really possible with Open Data. Recent additions to our “cool stuff” are the university events page, which gathers event information from all over the university and makes it available in one searchable index, and the workstation locator, which allows members of the university to locate an available iSolutions workstation nearby, using a GPS-enabled smartphone if they prefer. I’m currently liaising with the providers of the council’s live bus data to make sure that no existing apps break when the system goes live again, which should be in a few weeks’ time. I’m making it my top priority not to inconvenience application developers, and data integrity is something I take very seriously. After all, if I were to develop a cool app based on external data and a week later the data format changed for no good reason, I probably wouldn’t trust that data not to change again. So if you’ve developed an app that uses our bus data and find it’s suddenly started behaving strangely, please get in contact and I’ll do everything I can to help. The system may go up and down in the coming weeks while we iron out some bugs, but the last thing I want is everyone having to re-implement their apps because of something we’ve done.

With cool apps in mind, I’d like to take this opportunity to publicise the university’s Open Data Competition, an initiative designed to encourage developers to use our data. If you can’t program, don’t worry: you can submit an idea for a cool app without having to develop it yourself. The competition also accepts visualisations of our data, so if you’re into statistics or making mash-ups, this may be your chance to impress the judges. There’s a £200 Amazon voucher up for grabs for the winner of each category, and £100 vouchers for the runners-up. Don’t feel you have to restrict your ideas to the data we provide; data is best when linked. It’d be really nice to see something that provides a useful service by combining our data with that of, say, the government or the police.