At the moment, all the RO Builder gives as output is RDF, which isn’t all that exciting for many people. To remedy this, several visualizations will be created, so that users of this tool have something to look at that isn’t just a mass of triples. Here I’ll list the ideas that have popped up so far.
Idea one: a (customisable) web page. The tool would create a web page (or small group of pages, depending on size) based on the RDF given. For the sorts of objects I’ve been working with, there would be a section for projects associated with the research, a section for associated people, one for papers, and so on. These sections would be internally linked to each other; that is, a paper’s representation could list all the authors of the paper, and each of these would link to that author’s main section in the page. It should be somewhat customisable too, at least to change the order of the sections, delete unnecessary ones, that sort of thing. As such, I envision some kind of edit mode for the page, possibly behind authentication of some kind, presented as an overview of the page: each individual section and subsection would be highlighted, and they ought to be drag/droppable into different positions, with an icon on each block to allow its deletion. A couple of default stylesheets will be provided, but as the structure of the document should be consistent, it’s easy enough for anyone to create their own or change a pre-existing one.
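To make the sectioned-page idea concrete, here is a minimal sketch of the grouping-and-anchoring step: resources get bucketed by type into sections, and each entry carries an `id` so other sections can link to it. The tuple structure and the type labels are hypothetical stand-ins for whatever the shims actually produce, not the real data model.

```python
# Sketch: group resources by type into sections and emit internally
# linked HTML. The (id, type, label) tuple shape is an illustrative
# assumption, not the RO Builder's actual data model.
from collections import defaultdict
from html import escape

def build_page(resources):
    """resources: list of (id, type, label) tuples."""
    sections = defaultdict(list)
    for rid, rtype, label in resources:
        sections[rtype].append((rid, label))
    parts = ["<html><body>"]
    for rtype, items in sections.items():
        parts.append(f"<h2>{escape(rtype)}</h2>")
        for rid, label in items:
            # each entry gets an anchor so other sections can link to it
            parts.append(f'<p id="{escape(rid)}">{escape(label)}</p>')
    parts.append("</body></html>")
    return "\n".join(parts)

page = build_page([
    ("p1", "Paper", "A Study of Research Objects"),
    ("a1", "Person", "Alice Example"),
])
```

The drag/drop reordering and deletion described above would then just be client-side manipulation of these section blocks.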
Idea two: a graph, differing from the usual RDF graphs you see in that it would try to keep things simple – at around the same level of complexity to the end user as the RO Builder itself. That is, it would show only the significant relationships between objects, hopefully arranged in some way that makes sense (all people in one place, all papers in another). There are several ways this could be accomplished; two that I’ve been looking at are the RDF Graph Visualizer, part of the OAT framework, and using graphviz to generate an SVG graph, which could then have all kinds of fun going on with something like raphaeljs. Because the graph would be visualizing things at a reduced level of detail, clicking on an element could take people who want the detail to either a website representation of the element, or just a graphite dump of it.
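A rough sketch of the graphviz route: reduce the RDF to the significant relationships, then emit DOT, which `dot -Tsvg` could render into an SVG for raphaeljs to play with. The triple labels here are invented for illustration.

```python
# Sketch: turn a reduced set of (subject, predicate, object) label
# triples into Graphviz DOT. Only the user-facing relationships are
# assumed to have survived the filtering step.
def triples_to_dot(triples):
    lines = ["digraph ro {"]
    for s, p, o in triples:
        # one labelled edge per significant relationship
        lines.append(f'  "{s}" -> "{o}" [label="{p}"];')
    lines.append("}")
    return "\n".join(lines)

dot = triples_to_dot([("Alice", "authored", "Paper B")])
```

Grouping all people in one place could then be done with graphviz subgraph clusters, one per resource type.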
Idea three: similar to PeoplePivot, something that attempts to visualize the research through time. This would probably take the form of a web page with a bar at the bottom controlling how you move through the research chronologically; that is, you could select (on a slider) a range and a date, and the page would display the content published within that time period, with links to the other content that may not have been published, released, or had anything significant happen to it at that time. It could also work quite well as a graph, but rather than showing/hiding elements, it would highlight them or otherwise make them stand out, with some way of showing which events are the reason a resource ought to be highlighted – otherwise the graph would end up not looking very meaningful.
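The slider logic above amounts to partitioning content by date: items inside the selected range are shown, the rest are merely linked. A tiny sketch, with an invented item shape of (title, published date):

```python
# Sketch of the timeline slider: split content into items to display
# and items to only link to, based on a selected date range. The
# (title, published) item shape is a hypothetical example.
from datetime import date

def split_by_range(items, start, end):
    shown, linked = [], []
    for title, published in items:
        (shown if start <= published <= end else linked).append(title)
    return shown, linked

shown, linked = split_by_range(
    [("Paper A", date(2010, 3, 1)), ("Paper B", date(2011, 6, 1))],
    date(2010, 1, 1), date(2010, 12, 31),
)
```

The graph variant would use the same partition, but map "shown" to highlighted and "linked" to dimmed rather than hiding anything.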
I have been asked to introduce myself and the project I am on. Hi, I’m Rikki. I am currently finishing my PhD and working on a JISC funded project called the “Southampton Student Dashboard”. I have blogged a bit before about life as a student, in case you want a sense of my style.
The Southampton Student Dashboard (I’ll drop the “Southampton” from now on) is a project associated with the Southampton Learning Environment (SLE), and aims to provide a single view of a student’s data to both the student and the various members of university staff that interact with that student.
The hope is that the Student Dashboard will provide a more student-centric approach, by allowing anyone dealing with a student to have all of the appropriate information to hand, without having to contact the myriad of existing owners of this data. This will hopefully enhance the student’s experience and improve retention.
We’ve identified a number of challenges that lie ahead for us. Retrieving the data we need to feed the Student Dashboard will involve both social and technical hurdles. There are clearly going to be privacy issues involved in exposing this data, and close attention will be paid to protecting sensitive information. There will also be the chance to explore ways of exposing different views of the data to different people, depending on their role and relationship to the student.
At the moment, we are meeting with the various stakeholders (student services, academic tutors, senior tutors, students) to determine which data they require in their interactions with students. We are also in discussions with the computing services staff (iSolutions) about how we will actually access the data from the various existing systems.
The following stages will involve more of the stakeholders in a co-design of the system and interface, which will lead to a prototype being constructed and piloted.
It is looking like an interesting project and I look forward to posting more about it on here. Stay tuned!
We have now got the PANFeed software up to a nearly acceptable standard. In the interest of releasing early and often, we have released a tarball and some simple install instructions, so that if you like you can set up your own PANFeed-like service for yourself or your university. If, for example, you don’t like relying on a service external to your campus to manage your news, why not install your own PANFeed? Maybe you want to catalogue a specific type of RSS feed, or add some functionality to the PANFeed code base. We have made PANFeed available under the GNU GPL, so you can modify it all you like.
If you like PANFeed, discover any bugs or write a cool extension, then get in touch and tell us by email on email@example.com.
You can find the Release here: http://code.google.com/p/panfeed/downloads/detail?name=personal.tar.gz
If you would like to keep up with development as it happens: http://code.google.com/p/panfeed/
Or if you just want to use our live service: http://panfeed.ecs.soton.ac.uk
The driving reason behind developing the SLE is to provide students and staff with a fresh collaborative environment in which they can build a community for themselves and each other. Communities are fantastic, because they give users a sense of pride in the software they’re using – and what better way to give users this affinity than by letting them develop features of the software to share with their peers?
However, as it stands now, if a user wants to help develop a feature they first need to find the appropriate feature request form (not an easy task in itself), print it out, decipher the three pages of business jargon, fill it in, and return it. We want to make things as easy as possible for our users, and all this achieves at the moment is badly filled-in forms, or people simply not bothering to develop features.
As a result, one of the features we are developing is a sensible, online feature request form. Hopefully, this should make things easier on both sides: users will be able to quickly and simply explain what feature they want and what support they will need, and the development team will know exactly what needs doing, how to do it, and when it needs to be completed – and best of all, everyone will have an interesting new feature to use in the community.
Now that the RO Builder has actually taken shape, this is a blog post to show what it looks like, and the current capabilities and limitations of the UI. First, a screenshot showing what the tool looks like before adding any content:
As you can see, there are two fields for adding objects: ones that have URIs, and user-created ones – blank nodes. There is also a field which allows you to relate objects to each other after they’ve been added. When you add a URI, it is handled by a shim, which generates RDF describing the object and passes it back to the tool; the tool then generates the representation of the object. Here is the tool after adding two papers, and what’s contained in the folding:
As you can see, adding an EPrint also adds all the authors as people, along with relations showing that they are the authors of a particular paper, and their resource entries are grouped under that fold. The resource folding is such that if the same author is associated with a different resource fold, or is added as a stand-alone resource, de/selecting any of their entries will deselect the lot. Relations are all folded, and added to a fold named after each resource; for example, if person A authored paper B, then the relationship “A authored B” will appear under both the fold for all of A’s relations and the fold for all of B’s. As with resources, selecting/deselecting a relation works globally. Adding a relationship between two resources will add the second resource into a group for the first resource – or turn a singular resource into a group. With owl:sameAs links, the first resource is treated as the main one: the two resources are merged, and all instances of the second resource are replaced with the first. As an example: if you have person A, bnode B, and paper C, plus the relations “A is a creator of C” and “B is a creator of C”, then when the relation “A is sameAs B” is added, the tool represents this by merging B into A, so all that is visible to the user at the end is resources A and C, and the relationship “A is a creator of C”.
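The sameAs merge described above boils down to a rewrite-and-deduplicate over the triples: every occurrence of the second resource becomes the first, and any duplicates that result collapse away. A minimal sketch, using plain tuples as stand-ins for real RDF terms:

```python
# Sketch of the owl:sameAs merge: rewrite every occurrence of `other`
# to `main`, then let duplicate triples collapse via a set. Plain
# string tuples stand in for real RDF nodes here.
def merge_same_as(triples, main, other):
    merged = set()
    for s, p, o in triples:
        s = main if s == other else s
        o = main if o == other else o
        merged.add((s, p, o))
    return merged

# person A and bnode B are both creators of paper C; A sameAs B
triples = [("A", "creator", "C"), ("B", "creator", "C")]
result = merge_same_as(triples, "A", "B")
```

After the merge only A and C remain, with the single relationship “A is a creator of C” – matching the worked example above.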
After a user has added whatever resources they need, and any appropriate relations, they can generate RDF from them by clicking the “Create RDF” button.
Once the RDF has been created, they can visualize it in a number of ways – as an automatically generated webpage, as a graph, etc. (see a later blog post about this).
There are also a number of improvements that have been suggested following some user testing; again, see the relevant blog post.
So, for the first time, I gave the RO Builder some real live testing, and here I’ll discuss some of the difficulties encountered. Generally the problems can be divided into conceptual limitations, and bugfixes or minor features that ought to be added; the conceptual problems are what’s interesting here.
To begin with, our user decided to create an RDF representation of his PhD. He went about this by adding himself, then searching for the papers he had written, which were hosted on a number of different sites, and then for other components of his PhD. This is different from how I had assumed the tool would be used, in that I had (foolishly) assumed that people would have all their papers in one place, or know where to find them. Furthermore, some difficulty was encountered because it wasn’t obvious what a bnode actually was, or when to use one.
From these two initial problems, a couple more arose. As papers may be spread out, many sites must be supported. If a generic solution cannot be made (and there are several reasons why it might not be possible), then we may end up with a large number of shims, and this would quickly become unusable – or at the very least, unpleasant to use. To resolve this, a couple of changes to how the shims are handled and selected could be implemented. The user would first merely select a type (Person, Article, AVdoc … none of the above), and then the tool would try to either treat the URI generically (easy for links that point straight to RDF, and for well-constructed HTML) or match it to a specific shim (e.g. if the given URI starts with arxiv.org, it’s safe to assume it’s a link to an arXiv article, and thus use the arXiv shim). If those methods fail, or a nonsensical answer is returned, the user would be asked to select a shim. A simpler solution might be to keep the current system of having the user select a shim each time, but reorganise the interface so that the shims are better organised and easier to find and select; this would likely take up much more space on the tool, though, and still not be as pleasing for the user as the first solution.
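The prefix-matching fallback chain proposed above could be sketched like this; the prefix table and shim names are invented for illustration, not the tool’s real configuration:

```python
# Sketch of the proposed shim selection: try a URI-prefix match against
# known sites, and fall back to asking the user. The table below is a
# hypothetical example, not the actual shim registry.
SHIM_PREFIXES = {
    "http://arxiv.org/": "arxiv",
    "http://eprints.soton.ac.uk/": "eprints",
}

def pick_shim(uri):
    for prefix, shim in SHIM_PREFIXES.items():
        if uri.startswith(prefix):
            return shim
    return None  # None signals "ask the user to choose"

shim = pick_shim("http://arxiv.org/abs/1234.5678")
```

The generic treatment (plain RDF links, well-constructed HTML) would run before this table is consulted; the user is only bothered when both steps come up empty.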
To ensure people use parts of the tool for the correct reasons, a concise, user-targeted, expandable glossary (or something of the sort) at the top of the tool should help. Because people may not know exactly where their resources are, some form of search integration would also be helpful, e.g. a wrapper for Google Scholar that allows you to search and then import the results directly into the tool.
Most of the other comments were little bugs or features which didn’t really represent any conceptual change: it would be good to make it easier to copy and save the generated RDF, to support inline renaming of resources, to support deletion of resources and relations after they have been added, and to allow users to reorder resources and relations by simply drag/dropping their resource boxes. These are all good examples of things that really ought to be in the tool (and most will be added shortly) but simply got overlooked in the design process.
The final point here is that this RO Builder is not actually something that specifically builds Research Objects; at the moment, it creates a more general user-defined RDF construct. It might be useful to automatically have a bnode for the particular object the session creates, and then automatically link all added resources to it: this was a process that happened manually in our user test, and it was both time consuming and unclear that it had to happen. To remedy this, I think I will add an option (with a description at the top) allowing a user to toggle using a central object, with all other objects in the builder as parts of this object (with relations that are, of course, editable in the relations section).
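The central-object toggle amounts to generating one extra triple per resource. A tiny sketch; the bnode identifier and the “hasPart” predicate name are illustrative assumptions rather than the vocabulary the tool will actually use:

```python
# Sketch of the proposed "central object" toggle: one bnode for the
# session's Research Object, with a part-of link to every other
# resource. Node and predicate names here are hypothetical.
def link_to_central(resources, central="_:ro"):
    return [(central, "hasPart", r) for r in resources]

links = link_to_central(["personA", "paperB"])
```

These generated relations would then appear in the relations section like any others, so the user can still edit or replace them.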
The University of Southampton has been undergoing restructuring for about a year now, and we are now down to the real brass tacks. As a result, the OneShare team has become part of the Web And Internet Science (WAIS) research group. While it is a shame to leave my roots in LSL, I hope we will retain the spirit and good nature we have become accustomed to, and we are all relishing the opportunity to do something new and collaborate with new people. As a group-forming exercise we held the first WAIS away day – which, in true ECS fashion, was neither away nor a day – WAISfest. The group of 160 people (PhD students, academic staff and research assistants) divided into 6 teams, each following a champion with an exciting research idea. The aim is that people team up to work on something interesting they might not otherwise be able to work on, learn about the other people in their group, and create some cool stuff in the process. Each team had 3 days to put something together, and then we had a big party to report our results.
I was on the team championed by my good friend (and short-term OneShare research fellow) Charlie Hargood. Charlie is a hypertext fiend and is interested in the new affordances of publishing in styles which were never designed to be printed. This includes interactive fiction, text with spatial or temporal themes, and a whole raft of other interesting ideas. The project we undertook has become known as the Strange Hypertexts working group. Over the 3 days Charlie’s team sat down and came up with 4 new tools for publishing or augmenting hypertext. Three of these were implemented and a specification for the 4th was drawn up. We also put together a community website (www.strangehypertexts.org), with a catalogue showcasing some existing hypertext. Strangehypertexts.org also forms the base of our community site, which we hope will become a focus for those interested in strange hypertext around the country (and globe) to come and discuss, review and introduce interesting new media for publishing.
If you aren’t getting the message yet, or are curious to know more, visit strangehypertexts.org and ask questions on our Google group. We welcome and encourage your comments and thoughts.
Last week we took PANFeed (Personalized Academic News Feed) and CampusROAR to their first conference: RepoFringe in Edinburgh. We were talking about CampusROAR as an idea, and also doing a short demo of PANFeed (the new name for the custom feed creation system). Since I last blogged about CampusROAR there’s been a good deal of progress, so I’ll also use this post to give an update on everything I’ve done.
CampusROAR featured in two presentations at Repository Fringe: a main presentation on CampusROAR by Yvonne, which Patrick and I finished off with a quick intro to and demo of PANFeed, and a Pecha Kucha made by me and given by Patrick, which covered PANFeed in more detail.
The first version of PANFeed is limited, but has the following functionality:
- Spider domains for RSS and Atom feeds.
- Identify keywords for items in feeds.
- Search within items using keywords to create custom feed for user.
The website allows a user to search and receive a custom feed (whilst displaying a nice preview), but the other functions must be run by an operator.
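The third step in the list above – searching within items by keyword to build a custom feed – could be sketched as a simple overlap test between each item’s keywords and the user’s query. PANFeed’s real keyword extraction is more involved; this only illustrates the matching idea, and the item shape is invented:

```python
# Sketch of the keyword-matching step: keep feed items whose keyword
# sets overlap the user's query. The (title, keywords) item shape is
# a hypothetical simplification of PANFeed's real data.
def build_custom_feed(items, query_keywords):
    wanted = set(query_keywords)
    return [title for title, kws in items if wanted & set(kws)]

feed = build_custom_feed(
    [("Open day announced", {"open-day", "campus"}),
     ("New chemistry lab", {"chemistry", "research"})],
    ["research"],
)
```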
We’ve also had some requests from people to spider their university domains (and some others) for feeds. We will attempt to do these as fast as we can, but it can take a while to spider large domains.
There are lots of things that I would like to do in the near future; here’s a selection:
- Improve news feed collection – make it simpler to use and/or automated. Also improve feed selection, with fewer duplicates and irrelevant/bad feed items.
- Improve the feed item aesthetics by ensuring they all have good images, possibly with keyword searches and the Flickr API.
- Separate feed items by institution and allow users to pick which sources they get their news from.
- Modify the system so that users get an individual feed which learns about their preferences based on an up and down vote system.
- Just add more news sources!
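The up/down-vote idea in the list above could work, naively, by letting votes on items adjust per-keyword scores and then ranking future items by the summed score of their keywords. This is entirely illustrative – a sketch of one possible design, not PANFeed’s implementation:

```python
# Naive sketch of vote-based preference learning: each up/down vote
# nudges the score of the item's keywords, and items are ranked by
# their keywords' total score. Purely illustrative.
from collections import defaultdict

scores = defaultdict(int)

def record_vote(keywords, up):
    for kw in keywords:
        scores[kw] += 1 if up else -1

def rank(items):
    """items: list of (title, keywords); highest-scoring first."""
    return sorted(items, key=lambda it: -sum(scores[k] for k in it[1]))

record_vote({"chemistry"}, up=True)
record_vote({"sport"}, up=False)
ranked = rank([("Match report", {"sport"}), ("Lab opens", {"chemistry"})])
```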
Remember folks if you are looking for Personalized Academic News Feeds then look no further than http://panfeed.ecs.soton.ac.uk
This year’s Repository Fringe was another resounding success. Its placement in the preview week of the Edinburgh Fringe made the atmosphere electric.
The event kicked off with a presentation by Eloy Rodrigues about the monumental amount of work he and his team have been doing to run the Portuguese national open archiving service. It included a range of valuable tools, from a repository hosting service to collation, monitoring and reporting of data, with training and community building to boot.
From there the event took a brief dive into standard form for these events, with a few presentations and reports about various projects in the repository space. Mo McRoberts gave a very interesting presentation which still left me thinking he might have been underselling the scale of cool on which the BBC’s digital public space project operates. Yvonne, Andy and I took the stage to show off the latest and greatest output of the JISC-funded CampusROAR project, demoing http://panfeed.ecs.soton.ac.uk to the keen interest of the assembled crowd.
From there, proceedings went fringe-bound into a block of 6 Pecha Kuchas (20 slides × 20 seconds). All these presentations were good, and I think the format really focuses the minds of academics, who sometimes have a tendency to ramble. Of particular note was an excellent presentation by Marie-Therese Gramstadt from the Kultivating group.
Day one rounded off with a good reception in a room overlooking Edinburgh. For the strong-willed among us, furious activity continued into the night at the DevCSI hackathon.
Day two was more of the same from what I can tell, although most of our day was consumed by Andy, Matt and me participating in the hackathon. We stopped only briefly for me to give a Pecha Kucha about the importance of understanding and wielding the power of news on your campus (www.panfeed.ecs.soton.ac.uk). Applause should also go to Nicola Osborne’s informative 101 on social networking in repositories, which covered the important aspects in a digestible way.
The hackathon entries were presented at 4. There were 5 really strong entries, which was lovely to see. We were lucky enough to win the top prize of £300 of Amazon vouchers, which was really cool; the other entries gave us genuine competition. I’ll put a link to the DevCSI blog when they publish the details.
The fringe ended with a talk by Prof. Gary Hall – probably one of the most profound talks I have ever seen. It took us through the socio-political philosophy of the impact of Open Access, and looked at some of the most weird and wonderful examples of work in the area. Topics included liquid publishing, IPTV broadcasting, and piracy as a distribution mechanism. Utterly enthralling; when the video is up on the web I will link it.
Before the Southampton Learning Environment is made available to the university as a whole, the School of Electronics and Computer Science will have the opportunity to use a pre-production release as ECS’ current Intranet is phased out. Keeping this in mind, it is important that the SLE emulates (and strives to improve) the features of the current ECS Intranet such as electronic coursework submission and the university’s teaching and learning repository, EdShare.
EdShare, an extension of EPrints, is a university-wide digital resource for all staff and students, and much of its content is also open to the rest of the world. The goal of adding this functionality to the SLE is to increase user knowledge of EdShare and, overall, to make it trivial to create, share and remix everyday teaching materials in a collaborative environment.
As with every widget so far, the first step was to create wireframes and mockups based on users’ needs, in Pencil and HTML respectively. The most obvious use case was searching the repository for a certain resource (or set of resources). However, another use case put to us by members of the testing group was to show all the resources of a given module on that module’s page, arranged by the date they would be presented in the case of lectures.
With these guidelines in place, coding could begin on the SharePoint web part shown below. As with all of the web parts mentioned to date, this is still in development and is subject to change as a result of constant user testing, in accordance with the agile development methodology.