

janettxt

We’ve been using janettxt this academic year to allow our academics to send text messages to students. This has been very successful. Students can choose to give our database their mobile number, and the lecturer can send an SMS message out to all students with registered numbers. The staff can’t get at the numbers. The message is also emailed (in case of SMS failure, or just the student not wanting to give us their number), and appended to the blog for that module.

There’s also an option to send emails, which are also appended to the module blog. This is all very painless, as there’s a tab on each module homepage, visible only to staff, allowing both these options. The email option rapidly gained a “cc to module leaders/teachers” option and a “cc to email xyz” option, as many staff requested them.

Any member of staff can send to any module; since it’s all logged and a closed group, this allows maximum flexibility.

The costs are 4p per message per recipient, negotiated by JANET.

I have found working with the pageone service, however, frustrating.

  • They provide no mechanism for checking your remaining balance with them, so you could run out unexpectedly.
    • If anyone needs it, I wrote a Nagios plugin that screen-scrapes their web admin console for this information.
  • The SOAP interface is needlessly complex to work with and provides no value over REST.
  • They allow multiple username/passwords per customer, but on ringing up they told me that to enable a second username/password for the SOAP interface would be £250 set up plus £50 per month. All I wanted was to be able to let a research project use our balance and to keep track of what they used separately. I’m going to assume they will use less than the 15,000 SMS messages a year the 2nd SOAP account would get me.
    • £300 of staff time would allow me to build a local REST gateway to their SOAP interface with our own auditing to account who uses it for what.
    • But I think, for now, we’ll just all use the same username/password and I’ll trust this project not to overdo it.
  • There did not appear to be any indication, via their web admin interface, of whether SOAP is enabled on an account.
  • When I rang them, they asked for my account username and password so they could look at my account, which is baffling and makes me worry about their security practices.
  • You bulk-buy your quota of messages and lose any unused ones after 12 months. This is a bit annoying, as you have to guess your likely use in advance; most people will either over-estimate, or end up with messages failing to go out until they notice and buy more quota.

It’s possible that there’s been a communication error, and I’m only posting the above notes to encourage pageone to pick up its game or clarify things.

I also should acknowledge that I’m a low-volume account holder, who asks difficult technical questions. The staff have always been polite and tried to help me solve my problems.

The above represents my current understanding from phone conversations with their support staff and my direct experience. I am not recommending against using their service, just documenting my frustrations. We have no plans to stop using their service at this time.

If I have made factual errors, they are unintentional and I will make corrections to the above article. cjg@ecs.soton.ac.uk

Posted in Intranet.


Institutional Web Management Workshop 2010

The Institutional Web Management Workshop (IWMW 2010) will take place at the University of Sheffield from Monday 12th to Wednesday 14th July 2010.

Our team went in 2009 and had lots of fun, but also came back with lots of good ideas. This blog being one of them.

If I recall, people pronounced the acronym I-WyM-o-Wam (as in “The Lion Sleeps Tonight”).
UPDATE: There’s a song. More like a filk really. The Brian Tweets Tonight.

Posted in Uncategorized.


2010: The Year We Make Contact?

Opinions in this article are my (Christopher Gutteridge) personal opinion.

I prefer the movie 2010 to 2001, but that’s a rant for a different blog. In the years between 2001 and 2010 we’ve seen the world change.

The war on terrorism has been a sad feature of the decade, with its erosion of established freedoms. The new UK “Digital Economy” bill is the latest in a long line of legislation which benefits abstract entities over people. But this is also the decade in which we’ve seen many companies take a step back from DRM. Openness is exciting: it allows people to try out dumb ideas, and gives us the chance to experiment.

The current move of the UK and USA governments to make data available online in useful and joined-up ways is amazing. Other countries will doubtless follow our lead. I expect the next decade will see a high-water-mark of open data as the initial wave recedes, but progress will have been made.

I’m utterly proud to have had the opportunity to contribute to the Open Access movement, and to see the amount of research which is openly available due to our efforts. The next steps are to get rid of the embarrassment that is PDF. We’re still using simulated A4 paper in documents which are digital from cradle to grave. Digital distribution allows us to make very very deep footnotes. 10 years from now all supporting data should be supplied with the digital document, along with the tools or instructions to recreate any process of analysis.

In 2020 terabyte drives will be in disposable “Tree Presents”. You’ll be able to impulse-buy storage that can hold more movies, in HD, than you will watch in your lifetime. It only takes about 40 terabytes to hold 80 years’ worth of mp3s.
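That figure is easy to sanity-check with a back-of-envelope calculation (assuming a typical 128 kbit/s MP3 bitrate; the bitrate is my assumption, not a fixed standard):

```python
# Back-of-envelope check: continuous 128 kbit/s MP3 audio, played
# non-stop for 80 years, expressed in terabytes.
BITRATE_BPS = 128_000           # 128 kbit/s, a common MP3 bitrate (assumed)
SECONDS_PER_YEAR = 365 * 86_400

bytes_total = BITRATE_BPS / 8 * SECONDS_PER_YEAR * 80
terabytes = bytes_total / 1e12
print(round(terabytes, 1))      # → 40.4
```

So the claim holds: roughly 40 TB covers a lifetime of continuous audio.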

What I desperately hope to see is more interoperable cheap devices. Input, processor, sensors etc. Cheap enough to build commercial items out of, but worth harvesting when the item is scrapped. I hope that we’ll see taxes favouring items which are easy to recycle. Ideally manufacturers would be taxed on the recyclability of items, and consumers would pay deposits which were returned when the item was handed in to a recycling plant designed to handle it. Anything with reusable components should get a tax break.

The last 10 years saw some great advances in cybernetic tech, such as robot arms and cochlear implants. I met a 10-year-old who saw nothing very science-fiction about his cybernetic ear. The next 10 years will see the first elective cyberware (I’m not counting the prof. from Reading Uni.).

We’ll also see some scary stuff in image tech. Google Goggles is just the tip of the iceberg. Given a head-mounted camera, a net connection and some readout only I can see (I assume we’ll have LCD contacts soon), you’ll be able to look at a person and ID them, get all the notes you or your friends made about them, see their blog, read crap they put on usenet as a kid, etc. In seconds.

It’s inevitable this tech will be used by the police in sensible ways, but there will also be cases of it being abused. But no cop will be able to cover his ID number when my phone can ID him by his gait and hair colour.

All this tech is coming, and it’s going to work on disposable tiny devices. Sometime in the next 10 years we’ll see software algorithms that our government wants to ban anyone except licensed people from using. Those will be interesting times.

Oh, I also predict upheaval once the first truly painful cloud-computing data loss occurs, e.g. imagine if Flickr went bust and everybody lost their pictures.

OK, I’m off out for New Year’s Eve now. I’m trying to think of a good bit of “wear sunscreen”-style advice, and the best thing I can come up with is to remember that you are not your data. Losing your blog or your mp3 collection is not half as important as you think.

Or maybe the old text adventure adage “Examine everything. Make a map. Save often.”

Happy new year.

Posted in Uncategorized.



Temporary downtime httpd.conf

If you have to offline an entire webserver for an upgrade to a complicated site, or some other reason, here’s a quick (and not too dirty) way to put up a placeholder page on ALL the virtualhosts on the server.

Nb. These instructions are for Red Hat Enterprise Linux, but the concept is easy enough to apply on any UNIX.

To prepare the placeholder: (adjust paths to suit taste)

  1. Create /opt/placeholder/index.html with your placeholder message.
  2. Copy your /etc/httpd/conf/httpd.conf to placeholder-httpd.conf
  3. Edit placeholder-httpd.conf
    1. Remove all lines starting with “Include”
    2. Remove all <VirtualHost> blocks
    3. Add the following to the bottom of placeholder-httpd.conf
      <VirtualHost *:80>
        DocumentRoot /opt/placeholder
      </VirtualHost>

When you want to make this go live, shut down the normal server:

service httpd stop

And then bring up apache with our custom config:

/usr/sbin/httpd -f /etc/httpd/conf/placeholder-httpd.conf

And that should serve your placeholder page for any HTTP request to the server!

When you’re done, kill the placeholder httpd process and restart the normal httpd service with service httpd start.

Posted in Apache.



Basic HTML page template

I’m taking a break from conference websites to confess one of my (many) bad web designer habits.

I still hand-roll everything in vi. Which isn’t the bad habit, as most of what I do is heavily script-oriented. However, last night I had to create a page in a hurry, and did my usual hand-typing of

<html><head><title>...</title></head><body>...</body></html>

which is adequate, but could be better. It is also time-consuming when doing something urgent (like a service outage page, in this case). I remembered a friend of mine used to keep a blank page template to make sure all his pages started life as valid documents. I think that’d be a useful habit to get into, and that this blog post may be a convenient place to keep it. I’ll update the template below if I get any good suggestions:

<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
<head>
  <title>..</title>
  <style type="text/css">
    body { font-family: sans-serif; }
  </style>
</head>
<body>
  <h1>..</h1>
</body>
</html>

At this stage I’ve not put in any meta tags. I’m considering an http-equiv one to declare the charset: <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

Posted in Best Practice, Templates.


Diary of a Conference Webmaster Part 2: The Infant Conference Website

At the first stage the website needs very little, but it must exist. The key goals are to:

  • Reassure people that it’s happening
  • Tell people when & where it is so they can think about going. Ideally who, what, where, when and why.
  • Tell people how to get in touch.
  • Look like you’re worth sponsoring.

Sponsors are important. Better sponsors mean a bigger event and better food. Few or no sponsors could even mean the event is cancelled. At the earliest stage your goal is to look like you are worth cutting a cheque with at least 4 zeroes in it. Don’t be pushy, but make it easy for people to get in touch.

At this stage it’s vital that you ensure you are in the loop; you want to plan your time, as these sites lie dormant and then have flurries of activity (always urgent, naturally).

Things to find out ASAP:

  • Who’s got the executive authority to approve stuff. Committees suck for running a website; get a single contact where possible.
  • When the “Call for Papers” is (often just called CFP). This is basically when they stick up a page with “Call for Papers” describing the academic subject of the event, and spam everyone they can with the text.
  • How many days the event is (the big deal is whether it’s single or multi-day; multi-day events mean you need to recommend hotels etc.)
  • If the event is going to be “streamed”, or if it’s just a single track all the time? (Streamed events take much more planning.)
  • How many delegates are they expecting?
  • Is it co-located with other events? Workshops are often specialised 1 day events in the same venue as big conferences, often before or after but booked as part of the larger event.
  • Who’s handling registrations and fees? (ie. the co-located bigger conference, hired company, or you)
  • If it’s “peer reviewed” which means that the papers are reviewed by experts to make sure they don’t suck in any obvious way. Popular conferences also use peer-review to score papers as they have to reject a percentage. Generally conferences with many submissions use “Easychair” to manage the process. It’s a bit icky, but does the job and people are used to it.
  • If it’s “Open Access”? That is, are the papers and maybe posters going to be made publicly available on the web? While this is a good thing, it can worry the committee as people may not pay to attend if the papers are free. A compromise may be to embargo them until some period after the conference. They should not be made public before the conference starts. If you want to keep these online, what tool will you use? EPrints is nice, but for a small event it’s better to use an existing repository than set one up to store a handful of papers. Ideally, set up a repository for the conference series and hand over the keys to it the year after.
  • And, most importantly, ask if you get to go. I’d never do a *bad* job, but I’m more willing to go the extra mile and work late etc. on something when I’m getting to go to an interesting conference in return. If you do go, make sure you pull your weight. It’s not a free ticket, you should make sure you earn it, then they’ll be keen to have you at the next one…

Posted in Conference Website.


Diary of Conference I.T. Part 1 of N

No promises for making a complete series, but I dream of writing something I can compile into a useful guide.

I’ve described this as “conference I.T.” as the website is the core of a big mesh-like-thing. A web, if you will. I’m not writing about WiFi as that’s a stand-alone problem. I’m aiming to make a complete guide for the IT team working on a conference or similar event. I’m looking at it from the paid tickets, multi-day, international, multi-tracked, peer-reviewed perspective as this is the most general case. Being free, single-day, non-international, single-tracked or not-peer-reviewed just means you can skip some bits.

The Domain

One of the first things to do is look at the website for the last event. If it was badger2009.com then your site should be badger2010.com. If they had any sense, they pre-registered a block of 5-10 years worth, so you may need to find out who actually registered it and get it pointing at a web server you can control.

If you can get hold of last year’s web maintainer, (a) get any tips, tricks, code and data she or he has, and (b) get ’em to link to this year’s conference from the homepage of last year’s.

If you have to register the domain, you might consider badgerconf.com so you can use 2009.badgerconf.com etc. and not have domains lapse, but it’s ugly. Register the domain for a long time; I recommend no less than 10 years for an international conference. Register a few extras too, like badger2011.com, badger2012.com etc., and keep careful track of how to hand them over next year. Give this information to the permanent conference committee, if there is such a thing.

Tags

The same goes for Twitter & Flickr tags. These should be selected early, and ideally should be consistent with previous years. Make the tags as short as possible. http://www2009.org/ used “WWW2009MADRID” for Flickr. The Madrid may have made it more accurate and distinct, but the bigger the barrier to tagging, the fewer people bother. I would have preferred “WWW2009”.

Social Networks and Mailing Lists

Last year’s social networks may be entirely useful to this year, e.g. Web Science Conferences on Facebook. It’s unlikely you’ll re-use mailing lists, but even if you do set up new social network tools and mailing lists, your committee will want to send out some adverts to last year’s conference. When setting up your own tools, consider whether you can do so in a way useful to next year’s conference too, but don’t lose sleep over it.

At this very early stage you won’t want to set up any public communication channels, but the people running the conference will probably want private mailing lists and maybe a wiki.

Posted in Conference Website.


Timezone vs Time Offset

Time is a bit like character sets, in that you can more or less get something useful by guesswork on the data, but that’s not safe for key systems.

A place, or system, has a timezone such as “Europe/London”. A time should either have a time-offset (eg. +0100 or BST) stored with it, OR it can be assumed to be in the timezone for the system running it.

What is important is that when converting datetimes (without explicit offsets) to exact times in seconds (UNIX time_t), that you convert using the offset that was true on the date, not the current offset.

Like UTF-8, you can’t always tell by looking what a time means. Don’t use the MSSQL DATEDIFF() function, as it appears to use the current offset (i.e. GMT) rather than the offset which was true at the given date (BST). This is a subtle issue, and only really bites when converting DATETIME to UNIX time.

The moral of the story is, like with character sets, to use a damn library, and that just because the output looks correct, doesn’t mean it is.
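To make the point concrete, here’s a minimal sketch using Python’s standard-library zoneinfo module (the specific dates are arbitrary examples of mine): converting a Europe/London datetime to UNIX time must use the offset that was in force on that date, not today’s.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library from Python 3.9

london = ZoneInfo("Europe/London")

# A summer date: the offset in force was +01:00 (BST)...
summer = datetime(2009, 7, 1, 12, 0, tzinfo=london)
# ...while a winter date uses +00:00 (GMT).
winter = datetime(2009, 1, 1, 12, 0, tzinfo=london)

print(summer.utcoffset())   # 1:00:00
print(winter.utcoffset())   # 0:00:00

# timestamp() applies the offset that was true on the given date,
# so the summer value is one hour "earlier" in UTC than it looks.
print(int(summer.timestamp()))
```

The library looks up the historical offset for each date; doing this by hand with a single fixed offset is exactly the DATEDIFF()-style bug described above.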

Posted in Uncategorized.



More on Conference Structures

I’ve been working more on modeling conferences. My real interest is to provide data which is useful to people wanting to build tools to work with the program, like things which help you keep track of the session you want to see, or things that render the conference program in pretty ways. What I really want is a RSS-style template of how to describe an entire conference in XML. The fact that that XML will “happen” to be RDF is a bonus which most people won’t care about. I want lots of nice ways to hook this into Twitter, iCal etc. I think by adding a bit of extra restriction, it’ll be far more practical for people to build tools to use the data. I’m using RDF as it has lots of interesting properties, but I’m not, like, married to it or anything. It might make more sense to express the conference in XML and just map it to RDF for the people who want to suck up the linked data. (SPARQLy vampires?)

There’s a number of key uses for conference data; all intersect but have quirks:

  • for the committee, and venue:
    • Planning the Conference
    • Running the Conference
  • for delegates:
    • Evaluating the event (should I bother going?) << social events are key here!
    • Planning before the event (what should I see?)
    • Planning at the event (where should I be?)
    • Reference after the event (who did that paper?)
  • For speakers:
    • When am I on?
    • Who am I on after/before?
  • For the wider community
    • The list of papers, speakers etc. This is really the bit that needs to be preserved long-term and is much lower-resolution. 10 years from now, nobody needs to know if the paper was presented before or after lunch.

The ISWC “dogfood” ontology is a great place to start, and they’ve done much of the heavy lifting, so I think what I should look at doing is working out how to use their work as much as possible. My real interest is in making the structured program data useful before, during, and shortly after the event.

The key thing is that different conferences are structured in different ways. I am basing this on my own experience which may be limited in ways I can’t see. My ideal result is a set of models for various size of conference-like events. I’ll be building the websites using PHP & MySQL. Code may be available on request.

All events should have a cal:summary, cal:dtstart and cal:dtend (with an exception noted below).

A ConferenceEvent can be a MultiDayConferenceEvent or a SingleDayConferenceEvent. A MultiDayConferenceEvent has 2 or more sub-events, which are each SingleDayEvents.

A SingleDayEvent is either a SingleStreamEvent, a StreamedByTrackEvent or a StreamedByLocationEvent (it could have other, more unusual ways to divide up sessions, but I think that’s beyond the scope of what’s easily renderable). Events streamed by track, but with multiple simultaneous events in the same track, can be handled by splitting the track (make it Track-Foo-A and Track-Foo-B!). The idea of this is that each day’s events tend to be organised by time on one axis, and by location (or more rarely by track) on the other. To pick a not-so-random example, WWW2006 was streamed by location, but generally events in the same room were in the same track; nb. Ochil (c), which has a different track in the afternoon. A possible extension would be a “streamed” event, where every sub-event is someont:inStream of a resource.

Any kind of SingleDayEvent will have many sub-events:

  • It may have some ConferenceTimeSlotEvents, which are the slots into which the other events are supposed to fit. Looking at the Ochil morning events in the Wed WWW2006 example, events don’t always neatly sit in slots. The timeslots are there only because they are supposed to be drawn on the day’s program.
  • A SingleDayEvent will have subevents for each session, break and social event of the day. These should be either PlenaryEvents (indicating that they are relevant to the majority of delegates attending their parent event, and should be drawn all-the-way-over on the program), or not. If not then their location or track is used to decide which column they appear in. All sub events of a StreamedByTrackEvent should be either PlenaryEvents or have a track. StreamedByLocation isn’t a problem, as events must have a location. Possibly there should be one other which is “DisplayAsPlenary” for things like evening meetings which are not plenaries, but are rendered on programs in the same manner. Break Events are generally plenary or DisplayAsPlenary.

Any sub-event of a single day event can have one level of sub-events itself (except ConferenceTimeSlots). Well, actually in RDF anything can pretty much have any number of sub events, but they are considered out-of-scope. Sub-Events of Sessions may be individual papers, or an after dinner talk at a meal. These do not have to have cal:dtstart and cal:dtend set, but they may. It should be assumed that delegates will mostly be selecting their attendance from the immediate sub-events of a SingleDayEvent.

It’s quite common for a big conference to have sub-events, such as workshops. In this case, the workshop may be represented as an all-day event in the main program, but can be a SingleDayEvent in its own right, with its own slots, locations etc. In my own experience, such workshops have been single-streamed, but there’s no requirement for that.

Tracks should have a title specified by a specific predicate (not sure what at this stage) and should have an optional colour code.

Tracks, Events and Locations can be assigned tags, specifically Twitter & Flickr tags.
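As a rough sketch of the hierarchy described above (the class and field names here are my own invention for illustration, not the final vocabulary; Python used purely as a modelling notation):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Event:
    summary: str                       # roughly cal:summary
    dtstart: Optional[str] = None      # roughly cal:dtstart (ISO 8601)
    dtend: Optional[str] = None        # roughly cal:dtend
    location: Optional[str] = None
    track: Optional[str] = None
    plenary: bool = False              # drawn full-width on the program
    sub_events: list = field(default_factory=list)

# A hypothetical two-level slice: conference -> day -> sessions.
conf = Event("Badger 2010", "2010-07-12", "2010-07-14")
day1 = Event("Day 1", "2010-07-12", "2010-07-12")
day1.sub_events += [
    Event("Opening keynote", plenary=True),
    Event("Session 1a", location="Room A", track="Semantic Web"),
    Event("Session 1b", location="Room B", track="Accessibility"),
    Event("Coffee", plenary=True),
]
conf.sub_events.append(day1)

# One rule from above, made checkable: in a by-track day every
# sub-event is either plenary or carries a track.
assert all(e.plenary or e.track for e in day1.sub_events)
print(len(day1.sub_events))  # 4
```

The assert is the interesting bit: once the structure is explicit, the rendering rules (“plenary events are drawn all-the-way-over, others go in a track or location column”) become simple checks.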

Oh, my final idea for Dev8D is for us to set up a semantic mediawiki for the conference and put a sameas:… seealso:… link in the conference data for every location, day, session, person, talk etc. and allow the delegates to generate a pile of secondary data. This is kinda interesting as they’re a bunch of hackers, not academics.

Posted in Uncategorized.


Zones

It’s occurred to me that conferences (and music festivals) designate spatial locations as specific zones, but only in the context of that event.

The “Poster Area” or “Coffee Area” (or “The Greenpeace Field”) only exists in the context of the event; it’s not a permanent property of that space.

Also, for describing conferences and workshops, it’s useful to know whether a location is a room or a building (maybe levels and floors too, but that’s less important). The conference centre in Madrid which was used for WWW2009 was one building with levels at both the North and South end. Maybe the useful subdivisions are Room, Building Section and Building, where buildings contain building sections and rooms, and building sections contain building sections and rooms.

Also, conferences generally have two important locations: the city they are in (Madrid) and the building or complex they are in (perhaps complex or campus is a useful level above building?). I think it’s useful to model both these location relationships.
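A minimal sketch of that containment idea (the names, room and levels here are illustrative assumptions of mine, not real WWW2009 data):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Location:
    name: str
    kind: str                        # e.g. "city", "building", "section", "room"
    parent: Optional["Location"] = None   # containing location, if any

# Hypothetical example loosely based on the Madrid venue description.
madrid = Location("Madrid", "city")
centre = Location("Conference Centre", "building", parent=madrid)
north  = Location("North End, Level 1", "section", parent=centre)
room   = Location("Room N101", "room", parent=north)

def ancestry(loc):
    """Walk up the containment chain, innermost location first."""
    chain = []
    while loc is not None:
        chain.append(loc.name)
        loc = loc.parent
    return chain

print(ancestry(room))
# ['Room N101', 'North End, Level 1', 'Conference Centre', 'Madrid']
```

The point of the parent link is that both relationships (city and building/complex) fall out of one simple containment chain, rather than needing separate predicates per level.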

As with most systems, the really important thing is to make it easy to describe the data you’ve already got, and not force people to create data and mappings that they would not otherwise bother with.

I’m also toying with modelling some data as a fixed data file, and the rest via a semantic wiki which extends the core data.

Posted in Uncategorized.
