
Summer Internship – Week 2

To follow the theme from last week, a large portion of my time this week was spent learning about new technologies I haven’t used before. I was taught about dependency injection, something I’d heard about a lot but never really gotten to grips with. I think this will be a tough thing to use in my own work at first, as it requires a bit of a change in thinking. Aurelia is used for front end web development within the team, and I got to see it being used and contribute to the development of a web view for the Lab Marks system with Kevin. For my own personal project, which I’ll talk about soon, I learnt how to use the ServiceNow REST API, and even a little bit of web crawling. I also had a little bit of time to dig into a new book – Clean Code by Robert C. Martin. I’ve thoroughly enjoyed reading up on these things, and reading Clean Code in particular, and I’m looking forward to putting some of these things into practice with my own work.

This week wasn’t all about reading, however – I got the chance to join a team briefing and to start work on my own personal project. The team briefing was interesting to see, and it was refreshing to see that people within iSolutions really do care about what the students want and need. It was great to be able to add a fresh perspective and talk about issues I’ve experienced from outside the organisation as an undergrad and as a prospective PhD student. It also gave me a better overview of the team and their role within iSolutions, and the direction they’re heading in over the long term and why.

And most excitingly, I got to start work on my own project. I’ll be designing and making a small proof-of-concept system that makes the University of Southampton knowledge base easier to access and specific articles easier to locate. The idea is to help ease the workload for the service desk, who may be answering generic questions that have already been thoroughly answered and explained online. As a student, I’d never used the knowledge base before, and when I had the idea to make an intuitive question answering service I didn’t realise that one already existed. This needs to change, and a more open, google-able system could do that. To start my project off, I investigated how to use the ServiceNow REST API to extract knowledge base articles to display in my own web application. I managed to do this, and now have a web application with an index page that loads all the knowledge base articles into a table with links to the article details page. The article details page takes an article number as a parameter in the URL, and will display a short description and the given text for that article in the browser. It’s a very basic setup for now, but I feel like it’s a good starting point. The next steps for my project include investigating web crawling to make the knowledge base searchable, and investigating adding features such as related pages.
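As a rough sketch of the sort of extraction involved: ServiceNow exposes table data over its REST Table API, with knowledge base articles typically living in the kb_knowledge table. The instance name, credentials and field list below are illustrative placeholders, not the real service details.

```typescript
// Build the Table API URL for knowledge base articles, optionally
// filtered to a single article number. (Instance and field names are
// placeholders.)
function kbArticlesUrl(instance: string, articleNumber?: string): string {
  const base = `https://${instance}/api/now/table/kb_knowledge`;
  const params = new URLSearchParams({
    sysparm_fields: "number,short_description,text",
  });
  if (articleNumber) {
    params.set("sysparm_query", `number=${articleNumber}`);
  }
  return `${base}?${params.toString()}`;
}

// Fetch the articles and unwrap the "result" array ServiceNow returns.
async function fetchKbArticles(instance: string, user: string, pass: string) {
  const response = await fetch(kbArticlesUrl(instance), {
    headers: {
      Accept: "application/json",
      Authorization: "Basic " + btoa(`${user}:${pass}`),
    },
  });
  const body = await response.json();
  return body.result; // array of article objects
}
```

An index page can then render that result array as a table of links, and the details page can pass the article number from its URL into kbArticlesUrl to fetch just that article.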

Posted in HTTP, Programming.

Summer Internship – Week 1

My first week with the iSolutions Technical Innovation and Developments team was an interesting start to my 12 week internship here. With the team in the early stages of two new projects, there was little for me to contribute straight off the bat besides a small amount of requirements capture and some wire frame drawing. Instead, I took the opportunity to get to grips with some of the new technologies I’ll be using for the next three months. I’ve never gotten very involved with web development and database frameworks, but the tools I’ve been investigating have proven themselves to be simpler (and dare I say more fun?) than I had anticipated.

Firstly I re-familiarised myself with .NET. C# was one of my first programming languages, taught to me for my computing A level, and whilst the iSolutions projects I’ve seen are much more complex than any of my A level courseworks, it still feels friendly and familiar coming back to it. Building upon that basic knowledge of .NET, I then learnt about code-first Entity Framework from David, and began working on online tutorials to create an MVC Entity Framework-based web application.

Naturally, my first week also exposed me to some web design, starting with a talk from Chris. This talk was a test run for a web conference next month, and was aimed at people with little to no experience with the technical side of the web at all. Whilst I come from a technical background, I’d never done web development before, so the front end web development side of the talk taught me a lot. Delving into online tutorials on the topic, I started to really enjoy playing around with HTML and CSS.

I’m excited to see where the team’s two new projects will go when they start to take off, and to use some of my newly learnt skills to contribute towards them.

Posted in Database, Programming.


Science and Engineering Family Day: Minecraft Engineering Report

 Last Saturday (18th March, 2017), I ran the “Minecraft Engineering” activity at the Science and Engineering Day.

Useful links and downloads are included in the updated post about it from 2016.

We were co-located with the Minecraft implementation of the model railway, which showcases ways the university’s research intersects with railways. The model was built by the talented young Joe Roberts, who ran a bay of 10 computers for the entire 6 hours of the event.

My activity ran on 20 machines, supported by a volunteer named Jamie Scott and myself.

What went well?

  • Southampton Town Quay from Open Data – Rendered with Optifine and Sildurs Vibrant Shaders

    Both Jamie and Joe were tireless in their patience, enthusiasm and professionalism working with the hundreds of families passing through in the day. Both of them dealt with technical issues in a calm and effective way. Their combination of talent and can-do should take them far in life.

  • Using desktop machines with mice rather than laptops: Lack of mice was a big problem last year (we had some but not enough), and transporting 12 expensive laptops added to the stress at the start and end of the day. Our very nice desktop support people got it all installed very nicely.
  • Purchasing Minecraft accounts with logical usernames, rather than last year when we borrowed accounts from everyone we could, which made setup complex.
  • Optifine; this is a mod which lets you use “shaders” which make everything look pretty; shadows, reflections on the water, rays of sunlight. This showed off our expensive hardware nicely.
  • Jamie created a server with all my maps on using “bungee” which let people walk through portals into different worlds. This meant we didn’t need to copy a memory stick of save files onto every machine and gave a smoother interface for the children.
  • Rather than last year’s laminated cards, I printed all the notes in 10 booklets. A few people wanted to take them home, but I agreed to put a PDF online, and the front page had a URL of last year’s blog post, where I put a link so they could just photograph or write that down.
  • My 3D models of cities from open data gave families a nice A-ha moment.
  • The Gong: last year we used electronic 15 minute timers on each machine, which were switched down to 10 minutes due to queues. It was certain some sneaky kids were pausing their timers, or up to other shenanigans (like I would at that age). This year we tried starting sessions every 15 minutes and ending them with a large gong I happened to have in the garage. It worked really well and there were very few arguments or sulks as a result.
  • Packing up was really quick. Although there was some custard powder on the floor (sorry cleaners).

What was a mixed bag?

  • 3D Anaglyph

    Southampton Town Quay from Open Data – 3D Anaglyph

    I planned to set up 10 machines with Optifine, and 10 with cyan/red 3D glasses. Some people loved the 3D glasses, other people hated it, others wanted to see it but then wanted to change back so we ended up navigating those menu options 50 times at least.  I took 125 of the cheap glasses and finished the day with about 75 left. I was fine with people taking a set of these home if they wanted as they are £2.00 for a pack of 10 (plus postage).

  • The “lobby” on Jamie’s server was a bit too big, and it wasn’t quite clear what to do. Jamie made a portal to a default “survival” world, which meant that if we didn’t guide people, the kids just headed for that and messed around for 15 minutes. More volunteers would have smoothed that issue, but if we had it to do over I’d make the portals much simpler and closer together, and not offer the survival world option or any PvP!
  • Initially people couldn’t teleport on the big city maps, which Jamie managed to fix. We also shifted those maps into full creative, which ended up with Elizabeth Tower (Big Ben) covered in lava, and the lawn at Buckingham Palace having a few TNT craters, but this helped show that these maps were interactive and very big “play mats” rather than static demos behind glass.
  • The Redstone map seemed to go OK with those who tried it. Jamie set up a very clever mode where the demonstration was fixed (they couldn’t break it) but they could walk out into the wilderness and build their own things. That’s an idea I’d like to take further another time.
  • We had a bit of a panic when we tested multiplayer on Friday: the machines gave a Windows Firewall warning about network access, but after a bit of prodding it seemed we could still connect to external servers. My best guess is that the multiplayer screen also attempts to listen on the local network for servers, and that triggered the warning. So a storm in a teacup, but worth mentioning in case it bites someone in future.

What didn’t work?


Southampton Town Quay from Open Data – Normal Minecraft rendering

  • We didn’t have enough staff to do this to the standard I’d have liked. The 3 of us were more or less flat out for 6 hours, and that’s not reasonable. I tried to eat an apple and it went brown in between bites. ECS volunteers were down in general, but one more person would have let us take breathers. Next year, if we do this again, I must make much more of a campaign to get helpers.
  • I should have had fliers to take away with information and how to get in touch etc. We did that last year.
  • My “Minecraft Archaeology” map requires too much explanation to be suitable for this environment. It might work better in a classroom setting, but the only child who got into it was one whose father was with him and already understood the purpose of that map from an earlier conversation.
  • Registering Minecraft accounts en masse is a nightmare. The online options do not let you buy more than between 2 and 5 before it triggers something that stops you buying any more on that card. In the end I got a lot of scratch cards from ASDA, and even then there was a maximum of 5 per transaction! We had a look at the educational version of Minecraft, which is much easier to get bulk licenses for, but it’s incompatible with normal Minecraft (it’s the dreaded microsofted version) so you can’t load normal maps, can’t convert them, and can’t use mods or anything else that makes life interesting.

Overall we did very well, but could have done better with more planning, preparation and staff on the day.

Apparently feedback from the larger event was that it was so big that people couldn’t do it in a single day, and could we make it two days next year? I don’t think I could physically manage that without a holiday afterwards!


Posted in Uncategorized.

Student Open Data Antipattern

I’m writing this post to highlight a recurring anti-pattern I’ve seen when people new to the open data field are asked to come up with a project for a coursework, group project, or hackathon.

What happens time and time again is that people set their heart on an idea which requires data which is just not available. At our open data service we make every effort to provide all available data in a timely and linked way. If it’s not there, it’s because we don’t have access to it, we don’t have the right to publish it, or it doesn’t exist.

We are often asked for data such as class timetables, and this gets very close to data about people, so we are far more cautious about it, as we have a duty to protect our students’ privacy. We hope one day to find a way to provide secure & consent-based API access to such data, but it’s a bit of a pipe dream.

Other datasets people have requested just don’t exist. This summer our open data intern was sent on a mission to create a dataset of all building entrances, as amazingly there was no such dataset that we could locate! Our “Buildings and Estates” department are very helpful, but their system thinks in terms of site/building/room, so we had to build our own dataset. That was a lot of time and effort, but worth it, as buildings and building entrances hardly alter year on year. You can see the new building entrances layer on Ash’s interactive university map. (Click them to see a photo!)

If wishes were open data we’d all have full hard drives, but they’re not.

The lesson here is that we need to better communicate to open data newbies that it is unwise to plan a project which requires data you don’t know to be available. If it’s not available, it’s virtually certain there will not be enough time to make it available in the hours, weeks or months of your project.

We need to teach our students and hackathon participants:

Don’t start from the application you want to build and then go looking for open data that “should be” there.

Start with the data that is there, and invent the application that can be!


Posted in Best Practice, Data, Open Data, Training.

HESA Open Data Consultation Summary

In the summer I wrote a contribution to the HESA open data consultation.

The summary of the consultation has now been released (PDF).

Posted in HESA, Open Data.

Internships the TID way

For the last two summers TID has had interns, and so it will go next summer. I learned the value of having interns in my previous role and would strongly recommend it. Interns give you lots of advantages which are quite hard to get from your regular team.

  • Fresh perspective and new ideas
  • Current degree level education
  • More capacity for non-critical tasks
  • Great test of your induction materials

However, nothing in life is free, and to get these great benefits you have to create an internship experience which attracts good interns without causing too much upheaval to your team. Interns, when managed badly, can cause lots of surprise extra work (my least favorite kind of extra work). To mitigate this risk we are careful about the workload we allocate to interns.

When we have an intern the body of their 12 weeks of work is roughly divided into thirds.

  1. I plan for them to spend 1/3 of the time working on a project in the team. This gives them plenty of chance to see how we work and learn the TID way. They will work on a real piece of project work which ships to customers, and the work will be reviewed and structured by the technical lead to ensure the quality is high. We do not factor the intern’s time into project estimates, because generally the time they save is spent answering questions and reviewing their work.
  2. 1/3 of the time will be spent on a solo project. This will be a stand-alone project chosen based on the intern’s rough preferences. Sometimes this will be an investigation piece or a piece of software for internal use in the team. It can be a customer-facing product, but only if it is non-critical in every way. Work which we would decline because it is not important enough is a good candidate for this. Interns are not full members of staff and they lack professional experience, so it would be foolish to entrust them with significant responsibility. They will usually get support from a team member in collecting requirements, and technical guidance or review of their work. This gives them a chance to learn in a way which has few negative side effects for us but gives them control and independence.
  3. The final third of the time will be spent doing dogsbody work. This is the kind of work that requires little thought or input but still has to be done. It is often data entry or data collection; not very glamorous, but someone has to do it. It is a helpful thing for interns to do because it shows them not all our work is fun or exciting, and it is something they can get on with when they are waiting for help from a busy team member. It is also a lot cheaper for us to have the intern doing this than a software engineer.

One other weekly task I like interns to do is write a blog post on this blog. It helps them reflect on the purpose of their work and hone their writing technique. It lets me know when they haven’t really understood what they are doing or why, and can also be used as a showcase to their future employers. In recent years it has also been used to advertise our internship program to new potential interns. You can read some past blogs here:

Over the years interns have provided the team with new project ideas, introductions to new technologies, and great management experience for team members. They don’t save that much work, so don’t time-budget them as a full member of staff. Temper the workload with non-critical and sometimes uninteresting tasks and you can get good value out of an intern while they get good experience in return.

Posted in Best Practice, Management, Outreach, Recruitment, Team.

Unit testing Aurelia service code

Aurelia is one of a crop of new front end JavaScript frameworks that make it easier to manage complex interactions in the browser. It implements an MVVM pattern and includes routing, dependency injection, etc.

Unit testing Aurelia custom elements and attributes is described in the Aurelia docs. However, more general testing of business logic or service code is not discussed. This article gives a basic introduction to testing these classes using the Aurelia CLI.

  1. What libraries are included in the CLI, what do they do?
  2. Writing a basic test
  3. Running tests
  4. Improving the test output
  5. Debugging tests
  6. Mocking in tests
  7. Testing code containing promises

Note: Aurelia supports the Babel and TypeScript transpilers. The code examples below are in TypeScript but should be readable for anyone who is familiar with modern JavaScript.

1.      What libraries are included in the CLI, what do they do?

When creating an Aurelia project using the CLI the following testing libraries are included:

  • Jasmine – popular BDD JavaScript testing framework. Provides the test structure and asserts.
  • Karma – test runner. Allows you to run the tests from the command line and debug the tests within a browser.

Protractor and Selenium WebDriver tests are also included – they are used for end-to-end testing and so are not discussed in this article.

2.      Writing a basic test

Test classes are placed in the /tests folder and should include spec in the filename, e.g. articleStore.spec.js. The test class should include the following elements.

Import referenced classes

Import the class under test (and any other relevant classes) at the top of the test class e.g.

import { ArticleStore } from '../../src/articles/ArticleStore';

Create a top level describe function.

Create a describe function that takes the name of the class under test as a string and the details of the tests as a function argument e.g.

describe('the ArticleStore', () => { ... });

Note: The text ‘the ArticleStore’ will then be output when we run the tests.

Create a test setup

As with most testing frameworks, Jasmine provides a mechanism to run setup and teardown code before/after each individual test. This is achieved by creating a beforeEach/afterEach function that takes a function as an argument.

In this example we will use this to create an instance of the class under test before each test is run e.g.

let target: ArticleStore;

beforeEach(() => {
    target = new ArticleStore();
});


Create a test

To create the test itself we create an it() function which takes the name of the test as a string and the test code as a function argument. Again, the name of the test will be output by the test runner.

The test itself makes use of the Jasmine asserts to confirm expected state.


it('should have an empty articles collection', () => {
    expect(target.articles.length).toBe(0);
});


3.      Running tests

To run the tests issue the following statement from a command prompt:

au test

This invokes the Karma test runner to run any tests it finds in files ending spec.js and outputs the details of any tests that have failed.

“au test” will run the tests once, report the output and close. However, as with the “au run” command you can include the watch argument:

au test --watch

This will run the tests, report the output but not close. The test runner will maintain a watch for any changes to code and when new code is saved will rerun the tests and display the new output.

4.      Improving the test output

The default configuration of Karma within Aurelia will only report failing tests. To get a comprehensive list of all tests that were run make the following change to the /karma.conf.js file:

Change this:  reporters: ['progress'],

To this:      reporters: ['spec'],

5.      Debugging tests

Karma makes debugging quite easy as it creates a browser instance and allows you to use the standard in browser debug tools (F12). As well as being simple to use, debugging in the browser is more accurate than debugging in an IDE or similar, as it will correctly replicate any browser issues.

To debug from the Karma test runner:

  • In the command prompt run: au test --watch
  • A browser spins up. Click the ‘Debug’ button in the green bar on the top right.
  • A new tab opens with the unit tests loaded. Press F12 to open developer tools, view the source, add breakpoints etc. as normal and press refresh to re-run the tests and hit the breakpoints.

6.      Mocking in Aurelia tests

Mocking is an important part of any unit testing strategy. Currently the standard tool for mocks/stubs/spies in JavaScript is the Sinon.js library. However, in my experience it did not play well with Jasmine.

An alternative, simpler and more modern mocking library is testdouble.js. This can be installed via npm.

Install the testdouble package

npm install testdouble --save-dev

Add testdouble as a vendor-bundle dependency.

This is optional, but adding the following to the vendor-bundle dependencies section of the /aurelia-project/aurelia.json file makes regularly including the testdouble library within test classes simpler, as you can refer to it with a simple name rather than needing the relative path.

{
    "name": "testdouble",
    "path": "../node_modules/testdouble/dist/testdouble"
}


Import TestDouble in your test class

import * as TestDouble from 'testdouble';

Note: include the relative path if you did not complete the previous step.


Create/Destroy the mock objects

Within the test code, create a variable for the object being mocked, with the type of the object being mocked, e.g. if we are mocking a class of type ApiConnector:

let apiConnector: ApiConnector;

Initialize the mock in the beforeEach() method, and call TestDouble.reset() in the afterEach() method e.g.

beforeEach(() => {
    apiConnector = TestDouble.object(ApiConnector);
});

afterEach(() => {
    TestDouble.reset();
});


Setup and/or verify the mocks within the tests

it('getStatus should return the status from the apiConnector', () => {

      // setup the mock (the apiConnector method name is assumed)
      let knownStatus = 'cached';
      TestDouble.when(apiConnector.getStatus()).thenReturn(knownStatus);

      // make the call under test
      let result = articleStore.getStatus();

      // assert the mocked result is returned
      expect(result).toBe(knownStatus);
});



7.      Testing code containing promises

Asynchronous code is very common in the JavaScript world, and a modern approach to implementing async code is to use promises. Asynchronous code and promises require some minor changes in testing approach.

To handle async code, the Jasmine it(), beforeEach() and afterEach() methods take an optional argument called “done”. The “done” argument is a function that should be called once all other test code has completed. When using a promise-based approach to asynchronous code, this would typically be at the end of the last “then” call.

The following code gives an example of creating a promise object and returning this promise from a mocked method. This is a common scenario, e.g. where an AJAX call returning a promise needs to be mocked.

For a promise to be settled, either its resolve or reject method must be called. In this example the mock returns a promise that resolves immediately, and the test asserts that when that promise resolves successfully, the refreshAll method returns a promise with the data true.


it("refreshAll returns a promise with data:true when api call successful", (done) => {
        // arrange
        let promise = new Promise((resolve, reject) => { resolve("Success data"); });
        // stub the api call made by refreshAll (method name assumed)
        TestDouble.when(apiConnector.refreshAll()).thenReturn(promise);

        // act
        let result = articleStore.refreshAll();

        // assert
        result.then(data => {
            expect(data).toBe(true);
            done();
        });
});



Complete example code

As a summary, I have included below a complete example of an Aurelia test class which mocks promises:

import { ArticleStore } from '../../../../src/resources/data-service/ArticleStore';
import { ApiConnector } from '../../../../src/resources/data-service/ApiConnector';
import * as TestDouble from 'testdouble';

describe('the ArticleStore', () => {

    // setup
    let apiConnector: ApiConnector;
    let articleStore: ArticleStore;

    beforeEach(() => {
        apiConnector = TestDouble.object(ApiConnector);
        articleStore = new ArticleStore(apiConnector);
    });

    afterEach(() => {
        TestDouble.reset();
    });

    it("refreshAll returns a promise with data:true when api call successful", (done) => {
        // arrange
        let promise = new Promise((resolve, reject) => { resolve("Success data"); });
        // stub the api call made by refreshAll (method name assumed)
        TestDouble.when(apiConnector.refreshAll()).thenReturn(promise);

        // act
        let result = articleStore.refreshAll();

        // assert
        result.then(data => {
            expect(data).toBe(true);
            done();
        });
    });
});

Posted in Uncategorized.

Image focal point hint

This is a quick idea that may already exist somewhere. Let me know if “there’s a vocab for that”.

We have a large set of images of our university buildings. There’s a variety of sizes & aspect ratios. Sometimes there’s more than one image of a building.

To render these in the university templates we need to trim them to certain sizes and aspect ratios. What would be useful is if we could store a “hint” of where the most important content is in the picture. For example take this image:


Clearly the most important part of this picture is the relationship between the researcher and the tool. I would say about here:


Which is about 36% from the left, and 50% from the bottom. What I’m wondering is if there is (or should be) some standard terms for indicating the focal point of this picture, e.g.

<> ns:focalPointX "0.36"^^xsd:float .
<> ns:focalPointY "0.5"^^xsd:float .

That way our HTML page generation can get cropped images but instead of a default focus (usually the centre point) it could know how to crop for this picture. You can see the results of making a portrait crop of this image using a focal point hint and without:

Without hint:


Using hint:


I think this makes a massive difference and seems like a really useful thing to optionally store with our images and publish as part of the metadata for them in the open data service.
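A minimal sketch of how page generation might apply such a hint, assuming image pixel coordinates with the origin top-left and the bottom-origin focalPointY used above:

```typescript
interface CropRect { x: number; y: number; width: number; height: number; }

// Compute the largest crop of the requested aspect ratio, centred on the
// focal point hint and clamped so it stays inside the image. focalX is
// the fraction from the left, focalY the fraction from the bottom.
function cropWithHint(
  imgW: number, imgH: number,
  targetW: number, targetH: number,
  focalX = 0.5, focalY = 0.5,
): CropRect {
  // Largest rectangle with the target aspect ratio that fits the image.
  const scale = Math.min(imgW / targetW, imgH / targetH);
  const w = targetW * scale;
  const h = targetH * scale;

  // Centre on the focal point, converting focalY to top-origin...
  let x = focalX * imgW - w / 2;
  let y = (1 - focalY) * imgH - h / 2;

  // ...then clamp the crop inside the image bounds.
  x = Math.max(0, Math.min(x, imgW - w));
  y = Math.max(0, Math.min(y, imgH - h));
  return { x, y, width: w, height: h };
}
```

With no hint the defaults reproduce the usual centre crop; with the 0.36/0.5 hint above, a portrait crop slides towards the researcher and the tool instead of the middle of the frame.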

Other useful metatags for images linked in open data would be:

  • An indication that this image should be treated as the illustrative image of something.
  • For logos; if the image is more suited for a light or dark background (ideally letting the renderer pick the more suitable variation)
  • For background style images, a hint if the image is a light or dark background (so we know how to place contrasting text over the top.)

Does this exist already somewhere out in ontology land? Are there any other useful things we should add?

Posted in Best Practice, RDF.

Open Data Internship – How to Gather Data: Mark III

This is an updated set of instructions on gathering Open Data in Southampton. The previous versions of these instructions have been deprecated.

This is written assuming you’re using the Open Gather tool. It covers the sort of data we’re looking for and how to gather it.

Objects we’re looking for:

  • Drinking water dispensers (Fountains, coolers, etc)
  • Gender neutral toilets (Toilets a person of any gender could use, e.g. most disabled toilets) (These can be found in the “Room” category in OpenGather)
  • Portals into buildings and between them (e.g. doors)
  • Public showers a cyclist could use
  • Reception Desks (And any other Points of Service)
  • Images of University buildings (that we don’t already have)

It’s a bit like a game of I-spy (or Pokémon Go). The aim is to hunt round, find each of the items above and record info about it. For some of them (Portals, Building Images) we know the data we’re missing. For the others, we have no idea, so a little urban exploration may be required!
For all of these, we’re interested in where they are. This involves a building number, floor and geo-location for most of the above. Open Gather has a clickable map to allow for precise geo-location. Otherwise, is available.

How to Gather Data


  • If you’re hosting your own copy of OpenGather, make sure to clear any testing data out first!
  • If you’re recording portals and building images, it can be helpful to plot the things you’re looking for on a map. If you’ve retrieved a list of items using SPARQL, you can use the following to plot the items on the map.
    • Run a SPARQL query to generate the list of missing items, complete with latitude and longitude (examples to come soon!)
    • Generate a KML/CSV/GeoJSON file from the data produced by the SPARQL endpoint.
    • Host the KML/CSV/GeoJSON file in a publicly accessible location. I prefer Git, but Google Drive or an online pasting tool like Pastey also works.
    • Using uMap as a mapping tool, add a layer, then either import the data directly or add it as a remote data source.
    • (When using uMap, tick “Use Proxy” to ensure the icons load correctly)
  • Print off a copy of the map. uMap doesn’t work very well in mobile browsers.
  • Print off University of Southampton photograph consent forms. These are needed to use photos with people’s faces in.
  • Make sure your phone and camera have adequate amounts of battery (ideally full).
  • If your camera and phone are different, check the two clocks show the exact same time. This makes matching images and data far easier.

Overall Process

  1. Pick a location on the map and decide which buildings to gather data from.
  2. For each building, gather the data needed, using the instructions below.

General Method

This is the quick-and-easy summary of how to record data. More specific information is available below.

  1. Open the OpenGather tool in your browser.
  2. Select the type of object to record.
  3. Fill in the appropriate fields.
  4. Submit the data.
  5. Take a picture of the object.

Taking a Building Image

  1. Using the open data tool, select the category “Building Image”.
    • Fill in the “Building Number” field.
    • Wait for the GPS to update to the current location.
    • If the accuracy is low (say, less precise [higher] than 6m), click/touch the map to mark a more accurate position.
  2. Take a picture of building, attempting to get as much of the building in frame as possible.
    • A good photo will make the building easily identifiable as you walk past it.
  3. Send the image(s) to opendata [at]

The geo-location data isn’t necessary for buildings that are already marked on the map, but it helps automatically match images to names later on.
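A hypothetical sketch of that later matching step: given the geo-location recorded alongside a photo, pick the nearest known building. The building list, names and coordinates here are illustrative.

```typescript
interface Building { name: string; lat: number; long: number; }

// Haversine distance in metres between two lat/long points.
function distanceMetres(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const R = 6371000; // mean Earth radius in metres
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Find the building closest to a recorded photo location.
function nearestBuilding(lat: number, long: number, buildings: Building[]): Building {
  return buildings.reduce((best, b) =>
    distanceMetres(lat, long, b.lat, b.long) <
    distanceMetres(lat, long, best.lat, best.long) ? b : best);
}
```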

Gathering Portal Data

Walk around the building looking for entrances. Try to identify all entrances that aren’t fire escapes (which we aren’t permitted to record, as of 14/07/2016).

For each entrance that you find:

  1. Select the category “Building Entrance” in the OpenGather tool.
  2. Record the geo-location of the entrance. This can be done by tapping on the map in the OpenGather tool. Do not rely on the GPS being accurate.
  3. Fill in the fields using the tool.
    • “Building Number” – Number of the building the entrance is attached to
    • “Entrance Label” – An arbitrary letter to identify the entrance. Typically starting from ‘A’.
    • “Description” – A brief description of the entrance, such as “Staff”, “Main”, “Side”, “North-east”.
    • “Access Method” – Is a card or key needed to get in?
    • “Opening Method” – How do you physically open the door? This is used to determine disabled accessibility.
  4. Submit the data
  5. Take a picture identifying the entrance. This doesn’t need to be recorded separately in the tool. A good photo will make the entrance easily identifiable as you walk past.
  6. Follow the procedure for getting consent, if any people are in your photo (an ideal photo has no people).

Recording a Drinking Water Source

Should one of these rare and majestic creatures be spotted:

  1. Throw a greatball at it
  2. Select the category “Drinking Water Source” in the OpenGather tool
  3. Fill in the fields using the tool.
    • “Building Number” – Number of the building the water source is in
    • “Floor” – The floor the water source is on; level 1 is usually the ground floor.
  4. Record geo-location using the map. Zoom in on the building you’re currently in, and try to mark your position in the building by clicking on the map.
  5. Submit the data
  6. Take a picture of the water source. Ideally, this will make it clear where the water source is located in that part of the building.
  7. Follow the procedure for getting consent, if any people are in your photo (an ideal photo has no people).

Recording the location of Public Showers

  1. Select the category “Public Showers” in the OpenGather tool
  2. Fill in the fields using the tool.
    • “Building Number” – Number of the building the shower is in
    • “Floor” – The floor the shower is on; level 1 is usually the ground floor.
    • “Room Number” – Room number of the shower, if it has one.
  3. Record geo-location using the map. Zoom in on the building you’re currently in, and try to mark your position in the building by clicking on the map.
  4. Submit the data

Reception Desk (Point of Service)

  1. Select the category “Point of Service” in the OpenGather tool
  2. Fill in the fields using the tool
    • “Description” – What the Point of Service is. For example, “Library Reception Desk” or “Student Services Information Desk”
    • “Building Number” – Number of the building the point of service is in (assuming it isn’t a standalone service)
    • “Phone” – A phone number for contacting that point of service, if available.
    • “Email” – An email address for contacting that point of service, if available.
    • “Opening Hours: Mon…etc” – Times when the service is usable. For example, a reception desk is usable when it’s manned. E.g “9:00-18:00”, “24 hours”, “7am-7pm, closed 12pm-1pm”
  3. Record geo-location using the map. Zoom in on the building you’re currently in, and try to mark your position in the building by clicking on the map.
  4. Submit the data
  5. Take a picture of the desk. It’s nice to have a friendly receptionist in the photo if possible, but don’t force anyone!
  6. If anyone (including any member of staff) is in the picture, follow the procedure for gaining consent.

Requesting Consent
Attempt to get nobody in the shot, unless you’re taking pictures of a reception or Point of Service stand.

For a Point of Service, staff can improve the photo by making it look friendlier.

The consent form required is available here.

If people need to be in the shot:

  1. Verbally ask permission before taking the picture, explaining that you represent the Open Data Service, and what that is. Ensure they’re okay signing a consent form.
  2. Take the photo.
  3. Ask them to fill in an entry on the consent form.

Cross buildings off as you go, to mark them as completed.

Posted in Data, Open Data, Open Source, Training, Uncategorized.

An Intern’s WAISfest: Introduction to the BBC Microbit

Over the Wednesday, Thursday and Friday of last week, the WAIS (Web and Internet Science) group held its annual WAISfest. This event is a chance for people in the group to explore side-projects and ideas they haven’t had time to pursue. The aim of the event is to get people thinking about possible areas of research, and to stimulate some extra creativity, so to speak.

Luckily, I got to take part.

Wednesday morning began with the ideas unconference. The aim of this was to source ideas, loosely grouping people together to work on them. Ideas ranged from virtual reality workspaces to ways of teaching programming in schools using Microbits.

It was this latter project I hopped aboard, swayed by their stash of robotic buggies and a mountain of BBC Microbits.

Investigating the BBC Microbit

BBC Microbit

At the start of the project, I had no idea what exactly the Microbit was, let alone how to use it. We spent the first day of the WAISfest digging up information on how to use it. Hopefully, by posting this here, it’ll make someone’s life a little easier than ours was!

For those who don’t know, the Microbit is a low-cost embedded board given free to all Year 7 students in the UK. It has an accelerometer, a magnetometer, a radio, GPIO pins and USB on-the-go. It can be programmed in a variety of languages, including Python, JavaScript, Microsoft Block Editor, Microsoft Touch Develop and C++.

Behind the scenes, all of these languages use the same core C++ library, published by Lancaster University. This library provides a simplified means to interact with the hardware.

The source code for the C++ library, the MicroPython runtime and editor, and Touch Develop is available at

Getting Started with the Microbit

How to get started with the Microbit depends, in my opinion, on your level of experience:

  • If you’re completely new to programming, try playing around with one of the online editors. They’re well documented, most coming with tutorials. Uploading your program is as simple as copying the file onto the board as if it was a USB drive.
  • If you’re a bit more experienced, or want to do a medium-sized project, try using Mu and MicroPython. Writing longer programs (up to a few hundred lines) is pretty straightforward in MicroPython. Mu makes putting the file on the board significantly quicker.
  • If you’re an experienced developer looking to do something complex, get the runtime and write using C++. There are issues with using MicroPython for large projects, which I’ll get into in the next section.

We opted for the second approach, as we only had three days to produce a result. It was also essential that kids and teachers could understand the code we were writing.

MicroPython for Microbit

“MicroPython is a lean and efficient implementation of the Python 3 programming language that includes a small subset of the Python standard library” – MicroPython Website

MicroPython is one of the languages available to program the Microbit. In my opinion, it sits as a middle ground between JavaScript/Touch Develop and C++. It’s useful for programs a few hundred lines in length, but struggles with anything larger.

Pros:
  • Derived from Python, a widely used programming language. Skills easily transferred to other platforms.
  • Quick to upload to the Microbit. Deploying a new script takes seconds.
  • Easier to debug than many other embedded languages, as error messages scroll across the Microbit’s LEDs.
  • Editable, buildable and uploadable offline. There’s no need to use the BBC’s online editors.
  • Avoids needing to understand memory management, as is the case with C and C++.
  • Radio library provides a simple, minimalist interface to radio-based networking.

Cons:
  • With scripts larger than a few hundred lines, the Microbit runs out of memory. This essentially imposes a limit on how long your program can be.
  • Bluetooth isn’t available due to the memory usage of the Bluetooth stack.
  • Unable to easily split code across files. Importing requires extra files to be flashed onto the Microbit each time a script is uploaded.
  • Radio library only available in the Mu editor and not the BBC’s online editor.
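
To give a flavour of the radio library mentioned above, here’s a minimal sketch (MicroPython, running on the Microbit itself rather than on a PC): pressing button A broadcasts a message, and anything received from other Microbits in the same group is scrolled across the LEDs.

```python
# MicroPython for the Microbit: broadcast on button press, scroll what we hear.
from microbit import button_a, display, sleep
import radio

radio.on()
radio.config(group=7)  # only Microbits in the same group hear each other

while True:
    if button_a.was_pressed():
        radio.send("hello")
    incoming = radio.receive()  # returns None if no message is waiting
    if incoming:
        display.scroll(incoming)
    sleep(100)
```

Flash the same script to two or more Microbits and they’ll talk to each other out of the box; as noted above, though, this only builds in Mu, not the BBC’s online editor.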

Getting Started with MicroPython

There are two main ways of getting started with MicroPython:

Using the online code editor.

The online editor is provided by the BBC. It provides the ability to write, edit and save code online and compile it for the Microbit. Uploading the code is as simple as clicking “Download” and copying the file to the Microbit. No installation of any software is needed; the editor runs in any modern browser.

While this editor is easy to use and fast to get started with, it has some downsides. To save scripts for later, you need to make an account and sign in. The editor also requires you to have a constant internet connection.

Another downside is, as of the date this was posted, it doesn’t support the Microbit ‘radio’ library, which allows Microbits to communicate with each other.

Using Mu

Screenshot of the Mu Editor

Mu is an offline, open source editor made by Nicholas Tollervey. You can write, edit and save code with it similarly to the online editor. However, files are saved locally, giving more flexibility about when you work and how you store the scripts.

The Mu editor also has several other features the online editor does not:

  • Upload scripts to the Microbit by pressing a button
  • Browse files stored in the Microbit’s flash memory
  • Error messages output to the screen via “REPL”
  • Syntax checking (identifying any errors)
  • Tabs for editing multiple scripts

The downside is that the Mu editor needs downloading and running, something that may not be easy on many school systems. On Windows, the Mbed serial driver is also needed for file browsing and REPL functionality.

Unless you’re just having a quick play, I recommend Mu over the online editor. The extra functionality (especially REPL) is invaluable, as is the ability for more experienced developers to version control their scripts.

As for writing your first script… there are plenty of tutorials on Python, and the syntax here is identical. Access to the Microbit’s hardware is provided by the Microbit library, with documentation available here.
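
For example, a first script might look something like this minimal sketch (again, MicroPython on the board itself):

```python
# MicroPython: scroll a greeting, then show a heart whenever button A is pressed.
from microbit import display, button_a, Image, sleep

display.scroll("Hello, WAISfest!")

while True:
    if button_a.was_pressed():
        display.show(Image.HEART)
    sleep(100)
```

Save it in Mu, press the flash button, and the Microbit reboots straight into your program.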

Aside: Fixing the MicroPython Memory Error

When a MicroPython for Microbit script grows to a certain size, MemoryErrors start appearing. I’m not sure whether the limit is file size or the number of function/variable definitions. The error looks something like this:

MemoryError: memory allocation failed, allocating 1584 bytes

If this occurs, the simplest solution is to remove unused lines of code. Shortening variable and function names is an alternative solution. Using long strings as comments exaggerates the issue, as they aren’t optimised out by the MicroPython runtime and so use annoyingly large amounts of memory.

Another option is to try the online MicroPython editor. When compiled on there, I didn’t get MemoryError issues. I’m not certain why this is, but it seems to work!

Other Bits of Useful Microbit Info

  • The online editors run a different version of MicroPython to Mu and the board firmware. In situations such as the “MemoryError” issue, different results can occur.
  • The compass in the Microbit can’t be relied on for consistent bearings. It appears to be affected by any nearby cabling or bits of metal.

Posted in Community, Open Source, Outreach, Programming, python, Tips, Tutorial.
