Procedural Blending and Annalist

From Graham Klyne

On Friday 9 October, Iris Garrelfs and Graham Klyne attended a “Linked Music Hackathon” at Goldsmiths, University of London. The event was organised by Kevin Page and David Weigl under the auspices of the Semantic Media projects SLICKMEM (Semantic Linking of Information, Content, Knowledge and Metadata in Early Music) and SLoBR (Semantic Linking of BBC Radio). The event brought together people with a shared passion for music and the data that describes it.


We used the meeting to talk about Iris’ Procedural Blending (PB) model, and were able to make good progress on reconciling the model’s goal of providing a framework for discourse about creative processes with more mundane issues of establishing consistency of usage in more formal data descriptions.  In the end, we created some separation between purely structural elements of the model, and others that capture aspects of the creative process.

An updated representation of the Procedural Blending model is captured in the Annalist collection describing Iris’ Smoke project and soundtrack. The original notions of “input”, “blend node” and “output” from Iris’ thesis are retained, with the additional concept “procedural node” added to allow mechanical yet vital connecting elements of the process to be represented consistently within the data description.
A blend node in this system represents a step where key creative decisions are made, where several lines blend into one. By comparison, a procedural node may not embody any actual (creative) decision making. The structural hierarchy is depicted in the diagram to the left.
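As a rough sketch of this hierarchy, the node types can be expressed as a small class family. The class and field names below are illustrative placeholders only, not the vocabulary actually used in the Annalist collection:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    inputs: list = field(default_factory=list)  # upstream nodes feeding this step
    creative: bool = False                      # does this step embody a creative decision?

class InputNode(Node):
    """Anything flowing into the process: concepts, tools, recordings."""

@dataclass
class BlendNode(Node):
    """Several lines blend into one; key creative decisions are made here."""
    creative: bool = True

class ProceduralNode(Node):
    """A mechanical yet vital connecting step; no creative decision implied."""

class OutputNode(Node):
    """The result of the process."""

# A tiny invented chain: two inputs blended, then a procedural step, then an output
voice = InputNode("processed voice")
field_rec = InputNode("field recording")
blend = BlendNode("layering decision", inputs=[voice, field_rec])
normalise = ProceduralNode("normalise levels", inputs=[blend])
final = OutputNode("Smoke soundtrack", inputs=[normalise])
```

The distinction the model draws is captured in the `creative` flag: a blend node carries it, a procedural node does not, even though both sit in the same structural chain.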

Over the coming weeks, we shall be adding a Procedural Blending description based on these elements to the Annalist data collection describing the creation of the Smoke project. We will also begin to compare the PB model with PROV.

Spreading the word

Since our 21st September meeting, as well as continuing our work on characterising metadata usage with Annalist and developing our documentation of the active metadata concept and how we might produce appropriate guidance, we have submitted two Practice Papers and a Workshop proposal to the IDCC16 conference, 22-25 February 2016.

We have high hopes that our proposed workshop, Metadata in action, will not only engage participants with the CREAM project and its objectives but also give them an entertaining and rewarding experience. The audience will participate in discussions and hands-on activities, using both sound art and science, to explore the nature, generation, capture, curation and uses of active metadata for research. Activities include ear cleaning, sound walking and Lego brick hacking.

One of the Practice Papers that we have submitted is entitled Using metadata actively and is about the concept and what we hope to achieve. The other paper is more forward-looking, exploring a novel mechanism for recognising and rewarding the effort involved in data curation, which could be an important consideration in realising the benefits of recording and sharing process and the underlying decisions.

Musings of a sound artist on metadata and making art

From Iris Garrelfs

1. Data as material

The principle of artworks visualising or sonifying scientific data sets is pretty well known. However, for some artists the role of research is to stimulate the imagination rather than to represent or produce facts. Danish sound artist Jacob Kirkegaard goes further, feeling an obligation to “illuminate a subject from a different angle” than the scientists he has encountered commonly do [1]. From my point of view these approaches need not be mutually exclusive – new vantage points do create possibilities for novel insights!
Artists increasingly also work with data as a direct material. TED fellow Julie Freeman, for instance, translates nature by ‘…giving musicality to the movement of fish and expressing city lights in the quiver of moths’ wings’ [2]. Describing an exhibition of data artworks she curated at and for the Open Data Institute’s HQ in London, she noted that, even in the orbit of visionaries such as Sir Tim Berners-Lee, one of the organisation’s co-founders, ‘The idea of data as a material in an artwork was quite new to those working in the space’ [2].

Sound artists (I am using the term loosely here) have also re-assessed what can be understood as a material: since the advent of recording technologies, sound can be treated much like clay or oil paint. I mention this notion here as I will come back to it as an example later.

So, artists have appropriated data as a material that can be worked with and on. However, if we think of data as the main information and metadata as information about this information, where does metadata come in?

2. Metadata and making art

For starters, metadata might undergo a role change and become data, as in Freeman’s data animation We Need Us, which was shown at Tate Modern in 2014 and has an online home at

But let’s have a slightly closer (if not in-depth) look at what metadata actually comprises. There are a number of metadata standards in operation, designed, for example, to share information about objects in a library. According to the Dublin Core model in its simple form, metadata captures things such as contributor, coverage, creator, date, description, format, identifier, language, publisher, relation, rights, source, subject, title and type. (I am referring here to the 15 terms of the Dublin Core Metadata Element Set. Refined and extended terms can be found here.)
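To make this concrete, the 15 simple Dublin Core elements can be listed and applied to a record. The record values below are invented for illustration:

```python
# The 15 elements of the simple Dublin Core Metadata Element Set
DC_ELEMENTS = [
    "contributor", "coverage", "creator", "date", "description",
    "format", "identifier", "language", "publisher", "relation",
    "rights", "source", "subject", "title", "type",
]

# A hypothetical record for a sound work; every element is optional
# and repeatable, so only a few are filled in here
record = {
    "title": "Smoke (soundtrack)",
    "creator": "Iris Garrelfs",
    "type": "Sound",
    "format": "audio/wav",
    "date": "2015",
}

# Each element used must come from the simple DC element set
assert all(key in DC_ELEMENTS for key in record)
```

Note how object-centred these elements are: nothing in the set describes how the work came to be, which is the gap this post goes on to explore.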

MARC (MAchine-Readable Cataloging) is divided into five main headings: bibliographic, authority, holdings, classification and community information. Each of these categories splits into further entries (more about the MARC standards is available from the Library of Congress website).

So, whether for a book, a digital set of information or an artwork, these types of metadata essentially take objects as their point of departure. But objects are not the entire story. Coming back to the example mentioned earlier, working with sound as a material invites its manipulation. In music this might be done by simply applying a reverb to alter the space that a sound is thought to inhabit, or by pitching voices to establish backing vocals. The principle of sound processing stretches further, to creating entire compositions by transforming, dissecting, moulding and collaging sound, as electroacoustic composer Trevor Wishart does, for example. Such metadata is, in my estimation, very worthwhile finding out about.

Wishart was very involved with early digital sound processing tools (see the Composer’s Desktop Project), and with the advent of modular software such as Max or PD many more users at least partially create, and also share, the tools used to effect such manipulations as part of their process: a very concrete instance of metadata being shared, reused and made active.

Looking a little more closely at the metadata that emerges from the creative process, further areas from which metadata can be recruited become noticeable: analysing/identifying motivation; making selections; experiential aspects. These are, in all fairness, not the easiest things to capture, let alone share; however, they may have increasing relevance even outside the arts and I will look at them more closely in due course (for now, see the previous blog post about Metadata and Experience).

3. Procedural metadata

By way of conclusion: if metadata is data that provides information about other data (with the proviso that the distinction between the two is not always clear and largely depends on context or perspective), then surely in creative terms metadata must also refer to HOW a work was made, not just WHAT was made (including performances).

This implies that, in addition to descriptive, structural and administrative metadata (for a definition of these terms see NISO, 2001:1), we have to add a layer of procedural metadata – and indeed, such a layer exists already. It has been defined as describing ‘all actions and changes that have been applied to the content’ [3]. These actions would include software processes applied to sound, as mentioned above; but as actions they do not include sensorial experiences, motivations or perhaps decision making (decision making could be understood as an action, but I see it more as the reason why an action was carried out), which I had identified as areas metadata may be recruited from. Adding these also adds a WHY to the WHAT and HOW of metadata content. I am therefore, at this point, advocating extending the notion of procedural metadata to include an extended notion of process.

In doing so, capturing metadata has the potential to enable the sharing and re-use of information about process, making such metadata active. Whilst process is increasingly important to artists, such metadata may not be all that relevant for everyone accessing libraries, archives or collections. For some researchers, however, procedural metadata may well become a tool for locating certain kinds of works. I certainly have found it difficult to locate materials relating to creative people’s processes without trawling through boxes myself, the fun that involves notwithstanding.
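A single entry of such extended procedural metadata might look something like the sketch below, recording not just the action applied (HOW) but the motivation behind it (WHY) and the experiential context. All field names and values are invented for illustration; nothing here is an established schema:

```python
# One hypothetical extended procedural metadata entry for a sound-processing step
entry = {
    "action": "apply reverb",                    # HOW: the processing step applied
    "parameters": {"decay_s": 2.5, "wet": 0.4},  # settings used for the step
    "target": "voice_take_03.wav",               # WHAT the step was applied to
    "motivation": "place the voice in a larger imagined space",       # WHY
    "experience": "the dry take felt too close, almost claustrophobic",
}

def why(e):
    """Retrieve the WHY layer this post argues procedural metadata should carry."""
    return e.get("motivation")
```

The existing definition of procedural metadata covers only the first three fields; the last two are the extension being advocated here.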


[1] Garrelfs (2015), p. 140.
[2] Karen Eng (2013). The butterfly effect: Fellows Friday with Julie Freeman. (Blog entry.) Available from
[3] Lehikoinen et al. (2007). Personal Content Experience: Managing Digital Life in the Mobile Age. Chichester, UK: Wiley, p. 148.

Metadata and experience

From Iris Garrelfs

Yesterday, Thanasis and I met briefly to further discuss the nature and role of “experience” in relation to metadata. Not an easy subject to get one’s head around, but necessary, as in my experience (no pun intended) sensorial experiences such as listening or touching can be an important input to making work.

Grappling with the question of what exactly we might want or indeed need to capture under this heading, we looked at a few examples based on another area we had previously identified as one that metadata in creative domains might be recruited from (see graphic below).


For instance, in the realm of technique, aspects such as muscle memory or motor skills, arrived at from having experienced something many times over, play a significant role – think of me playing the violin (better not) or Yehudi Menuhin. Joking aside, such “experiential objects” could be important for other domains, for example in identifying what practical skills are needed to carry out specific experiments efficiently.

[Screenshot: experience-related metadata areas]

The direct sensorial experiences I mentioned earlier, of course, are of an ongoing nature, and fiendishly difficult to observe, especially without practice, let alone to record. However, specific sensorial conditions, perhaps in the form of effects that particular locations/conditions have on the participants concerned, may encourage certain outcomes and therefore need stating. Some of these may even lead to somewhat longer term states which may be relevant – the domain of psychology springs to mind.

As a result of this meeting, we felt that the term “experience” is perhaps not quite the perfect fit for the task, and I will revisit literature from the field of embodied cognition for further ideas. That said, the way we are looking at it at the moment is that, even if we do label this potential metadata field “experience”, we need to provide a set of examples in the active metadata guidance document, to be produced in the near future. Perhaps this document might also contain suggestions as to how “experience” could be recorded/coded in a meaningful way.

Very much to be continued. In the meantime, comments and ideas are warmly welcomed.

Docs meeting, Edinburgh, 29th September

Mike, Magnus, and Colin are meeting at the University of Edinburgh to review and augment two of the documents that we are working on, leading up to the structural outline of the guidance that is one of our final deliverables. Both documents will be discussed in full when the ‘Docs group’ meets on 22nd October.

Procedural Blending for Collaboration

From Iris Garrelfs

At our all-hands-on-deck meeting in Southampton earlier this week I was asked to write a little about one of the other envisaged applications of Procedural Blending: how it may aid the process of inter/cross-disciplinary collaborations. These can, as we know, be fraught with difficulties, especially where concepts of participating collaborators differ widely. At the same time, collaborations are ever more important in all research domains. Procedural Blending offers an approach and a potential structure for fruitful collaborative activities across domains. The diagram below gives a basic graphical overview of the idea, but let me flesh this out a little.

To begin with, we may understand each discipline or participating collaborator, say an artist and a scientist, as a distinct blend field, each of which will come with a set of key inputs or themes, elements of which will need to find correspondence in the envisaged output. As a reminder, an input describes everything that flows into the process in question, from concepts or project parameters to tools or physical interactions. Less important inputs and their elements will also be present, although they might not have to appear in the output at all, or at least not to the same extent.

It’s worth spending a little time on identifying and communicating what these inputs/elements consist of; making these aspects explicit methodically may facilitate connections between even very diverse subjects. We can then ‘drill’ into each input or element as deeply as needed until connections between them are established – or a blend is achieved, to use a bit of jargon.
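The drilling idea can be caricatured in code. The expansion table and element names below are entirely invented, and in practice the drilling would be a conversation rather than a lookup, but the iterative structure is the same:

```python
def drill(elements, expand):
    """One level of drilling: replace each element by its sub-elements."""
    return {sub for e in elements for sub in expand(e)}

def find_blend(a, b, expand, max_depth=3):
    """Drill into both collaborators' inputs until they share an element."""
    for _ in range(max_depth):
        common = a & b
        if common:
            return common  # a blend is achieved
        a, b = drill(a, expand), drill(b, expand)
    return a & b

# Invented example: one of the artist's inputs and one of the scientist's
expansions = {
    "field recording": {"texture", "sense of place"},
    "sensor data": {"time series", "sense of place"},
}
expand = lambda e: expansions.get(e, {e})

print(find_blend({"field recording"}, {"sensor data"}, expand))
# -> {'sense of place'}
```

The point of the sketch is only that connections often sit one or two levels below the surface descriptions each party starts from, which is why making the elements explicit methodically pays off.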

Of course, as an iterative process, the whole procedure may take some time and joint evaluations are very important before moving on to the next stage. These will allow all participants to reflect and comment on the value of the achieved blends/connections, considering individual perspectives as well as joint goal-oriented view points.

There are a host of established techniques that can be inserted to facilitate making connections at each stage. Two of my favoured ones are what I call the “flipside approach” and the “throwing-the-spanner-in-the-works technique”. Roughly speaking, the flipside approach involves taking one thing, perhaps the first obvious idea that comes to mind, and then trying out its exact opposite as a springboard for further development. It might not sound like much on paper, but the results can be quite surprising!

The spanner-in-the-works technique is a little more elaborate. Essentially, one takes something completely random and tries to establish a connection between it and key ideas that were perhaps difficult to get to grips with (bearing in mind that both the “key idea” and “it” may be an artefact, concept or process), and sees what result that connection attempt yields. In all likelihood, nothing directly relevant to what you are working on will emerge straight away, but this activity does uncover “hidden” values (key meta-elements), and in being fun and overtly beside the point, frees up blockages and changes perspectives.

On the whole, the view embedded in Procedural Blending, where elements from inputs are connected through an iterative process of engagement to arrive at outputs, lends itself to unpacking different concepts, working practices and so on. And unpacking such inputs can ultimately help us find ways to connect elements that are important to the individuals taking part in a collaboration.


Team meeting, University of Southampton, 21 September 2015

Colin has added a summary report of our recent team meeting to the page:

Smoke gets in your Eyes

Iris Garrelfs has now finished this audio-visual piece, which she described in a previous post –

You can see the film at

Summer activities

During the first six weeks of Phase 2, the CREAM team have been making progress with understanding the diversity of the research in which metadata might be used actively. Remaining conscious that our primary goal is to produce guidance and exemplars that will enable research to become more effective, agile, and timely, we are mapping out the stages that will be involved in the process of compiling that guidance.

A key stage of the process is to develop a clear description of the concept of active use of metadata or, to use the shorter form, the concept of active metadata. We are also making fair progress with a “getting started” guide to enable researchers to recognise and characterise both actual and potential active use. Currently, we envisage this being a stand-alone guide as well as a component of the guidance. We are designing the guide around a generic tool for eliciting the nature of the metadata being generated and captured, and are increasingly confident that Annalist meets all the emergent requirements for such a tool. Graham Klyne has introduced updates to improve Annalist support for the kinds of modelling that seem to be required for CREAM. Enhancements over the past month or so include support for attached media (images and audio clips), extensive internal changes to support human-readable labels rather than internal identifiers in option selection lists, and limited support for subtypes to facilitate display adaptation to suit the kind of concept being presented.

CREAM aims to be as discipline-agnostic as possible, as evinced by the range of interests of its partners. Exchanges of views between the artists and scientists have proved valuable in opening up discussion about the nature of metadata and data. Iris Garrelfs observes that for her the main problem in looking at her metadata has been to find a demarcation line between where, within the artistic context, data ends and metadata starts. During our forthcoming team meeting on 21st September, the scientists and engineers in the team will be looking forward to learning more about Procedural Blending (PB) as a metadata model for artistic creation.

Annalist meeting, 27 August 2015

On Thursday 27th August, Colin and Cerys met with Graham at Wolfson College, Oxford to discuss the potential role(s) of Annalist in characterising metadata usage, with a longer-term view to putting together a “getting started” guide for researchers wishing to use Annalist to explore their active use of metadata.

Our meeting was both constructive and productive and will inform several aspects of the CREAM project:

  • The “getting started” guide itself;
  • How we describe the concept of active use of metadata;
  • Our approach to producing the structural outline for the guidelines about active use of metadata (a Phase 2 deliverable);
  • The modelling of chemical experiment recording, which should influence our thinking about the design of a generic digital research notebook (DRN).

Thanks to Graham for choosing the venue. It could well be a good (and very pleasant) place to hold a future team meeting.