Postcards from the field: Studying the Neolithic figurines from Koutroulou Magoula, Greece

Clay Neolithic figurines are among the most enigmatic of archaeological objects, depicting in miniature humans, animals, other anthropomorphic or zoomorphic beings, and often hybrid or indeterminate entities. Figurines have excited scholarly and public imagination and have given rise to diverse interpretations. The assemblage from Koutroulou Magoula, a Middle Neolithic site (5800–5300 BC) in central Greece, excavated under the co-direction of Prof. Yannis Hamilakis (University of Southampton/British School at Athens) and Dr Kyparissi-Apostolika (Greek Ministry of Culture), offers a unique opportunity to revolutionise the way we study and understand prehistoric figurines.

The video presents the project ‘Corporeal engagements with clay’ (funded by the British Academy and directed by Prof. Y. Hamilakis), showing aspects of our work in recording, visualising and replicating the figurines from Koutroulou Magoula using a tailor-made database alongside drawing, photography, photogrammetry, laser scanning, reflectance transformation imaging and 3D printing.

 

Update on the Hoa Hakananai’a Statue

In 2012 ACRG members James Miles and Hembo Pagi completed a series of RTI captures and a photogrammetry model of the Easter Island statue Hoa Hakananai’a, which is currently housed in the British Museum. Since then, in collaboration with Mike Pitts, we have examined the results of these RTI files and compared them with the photogrammetry model. A brief discussion of this work can be seen in a previous blog post. The methodology we used allowed for a full analysis of the statue that had previously been impossible. We could examine the RTI files in fine detail, using raking light and the viewer’s rendering modes to reveal changes in the surface. Where we thought we had identified something important, we could then check whether it existed in the 3D model. This comparison allowed the subtle 2D differences in the RTI to be mapped against the 3D surface differences in the virtual replica, and through this we were able to clear up some of the often ambiguous interpretations of the petroglyphs. More on our results and methodological approach can be found in our recently published Antiquity paper, our Antiquaries Journal paper, Mike’s paper in the Rapa Nui Journal and our soon to be published paper in the Proceedings of the 41st Computer Applications and Quantitative Methods in Archaeology Conference. The work has also gained a lot of public interest, and our research features in recent online publications that are a Google search away.

Since these publications I have been working, as part of our initial research aims, on producing online versions of the datasets, where users can view and manipulate the records that we have. This will allow people with different backgrounds to come to their own conclusions. Part of this blog post is to make available, for the first time, the RTI files that were produced during our investigations. At Southampton we are working on a newer and better online RTI viewer, but for the moment we have to produce low-resolution datasets for online use. The following is therefore a severely reduced RTI dataset, but the results are still evident nonetheless. The RTI files have been separated into five sections, one of the front and four of the back (with a slight overlap), to allow for a greater understanding.

In Van Tilburg’s recent response to Mike’s paper (in the same Rapa Nui Journal), she states that “The major issue with regard to PTM (RTI), is that to advance a thesis of interpretation and avoid bias one must allow review of all of the images produced, not just selected ones that support a given point of view.” I would like to clarify this statement, as it seems that the purpose of RTI has been overlooked. Each of the five files was built from between 57 and 87 images; gathering more than about 90 would be counter-productive given the way the individual images are processed. Van Tilburg’s comment suggests some confusion about how an RTI is produced. Rather than requiring each individual image to be examined, the RTI builder combines all of the static images and merges them into a file format that allows the virtual movement of a light source. This removes the need to examine each image separately, which, I agree, could lead to incorrect conclusions. Instead it allows a greater depth of investigation through the combination of these static images and the movement of virtual light, removing the ambiguous and problematic context of previous investigations of the statue. Van Tilburg also neglects to mention the use of our virtual model (which was based on 150 images) within our interpretations, as she wrote her article before ours were fully published. Moving between the RTI files and the 3D model is something that has never been done before with this statue, and her claim that we relied on selected views is wrong and short-sighted. She has made derogatory remarks about our work without viewing the entirety of our research. This needs to be corrected, and so also included within this blog post are updated versions of our photogrammetry model in different online viewers. This will allow you to make a fuller and more complete interpretation, as you too can move between the two datasets that we used and come to your own conclusions.
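To make the processing pipeline concrete: a PTM-style RTI stores, for every pixel, the coefficients of a biquadratic function of the light direction, fitted by least squares across all the captured images. The sketch below is a simplified illustration of that idea (the array shapes and function names are my own, not RTIBuilder’s):

```python
import numpy as np

def fit_ptm(images, light_dirs):
    """Fit a per-pixel biquadratic Polynomial Texture Map (PTM).

    images:     (N, H, W) array of luminance values, one per light position
    light_dirs: (N, 2) array of normalised light-direction components (lu, lv)

    Returns an (H, W, 6) array of PTM coefficients.
    """
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    # Design matrix: one row per input image, using the standard PTM basis
    A = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)
    n, h, w = images.shape
    b = images.reshape(n, -1)                       # (N, H*W)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)  # (6, H*W)
    return coeffs.T.reshape(h, w, 6)

def relight(coeffs, lu, lv):
    """Evaluate the fitted PTM for a new virtual light direction."""
    basis = np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])
    return coeffs @ basis
```

Once the coefficients are fitted, the original photographs are no longer consulted at all; relighting evaluates the polynomial for any virtual light position, which is why examining the individual input images one by one misses the point of the technique.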

Although I may not have a full understanding of Rapa Nui archaeology, I do have considerable expertise in the digital technology used in our investigation. I can say with high certainty that the results shown are the most accurate and most complete investigation ever completed on the Hoa Hakananai’a statue. I spent many months going through these datasets to find hard-to-see details, making sure that any results were checked many times. I have used these technologies on a range of different items, from prehistoric through to Victorian times, from small to large objects, and I have been lucky enough to work all over the world doing this. The technology stands firm in every instance and provides clearer results than previous methodological approaches. It is therefore hoped that the inclusion of the original files will provide further insight into, and clarification of, the full published record that we have presented over the last year. We will discuss our results yet further in the new television series “Treasures Decoded”, which will be broadcast September 10th on More4 (in the UK) and via the History Channel. In the meantime please carefully view the results shown below, and please do get in contact with us if you see anything that we have missed!

The following RTI files are of the same resolution as those used in our investigation. Each link below opens a separate RTI file in a web RTI viewer produced by the Visual Computing Lab in Pisa, Italy.

Front of the Statue

Lower Back of the Statue

Middle back of the Statue

Top of back of the Statue

Back of head of the Statue

 

For our highest resolution online model please visit this custom-made website. Most online viewers limit the number of faces that can be included, but the viewer (based on 3DHOP) that I built loads the model progressively according to your internet connection.

 

 

The 2014 Excavation Season is here!

Goodness me, it has been a busy three days! We have arrived on site and have been busily removing turf and beginning to set out how we are going to work on the archaeology that we are already uncovering. This site is so rich in standing remains and there are thousands of bricks just inches under the surface. Tudor bricks everywhere!

We’ve based the location of our first trench on the results of the geophysical survey carried out by students and headed up by Dom Barker, Tim Sly and Kris Strutt of the University of Southampton. Dom is on site with us this year and has been talking the students on their compulsory summer fieldwork dig through the ways we can use magnetometry data to plan trench locations.

Two of the students who are digging this year are planning a blog post that will be published later in the week that will summarise the week’s findings, but for now we wanted to give you an idea of the kinds of research questions that we will be asking this year as we dig in the New House.

Research Aims

1. To gain a better understanding of the form and arrangement of the New House and to think about different phases of building.

2. To consider the appearance of the house within the landscape.

3. To connect plans from the early excavations to the archaeology and see how accurate the original plans were.

4. To better understand how the New House was destroyed and whether there were multiple stages to this process.

We’ll write a fuller description of each of these questions over the next few days so that you can follow along as we find evidence for our working ideas.

Next week we are going to be creating some 3D models of the trench so far so that we have a record of the first phase of our digging. So we will share these with you as we make them.

What an exciting season we have ahead of us!


Filed under: 2014 Excavation, Excavation Plans Tagged: 2014, 3D models, buildings, civil war, photogrammetry, plans, recording, research questions, tudor

Aerial Photogrammetry at Portus

In the previous post by Stephen Kay on Unmanned Aerial Vehicles at Portus, he discussed the work that has been completed on site in terms of capturing aerial photography. Aerial photography plays a significant part in the understanding of any archaeological site, and this is especially true at Portus. As Stephen says, it gives the archaeologist a bird’s-eye view of an excavation and the opportunity to see the plan of structures, their relationships with each other and alignments which are not visible at ground level. These images are, however, still static 2D representations of the area of interest. Part of the work that I have completed for the Portus Project uses these static images to produce 3D reconstructions through photogrammetry.

In 2013, for the first time, Portus used UAVs to capture these aerial photographs, having previously used balloons, elevated cranes and a local police helicopter. The introduction of UAVs has allowed the team at Portus to control the documentation of the site in more ways than was previously possible. The UAVs used at the end of the field school gave us full control over how the images were taken: because we could control the flight path, we could focus on specific areas for longer, using the site’s high-resolution cameras. They allowed us not only to gather information from great heights but also to capture images from varying angles and altitudes. This change in angle and distance is essential in producing a photogrammetry model, as the images need to contain enough pixel resolution and overlapping data for a 3D model to be built. With the balloons and cranes used previously, the field of view was limited: there was rarely enough room to manoeuvre the crane around the area of interest, and the balloon system was affected too much by changes in the wind. At Portus we were able to work closely with our Italian colleagues to produce the images needed for the photogrammetry modelling, and the results shown below highlight the potential that this methodology has for the understanding of the site.
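As a rough illustration of why flight control matters, the ground coverage of each frame and the spacing needed for a given overlap follow from simple camera geometry. The numbers below are illustrative (a nominal 36 MP full-frame camera with a 35 mm lens at 50 m), not the exact kit flown at Portus:

```python
def ground_sample_distance(altitude_m, focal_mm, sensor_w_mm, image_w_px):
    """Ground distance covered by one pixel, in metres per pixel."""
    return (sensor_w_mm / 1000.0) * altitude_m / ((focal_mm / 1000.0) * image_w_px)

def shot_spacing(altitude_m, focal_mm, sensor_w_mm, overlap):
    """Distance between exposures (along the sensor-width axis) for a given
    forward-overlap fraction, e.g. 0.8 for 80 per cent."""
    footprint = (sensor_w_mm / 1000.0) * altitude_m / (focal_mm / 1000.0)
    return footprint * (1.0 - overlap)

# Illustrative figures: 36 mm wide sensor, 7360 px across, 35 mm lens, 50 m up
gsd = ground_sample_distance(50.0, 35.0, 36.0, 7360)
spacing = shot_spacing(50.0, 35.0, 36.0, 0.8)
```

At those settings each pixel covers roughly 7 mm on the ground, and an 80 per cent forward overlap requires an exposure about every 10 m along track, which is exactly the kind of regular, repeatable coverage a controlled flight path makes possible.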

Photogrammetry model of the 2013 excavation of the Palazzo Imperiale

 

Photogrammetry model of the 2013 excavation of the Navalia

The datasets provide unique views that archaeologists on the ground are unable to see. The Navalia excavation highlights the potential of recording subtle surface details that are often missed by traditional methods. The datasets are not as good as the laser scan models because these models were produced solely from the aerial photographs. As this was a case study we wanted to examine how well photogrammetry could be used on site; in future the models produced will combine these aerial photographs with photographs taken during a normal data capture. Laser scanning will always provide better results for large areas than photogrammetry, due to the way the two recording methods process their data. To obtain more comparable results we need to gather more photos at lower altitudes, giving a much larger number of higher-resolution images of the same area (instead of one image we could have twenty) and so a higher-resolution model. Again, our aim is to do this in the future.

Having processed the photogrammetry models from 2013, there was a second recording session in early 2014. The same UAV could not be used, so data was instead captured using a GoPro Hero3 Black. This offers a lower resolution than the 36 MP camera used previously, but the results produced are again very comparable, thanks to the way the data was captured. GoPros exhibit greater lens distortion than normal cameras because of their fisheye lenses, and this affects the production of the photogrammetry model because of the changing pixel representation. The camera was therefore set to its medium resolution, with the captured images providing 7 MP each. A more focussed flight path was chosen to allow a series of close-up images with the necessary overlap. In total several thousand images were collected. Each was processed to remove the lens distortion and cropped to remove the erroneous data introduced at the edges by the distortion removal. The best images were then chosen and a number of models were produced.
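The distortion-removal step can be illustrated with a toy version of a purely radial (Brown-style) model; real workflows use a calibrated profile for the specific camera and lens, but the principle of inverting the distortion numerically is the same. The coefficients here are invented for the example:

```python
import numpy as np

def undistort_points(pts, k1, k2, iters=20):
    """Invert simple radial (Brown) distortion on normalised image coordinates.

    The forward model is x_d = x_u * (1 + k1*r^2 + k2*r^4), where r is the
    radius of the undistorted point. That has no closed-form inverse, so we
    recover x_u by fixed-point iteration: repeatedly divide the distorted
    coordinates by the distortion factor evaluated at the current estimate.
    """
    pts = np.asarray(pts, dtype=float)
    und = pts.copy()
    for _ in range(iters):
        r2 = (und ** 2).sum(axis=1)
        factor = 1.0 + k1 * r2 + k2 * r2 ** 2
        und = pts / factor[:, None]
    return und
```

For moderate distortion this converges quickly; heavily distorted fisheye frames also lose usable data at the edges after remapping, which is why the corrected images were cropped before modelling.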

Photogrammetry model of the NS mole
Photogrammetry model of the mole

Having processed this data, my attention turned to the archive that the Portus Project holds. Over 30,000 images have been taken around the site documenting its changing pattern. These images were never intended for the production of 3D models, but simply as a way to record the site year by year. I have posted a review of Agisoft PhotoScan here, and the great thing about the software we use to process our models is that it will try to produce a virtual model regardless of the input images. As a trial I used the images that Professor Simon Keay captured in 2009 from the police helicopter, and I successfully produced a high-resolution model of the entire site. I have since produced a number of other models from our archive and will slowly make my way through it to see what else can be produced. Although the results are missing some data because of gaps in the information, the key point is that we are not only using our archived images but are now able to virtually document the changing pattern of the site since the excavation at Portus began. This opens further research avenues that can be incorporated into the digital recording that will take place over the next few years.

Photogrammetry model of Portus from 2009

 

Photogrammetry model of Building 1 and Period 6A defensive wall from 2008
Photogrammetry model of the excavation of Building 1 with the footings of Building 4 and Period 6A defensive wall from 2009

A video from the on-board GoPro camera used on the DJI Innovations drone, highlighting the field of view, stability and flight path of the UAV.

A video of the DJI Innovations S800 Spreading Wings drone taking off and landing. Notice the stability provided by the multiple propellers.

Integrating Types of Archaeological Data – Dan’s Major Project

Dan Joyce, our trench supervisor for the 2013 summer field season last year, has written a blog post to summarise his major dissertation project.

Dan studied for the University of Southampton Masters in Archaeological Computing, which he completed at the end of 2013 (well done Dan from the Basing House team!).

Dan’s project looked at how archaeologists can mesh together different types of archaeological data. The Masters is run by the Archaeological Computing Research Group.

The course has two major strands to it, one concentrates more on 3D graphics and the theory of archaeological visualisation (Gareth and I are also graduates from this programme), and the other on geographical information systems and survey.

Thanks to Dan for writing this post. 

CLICK ON AN IMAGE IN THIS ARTICLE TO SEE IT UP CLOSE.

Dissertation on the integration of digital archaeological data

Introduction

My dissertation for my Masters in Archaeological Computing (Virtual Pasts) at the University of Southampton was concerned with integrating different types of digital archaeological data from Basing House. This included a total station and GPS (Global Positioning System) topographical survey of the site, a total station building survey of the 16th-century manorial barn, lidar data of the site, geophysical survey data, total station and photogrammetry data and section drawings of the 2013 excavations, as well as digital context information.

Topographical and building survey integration

As part of the practical aspect of the Advanced Archaeological Survey course at the University of Southampton, a topographical survey of the site was undertaken using a total station and GPS to record points on the ground. These points were then processed both in the GIS (Geographic Information System) software ArcMap and in AutoCAD Civil to form a coherent surface. In AutoCAD Civil a TIN (Triangulated Irregular Network) was created; this forms a surface by joining the points together into triangles (figure 2). In ArcMap a raster DEM (Digital Elevation Model) was created; this forms a much smoother surface by interpolating between the known points (figure 3).
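The same TIN-versus-raster contrast can be reproduced with open tools. The sketch below uses SciPy with synthetic survey points rather than ArcMap or AutoCAD Civil, but the two outputs are analogous: a Delaunay triangulation of the raw points, and a regular grid interpolated between them:

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.spatial import Delaunay

# Synthetic survey points: easting, northing and elevation on a gentle slope
rng = np.random.default_rng(42)
xy = rng.uniform(0, 100, (200, 2))
z = 50.0 + 0.1 * xy[:, 0] - 0.05 * xy[:, 1]

# TIN: triangulate the raw points (the AutoCAD Civil approach)
tin = Delaunay(xy)

# Raster DEM: interpolate onto a regular grid (the ArcMap raster approach);
# cells outside the convex hull of the survey points come back as NaN
gx, gy = np.meshgrid(np.linspace(5, 95, 50), np.linspace(5, 95, 50))
dem = griddata(xy, z, (gx, gy), method="linear")
```

The TIN honours every surveyed point exactly but looks faceted, while the raster DEM reads as a smoother continuous surface, which matches the difference visible between figures 2 and 3.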

A standing building survey was also undertaken on the 16th century manorial barn using a total station (figure 1).

Figure 1 – Total station building survey of manorial barn

The two surveys were combined, with the topographical survey and the building survey appearing together in their correct positions within the British National Grid reference system, as can be seen in figures 2 and 3.

Figure 2 – TIN of topographical survey with the building survey of manorial barn

Figure 3 – Raster of topographical survey with the building survey of manorial barn

Geophysical survey integration

As part of the practical aspect of the Archaeological Geophysics course at the university, a geophysical survey of much of the site was undertaken, involving resistivity, magnetometry and ground penetrating radar surveys. The results from these surveys were integrated with those from the topographical survey within ArcMap, as can be seen in figures 4 and 5.

Figure 4 – Resistivity survey overlain on top of TIN of topographical survey

Figure 5 – Magnetometry survey overlain on top of TIN of topographical survey

Another step was to integrate the lidar data procured for the site with GIS and the geophysical survey data, as seen in figure 6.

Figure 6 – Geo-physical survey data overlain on top of lidar data of the old house

Lidar data

As we had procured lidar data of the site from the Environment Agency, it was decided to experiment with it. A number of features could be seen in the lidar data of the common using a hillshade in ArcMap (or other software). A hillshade creates an artificial light source within the software from a set direction and altitude, causing shadows to be formed by any raised areas in the lidar data (figure 7); altering the direction and altitude of the light source can reveal different features. More on this can be seen in my blog on processing lidar data.

Figure 7 – Lidar data of the Common showing a number of interesting features

Some of these features can also be seen in the geophysical survey of the common undertaken by Clare Allen.

Figure 8 – Geo-physical survey of the common overlain on top of the lidar data
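For readers who want to script this outside ArcMap, the hillshade itself is a short calculation: slope and aspect are taken from the DEM gradient, and each cell is shaded by the cosine of the angle between its surface normal and the virtual sun. This follows the standard formula rather than any particular package's exact implementation:

```python
import numpy as np

def hillshade(dem, azimuth_deg=315.0, altitude_deg=45.0, cellsize=1.0):
    """Shade a DEM from a virtual sun at the given azimuth (clockwise from
    north) and altitude above the horizon; returns values in [0, 1]."""
    az = np.radians(360.0 - azimuth_deg + 90.0)  # convert to maths convention
    alt = np.radians(altitude_deg)
    dz_dy, dz_dx = np.gradient(dem, cellsize)    # surface slope components
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(-dz_dx, dz_dy)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)
```

Re-running with different `azimuth_deg` and `altitude_deg` values reproduces the effect described above, with different features appearing under different light directions.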

3D visualisation of existing archaeological data

As an aid to understanding the 1960s excavations before we began the 2013 excavations, I digitised the plans and sections and created a 3D model in AutoCAD. Although far from perfectly accurate, the model made it much easier to understand how features related to each other in this earlier dig and what was missing.

Figure 9 – 3D model created from 1960s excavation data

I experimented with a number of methods of tying the context information from these excavations to the sections, including just displaying it next to the section within AutoCAD.

Figure 10 – Digitised section with context information

A later attempt with the data from the 2013 excavations involved entering the context information from the excavation into an ARK database (a web-accessible database solution created by L-P Archaeology). A hyperlink referencing the relevant webpage in the database was attached to each context, so that this data could be displayed with one click on the relevant context within AutoCAD.

Figure 11 – Digitised section with ARK database record

Special find information could be displayed in the same manner by clicking on the relevant point in the model.

Photogrammetry

Photogrammetry is a technique whereby 3D models can be created from multiple overlapping photographs by matching the same point in each photograph. As well as using it to record the 2013 excavation, I experimented to see whether slides from the 1978–83 excavations could be used to create a 3D model of that dig. Although it was quite successful, much more work needs to be done on the process, including surveying in known points on site to aid with stitching the photographs together.

Figure 12 – Photogrammetry model of the old house gatehouse from the 1978-82 excavations

Photogrammetry was also undertaken on Box 8A during the 2013 excavations to see how good a 3D model could be created (figure 13); four nails were driven in at the four corners of the box to act as ground control points.

Figure 13 – Photogrammetry model of Box 8A

Figure 13 shows Box 8A with the four ground control points, which were surveyed in, allowing the integration of the photogrammetry model with ArcMap, as can be seen in figure 14, where the model is displayed in its correct position underneath the TIN created from the topographical survey.

Figure 14 – Integration of photogrammetry data with topographical survey within ArcMap
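The step of placing a photogrammetry model into the survey coordinate system using ground control points is, at heart, a least-squares similarity transform. The sketch below implements the standard Umeyama/Kabsch solution for scale, rotation and translation (the function name is my own, not ArcMap's); the four surveyed nails provide more than the minimum of three non-collinear points needed:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares scale, rotation and translation mapping src -> dst.

    src, dst: (N, 3) corresponding points, e.g. GCP coordinates in the
    arbitrary photogrammetry frame (src) and as surveyed on site (dst).
    Returns (scale, R, t) such that dst ~= scale * R @ src + t per point.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    cs, cd = src - mu_s, dst - mu_d
    cov = cd.T @ cs / len(src)                 # cross-covariance matrix
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))         # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    scale = (S * np.diag(D)).sum() / (cs ** 2).sum(axis=1).mean()
    t = mu_d - scale * R @ mu_s
    return scale, R, t
```

With the transform solved, every vertex of the model can be mapped into the grid and the mesh dropped into place under the survey TIN, as in figure 14.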

Experimentation was also conducted on recording the excavations from above using a camera attached to a 3m pole (figure 15).

Figure 15 – Elevated photography on a pole being undertaken

This technique allowed the creation of a 3D photogrammetry model of the whole excavation (figure 16).

Figure 16 – Photogrammetry model of the 2013 excavations

3D contexts

Because the 2013 excavation was recorded with a total station, by surveying the outline of contexts and taking levels on top of them, it was possible to experiment with creating 3D contexts within AutoCAD. First the points from a context were turned into a TIN surface (figure 17).

Figure 17 – Wireframe surface created from total station survey of a context

Then the surface was extruded downwards (figure 18). The same was done with the context below, and the second 3D object was subtracted from the first to form a 3D context. This was continued until all the contexts had been created in 3D.

Figure 18 – Wireframe 3D context

Because few contexts were actually removed during the excavation, part of one of the sections was chosen for this process, resulting in a series of 3D contexts within AutoCAD which could be removed at will, virtually recreating the excavation process (figure 19). The volume of each 3D context could also be calculated, adding this information to that recorded during the excavation.

Figure 19 – Section of 3D contexts created from total station data
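The volume figure mentioned above can be approximated directly from the total station data by summing a vertical prism over each TIN triangle (plan area times mean height above a base level). This is an illustration of the principle, not AutoCAD's own routine:

```python
import numpy as np

def tin_volume(points, triangles, base_z):
    """Volume between a TIN surface and a horizontal base plane.

    Sums one vertical prism per triangle: plan (2D) area times the mean
    height of its three vertices above the base.

    points:    (N, 3) surveyed x, y, z coordinates
    triangles: (M, 3) vertex indices of the triangulation
    """
    points = np.asarray(points, float)
    total = 0.0
    for tri in triangles:
        p = points[tri]
        (x1, y1), (x2, y2), (x3, y3) = p[:, 0:2]
        # plan area of the triangle via the 2D cross product
        area = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
        total += area * (p[:, 2].mean() - base_z)
    return total
```

Running this with the top surface of a context against the extruded bottom surface (or a level base) gives the deposit volume recorded alongside the context sheet.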

Integration of total station excavation data

Because the excavation was recorded digitally using a total station, it could easily be combined with the topographical survey and building survey data recorded previously. This can be seen in figure 20, where the surfaces created from the excavation data appear under the TIN created from the topographical survey in AutoCAD.

Figure 20 – Integration of excavation data with topographical survey in AutoCAD

Figure 21 shows the point data underneath a TIN surface in ArcMap, which is unable to display the surfaces created in AutoCAD.

Figure 21 – Integration of excavation data with topographical survey within ArcMap

Conclusions

Although this work demonstrates the potential for the integration of many different types of digital archaeological data a great deal of work still needs to be done to make it a practical process and to solve a number of problems.

Blog post by Dan Joyce


Filed under: Dan Joyce, Data Processing, Digital Methods, Excavation Plans, Geophysical Survey, Images, Magnetometry Survey, Spring Survey, Summer Excavation Tagged: 3D, 3D context, Archaeological Computing, ArcMap, ARK, AutoCAD, barn, building survey, context, Environment Agency, excavation, gps, Lidar, MSc, photogrammetry, TIN, topographic, total station, wireframe

The Rise of 3-D printing in Archaeology

With the announcement that Adobe, one of the world’s largest software companies, is to integrate 3-D printing support into its Photoshop package, it’s about time that the rise of 3-D printing was assessed, along with the impact it’s having on archaeology. Its rise has been unprecedented, but what have archaeologists done with it?

Definition and background

3-D printing has been defined as the creation of a “three-dimensional solid object of virtually any shape from a digital model”, and a 3-D printer costs anywhere between £300 and £3,000, although these are generic machines. A 3-D scanner for archaeologists is being developed through Kickstarter, the crowd-funding site, which can be used to create a 3-D model that could then be printed (see below). This shows that public support for these technologies is growing (at the time of writing). However, the “printed gun” incident shows that, combined with the open-source nature of the internet, thousands of people can create undetectable lethal weapons in their own homes. That said, this is a one-off case, and the popularity of 3-D printing is being shown in children’s drawings being turned into toys and decorations.

Grant Cox Kickstarter

The bad news

Increased 3-D compatibility opens the door to hundreds of businesses, and for archaeologists it offers hundreds of opportunities. For example, we could build a scale model of a landscape using accurate data, or have a 3-D printer create a model of an artefact to pass around the class. Alright, it would probably be in plastic and not metal or stone, but you get the idea. It would allow the collection to come to the classroom without having to go to the exhibit itself! This would remove the risk of damaging the artefact, throw new light onto an artefact through sensory analysis, and allow the exercise to be repeated without damaging the original. But this raises all sorts of issues; for one, does it remove the need for museums? Of course not, but it does jeopardise their raison d’être. Museums contain so much more than just the artefacts, but if the focal point of a display can be recreated in your living room, then what is the point of seeing the real thing? Several conferences have debated this matter of recreating archaeology, and whether the real thing is worth seeing over a copy or a virtual exhibit. Finally, any recreation will depend on the quality of the 3-D printer; you will never get the printer to perfectly replicate the artefact in question.

The good news

It’s not all doom and gloom for archaeology and heritage. Arch Aerial, a new company that specialises in using drones to record landscapes and sites, is using 3-D printers in an innovative way: by printing prototypes of their drones, they cut costs significantly by bringing every part of the design process into their office rather than outsourcing to an external firm. These drones can fly into areas that other technologies simply can’t, like a rainforest, and operate far quicker too; the photographs can be combined in a photogrammetry package (such as PhotoScan) to create 3-D models within 20 minutes of downloading a flight, allowing for real-time interpretation of sites, and the drones are stable enough (albeit noisy) that they don’t stop the excavation for more than a few minutes. Contrast that with helium balloons, kites, or a camera on a stick (I’ve seen most of these being used), where the quality and quantity of the photos, the limited range of environments they can work in and the disruption they cause to the excavation often make aerial photography on a budget an unpopular choice for most site directors. Additionally, this makes drones affordable to archaeologists: you can buy one of their kits for less than £2,500, or hire one for £1,000 for a three-month period, to capture high-quality images of your sites anywhere from 20–300 metres above the site. The quality of the photogrammetric models is also very similar to that of the more expensive Archeotech drone. Arch Aerial plan to install LIDAR units in place of the cameras; imagine the potential that would have for excavations and photogrammetric models!

Therefore, the 3-D printer is making sites more recordable than ever before, and combined with software packages it makes for an excellent tool to help interpret the site “at the lens edge” (to murder Ian Hodder’s quote). It should be pointed out that these are not the same drones used by the military, although if archaeologists start using drones in this way we would have to comply with the Civil Aviation Authority (CAA) in a new way, obtaining permission from the CAA for work purposes; and if we capture images of identifiable people, then the images come under the Data Protection Act. So this affordable technology comes with a small price to pay, in more ways than one.

3-D models for all archaeologists?

Related to this is the use of the internet to create, upload and share 3-D models for free on sites such as Sketchfab, which a number of archaeologists have already used to upload models of excavations and contexts. The Post Hole Journal has an article on this application. This is an area that I can only see going upwards; the sky is literally the limit in terms of how many models you want to make, and what scales should be used for the landscape approach. If you check out the models on Robert Barratt’s Sketchfab portfolio, though, you will see that it is the artefacts that get the modellers’ attention, preserving the artefact in multiple mediums, rather than the landscape itself. More innovative ways of using Adobe’s package to create 3-D models will only help with the semantic element of the web, where data can be linked together in single, understandable, “one-size-fits-all” terms that transcend languages and national agendas, in line with websites like Sketchfab.

Robert Barratt’s Sketchfab portfolio, 2013

A word of warning

Adobe’s package will allow customers more freedom to invest in home-grown and startup businesses, but where does this leave big companies that rely on recreating 3-D products and goods? In particular, an interesting battle is raging on the tabletop, with Games Workshop (the world’s largest tabletop wargaming company) competing with smaller firms who are offering a far cheaper (and arguably higher-quality) product. This has led to lawsuits in some cases! While this is not something I foresee with archaeological firms, we should bear in mind that the government is still waking up to the legal precedents that 3-D printers will set, which will have implications for the future. While archaeology may not be affected as much by these developments, it still serves as a warning for us not to get into the same battles.

Summary

While I wouldn’t trust a 3-D printed trowel for excavations, the potential of 3-D printers for archaeology is incredible. Some of the technology is already here, and a lot of it is affordable or can be run from your laptop. Big businesses may still be around, but expect to see a lot more home-based businesses using 3-D printers, and with that, archaeological innovations. In particular, recording archaeology is becoming a lot easier, pairing 3-D printers with photogrammetric software in a poetic harmony that even a few years ago would have been considered impossible, “taking off” into a world where high-quality data is accessible to all. However, some interesting issues remain, particularly over copyright and potentially some legal battles, and we must be ahead of the game in this respect; for more information about using drones in archaeology, I recommend the Civil Aviation Authority’s guidelines on UAVs (unmanned aerial vehicles).

En route for Easter Island and a piece of Google’s doodle

Photogrammetry image of the statue featured in the Google doodle from 15 January 2014 (James Miles, ACRG)

A century ago today, the Mana, an auxiliary schooner captained by Scoresby Routledge, stewarded by his wife Katherine and crewed by a collection of English seamen, fishermen, scientists and the odd Royal Navy lieutenant, had just been hauled up onto a floating deck in Talcahuano on the Chilean coast. They were nearly a year into their voyage. While the ship was being cleaned and checked, they collected supplies sent from England and divided up the stores to last for a six-month stay on Easter Island, their ultimate goal. In the meantime, Scoresby and Katherine took the Trans-Andine railway to Valparaiso. They visited Williamson & Balfour (owners of the island lease) and the Company for the Exploitation of Easter Island in Valparaiso, and studied the Easter Island collection in Santiago museum.

They sighted the island on March 29. Caught up in the Pacific repercussions of the First World War, in the event they were to remain there for nearly 17 months.

One of the many things Katherine Routledge achieved during that stay was to study the petroglyphs and stone houses at the south-west tip of the island. One of those houses was where Hoa Hakananai’a, the beautiful statue depicted in today’s Google doodle, was standing when it was found and taken by a Royal Navy crew in 1868. She also recorded details of the island’s birdman ceremony.

We found that both of these studies offered critical evidence for understanding Hoa Hakananai’a, which we incorporated in the analysis of our new 3D digital survey. Peer-reviewed articles about our project have now been accepted and, we hope, will appear later this year.

Google doodle from 15 January 2014

Photogrammetry

During the 2013 excavation season, whilst completing a series of laser scan models of the site, I also completed a number of photogrammetry captures of specific artefacts. The following are a few examples of the work completed; they provide a virtual record that archaeologists can use off site in their analysis of these artefacts. The Roman architectural fragments illustrated here are currently being studied by Dott.ssa Eleonora Gasparini through this process.

I am also currently working on a series of photogrammetry models based on the aerial images captured during the field season, which will be used as a direct comparison with the laser scan models currently being processed. The model of the Navalia excavation has been included below and highlights the potential that photogrammetry has for capturing the excavation process.

Rendered images of the Portus Capital, a Roman volute and the Navalia excavation

Netley Abbey

In January of this year, Dan Joyce and I completed a series of recording techniques at Netley Abbey, including time-of-flight and phase scanning, photogrammetry and Reflectance Transformation Imaging. The work was organised by Dan in collaboration with English Heritage for his individually negotiated topic for his master’s degree.

The main aim of the investigation was to teach Dan how to correctly capture difficult architectural remains using these techniques, and the following are examples of the work completed. Our investigation followed on from a past University of Southampton project and has now led to a full geophysical survey, a building survey and 3D models of the standing ruined architecture.

Part of the remains of the abbey was first recorded using a Leica ScanStation C10 and then captured further using a Faro Focus 3D. The C10 was used to record high-resolution data, at millimetre accuracy, of specific parts of the abbey that were too high for the Faro to capture at the same resolution; the Faro was instead used to capture an overall model of the building. Below are the scan positions that were used, followed by a short animation of the laser scan model.

Scan positions

Photogrammetry was used on a number of different areas within the building to highlight the differences between small-object and large-object data collection techniques. These were then used within Dan’s work as a way to compare the different recording techniques through the final representations available. Below are two renders of the East window of the abbey and a rendered image of a column.

Rendered images of the column and the East window

Further to these two methods, RTI was used on a series of graffiti markings to make the writing more legible. The research also included the capture of a series of gigapixel images: the first showing an overview of the building, the second highlighting the East window and the third showing the West window.

Conservation Project at Maori Meeting House

Recording Hinemihi using Computational Photography

On Sunday the 23rd June 2013, a team from the University of Southampton took part in Hinemihi’s annual Maintenance Day. Using cameras, combined with new computational photography techniques, the team recorded some interesting details of Hinemihi.

Hinemihi is a Maori Meeting House, one of only four outside New Zealand. She is situated in the grounds of Clandon Park, a National Trust managed site. Every year, the University College London Institute of Archaeology brings conservation students and staff to Clandon Park and, along with volunteers, works to clean Hinemihi. Yvonne Marshall, Eleonora Gandolfi and I (Nicole Beale) spent the day working with the staff and students present, alongside many volunteers, to create some RTIs of Hinemihi. We were delighted to be able to attend this year’s event.

Hinemihi in the grounds of Clandon Park.
The impressive facade of Clandon Park, which sits opposite Hinemihi
An example of the challenge that the UCL Conservation team and the Hinemihi community face in preserving and protecting Hinemihi. Note the flaking paint, and the damp green covering.

The meeting house has some interesting etched words on the carved wooden sculptures that make up her interior and exterior. These engraved words have been eroded and in some cases almost entirely lost beneath the many layers of paint that protect the wood underneath. We were interested to find out how useful RTI would be for recording and deciphering this text. I think you’ll agree that the results below give the answer: very useful indeed!

One of the many intricately carved wooden sculptures of Hinemihi. Upon which words are engraved.
An example of the engraved words that we were recording using RTI.

We also carried out a few other quick recording techniques, which I’ve outlined below. We’re hoping to return to Hinemihi soon with UCL, to contribute more to the recording of this important Maori Meeting House.

Useful Links

Hinemihi’s website, Te Maru O Hinemihi, is: http://www.hinemihi.co.uk/

National Trust’s webpages for Clandon Park and Hinemihi: http://www.nationaltrust.org.uk/clandon-park/things-to-see-and-do/maori-meeting-house/

UCL IoA website: http://www.ucl.ac.uk/archaeology/calendar/articles/20130619

You can read more about UCL’s involvement in Hinemihi here: http://www.ucl.ac.uk/archaeology/research/directory/hinemihi_sully

Photogrammetry

Taking a series of photographs, each overlapping its neighbour by a minimum of 30%, we created a 3D model of some of Hinemihi’s sculptures. If you would like to try this, a free photogrammetry package is available here: http://www.123dapp.com/
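As a back-of-the-envelope planning sketch (not part of our actual workflow), you can estimate how many photographs one full orbit of a sculpture needs from the lens’s field of view and the overlap you are aiming for; the 40° field of view below is an illustrative assumption:

```python
import math

def shots_for_orbit(horizontal_fov_deg, overlap_fraction):
    """Minimum number of photos for one full circle around an object,
    keeping the given overlap between neighbouring frames."""
    fresh_angle = horizontal_fov_deg * (1.0 - overlap_fraction)  # new coverage per shot
    return math.ceil(360.0 / fresh_angle)

print(shots_for_orbit(40.0, 0.30))  # a ~40-degree lens at the 30% minimum overlap
print(shots_for_orbit(40.0, 0.60))  # a safer 60% overlap costs more shots
```

In practice a generous overlap pays for itself, since frames that the software cannot match leave holes in the model.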

Photogrammetry Results

This is the result of the photogrammetry that we did on the roof support pole.

 

Reflectance Transformation Imaging

Using a technique called Reflectance Transformation Imaging, we made images of objects with an interactive light source. These images look just like photographs, but allow you to see surface information that would normally be invisible to the naked eye. You can try this yourself, as the software is free: http://culturalheritageimaging.org/Technologies/RTI/
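For the curious, the fitting behind that interactive light source can be sketched in a few lines. The Polynomial Texture Map approach fits, for every pixel, a six-term polynomial in the light direction, which can then be evaluated for any virtual light. This toy numpy version, a deliberate simplification of what the free tools do, fits and relights a single synthetic pixel:

```python
import numpy as np

def fit_ptm(light_dirs, intensities):
    """Fit the 6-term Polynomial Texture Map model to one pixel.

    light_dirs:  (n, 2) array of light direction components (lu, lv)
    intensities: (n,) observed brightness of the pixel under each light
    Returns the 6 PTM coefficients a0..a5.
    """
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    # L(lu, lv) = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5
    A = np.column_stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)])
    coeffs, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return coeffs

def relight(coeffs, lu, lv):
    """Predict the pixel's brightness under a new (virtual) light direction."""
    return (coeffs[0]*lu**2 + coeffs[1]*lv**2 + coeffs[2]*lu*lv
            + coeffs[3]*lu + coeffs[4]*lv + coeffs[5])

# Toy example: a pixel that brightens when lit from the upper right
lights = np.array([[0.0, 0.0], [0.5, 0.0], [-0.5, 0.0], [0.0, 0.5],
                   [0.0, -0.5], [0.5, 0.5], [-0.5, -0.5], [0.3, -0.3]])
observed = 0.6 + 0.3 * lights[:, 0] + 0.2 * lights[:, 1]  # synthetic measurements
c = fit_ptm(lights, observed)
```

Doing this for every pixel of a dome of photographs is what lets the viewer drag the light around an engraving and catch the shadows that make worn lettering legible.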

RTI Results

Click on the images below to see the results. Each image will rotate through the results of the RTI. Our Flickr page, the ACRG Flickr Group, has the high-resolution images:

Word no. 1

Word no. 1 close-up

Word no. 2

Word no. 2 close-up

Word no. 3

Word no. 3 close-up

Word no. 4

Word no. 4 close-up

Word no. 5

Word no. 5 close-up

Panoramas

Using a Gigapan, a robotic tripod head which attaches to a digital camera, we created high-resolution 360-degree panoramas of the areas around Hinemihi. You can try a free panorama-stitching package here: http://research.microsoft.com/en-us/um/redmond/groups/ivm/ice/
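A rough sketch of why a robotic head helps: the grid of frames needed for a panorama follows from the camera’s field of view and the overlap between neighbouring frames, and for a long lens the count quickly reaches hundreds, far beyond what you would want to step through by hand. The 5° telephoto frame below is an illustrative assumption:

```python
import math

def gigapan_grid(h_fov_deg, v_fov_deg, overlap=0.3, pano_h=360.0, pano_v=90.0):
    """Rows and columns of shots a robotic head needs to cover a panorama.

    Each new shot advances by the fraction of the frame not shared with
    its neighbour, so the step between shots is fov * (1 - overlap).
    """
    cols = math.ceil(pano_h / (h_fov_deg * (1 - overlap)))
    rows = math.ceil(pano_v / (v_fov_deg * (1 - overlap)))
    return rows, cols

# A telephoto frame (~5 x 3.4 degrees) covering a full 360 x 90 degree scene
rows, cols = gigapan_grid(5.0, 3.4)
print(rows, cols, rows * cols)  # thousands of frames -> a gigapixel result
```

The stitching software then only has to match the overlapping edges, which is why the consistent spacing of a robotic head gives much more reliable results than hand-panning.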

Panoramas Results

Hinemihi Hangi

On Sunday 30th June, Yvonne Marshall returned to Hinemihi to attend the annual Hangi fundraiser, organised by Te Kohanga Reo o Rānana. Yvonne handed out leaflets explaining the results of the RTI and photogrammetry outlined above, and showed people images of the 3D model of the central post that we made.

Action songs being performed at the Hangi.

The food was fabulous, and the weather gorgeous.

The Hangi is opened!
The cooked food is lifted and prepared for serving.

You can see more about the event at their website: http://kohanga.co.uk/hangi/