Hacking our core photos to pieces...
by Simon Harris

As part of the inaugural 2016 BGS Hack Challenge, we wanted to see if anything could be made of the BGS collection of digital core photographs; we have around 125,000 high-resolution photos.

It would be nice if we could reassemble the core so it appears as if it is laid out in a long line on the floor. This could help with visualising longer runs of core, for example when logging. The catch is that there is some variation between the images which, to the human eye, look “the same”. This can be easily demonstrated by overlaying a few images and adjusting the transparency on them:
A few of the core images overlaid with adjusted transparency
There are also a number of different styles of plastic tray in use to hold the core for photography. So it was clear that we could not use a simple “cookie-cutter” approach for each photo. We would have to consider each image as being potentially 'out of registration' with the next one.

Can we hack it? Yes we can!

As with all good problems, we broke the system down into smaller components. Broadly, these were:
  • Load the image from our large image server, which delivers all of our public facing core photos
  • Find the corners of the core boxes in the image, or (even better), the edges of the actual core
  • Cut away the non-data pixels
  • Match the top and bottom depths to the start and end of the core run in the image
  • Join the core end-to-end
  • Repeat until you have a complete core
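Putting those stages together, the heart of the process looks roughly like the sketch below. It is a minimal sketch in Python with Pillow rather than the tools we actually used, and the corner coordinates, filenames and row size are made up for illustration; the QUAD transform is one way of squaring up boxes that are slightly out of registration once their corners are known.

```python
from PIL import Image

ROW_SIZE = (4000, 300)  # output width x height per row of core -- an arbitrary choice

def extract_row(img, corners):
    """corners: (x, y) pairs for the upper-left, lower-left, lower-right and
    upper-right of one row of core, e.g. as clicked by a user.

    Image.Transform.QUAD maps that quadrilateral onto a clean rectangle, which
    also copes with boxes that are slightly out of registration between photos.
    """
    data = [coord for point in corners for coord in point]
    return img.transform(ROW_SIZE, Image.Transform.QUAD, data, Image.Resampling.BILINEAR)

def join_rows(rows):
    """Lay the extracted rows end-to-end, as if the core were laid out on the floor."""
    strip = Image.new("RGB", (ROW_SIZE[0] * len(rows), ROW_SIZE[1]))
    for i, row in enumerate(rows):
        strip.paste(row, (i * ROW_SIZE[0], 0))
    return strip

# Hypothetical usage for one photo containing two rows of core:
# img = Image.open("corebox_0001.jpg")
# rows = [extract_row(img, c) for c in clicked_corners]  # corners from the web UI
# join_rows(rows).save("core_run.jpg")
```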

How did we do it?

Looking at the stages outlined above, it was clear that finding the corners of the boxes would be the greatest challenge. We looked into automated image analysis and, whilst we were able to run basic filters, they took quite some time to run and, perhaps due to our lack of experience with the software, we were not going to be able to tweak them to produce reliable results in the time available.
The results of running the Canny edge detect filter on an image
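We did not take this much further on the day, but for illustration an edge detect of the kind shown above can be reproduced in a few lines; OpenCV and the filenames here are my choice for the sketch, not necessarily what we ran:

```python
import cv2

# Load the box photo as greyscale (hypothetical filename).
img = cv2.imread("corebox_0001.jpg", cv2.IMREAD_GRAYSCALE)
# Smooth first so the Canny filter picks up box edges rather than texture in the core.
blurred = cv2.GaussianBlur(img, (5, 5), 0)
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
cv2.imwrite("corebox_0001_edges.png", edges)
```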
Therefore, to get us to the next stage in the task, we decided to use an image processing system that was many times more powerful than the graphics cards in our laptops, and that would take only a few microseconds to discern the core from the background. The name of the system? The human brain….

Splitting up into three pairs, our team designed the three components that we needed to successfully complete the task:
  • Paul and Brian designed a JavaScript web interface which would retrieve the image and make marking the corners as easy as clicking with the mouse
  • Andy and Roman built Oracle database tables to hold the positions of the points, so we could re-use them at any later point in the processing
  • Paul and Simon looked into the viability of using image processing software, and wrote a simple script for ImageMagick which would perform the cropping and stitching
After barely a day of hacking in unseasonably hot conditions, we had a working demo. The user enters a corebox number, which loads the image and allows them to mark the corners of each box. This information is then stored and sent to a script which cuts out the core, then joins and rotates it, to give something like the image to the right.
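For a flavour of what that script does, the sketch below drives ImageMagick from Python; it assumes the `convert` command is available, and the filenames and pixel values are illustrative rather than the ones we used on the day.

```python
import subprocess

def run_convert(*args):
    subprocess.run(["convert", *args], check=True)

# Cut one row of core out of the box photo (-crop widthxheight+x+y), discarding
# the old canvas offset with +repage.
run_convert("corebox_0001.jpg", "-crop", "3800x320+120+200", "+repage", "row1.jpg")
run_convert("corebox_0001.jpg", "-crop", "3800x320+120+560", "+repage", "row2.jpg")

# Join the rows side by side (+append), then rotate the finished strip.
run_convert("row1.jpg", "row2.jpg", "+append", "core_run.jpg")
run_convert("core_run.jpg", "-rotate", "90", "core_run_vertical.jpg")
```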

Running over WiFi, and on a laptop, the whole process took about 60-90 seconds. However, of this only around 10 seconds was user input time, and the remaining processing could be deferred and run later. We also realised that we could “cascade” the cropping information to the next image, simply requiring the user to confirm the crop or adjust it as necessary.

If in the future we are able to find the corners using entirely automated image analysis, we can simply drop the code into the workflow without too much hassle. For the purposes of the hack, however, I think it was a valid choice to use the method which allowed us to demonstrate the process in the time available.

We also still have to work out how each sub-run is joined together to form the whole core, and write a further script which adds depth markers. There are some definite speed improvements that we can make, for example creating the JPEG preview from the JP2 in advance, or using the large image server to generate it on the fly.
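Pre-generating those previews is straightforward to script; the sketch below is one way of doing it with Pillow, assuming a build with JPEG 2000 support, and the paths and sizes are illustrative only.

```python
from pathlib import Path
from PIL import Image

def make_preview(jp2_path, out_dir, max_width=2000):
    """Write a reduced-size JPEG preview of a JP2 master image."""
    img = Image.open(jp2_path)
    scale = max_width / img.width
    preview = img.resize((max_width, int(img.height * scale)))
    out = Path(out_dir) / (Path(jp2_path).stem + ".jpg")
    preview.convert("RGB").save(out, quality=85)
    return out

# e.g. make_preview("masters/corebox_0001.jp2", "previews/")
```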

What could we do differently?

We’d still like to get the image analysis working, if not on the existing images then on any new images we take. There were enticing suggestions in the form of utilities such as “visgrep” and “zbar-img” that we could use to identify the type of tray in use and its position in the image, and from there apply a standard crop.
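We have not built this, but the idea would look something like the sketch below: read a barcode or label from the photo with pyzbar (a Python wrapper around the same zbar library as the command-line tool mentioned above) and look up a standard crop for that tray type. The tray labels and crop boxes are entirely hypothetical.

```python
from PIL import Image
from pyzbar.pyzbar import decode

# Hypothetical mapping from tray type (as encoded in a barcode) to standard crop boxes.
TRAY_CROPS = {"TRAY_TYPE_A": [(120, 200, 3900, 520), (120, 560, 3900, 880)]}

def crops_for_photo(photo_path):
    img = Image.open(photo_path)
    for symbol in decode(img):
        label = symbol.data.decode("ascii")
        if label in TRAY_CROPS:
            # symbol.rect gives the barcode's position in the photo, so the
            # standard crop could also be shifted to line up with the tray.
            return TRAY_CROPS[label]
    return None  # fall back to asking a human to click the corners
```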

Overall?

The judging panel

I thoroughly enjoyed the hack – often I feel that I know something can be done, but am clueless as to how to actually achieve it. Working directly with people whose skills complemented mine was an ideal and effective way to solve this problem.

I feel that, given the time available, we were able to show that a process we had previously believed to be impossibly time-consuming could in fact be broken down into smaller chunks and made achievable. The next step is to consult some of the many users of the core images, and see what their thoughts on the matter are.

Simon Harris (BGS Conservator / Hack participant) and the hack team of Paul Denton, Brian Hamilton, Andy Riddick, Roman Roth and Paul Williams.
