How did ‘play’ shape the design and experience of creating Serendip-o-matic?

Here are my notes from the Digital Humanities 2014 paper on ‘Play as Process and Product’ I did with Brian Croxall, Scott Kleinman and Amy Papaelias based on the work of the 2013 One Week One Tool team.

Scott has blogged his notes about the first part of our talk, Brian’s notes are posted as ‘“If hippos be the Dude of Love…”: Serendip-o-matic at Digital Humanities 2014’ and you’ll see Amy’s work adding serendip-o-magic design to our slides throughout our three posts.

I’m Mia. I was dev/design team lead on Serendip-o-matic, and I’ll be talking about how play shaped both what you see on the front end and the process of making it.

How did play shape the process?

The playful interface was a purposeful act of user advocacy – we pushed against the academic habit of telling, not showing, which you see in some form here. We wanted to entice people to try Serendip-o-matic as soon as they saw it, so the page text, graphic design and 1-2-3 step instructions you see at the top of the front page were all designed to illustrate the ethos of the product while showing you how to get started.

How can a project based around boring things like APIs and panic be playful? Technical decision-making is usually a long, painful process in which we juggle many complex criteria. But here we had to practise ‘rapid trust’ – in people, in languages/frameworks, in APIs – and this turned out to be a very freeing experience compared to everyday work.
First, two definitions as background for our work…

Just in case anyone here isn’t familiar with the term, APIs are a set of computational functions that machines use to talk to each other. Like the bank in Monopoly, they usually have quite specific functions, like taking requests and giving out information (or taking or giving money) in response to those requests. We used APIs from major cultural heritage repositories – we gave them specific questions like ‘what objects do you have related to these keywords?’ and they gave us back lists of related objects.
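In code terms, an API call like that is just a URL with the question encoded in it, and a structured (usually JSON) answer coming back. Here’s a minimal sketch of that shape – the base URL, parameter names and response fields below are invented for illustration, not the real DPLA or Europeana API:

```python
import json
from urllib.parse import urlencode

def build_query(base_url, keywords, api_key):
    """Build a keyword-search URL in the general style of cultural
    heritage item APIs (parameter names invented for illustration)."""
    params = {"q": " ".join(keywords), "api_key": api_key, "page_size": 10}
    return base_url + "?" + urlencode(params)

def parse_response(raw_json):
    """Pull out just the fields we care about from a JSON response."""
    data = json.loads(raw_json)
    return [(doc["title"], doc["url"]) for doc in data["docs"]]

# A canned response, standing in for what a repository might send back:
canned = json.dumps({"docs": [
    {"title": "Hippopotamus amphibius", "url": "http://example.org/hippo"},
]})

url = build_query("https://api.example.org/v2/items", ["hippo", "river"], "MY_KEY")
print(url)
print(parse_response(canned))
```

The point is that the machine-to-machine conversation is nothing more mysterious than building a question string and unpacking a structured answer.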
The term ‘UX’ is another piece of jargon. It stands for ‘user experience design’, which is the combination of graphical, interface and interaction design aimed at making products both easy and enjoyable to use. Here you see the beginnings of the graphic design being applied (by team member Amy) to the underlying UX related to the 1-2-3 step explanation for Serendip-o-matic.

Feed.

The ‘feed’ part of Serendipomatic parsed text given in the front page form into simple text ‘tokens’ and looked for recognisable entities like people, places or dates. There’s nothing inherently playful in this except that we called the system that took in and transformed the text the ‘magic moustache box’, for reasons lost to time (and hysteria).
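As a rough illustration, the ‘feed’ step amounts to something like the toy sketch below – the real extraction code did considerably more, and the stopword list, regexes and entity rules here are invented stand-ins for proper named-entity recognition:

```python
import re

STOPWORDS = {"the", "a", "an", "in", "of", "and"}

def feed(text):
    """A toy version of the 'feed' step: break free text into simple
    tokens, then pick out candidate entities - here just capitalised
    words and four-digit years."""
    tokens = re.findall(r"[A-Za-z0-9']+", text)
    years = [t for t in tokens if re.fullmatch(r"1[5-9]\d\d|20\d\d", t)]
    names = [t for t in tokens
             if t[0].isupper() and t.lower() not in STOPWORDS]
    return tokens, names, years

tokens, names, years = feed("The Great Fire of London in 1666")
print(names, years)
```

Feeding in a sentence about the Great Fire would surface ‘London’ and ‘1666’ as promising search terms – which is all the magic moustache box needed to get started.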

Whirl.

These terms were then mixed into database-style queries that we sent to different APIs. We focused on primary sources from museums, libraries and archives, available through big cultural aggregators. Europeana and the Digital Public Library of America have similar APIs so we could get a long way quite quickly. We added Flickr Commons to the list because it has high-quality, interesting images and brought in more international content. [It also turns out this made it more useful for my own favourite use for Serendip-o-matic: finding slide or blog post images.] The results are then whirled up so there’s a good mix of sources and types of results. This is the heart of the magic moustache.
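The whirling-up can be pictured as a round-robin merge plus a shuffle – again a toy sketch rather than the actual mixing logic, but it shows how per-source result lists become one serendipitous stream:

```python
import random
from itertools import zip_longest

def whirl(*result_lists, seed=None):
    """Round-robin the per-source result lists so no single source
    dominates, then shuffle for a bit of serendipity."""
    mixed = [item for group in zip_longest(*result_lists)
             for item in group if item is not None]
    random.Random(seed).shuffle(mixed)
    return mixed

# Hypothetical per-source results:
dpla = ["dpla-1", "dpla-2", "dpla-3"]
europeana = ["eur-1", "eur-2"]
flickr = ["flickr-1"]
mixed = whirl(dpla, europeana, flickr, seed=42)
print(mixed)
```

Interleaving before shuffling matters: a source that returns hundreds of hits shouldn’t drown out one that returns three.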

Marvel.

User-focused design was key to making something complicated feel playful. Amy’s designs and the Outreach team’s work were a huge part of it, but UX also encompasses micro-copy (all the tiny bits of text on the page) and interactions (what happened when you did anything on the site), plus loading screens, error messages and user documentation.

We knew lots of people would be looking at whatever we made because of OWOT publicity; you don’t get a second shot at this so it had to make sense at a glance to cut through social media noise. (This also meant testing it for mobiles and finding time to do accessibility testing – we wanted every single one of our users to have a chance to be playful.)


Without all this work on the graphic design – the look and feel that reflected the ethos of the product – the underlying playfulness would have been invisible. This user focus also meant removing internal references and in-jokes that could confuse people, so there are no references to the ‘magic moustache machine’. Instead, ‘Serendhippo’ emerged as a character who guided the user through the site.

But how does a magic moustache make a process playful?

The moustache was a visible signifier of play. It appeared in the first technical architecture diagram – a refusal to take our situation too seriously was embedded at the heart of the project. This sketch also shows the value of having a shared physical or visual reference – outlining the core technical structure gave people a shared sense of how different aspects of their work would contribute to the whole. After all, if there’s no structure or rules, it isn’t a game.

This playfulness meant that writing code (in a new language, under pressure) could then be about making the machine more magic, not about ticking off functions on a specification document. Framing the week as a challenge and a learning experience meant that gaps in knowledge or the need to learn new skills became part of the game, rather than a barrier. My role was to provide just enough structure to let the development team concentrate on the task at hand.

In a way, I performed the role of old-fashioned games master, defining the technical constraints and boundaries much as someone would police the rules of a game. Previous experience with cultural heritage APIs meant I was able to make decisions quickly rather than letting indecision or doubt become a barrier to progress. Just as games often reduce complex situations to smaller, simpler versions, reducing the complexity of problems created a game-like environment.

UX matters


Ultimately, a focus on the end user experience drove all the decisions about the backend functionality, the graphic design and micro-copy and how the site responded to the user.

It’s easy to forget that every pixel, line of code or piece of text is there either through positive decisions or decisions not consciously taken. User experience design processes usually involve lots of conversation, questions, analysis and more questions, but at OWOT we didn’t have that time, so the trust we placed in each other to make good decisions – and in the playful vision for Serendip-o-matic – created space for us to focus on creating a good user experience. The whole team worked hard to make sure every aspect of the design helps people on the site understand our vision so they can get on with exploring and enjoying Serendip-o-matic.

Some possible real-life lessons I didn’t include in the paper

One Week One Tool was an artificial environment, but here are some thoughts on lessons that could be applied to other projects:

  • Conversations trump specifications and showing trumps telling; use any means you can to make sure you’re all talking about the same thing. Find ways to create a shared vision for your project, whether on mood boards, technical diagrams, user stories, imaginary product boxes. 
  • Find ways to remind yourself of the real users your product will delight and let empathy for them guide your decisions. It doesn’t matter how much you love your content or project: you’re only doing right by it if other people encounter it in ways that make sense to them so they can love it too (there’s a lot of UXy work on ‘on-boarding’ out there to help with this). User-centred design means understanding where users are coming from, not designing based on popular opinion. You can use tools like customer journey maps to understand the whole cycle of people finding their way to and using your site (I guess I did this and various other UXy methods without articulating them at the time). 
  • Document decisions and take screenshots as you go so that you’ve got a history of your project – some of this can be done by archiving task lists and user stories. 
  • Having someone who really understands the types of audiences, tools and materials you’re working with helps – if you can’t get that on your team, find others to ask for feedback – they may be able to save you lots of time and pain.
  • Design and UX resources really do make a difference, and it’s even better if those skills are available throughout the agile development process.

So we made a thing. Announcing Serendip-o-matic at One Week, One Tool

So we made a thing. And (we think) it’s kinda cool! Announcing Serendip-o-matic http://t.co/mQsHLqf4oX #OWOT
— Mia (@mia_out) August 2, 2013

Source code is at the Serendipomatic GitHub repository – go add your API so people can find your stuff! Check out the site at serendipomatic.org.

Update: and already we’ve had feedback that people love the experience and have found it useful – it’s so amazing to hear this, thank you all! We know it’s far from perfect, but since the aim was to make something people would use, it’s great to know we’ve managed that:

Congratulations @mia_out and the team of #OWOT for http://t.co/cNbCbEKlUf Already try it & got new sources about a Portuguese King. GREAT!!!
— Daniel Alves (@DanielAlvesFCSH) August 2, 2013

Update from Saturday morning – so this happened overnight:

Cool, Serendipmatic cloned and local dev version up and running in about 15 mins. Now to see about adding Trove to the mix. #owot
— Tim Sherratt (@wragge) August 3, 2013

And then this:

Just pushed out an update to http://t.co/uM13iWLISU — now includes Trove content! #owot
— RebeccaSuttonKoeser (@suttonkoeser) August 3, 2013

From the press release: One Week | One Tool Team Launches Serendip-o-matic

After five days and nights of intense collaboration, the One Week | One Tool digital humanities team has unveiled its web application: Serendip-o-matic <http://serendipomatic.org>. Unlike conventional search tools, this “serendipity engine” takes in any text, such as an article, song lyrics, or a bibliography. It then extracts key terms, delivering similar results from the vast online collections of the Digital Public Library of America, Europeana, and Flickr Commons. Because Serendip-o-matic asks sources to speak for themselves, users can step back and discover connections they never knew existed. The team worked to re-create that moment when a friend recommends an amazing book, or a librarian suggests a new source. It’s not search, it’s serendipity.

Serendip-o-matic works for many different users. Students looking for inspiration can use one source as a springboard to a variety of others. Scholars can pump in their bibliographies to help enliven their current research or to get ideas for a new project. Bloggers can find open access images to illustrate their posts. Librarians and museum professionals can discover a wide range of items from other institutions and build bridges that make their collections more accessible. In addition, millions of users of RRCHNM’s Zotero can easily run their personal libraries through Serendip-o-matic.
Serendip-o-matic is easy to use and freely available to the public. Software developers may expand and improve the open-source code, available on GitHub. The One Week | One Tool team has also prepared ways for additional archives, libraries, and museums to make their collections available to Serendip-o-matic. 

Highs and lows, day four of OWOT

If you’d asked me at 6pm, I would have said I’d be way too tired to blog later, but it also felt like a shame to break my streak at this point. Today was hard work and really tiring – lots to do, lots of finicky tech issues to deal with, some tricky moments to work through – but particularly after regrouping back at the hotel, the dev/design team powered through some of the issues we’d butted heads against earlier and got some great work done. Tomorrow will undoubtedly be stressful and I’ll probably triage tasks like mad, but I think we’ll have something good to show you.

As I left the hotel this morning I realised an intense process like this isn’t just about rapid prototyping – it’s also about rapid trust. When there’s too much to do and barely any time for communication, let alone checking someone else’s work, you just have to rely on others to get the bits they’re doing right and rely on goodwill to guide the conversation if you need to tweak things a bit. It can be tricky when you’re working out where everyone’s sense of boundaries between different areas lies as you go, but being able to trust people in that way is a brilliant feeling. At the end of a long day, I’ve realised it’s also very much about deciding which issues you’re willing to spend time finessing, and when you’re happy to hand over to others or aim for a first draft that’s good enough to go out, with the intention of tweaking it if you ever get time. I’ve asked in the past whether a museum’s obsession with polish hinders innovation, so I can really appreciate how freeing it can be to work in an environment where getting a product that works, let alone something really good, out in the time available is a major achievement.

Anyway, enough talking. Amrys has posted about today already, and I expect that Jack or Brian probably will too, so I’m going to hand over to some tweets and images to give you a sense of my day. (I’ve barely had any time to talk to or get to know the Outreach team so ironically reading their posts has been a lovely way to check in with how they’re doing.)

Our GitHub repository punch card report tells the whole story of this week – from nothing to huge levels of activity on the app code

I keep looking at the #OWOT commits and clapping my hands excitedly. I am a great. big. dork.
— Mia (@mia_out) August 1, 2013

OH at #owot ‘I just had to get the hippo out of my system’ (More seriously, so exciting to see the design work that’s coming out!)
— Mia (@mia_out) August 1, 2013

OH at #OWOT ‘I’m not sure JK Rowling approves of me’. Also, an earlier unrelated small round of applause. Progress is being made.
— Mia (@mia_out) August 1, 2013

#OWOT #owotleaks it turns out our mysterious thing works quite well with song lyrics.
— Mia (@mia_out) August 1, 2013

Halfway through. Day three of OWOT.

Crikey. Day three. Where do I start?

We’ve made great progress on our mysterious tool. And it has a name! Some cool design motifs are flowing from that, which in turn means we can really push the user experience design issues over the next day and a half (though we’ve already been making lots of design decisions on the hoof so we can keep dev moving). The Outreach team have also been doing some great communications work, including a press release, and have lots more in the pipeline. The Dev/Design team did a demo of our work for the Outreach team before dinner – there are lots of little things to fix, but the general framework of the tool works as it should – it’s amazing how far we’ve come since lunchtime yesterday. We still need to do a full deployment (server issues, blah blah), and I’ll feel a lot better when we’ve got that process working and then running smoothly, so that we can keep deploying as we finish major features up to a few hours before launch rather than doing it at the end in a mad panic. I don’t know how people managed code before source control – not only does GitHub manage versions for us, it makes pulling in code from different people so much easier.

There’s lots to tackle on many different fronts, and it may still end up in a mad rush at the end, but right now, the Dev/Design team is humming along. I’ve been so impressed with the way people have coped with some pretty intense requirements for working with unfamiliar languages or frameworks, and with high levels of uncertainty in a chaotic environment. I’m trying to keep track of things in GitHub (with Meghan and Brian as brilliant ‘got my back’ PMs) and keep the key current tasks on a whiteboard so that people know exactly what they need to be getting done at any time. Now that the Outreach team have worked through the key descriptive texts, name and tagline, we’ll need to coordinate content production – particularly documentation and the micro-copy that guides people through the process – really closely, which will probably get tricky as time is short and our tasks are many, but given the people gathered together for OWOT, I have faith that we’ll make it work.


Things I have learnt today: despite two years working on a PhD in digital humanities/digital history, I still have a brain full of technical stuff – it’s a relief to realise it hasn’t atrophied through lack of use. I’ve also realised how much the work I’ve done designing workshops and teaching since starting my PhD has fed into how I work with teams, though it’s hard right now to quantify exactly *how*. Finally, it’s re-affirmed just how much I like making things – but also that it’s important to make those things in the company of people who are scholarly (or at least thoughtful) about subjects beyond tech and inter-disciplinary, and ideally to make things that engage the public as well as researchers. As the end of my PhD approaches, it’s been really useful to step back into this world for a week, and I’ll definitely draw on it when figuring out what to do after the PhD. If someone could just start a CHNM in the UK, I’d be very happy.

I still can’t tell you what we’re making, but I *can* tell you that one of these photos in this post contains a clue (and they all definitely have nothing to do with mild lightheadedness at the end of a long day).

And so it begins: day two of OWOT

Day two of One Week, One Tool. We know what we’re making, but we’re not yet revealing exactly what it is. (Is that mean? It’s partly a way of us keeping things simple so we can focus on work.) Yesterday (see Working out what we’re doing: day one of One Week, One Tool) already feels like weeks ago, and even this morning feels like a long time ago. I can see that my posts are going to get less articulate as the week goes on, assuming I keep posting. I’m not sure how much value this will have, but I suppose it’s a record of how fast you can move in the right circumstances…

We spent the morning winnowing the ideas we’d put up for feedback overnight down from c.12 to 4, then 3, then 2, then… It’s really hard killing your darlings, and it’s also difficult choosing between ideas that sound equally challenging or fun or worthy. There was a moment when we literally wiped ideas that had been ruled out from the whiteboard, and it felt oddly momentous. In the end, the two final choices both felt like approaches to the same thing – perhaps because we’d talked about them for so long that they started to merge (consciously or not), or because they both fell into a sweet spot of being accessible to a wide audience and had something to do with discovering new things about your research (which was the last thing I tweeted before we made our decision and decided to keep things in-house for a while). Finally, eventually, we had enough of a critical mass behind one idea to call it the winner.

Personally, our decision only started to feel real as we walked back from lunch – our task was about to get real.  It’s daunting but exciting. Once back in the room, we discussed the chosen idea a bit more and I got a bit UX/analysty and sketched stuff on a whiteboard. I’m always a bit obsessed with sketching as a way to make sure everyone has a more concrete picture (or shared mental model) of what the group is talking about, and for me it also served as a quick test of the technical viability of the idea. CHNM’s Tom Scheinfeldt then had the unenviable task of corralling/coaxing/guiding us into project management, dev/design and outreach teams. Meghan Frazer and Brian Croxall are project managing, I’m dev/design team lead, with Scott Kleinman, Rebecca Sutton Koeser, Amy Papaelias, Eli Rose, Amanda Visconti and Scott Williams (and in the hours since then I have discovered that they all rock and bring great skills to the mix), and Jack Dougherty is leading the outreach team of Ray Palin and Amrys Williams in their tasks of marketing, community development, project outreach, grant writing, documentation. Amrys and Ray are also acting as user advocates and they’ve all contributed user stories to help us clarify our goals. Lots of people will be floating between teams, chipping in where needed and helping manage communication between teams.

The Dev/Design team began with a skills audit so that we could figure out who could do what on the front- and back-end, which in turn fed into our platform decision (basically PHP or Python; Python won), then a quick list of initial tasks that would act as further reality checks on the tool and our platform choice. The team is generally working in pairs on parallel tasks so that we’re always moving forward on the three main functional areas of the tool and to make merging updates on GitHub simpler. We’re also using existing JavaScript libraries and CSS grids to make the design process faster. I then popped over to the Outreach team to check in with the descriptions and potential user stories they were discussing. Meghan and Brian got everyone back together at the end of the day, and the dev/design team had a chance to feed back on the outreach team’s work (which also provided a very ad hoc form of requirements elicitation, but it started some important conversations that further shaped the tool). Then it was back over to the hotel lobby where we planned to have a dev/design team meeting before dinner, but when two of our team were kidnapped by a shuttle driver (well, sorta) we ended up working through some of the tasks for tomorrow. We’re going to have agile-style stand-up meetings twice a day, with the aim of giving people enough time to get stuck into tasks while still keeping an eye on progress, with a forum to help deal with any barriers or issues. Some ideas will inevitably fall by the wayside, but because the OWOT project is designed to run over a year, we can put ideas on a wishlist for future funded development, leave them as hooks for other developers to expand on, or revisit them once we’re back home. In hack day mode I tend to plan so that there’s enough working code that you have something to launch, then go back and expand features in the code and polish the UX with any time left. Is this the right approach here? Time will tell.

#owot dev team is hard at work. #fb pic.twitter.com/Zj5PW0Kj2a
— Brian Croxall (@briancroxall) July 31, 2013

Working out what we’re doing: day one of One Week, One Tool

Hard at work in The Well

I’m sitting in a hotel next to George Mason University’s Fairfax campus with a bunch of people I (mostly) met last night, trying to work out what tool we’ll spend the rest of the week building. We’re all here for One Week, One Tool, a ‘digital humanities barn raising’, and our aim is to launch a tool for a community of scholarly users by Friday evening. The wider results should be some lessons about rapidly developing scholarly tools, particularly building audience-focused tools, and hopefully a bunch of new friendships and conversations, and in the future, a community of users and other developers who might contribute code. I’m particularly excited about trying to build a ‘minimum viable product’ in a week, because it’s so unlike working in a museum. If we can keep the scope creep in check, we should be able to build for the most lightweight possible interaction that will let people use our tool while allowing room for the tool to grow according to use.

We met up last night for introductions and started talking about our week. I’m blogging now in part so that we can look back and remember what it was like before we got stuck into building something – if you don’t capture the moment, it’s hard to retrieve. The areas of uncertainty will reduce each day, and based on my experience at hack days and longer projects, it’s often hard to remember how uncertain things were at the start.

Are key paradoxes of #owot a) how we find a common end user, b) a common need we can meet and c) a common code language/framework?
— Mia (@mia_out) July 29, 2013

Meghan herding cats to get potential ideas summarised

Today we heard from CHNM team members Sharon Leon on project management, Sheila Brennan on project outreach and Patrick Murray-John on coding, and then got stuck into the process of trying to figure out what on earth we’ll build this week. I don’t know how others felt, but by lunchtime I was super impatient to get started because it felt like our conversations about how to build the imaginary thing would be more fruitful when we had something concrete-ish to discuss. (I think I’m also used to hack days, which are actually usually weekends, where you’ve got much less time to try and build something.) We spent the afternoon discussing possible ideas, refining them, bouncing up and down between levels of detail, finding our way through different types of jargon, swapping between problem spaces and generally finding our way through the thicket of possibilities to some things we would realistically want to make in the time. We went from a splodge of ideas on a whiteboard to more structured ‘tool, audience, need’ lines based on agile user stories, then went over them again to summarise them so they’d make sense to people viewing them on IdeaScale.

#owotleaks #owot – we’re building a tool that converts whiteboard brainstorming notes into fully developed applications
— Jack Dougherty (@DoughertyJack) July 29, 2013

So now it’s over to you (briefly). We’re working out what we should build this week, and in addition to your votes, we’d love you to comment on two specific things:

  • How would a suggested tool change your work? 
  • Do you know of similar tools (we don’t want to replicate existing work)?
So go have a look at the candidate ideas at http://oneweekonetool.ideascale.com and let us know what you think. It’s less about voting than it is about providing more context for ideas you like, and we’ll put all the ideas through a reality check based on whether they have identifiable potential users and whether we can build them in a few days. We’ll be heading out to lunch tomorrow (Virginia time) with a decision, so it’s a really short window for feedback: 10am US EST. (If it’s any consolation, it’s a super-short window for us building it too.)

Update Tuesday morning: two other participants have written posts, so go check them out! Amanda Visconti’s Digital Projects from Start to Finish: DH Mentorship from One Week One Tool (OWOT), Brian Croxall’s Day 1 of OWOT: Check Your Ego at the Door and Jack Dougherty’s Learning Moments at One Week One Tool 2013, Day 1.

‘…and they all turn on their computers and say “yay!”’ (aka, ‘mapping for humanists’)

I’m spending a few hours of my Sunday experimenting with ‘mapping for humanists’ with an art historian friend, Hannah Williams (@_hannahwill).  We’re going to have a go at solving some issues she has encountered when geo-coding addresses in 17th and 18th Century Paris, and we’ll post as we go to record the process and hopefully share some useful reflections on what we found as we tried different tools.

We started by working out what issues we wanted to address. After some discussion we boiled it down to two basic goals: a) to geo-reference historical maps so they can be used to geo-locate addresses and b) to generate maps dynamically from a list of addresses. This also means dealing with copyright and licensing issues along the way and thinking about how geospatial tools might fit into the everyday working practices of a historian. (i.e. while a tool like Google Refine can easily generate maps, is it usable for people who are more comfortable with Word than with cloud-based services like Google Docs? And if copyright is a concern, is it as easy to put points on an OpenStreetMap as on a Google Map?)

Like many historians, Hannah’s use of maps fell into two main areas: maps as illustrations, and maps as analytic tools.  Maps used for illustrations (e.g. in publications) are ideally copyright-free, or can at least be used as illustrative screenshots.  Interactivity is a lower priority for now as the dataset would be private until the scholarly publication is complete (owing to concerns about the lack of an established etiquette and format for citation and credit for online projects).

Maps used for analysis would ideally support layers of geo-referenced historic maps on top of modern map services, allowing historic addresses to be visually located via contemporaneous maps and geo-located via the link to the modern map.  Hannah has been experimenting with finding location data via old maps of Paris in Hypercities, but manually locating 18th Century streets on historic maps then matching those locations to modern maps is time-consuming and she suspects there are more efficient ways to map old addresses onto modern Paris.

Based on my research interviews with historians and my own experience as a programmer, I’d also like to help humanists generate maps directly from structured data (and ideally to store their data in user-friendly tools so that it’s as easy to re-use as it is to create and edit).  I’m not sure if it’s possible to do this from existing tools or whether they’d always need an export step, so one of my questions is whether there are easy ways to get records stored in something like Word or Excel into an online tool and create maps from there.  Some other issues historians face in using mapping include: imprecise locations (e.g. street names without house numbers); potential changes in street layouts between historic and modern maps; incomplete datasets; using markers to visually differentiate types of information on maps; and retaining descriptive location data and other contextual information.
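One low-tech route from structured records to a map is to export the Word or Excel data as CSV and convert it to GeoJSON, which most web mapping tools (Leaflet, OpenStreetMap-based editors and the like) can load. A minimal sketch – the column names and the sample address are made up for illustration:

```python
import csv
import io
import json

def rows_to_geojson(csv_text):
    """Convert a simple spreadsheet export (name, lat, lon, notes
    columns) into a GeoJSON FeatureCollection for web map tools."""
    features = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        features.append({
            "type": "Feature",
            "geometry": {
                "type": "Point",
                # GeoJSON coordinates are [longitude, latitude]
                "coordinates": [float(row["lon"]), float(row["lat"])],
            },
            "properties": {"name": row["name"], "notes": row.get("notes", "")},
        })
    return {"type": "FeatureCollection", "features": features}

sample = "name,lat,lon,notes\nRue Saint-Honoré,48.8635,2.3364,approximate\n"
geo = rows_to_geojson(sample)
print(json.dumps(geo, ensure_ascii=False, indent=2))
```

Keeping the ‘notes’ column in the properties is one way of retaining the descriptive location data and contextual information mentioned above, rather than losing it in the conversion to points.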

Because the challenge is to help the average humanist, I’ve assumed we should stay away from software that needs to be installed on a server, so to start with we’re trying some of the web-based geo-referencing tools listed at http://help.oldmapsonline.org/georeference.

Geo-referencing tools for non-technical people

The first bump in the road was finding maps that are re-usable in technical and licensing terms so that we could link or upload them to the web tools listed at http://help.oldmapsonline.org/georeference.  We’ve fudged it for now by using a screenshot to try out the tools, but it’s not exactly a sustainable solution.  
Hannah’s been trying georeferencer.org, Hypercities and Heurist (thanks to Lise Summers @morethangrass on Twitter) and has written up her findings at Hacking Historical Maps… or trying to. Thanks also to Alex Butterworth @AlxButterworth and Joseph Reeves @iknowjoseph for suggestions during the day.

Yahoo! MapMixer’s page was a 404 – I couldn’t find any reference to the service being closed, but I also couldn’t find a current link for it.

Next I tried MetaCarta Labs’ Map Rectifier. Any maps uploaded to this service are publicly visible: the site says this does ‘not grant a copyright license to other users’, but ‘[t]here is no expectation of privacy or protection of data’, which may be a concern for academics negotiating the line between openness and protecting work-in-progress, or anyone dealing with sensitive data. Many of the historians I’ve interviewed for my PhD research feel that some sense of control over who can view and use their data is important, though the reasons why and how this is manifested vary.

Screenshot from http://labs.metacarta.com/rectifier/rectify/7192


The site has clear instructions – ‘double click on the source map… Double click on the right side to associate that point with the reference map’ but the search within the right-hand side ‘source map’ didn’t work and manually navigating to Paris, then the right section of Paris was a huge pain.  Neither of the base maps seemed to have labels, so finding the right location at the right level of zoom was too hard and eventually I gave up.  Maybe the service isn’t meant to deal with that level of zoom?  We were using a very small section of map for our trials.
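Under the hood, associating point pairs like this usually means fitting a transform that carries pixel coordinates onto geographic ones. As a toy sketch (not any of these tools’ actual code), here’s the simplest case: a six-parameter affine transform fitted exactly from three control-point pairs:

```python
def solve3(m, v):
    """Solve a 3x3 linear system m·x = v by Cramer's rule."""
    def det(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
              - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
              + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    d = det(m)
    def col_replaced(i):
        return [[v[r] if c == i else m[r][c] for c in range(3)] for r in range(3)]
    return [det(col_replaced(i)) / d for i in range(3)]

def affine_from_control_points(points):
    """Fit an affine map from three ((pixel_x, pixel_y), (lon, lat)) pairs."""
    m = [[px, py, 1.0] for (px, py), _ in points]
    a, b, c = solve3(m, [lon for _, (lon, _lat) in points])
    d, e, f = solve3(m, [lat for _, (_lon, lat) in points])
    def transform(px, py):
        # pixel coordinates -> geographic coordinates
        return (a * px + b * py + c, d * px + e * py + f)
    return transform
```

Real rectifiers take more than three points and fit by least squares (and ‘warping’ goes beyond affine to handle distorted historical maps), which is why more points generally give a better result.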

Inspired by Metacarta’s Map Rectifier, Map Warper was written with OpenStreetMap in mind, which immediately helps us get closer to the goal of images usable in publications.  Map Warper is also used by the New York Public Library, which described it as a ‘tool for digitally aligning (“rectifying”) historical maps … to match today’s precise maps’.  Map Warper also makes all uploaded maps public: ‘By uploading images to the website, you agree that you have permission to do so, and accept that anyone else can potentially view and use them, including changing control points’, but also offers ‘Map visibility’ options ‘Public(default)’ and ‘Don’t list the map (only you can see it)’.

Screenshot showing ‘warped’ historical map overlaid on OpenStreetMap at http://mapwarper.net/

Once a map is uploaded, it zooms to a ‘best guess’ location, presumably based on the information you provided when uploading the image.  It’s a powerful tool, though I suspect it works better with larger images with more room for error.  Some of the functionality is a little obscure to the casual user – for example, the ‘Rectify’ view tells me ‘[t]his map either is not currently masked. Do you want to add or edit a mask now?’ without explaining what a mask is.  However, I can live with some roughness around the edges because once you’ve warped your map (i.e. aligned it with a modern map), there’s a handy link on the Export tab, ‘View KML in Google Maps’ that takes you to your map overlaid on a modern map.  Success!

Sadly not all the export options seem to be complete (they weren’t working on my map, anyway) so I couldn’t work out if there was a non-geek friendly way to open the map in OpenStreetMap.

We have to stop here for now, but at this point we’ve met one of the goals – geo-referencing historical maps so locations from the past can be found in the present; the other will have to wait for another day.  (I’d probably start with openheatmap.com when we tackle it again.  Any other suggestions would be gratefully received!)

(The title quote is something I heard one non-geek friend say to another to explain what geeks get up to at hackdays. We called our experiment a ‘hackday’ because we were curious to see whether the format of a hackday – working to meet a challenge within set parameters within a short period of time – would work for other types of projects. While this ended up being almost an ‘anti-hack’, because I didn’t want to write code unless we came across a need for a generic tool, the format worked quite well for getting us to concentrate solidly on a small set of problems for an afternoon.)

‘Share What You See’ at hack4europe London

A quick report from hack4europe London, one of four hackathons organised by Europeana to ‘showcase the potential of the API usage for data providers, partners and end-users’.

I have to confess that when I arrived I wasn’t feeling terribly inspired – it’s been a long month and I wasn’t sure what I could get done at a one-day hack.  I was intrigued by the idea of ‘stealth culture’ – putting cultural content out there for people to find, whether or not they were intentionally looking for ‘a cultural experience’ – but I couldn’t think of a hack about it I could finish in about six hours.  But I happened to walk past Owen Stephens’ (@ostephens) screen and noticed that he was googling something about WordPress, and since I’ve done quite a lot of work in WordPress, I asked what his plans were.  After a chat we decided to work together on a WordPress plugin to help people blog about cool things they found on museum visits.  I’d met Owen at OpenCulture 2011 the day before (though we’d already been following each other on twitter) but without the hackday it’s unlikely we would have ever worked together.

So what did we make?  ‘Share What You See’ is a plugin designed to make a museum and gallery visit more personal, memorable and sociable.  There’s always that one object that made you laugh, reminded you of friends or family, or was just really striking.  The plugin lets you search for the object in the Europeana collection (by title, and hopefully by venue or accession number), and instantly create a blog post about it (screenshot below) to share it with others.

Screenshot: post pre-populated with information about the object. 

Once you’ve found your object, the plugin automatically inserts an image of it, plus the title, description and venue name.

You can then add your own text and whatever other media you like.  The plugin stores the originally retrieved information in custom fields, so the original data is always there for reference even if it’s edited in the post.  Once an image or other media item is added, you can use all the usual WordPress tools to edit it.
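As a sketch of that pre-population step (in Python rather than the plugin’s actual PHP, and with illustrative field names rather than the real API schema), the mapping from a retrieved record to a draft post might look like:

```python
def item_to_post(item):
    """Map a search-result record (dict) to a draft blog post.
    The field names here ('title', 'description', 'provider', 'image')
    are illustrative, not the exact API response schema."""
    post = {
        "title": item.get("title", "Untitled object"),
        "body": "\n\n".join(filter(None, [
            f'<img src="{item["image"]}" alt="{item.get("title", "")}">'
                if item.get("image") else None,
            item.get("description", ""),
            f"Seen at: {item['provider']}" if item.get("provider") else None,
        ])),
        # keep the original record in custom fields so edits to the
        # post body never destroy the retrieved data
        "custom_fields": {"swys_original": dict(item)},
    }
    return post
```

The point of the custom-fields copy is exactly the behaviour described above: the visitor’s own words can overwrite anything in the body without losing the source record.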

If you’re in a gallery with wifi, you could create a post and share an object then and there, because WordPress is optimised for mobile devices.  This helps make collection objects into ‘social objects’, embedding them in the lives of museum and gallery visitors.  The plugin could also be used by teachers or community groups to elicit personal memories or creative stories before or after museum visits.

The code is at https://github.com/mialondon/Share-what-you-see and there’s a sample blog post at http://www.museumgames.org.uk/jug/.  There are still lots of tweaks we could have made, particularly around dealing with some of the data inconsistencies, and I’d love a search by city (in case you can’t quite remember the name of the museum), etc, but it’s not bad for a couple of hours’ work and it was a lot of fun.  Thanks to the British Library for hosting the day (and the drinks afterwards), the Collections Trust/Culture Grid for organising, Europeana for setting it up, and of course Owen for working with me.  Oh, and we won the prize for “developer’s choice” so thank you to all the other developers!

Notes from Culture Hack Day (#chd11)

Culture Hack Day (#chd11) was organised by the Royal Opera House (the team being @rachelcoldicutt, @katybeale, @beyongolia, @mildlydiverting, @dracos – and congratulations to them all on an excellent event). As well as a hack event running over two days, they had a session of five minute ‘lightning talks’ on Saturday, with generous time for discussion between sessions. This worked quite well for providing an entry point to the event for the non-technical, and some interesting discussion resulted from it. My notes are particularly rough this time as I have one arm in a sling and typing my hand-written notes is slow.

Lightning Talks
Tom Uglow @tomux “What if the Web is a Fad?”
‘We’re good at managing data but not yet good at turning it into things that are more than points of data.’ The future is about physical world, making things real and touchable.

Clare Reddington, @clarered, “What if We Forget about Screens and Make Real Things?”
Some ace examples of real things: Dream Director; Nuage Vert (Helsinki power station projected power consumption of city onto smoke from station – changed people’s behaviour through ambient augmentation of the city); Tweeture (a conch, ‘permission object’ designed to get people looking up from their screens, start conversations); National Vending Machine from Dutch museum.

Leila Johnston, @finalbullet talked about why the world is already fun, and looking at the world with fresh eyes. Chromaroma made Oyster cards into toys, playing with our digital footprint.

Discussion kicked off by Simon Jenkins about helping people get it (benefits of open data etc) – CR – it’s about organisational change, fears about transparency, directors don’t come to events like this. Understand what’s meant by value – cultural and social as well as economic. Don’t forget audiences, it has to be meaningful for the people we’re making it (cultural products) for.

Comment from @fidothe: Cultural heritage orgs have been screwed over by software companies. There’s a disconnect between beautiful hacks around the edges and things that make people’s lives easier. [Yes! People who work in cultural heritage orgs often have to deal with clunky tools, difficult or vendor-dependent data export processes, and agencies that over-promise and under-deliver. In my experience, cultural orgs don’t usually have internal skills for scoping and procuring software or selecting agencies, so of course they get screwed over.]

TU: desire to be tangible is becoming more prevalent, data to enhance human experience, the relationship between culture and the way we live our lives.

CR: don’t spend the rest of the afternoon reinforcing silos, shouldn’t be a dichotomy between cultural heritage people and technologists. [Quick plug for http://museum30.ning.com/, http://groups.google.com/group/antiquist, http://museum-api.pbwiki.com/ and http://museumscomputergroup.org.uk/email-list/ as places where people interested in intersection between cultural heritage and technology can mingle – please let me know of any others!] Mutual respect is required.

Tom Armitage, @infovore “Sod big data and mashups: why not hack on making art?”
Making culture is more important than using it. 3 trends: 1) collection – tools to slice and dice across time or themes; 2) magic materials; 3) mechanical art, displays the shape of the original content; 3a) satire – @kanyejordan ‘a joke so good a machine could make it’.

Tom Dunbar, @willyouhelp – story-telling possibilities of metadata embedded in media e.g. video [check out Waisda? for a game designed to get metadata added to audio-visual archives]. Metadata could be actors, characters, props, action…

Discussion [?]: remixing in itself isn’t always interesting. Skillful appropriation across formats… Universe of editors, filterers, not only creators. ‘In editing you end up making new things’.

Matthew Somerville, @dracos, Theatricalia, “What if You Never Needed to Miss a Show?”
‘Quite selfish’, makes things he needs. Wants not to miss theatre productions with people he likes in/working on them. Theatricalia also collects stories about productions. [But in discussion it came up that the National Theatre asked him to remove data – why?! A recommendation system would definitely get me seeing more theatre, and I say that as a fairly regular but uninformed theatre-goer who relies on word-of-mouth to decide where to spend ticket money.]

Nick Harkaway, @Harkaway on IP and privacy
IP as a way of ringfencing intangible ideas, requiring consent to use. Privacy is the same. Not exciting, kind of annoying, but we need to find ways to make it work more smoothly while still providing protection. ‘Buying is voting’, if you buy from Tesco, you are endorsing their policies. ‘Code for the change you want to see in the world’, build the tools you want cultural orgs to have so they can do better. [Update: Nick has posted his own notes at Notes from Culture Hack Day. I really liked the way he brought ethical considerations to hack enthusiasm for pushing the boundaries of what’s possible – the ability to say ‘no’ is important even if it’s a pain for others.]

Chris Thorpe, @jaggeree. ArtFinder, “What if you could see through the walls of every museum and something could tell you if you’d like it?”

Culture for people who don’t know much about culture. Cultural buildings obscure the content inside, stop people being surprised by what’s available. It’s hard if you don’t know where to start. Go for user-centric information. Government Art Collection Explorer – ace! Wants an angel for art galleries to whisper information about the art in his ear. Wants people to look at the art, not the screen of their device [museums also have this concern]. SAP – situated audio platform. Wants a ‘flight data recorder’ for trips around cultural places.

Discussion around causes of fear and resistance to open data – what do cultural orgs fear and how can they learn more and relax? Fear of loss of provenance – response was that for developers displaying provenance alongside the data gives it credibility; counter-response was that organisations don’t realise that’s possible. [My view is that the easiest way to get this to change is to change the metrics by which cultural heritage organisations are judged, and resolve the tension between demands to commercialise content to supplement government grants and demands for open access to that same data. Many museums have developed hybrid ‘free tombstone, low-res, paid-for high-res’ models to deal with this, but it’s taken years of negotiation in each institution.] I also ranted about some of these issues at OpenTech 2010, notes at ‘Museums meet the 21st century’.

Other discussion and notes from twitter – re soap/drama characters tweeting – I managed to out myself as a Neighbours watcher, but it was worth it to share that Neighbours characters tweet and use Facebook. Facebook relationship status updates and events have been included as plot points, and references are made to twitter but not to the accounts of the characters active on the service. I wonder if it’s script writers or marketing people who write the characters’ tweets? They also tweet in sync with the Australian showings, which raises issues around spoilers for international viewers.

Someone said ‘people don’t want to interact with cultural institutions online. They want to interact with their content’ but I think that’s really dependent on the definition of content – as pointed out, points of data have limited utility without further context. There’s a catch-22 between cultural orgs not yet making really engaging data and audiences not yet demanding it; hopefully hack days like CHD11 help bridge the gap and turn data into stories and other meaningful content. We’re coming up against the limits of what can be done programmatically, especially given variation in the quality and extent of cultural heritage data (and most of it is data rather than content).

[Update: after writing this I found a post The lightning talks at Culture Hack Day about the day, which happily picks up on lots of bits I missed. Oh, and another, by Roo Reynolds.]

After the lightning talks I popped over the road to check out the hacking and ended up getting sucked in (the lure of free pizza had a powerful effect!).  I worked on a WordPress plugin with Ian Ibbotson @ianibbo that lets you search for a term on the Culture Grid repository and imports the resulting objects into my museum metadata games so that you can play with objects based on your favourite topic.  I’ve put the code on github [https://github.com/mialondon/mmg-import] and will move it from my staging server to live over the next few days so people can play with the objects.  It’s such a pain only having one hand, and I’m very grateful to Ian for the chance to work together and actually get some code written.  This work means that any organisation that’s contributed records to the Culture Grid can start to get back tags or facts to enhance their collections, based on data generated by people playing the games.  The current 300-ish objects have about 4400 tags and 30 facts, so that’s not bad for a freebie. OTOH, I don’t know of many museums with the ability to display content created by others on their collections pages or store it in their collections management systems – something for another hack day?

Something I think I’ll play around with a bit more is the idea of giving cultural heritage data a quality rating as it’s ingested.  We discussed whether the ratings would be local to an app (as they could be based on the particular requirements of that application) or generalised and recorded in the CultureGrid service.  You could record the provenance of a rating, which might combine the benefits of both approaches.  At the moment, my requirements for a ‘high quality’ record would be: title (e.g. ‘The Ashes trophy’, if the object has one), name or type of object (e.g. cup), date, place, decent sized image, description.
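That list of requirements translates quite naturally into a scoring function. A minimal sketch, with assumed field names and an arbitrary threshold for ‘decent sized image’:

```python
# fields my 'high quality' record would need; names are assumptions,
# not the actual CultureGrid schema
REQUIRED = ["title", "object_type", "date", "place", "description"]

def quality_score(record, min_image_px=400):
    """Score a record 0..6: one point per required field present,
    plus one for a decently sized image."""
    score = sum(1 for field in REQUIRED if record.get(field))
    w = record.get("image_width", 0)
    h = record.get("image_height", 0)
    if min(w, h) >= min_image_px:
        score += 1
    return score
```

An app could then set its own local threshold at ingest (say, only use records scoring 4+ in a game), which is the ‘ratings local to an app’ option from the discussion.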

Finally, if you’re interested in hacking around cultural heritage data, there’s also historyhackday next weekend. I’m hoping to pop in (dependent on fracture and MSc dissertation), not least because in March I’m starting a PhD in digital humanities, looking at participatory digitisation of geo-located historical material (i.e. getting people to share the transcriptions and other snippets of ad hoc digitisation they do as part of their research) and it’s all hugely relevant.

Final thoughts on open hack day (and an imaginary curatr)

I think hack days are great – sure, 24 hours in one space is an artificial constraint, but the sheer brilliance of the ideas and the ingenuity of the implementations is inspiring. They’re a reminder that good projects don’t need to take years and involve twenty circles of sign-off, even if that’s the reality you face when you get back to the office.

I went because it tied in really well with some work projects (like the museum metadata mashup competition we’re running later in the year or the attempt to get a critical mass of vaguely compatible museum data available for re-use) and stuff I’m interested in personally (like modern bluestocking, my project for this summer – let me know if you want to help, or just add inspiring women to freebase).

I’m also interested in creating something like a Dopplr for museums – you tell it what you’re interested in, and when you go on a trip it makes you a map and list of stuff you could see while you’re in that city.

Like: I like Picasso, Islamic miniatures, city museums, free wine at contemporary art gallery openings, [etc]; am inspired by early feminist history; love hearing about lived moments in local history of the area I’ll be staying in; I’m going to Barcelona.

The ‘list of cultural heritage stuff I like’ could be drawn from stuff you’ve bookmarked, exhibitions you’ve attended (or reviewed) or stuff favourited in a meta-museum site.

(I don’t know what you’d call this – it’s like a personal butlr or concierge who knows both your interests and your destinations – curatr?)
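The matching itself could start very simply: rank each venue in the destination city by the overlap between its tags and your stated interests. A toy sketch with a made-up data model:

```python
def recommend(interests, venues):
    """Rank venues in a destination city by overlap with stated interests.
    'venues' is a list of (name, set_of_tags) pairs; the data model
    is hypothetical, not any real museum aggregator's schema."""
    wanted = {i.lower() for i in interests}
    scored = [(len(wanted & {t.lower() for t in tags}), name)
              for name, tags in venues]
    # highest overlap first; drop venues with nothing in common
    return [name for score, name in sorted(scored, reverse=True) if score > 0]
```

The interesting (hard) part would be populating the tags – from bookmarks, reviews or favourites as above – rather than the matching.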

The talks on RDFa (and the earlier talk on YQL at the National Maritime Museum) have inspired me to pick a ‘good enough’ protocol, implement it, and see if I can bring in links to similar objects in other museum collections. I need to think about the best way to document any mapping I do between taxonomies, ontologies, vocabularies (all the museumy ‘ies’) and different API functions or schemas, but I figure the museum API wiki is a good place to draft that. It’s not going to happen instantly, but it’s a good goal for 2009.

These are the last of my notes from the weekend’s Open Hack London event, my notes from various talks are tagged openhacklondon.