And so it begins: day two of OWOT

Day two of One Week, One Tool. We know what we're making, but we're not yet revealing exactly what it is. (Is that mean? It's partly a way of us keeping things simple so we can focus on work.) Yesterday (see Working out what we're doing: day one of One Week, One Tool) already feels like weeks ago, and even this morning feels like a long time ago. I can see that my posts are going to get less articulate as the week goes on, assuming I keep posting. I'm not sure how much value this will have, but I suppose it's a record of how fast you can move in the right circumstances…

We spent the morning winnowing the ideas we'd put up for feedback overnight from about twelve down to four, then three, then two, then… It's really hard killing your darlings, and it's also difficult choosing between ideas that sound equally challenging or fun or worthy. There was a moment when we literally wiped the ruled-out ideas from the whiteboard, and it felt oddly momentous. In the end, the two final choices both felt like approaches to the same thing – perhaps because we'd talked about them for so long that they started to merge (consciously or not), or because they both fell into a sweet spot of being accessible to a wide audience and having something to do with discovering new things about your research (which was the last thing I tweeted before we made our decision and decided to keep things in-house for a while). Finally, eventually, we had enough of a critical mass behind one idea to call it the winner.

For me, the decision only started to feel real as we walked back from lunch – our task was about to get underway. It's daunting but exciting. Once back in the room, we discussed the chosen idea a bit more and I got a bit UX/analyst-y and sketched stuff on a whiteboard. I'm always a bit obsessed with sketching as a way to make sure everyone has a more concrete picture (or shared mental model) of what the group is talking about, and for me it also served as a quick test of the technical viability of the idea. CHNM's Tom Scheinfeldt then had the unenviable task of corralling/coaxing/guiding us into project management, dev/design and outreach teams. Meghan Frazer and Brian Croxall are project managing; I'm dev/design team lead, with Scott Kleinman, Rebecca Sutton Koeser, Amy Papaelias, Eli Rose, Amanda Visconti and Scott Williams (and in the hours since then I have discovered that they all rock and bring great skills to the mix); and Jack Dougherty is leading the outreach team of Ray Palin and Amrys Williams in their tasks of marketing, community development, project outreach, grant writing and documentation. Amrys and Ray are also acting as user advocates, and they've all contributed user stories to help us clarify our goals. Lots of people will be floating between teams, chipping in where needed and helping manage communication between teams.

The dev/design team began with a skills audit so that we could figure out who could do what on the front- and back-end, which in turn fed into our platform decision (basically PHP or Python; Python won), then made a quick list of initial tasks that would act as further reality checks on the tool and our platform choice. The team is generally working in pairs on parallel tasks so that we're always moving forward on the three main functional areas of the tool, and so that merging updates on GitHub stays simpler. We're also using existing JavaScript libraries and CSS grids to make the design process faster. I then popped over to the outreach team to check in on the descriptions and potential user stories they were discussing. Meghan and Brian got everyone back together at the end of the day, and the dev/design team had a chance to feed back on the outreach team's work (which also provided a very ad hoc form of requirements elicitation, but it started some important conversations that further shaped the tool). Then it was back over to the hotel lobby, where we'd planned to have a dev/design team meeting before dinner, but when two of our team were kidnapped by a shuttle driver (well, sorta) we ended up working through some of the tasks for tomorrow. We're going to have agile-style stand-up meetings twice a day, with the aim of giving people enough time to get stuck into tasks while still keeping an eye on progress, with a forum to help deal with any barriers or issues. Some ideas will inevitably fall by the wayside, but because the OWOT project is designed to run over a year, we can put ideas on a wishlist for future funded development, leave them as hooks for other developers to expand on, or revisit them once we're back home. In hack day mode I tend to plan so that there's enough working code to have something to launch, then go back and expand features and polish the UX with any time left. Is this the right approach here? Time will tell.

#owot dev team is hard at work. #fb pic.twitter.com/Zj5PW0Kj2a
— Brian Croxall (@briancroxall) July 31, 2013

Working out what we're doing: day one of One Week, One Tool

Hard at work in The Well

I'm sitting in a hotel next to George Mason University's Fairfax campus with a bunch of people I (mostly) met last night, trying to work out what tool we'll spend the rest of the week building. We're all here for One Week, One Tool, a 'digital humanities barn raising', and our aim is to launch a tool for a community of scholarly users by Friday evening. The wider results should be some lessons about rapidly developing scholarly tools, particularly building audience-focused tools; hopefully a bunch of new friendships and conversations; and, in the future, a community of users and other developers who might contribute code. I'm particularly excited about trying to build a 'minimum viable product' in a week, because it's so unlike working in a museum. If we can keep the scope creep in check, we should be able to build for the most lightweight possible interaction that will let people use our tool, while allowing room for the tool to grow according to how people actually use it.

We met up last night for introductions and started talking about our week. I'm blogging now partly so that we can look back and remember what it was like before we got stuck into building something – if you don't capture the moment, it's hard to retrieve it later. The areas of uncertainty will shrink each day, and based on my experience at hack days and on longer projects, it's often hard to remember how uncertain things were at the start.

Are key paradoxes of #owot a) how we find a common end user, b) a common need we can meet and c) a common code language/framework?
— Mia (@mia_out) July 29, 2013

Meghan herding cats to get potential ideas summarised

Today we heard from CHNM team members Sharon Leon on project management, Sheila Brennan on project outreach and Patrick Murray-John on coding, and then got stuck into the process of trying to figure out what on earth we'll build this week. I don't know how others felt, but by lunchtime I was super impatient to get started, because it seemed like our conversations about how to build the imaginary thing would be more fruitful once we had something concrete-ish to discuss. (I think I'm also used to hack days – which are usually actually hack weekends – where you've got much less time to try and build something.) We spent the afternoon discussing possible ideas, refining them, bouncing between levels of detail, finding our way through different types of jargon, swapping between problem spaces and generally finding our way through the thicket of possibilities to some things we would realistically want to make in the time. We went from a splodge of ideas on a whiteboard to more structured 'tool, audience, need' lines based on agile user stories, then went over them again to summarise them so they'd make sense to people viewing them on IdeaScale.

#owotleaks #owot – we're building a tool that converts whiteboard brainstorming notes into fully developed applications
— Jack Dougherty (@DoughertyJack) July 29, 2013

So now it's over to you (briefly). We're working out what we should build this week, and in addition to your votes, we’d love you to comment on two specific things:

  • How would a suggested tool change your work? 
  • Do you know of similar tools (we don’t want to replicate existing work)?

So go have a look at the candidate ideas at http://oneweekonetool.ideascale.com and let us know what you think. It's less about voting than it is about providing more context for the ideas you like, and we'll put all the ideas through a reality check based on whether each has identifiable potential users and whether we can build it in a few days. We'll be heading out to lunch tomorrow (Virginia time) with a decision, so it's a really short window for feedback: until 10am US Eastern time. (If it's any consolation, it's a super-short window for us building it, too.)

Update Tuesday morning: three other participants have written posts, so go check them out! Amanda Visconti's Digital Projects from Start to Finish: DH Mentorship from One Week One Tool (OWOT), Brian Croxall's Day 1 of OWOT: Check Your Ego at the Door and Jack Dougherty's Learning Moments at One Week One Tool 2013, Day 1.

Planes, trains and automobiles…

This week I'm heading to Lincoln, Nebraska for Digital Humanities 2013 (abstracts), where I'm also doing a half-day workshop on 'Designing successful digital humanities crowdsourcing projects' and attending my first meeting as a member of the ACH Executive Council.

After DH2013, I'm gradually making my way east by Amtrak and Greyhound, ending up at One Week, One Tool ('a digital humanities barn raising'!). I'll be in Chicago from Sunday afternoon (July 21) until late on the 22nd, arriving in Cleveland on the 23rd and jumping on another bus to Pittsburgh for July 24-27. If you're going to be nearby and fancy a chat about crowdsourcing, museums or digital history, or have a suggestion for sights I should see, let me know! You can get a sense of my interests at the never-properly-updated Upcoming talks and travel and My PhD research.

Catch the wind? (Re-post from Polis blog on Spatial Narratives and Deep Maps)

[This post was originally written for the Polis Center's blog.]

Our time at the NEH Institute on Spatial Narratives & Deep Maps is almost at an end. The past fortnight feels both like it's flown by and like we've been here for ages, which is possibly the right state of mind for thinking about deep maps. After two weeks of debate, deep maps still seem definable only when glimpsed in the periphery, and not-quite defined when examined directly. How can we capture the almost-tangible shape of a truly deep map that we can only glimpse through social constructs, the particular contexts of creation and usage, disciplinary conventions and the models embedded in current technology? If deep maps are an attempt to get beyond location-as-index and into space-as-experience, can that currently be done more effectively on a screen, or does covering a desk in maps and documents actually allow deeper immersion in a space at a particular time?

We've spent the past three days working in teams to prototype different interfaces to deep maps or spatial narratives, and each group presented their interfaces today. It's been immensely fun and productive, and also quite difficult at times. It's helped me realise that deep maps and spatial narratives are not dichotomous but exist on a continuum – where do you draw the line between curating data sources and presenting an interpreted view of them? At present, a deep map cannot be a recreation of the world, but it can be a platform for immersive thinking about the intersection of space, time and human lives. At what point do you move from using a deep map to construct a spatial and temporal argument to using a spatial narrative to present it?

Our team's experience (we were the Broadway team) reinforces Stuart's point about the importance of the case study. We uncovered foundational questions whilst deep in the process of constructing interfaces: is a deep map a space for personal exploration, comparison and analysis of sources, or is it a shared vision that is personalised through the process of creating a spatial narrative? We also attempted to think through how multivocality translates into something on a screen, and how interfaces that link one article or concept to multiple places might work in reality. In the process we re-discovered that each scholar may have different working methods, but that a clever interface can support multivocality in functionality as well as in content.

Halfway through 'deep maps and spatial narratives' summer institute

I'm a week and a bit into the NEH Institute for Advanced Topics in the Digital Humanities on 'Spatial Narrative and Deep Maps: Explorations in the Spatial Humanities', so this is a (possibly self-indulgent) post to explain why I'm over in Indianapolis and why I only seem to be tweeting with the #PolisNEH hashtag.  We're about to dive into three days of intense prototyping before wrapping things up on Friday, so I'm posting almost as a marker of my thoughts before the process of thinking-through-making makes me re-evaluate our earlier definitions.  Stuart Dunn has also blogged more usefully on Deep maps in Indy.

We spent the first week hearing from the co-directors David Bodenhamer (history, IUPUI), John Corrigan (religious studies, Florida State University) and Trevor Harris (geography, West Virginia University), from guest lecturers Ian Gregory (historical GIS and digital humanities, Lancaster University) and May Yuan (geonarratives, University of Oklahoma), and from selected speakers from Digital Cultural Mapping: Transformative Scholarship and Teaching in the Geospatial Humanities at UCLA. We also heard about the other participants' projects and backgrounds, and tried to define 'deep maps' and 'spatial narratives'.

It's been pointed out that, as we're at the 'bleeding edge', visions for deep mapping are still highly personal. As we don't yet have a shared definition, I don't want to misrepresent people's ideas by summarising them, so I'm just posting my current definition of deep maps:

A deep map contains geolocated information from multiple sources, each conveying its source, contingency and context of creation; it is both integrated and queryable through indexes of time and space.

Essential characteristics:

  • It can be a product, whether as a static snapshot map or as layers of interpretation with signposts, pre-set interactions and narrative, but it is always visibly a process.
  • It allows open-ended exploration (within the limitations of the available data and the curation processes and research questions behind it) and supports serendipitous discovery of content. It supports curiosity.
  • It supports arguments but allows them to be interrogated through the mapped content.
  • It supports layers of spatial narratives but does not require them.
  • It should be compatible with humanities work: it's citable (e.g. it provides a URL that captures the view used to construct an argument) and provides access to its sources, whether as data downloads or citations.
  • It can include different map layers (e.g. historic maps) as well as different data sources, and it could be topological as well as cartographic.
  • It must be usable at different scales, both in the user interface (when zoomed out, it provides a sense of the density of the information within) and in space (it can deal with different levels of granularity).

Essential functions:

  • It must be queryable and browseable.
  • It must support large, variable, complex, messy, fuzzy, multi-scalar data.
  • It should be able to include entities such as real and imaginary people and events, as well as places within spaces.
  • It should support both the presentation of content and analytic use.
  • It should be compelling – people should want to explore other places, times, relationships or sources.
  • It should be intellectually immersive and support 'flow'.

Looking at it now, the first part is probably pretty close to how I would have defined it at the start, but my thinking about what this actually means in terms of specifications is the result of the conversations over the past week and the experience everyone brings from their own research and projects.
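Since I keep saying 'queryable through indexes of time and space', here's a minimal sketch in Python of what a single deep map record and a naive space/time query might look like. Every field name here is my own invention for illustration, not anyone's agreed schema, and real dates, geometries and sources would be far messier:

```python
from dataclasses import dataclass, field

@dataclass
class DeepMapRecord:
    """A single geolocated piece of evidence; all field names are invented."""
    content: str           # the source material itself, or a pointer to it
    lat: float             # a real deep map would need richer geometries
    lon: float
    start_year: int        # fuzzy or contested dates would need richer types
    end_year: int
    source: str            # citation or URL for the original material
    context: str           # notes on contingency and context of creation
    layers: list = field(default_factory=list)  # e.g. historic map layers

def query(records, bbox, year):
    """Naive space/time filter; a real deep map would use proper indexes."""
    min_lat, min_lon, max_lat, max_lon = bbox
    return [r for r in records
            if min_lat <= r.lat <= max_lat
            and min_lon <= r.lon <= max_lon
            and r.start_year <= year <= r.end_year]

# A made-up record of the kind our Indianapolis dataset might contain:
record = DeepMapRecord(
    content="Congregation meeting hall on this corner",
    lat=39.77, lon=-86.15, start_year=1891, end_year=1924,
    source="(citation for the original document)",
    context="Recorded by a parish historian; dates approximate")
print(query([record], bbox=(39.5, -86.5, 40.0, -86.0), year=1900))
```

Even this toy version makes the tensions visible: a flat 'context' string flattens exactly the contingency a deep map is supposed to preserve.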

For me, this Institute has been a chance to hang out with ace people with similar interests and different backgrounds – it might mean we spend some time negotiating discipline-specific language, but it also makes for a richer experience. It's a chance to work with wonderfully messy humanities data, and to work out how digital tools and interfaces can support ambiguous, subjective, uncertain, imprecise, rich, experiential content alongside the highly structured data that GIS is good at. It's also a chance to test these ideas by putting them into practice with a dataset on religion in Indianapolis, and to learn more about deep maps by trying to build one (albeit in three days).

As part of thinking about what I think a deep map is, I found myself going back to an embarrassingly dated post on ideas for location-linked cultural heritage projects:

I've always been fascinated with the idea of making the invisible and intangible layers of history linked to any one location visible again. Millions of lives, ordinary or notable, have been lived in London (and in your city); imagine waiting at your local bus stop and having access to the countless stories and events that happened around you over the centuries. … The nice thing about local data is that there are lots of people making content; the not nice thing about local data is that it's scattered all over the web, in all kinds of formats with all kinds of 'trustability', from museums/libraries/archives, to local councils to local enthusiasts and the occasional raving lunatic. … Location-linked data isn't only about official cultural heritage data; it could be used to display, preserve and commemorate histories that aren't 'notable' or 'historic' enough for recording officially, whether that's grime pirate radio stations in East London high-rise roofs or the sites of Turkish social clubs that are now new apartment buildings. Museums might not generate that data, but we could look at how it fits with user-generated content and with our collecting policies.

Amusingly, four years ago my obsession with 'open sourcing history' was apparently already well-developed and I was asking questions about authority and trust that eventually informed my PhD – questions I hope we can start to answer as we try to make a deep map.  Fun!

Finally, my thanks to the NEH and the Institute organisers and the support staff at the Polis Center and IUPUI for the opportunity to attend.

Well, gosh.

If you see this post it means… I'm on a bus to Heathrow. I'm on my way to New York for a week's residency at the Cooper-Hewitt, then on to Indianapolis for an NEH Institute for Advanced Topics in the Digital Humanities on 'Spatial Narrative and Deep Maps: Explorations in the Spatial Humanities', and since I'm not sure when I'll next have time to post, I thought I'd leave you with this little provocation:

Museums should stick to what they do best – to preserve, display, study and where possible collect the treasures of civilisation and of nature. They are not fit to do anything else. It is this single rationale for the museum that makes each one unique, which gives each its own distinctive character. It is the hard work of scholars and curators in their own areas of expertise that attracts visitors. Everybody knows that the harder you try to win friends and ingratiate yourself with people, the more you repel them. It would seem however that those running our new museums need to learn afresh this simple human lesson.

Source: Josie Appleton, "Museums for 'The People'?" in 'Museums and their Communities', edited by Sheila Watson (2007).

If that polemic has depressed you too much, you can read this inspiring article instead, 'The wide open future of the art museum: Q&A with William Noel':

We just think that Creative Commons data is real data. It's data that people can really use. It's all about access, and access is about several things: licensing and publishing the raw data. Any data that you capture should be available to the public. … The other important thing is to put the data in places where people can find it… The Walters is a museum that's free to the public, and to be public these days is to be on the Internet. Therefore to be a public museum your digital data should be free. And the great thing about digital data, particularly of historic collections, is that they're the greatest advert that these collections have. … The digital data is not a threat to the real data, it's just an advertisement that only increases the aura of the original…

…people go to the Louvre because they’ve seen the Mona Lisa; the reason people might not be going to an institution is because they don’t know what’s in your institution. Digitization is a way to address that issue, in a way that with medieval manuscripts, it simply wasn’t possible before. People go to museums because they go and see what they already know, so you’ve got to make your collections known. Frankly, you can write about it, but the best thing you can do is to put out free images of it. This is not something you do out of generosity, this is something you do because it makes branding sense, and it even makes business sense. So that’s what’s in it for the institution.

The other main reason to do it is to increase the knowledge of and research on your collection by the people, which has to be part of your mission at least, even in the most conservative of institutions. 

Btw, if you're in New York and fancy meeting up for a coffee before June 17, drop me a line in the comments or @mia_out.  (Or ditto for Indianapolis June 17-30).

Geek for a week: residency at the Powerhouse Museum

I've spent the last week as 'geek-in-residence' with the Digital, Social and Emerging Technologies team at the Powerhouse Museum. I wasn't sure what 'geek-in-residence' would mean in reality, but in this case it turned out to be a week of creativity, interesting constraints and rapid, iterative design.

When I arrived on Monday morning, I had no idea what I'd be working on, let alone how it would all work. By the end of the first day I knew how I'd be working, but not exactly what I'd focus on. I came in with fresh questions on Tuesday, and was sketching ideas by lunchtime. The next few days were spent focusing in on specific issues within that problem space: I turned initial ideas into wireframes and basic copy, then put them through two rounds of quick-and-dirty testing with members of the public and Powerhouse volunteers. By the time I left on Friday I was able to hand over wireframes for a site called 'Conversations about collections', which aims to record people's memories of items from the collection. (I ran out of time to document the technical aspects of how the site could be built in WordPress, but given the skills of the team I think they'll cope.)

The first day and a half were about finding the right-sized problem. In conversations with Paula (Manager of the Visual & Digitisation services team) and Luke (Web Manager), we discussed what each of us was interested in exploring, looking for the intersection of what was possible in the time and with the material to hand.

After those first conversations, I went back to the Powerhouse's strategy document for inspiration. If in doubt, go back to the mission! I was looking for a tie-in with their goals – luckily their plan made it easy to see where things might fit. Their strategy talked about ideas and technology that have changed our world and stories of people who create and inspire them, and about being open to 'rich engagement, to new conversations about the collections'.

I also considered what could be supported by the existing API, what kinds of activities worked well with their collections, and what could be usefully built and tested as paper or on-screen prototypes. As in many large collections, most of the objects lack the types of data that support deeper engagement for non-experts (though the significance statements that do exist are lovely).

Two threads emerged from the conversations: bringing social media conversations and activity back into the online collections interfaces to help provide an information scent for users of the site; and crowdsourcing games based around enhancing the collections data.

The first was an approach to the difficulty of surfacing interesting objects in very large collections. Could you create a 'heat map' based on online activity around objects, to help searchers and browsers spot objects that might be more interesting?

At one point Nico (Senior Producer) and I had a look at Google Analytics to see what social media sites were sending traffic to the collections and suss out how much data could be gleaned. Collection objects are already showing up on Pinterest, and I had wild thoughts about screen-scraping Pinterest (they have no API) to display related boards on the OPAC search results or object pages…
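As a thought experiment, the 'heat' could be as simple as counting social referrals per object from an analytics export and normalising the totals. Here's a minimal sketch in Python with invented object IDs and numbers – a real version would start from the Google Analytics data we were looking at, and would probably need weighting by recency and source:

```python
from collections import defaultdict

# Hypothetical analytics export rows: (object id, referring domain, visits).
# All identifiers and counts below are invented for illustration.
referrals = [
    ("obj-1234", "pinterest.com", 40),
    ("obj-1234", "twitter.com", 12),
    ("obj-5678", "facebook.com", 3),
    ("obj-9012", "pinterest.com", 25),
]

def heat_scores(rows):
    """Sum social referral visits per object, normalised to a 0-1 'heat'."""
    totals = defaultdict(int)
    for object_id, _domain, visits in rows:
        totals[object_id] += visits
    hottest = max(totals.values())
    return {object_id: total / hottest for object_id, total in totals.items()}

# Objects with the most social activity would float to the top of results.
for object_id, score in sorted(heat_scores(referrals).items(),
                               key=lambda pair: pair[1], reverse=True):
    print(f"{object_id}: {score:.2f}")
```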

I also thought about building a crowdsourcing game that would use expert knowledge to create data that would make better games possible for the general public – an interesting challenge, as open-ended activities are harder to score automatically, so you need to design meaningful rewards and ensure there's an audience to help provide them. However, it was probably a bigger task than I had time for, especially with most of the team already busy on other tasks, though I've been interested in that kind of dual-phased project since my MSc project on crowdsourcing games for museums.

But in the end, I went back to two questions: what information is needed about the collections, and what's the best way to get it? We decided to focus on conversations, stories and clues about objects in the collections, with a site aimed at collecting 'living memories' about objects by asking people what they remember about an object and how they'd explain it to a kid. The name, 'Conversations about collections', came directly from the strategy doc and was just too neat a description to pass up, though 'memory bank' was another contender.

I ended up with five wireframes (clickable PDF at that link) to cover the main tasks of the site: to persuade people (particularly older people) that their memories are worth sharing, and to get the right object in front of the right person. Explaining more about the designs would be a whole other blog post, but in the interests of getting this post out I'll save that for another day… I'm dashing out this post before I head out, but I'll update in response to questions (and generally tidy things up when I have more time).

My week at the Powerhouse was a brilliant chance to think through the differences between history of science/social history objects and art objects, and between history and art museums, but that's for another post (perhaps if I ever get around to posting my notes from the MCN session on a similar topic).

It also helped me reflect on my interests, which I would summarise as 'meaningful audience participation' – activities that are engaging and meaningful for the audience and also add value for the museum, activities that actually change the museum in some way (hopefully for the better!), whether that's through crowdsourcing, co-curation or other types of engagement.

Finally, I owe particular thanks to Paula Bray and Luke Dearnley for running with Seb Chan's original suggestion and for their time and contributions to shaping the project; to Nicolaas Earnshaw for wireframe work and Suse Cairns for going out testing on the gallery floor with me; and to Dan Collins, Estee Wah, Geoff Barker and everyone else in the office and on various tours for welcoming me into their space and their conversations.


Photo: behind the scenes at the (then) Powerhouse Museum, Sydney