Why we need to save the material experience of software objects

Conversations at last month’s ‘Sustainable History: Ensuring today’s digital history survives’ event [my slides] (and at the pub afterwards) touched on saving the data underlying websites as a potential solution for archiving them. This is definitely better than nothing, but as a human-computer interaction researcher and advocate for material culture in historical research, I don’t think it’s enough.

Just as people rue the loss of the information and experiential data conveyed by the material form of objects when they’re converted to digital representations – size, paper and print/production quality, marks from wear through use and manufacture, access to their affordances, to name a few – future researchers will rue the information lost if we don’t regard digital interfaces and user experiences as vital information about the material form of digital content and record them alongside the data they present.

Can you accurately describe the difference between using MySpace and Facebook in their various incarnations? There’s no perfect way to record the experience of using Facebook in December 2013 so it could be compared with the experience of using MySpace in 2005, but usability techniques like screen-recording software linked to eyetracking or think-aloud tests would help preserve some of the tacit knowledge and context users bring to sites alongside the look-and-feel, algorithms and treatments of data the sites present to us. It’s not a perfect solution, but a recording of the interactions and designs from both sites for common tasks like finding and adding a friend would tell future researchers infinitely more about changes to social media sites over eight years than simple screenshots or static webpages. But in this case we’re still missing the notifications on other people’s screens, the emails and algorithmic categorisations that fan out from simple interactions like these…

Even if you don’t care about history, anyone studying software – whether websites, mobile apps, digital archives, instrument panels or procedural instructions embedded in hardware – still needs solid methods for capturing the dynamic and subjective experience of using digital technologies. As Lev Manovich says in The Algorithms of Our Lives, when we use software we’re ‘engaging with the dynamic outputs of computation’; studying software culture requires us to ‘record and analyze interactive experiences, following individual users as they navigate a website or play a video game … to watch visitors of an interactive installation as they explore the possibilities defined by the designer—possibilities that become actual events only when the visitors act on them’.

The Internet Archive does a great job, but in researching the last twenty years of internet history I’m constantly hitting the limits of their ability to capture dynamic content, let alone the nuance of interfaces. The paradox is that as more of our experiences are mediated through online spaces and the software contained within small boxy devices, we risk leaving fewer traces of our experiences than past generations.

Collaboration, constraints and cloning and ‘the open museum’: notes from UKMW13

MCG’s UK Museums on the Web 2013: ‘Power to the people’ was held at Tate Modern on November 15, 2013. These are very selective notes, but you can find out more about the sessions and see most slides on the MCG’s site. UKMW13 began with a welcome from me (zzz) and from Tate’s John Stack (hoorah!), then an announcement from our sponsors, Axiell Adlib and CALM, that CALM, Mimsy and AdLib are merging to create a ‘next generation’ collections system – the old school collections management geek in me is really curious to see what that means for museums, libraries and archives and their data.

Our first keynote, Hannah Freeman, presented on the Guardian’s work to reach and engage new audiences. This work is underpinned by editor Alan Rusbridger’s vision for ‘open journalism’:

‘journalism which is fully knitted into the web of information that exists in the world today. It links to it; sifts and filters it; collaborates with it and generally uses the ability of anyone to publish and share material to give a better account of the world’. 

At a casual glance the most visible aspect may be comments on pages, but the Guardian is aiming for collaborations between the reader and the newsroom – if you haven’t seen Guardian Witness, go check it out. (I suspect the Witness WWI assignment will do better than many heritage crowdsourcing efforts.) I know some museums are aiming to be of the web, not just on the web, but this ambition is usually limited to making their content of the web, while a commitment to open journalism suggests that the very core practices of journalism are open to being shaped by the public.

The Guardian is actively looking for ways to involve the audience; Freeman prompts editors and authors to look at interesting comments, but ‘following as well as leading is a challenge for journalists’. She said that ‘publication can be the beginning, not the end of the process’ and that taking part in the conversation generated is now part of the deal when writing for the Guardian (possibly not all sections, and possibly staff journalists rather than freelancers?). From a reader’s point of view, this is brilliant, but it raises questions about how that extra time is accounted for. Translating this into the museum sector and assuming that extra resources aren’t going to appear, if you ask curators to blog or tweet, what other work do you want them to give up?

Hannah Freeman, Guardian Community coordinator for culture at UKMW13. Photo: Andrew Lewis

Our closing keynote, the Science Gallery’s Michael John Gorman, was equally impressive. Dublin’s Science Gallery has many constraints – a small space, no permanent collection, very little government funding – but he seems to be one of those people who sees interesting problems to solve where other people see barriers. The Science Gallery acts as a funnel for ideas, from an open call for shows, to some people working on their ideas as a ‘brains trust’ with the gallery, and eventually a few ideas making it through the funnel and onto the gallery floor to incubate and get feedback from the public. Their projects have a sense of ‘real science’ about them – some have an afterlife in publications or further projects, some might go horribly wrong or just not work. I can’t wait until their gallery opens in London so I can check out some of their shows and see how they translate real scientific questions into interesting participatory experiences. Thinking back over the day, organisations like the Science Gallery might be the museum world’s version of open journalism: the Science Gallery’s ‘funnel’ is one way of putting the principles of the ‘open museum’ into practice (I’ve copied the Guardian’s 10 principles of open journalism below for reference).

Michael John Gorman, The Ablative Museum

Possible principles for ‘the open museum’?

While the theme of the day was the power of participation, I’ve found myself reflecting more on the organisational challenges this creates. Below are the Guardian’s 10 principles of open journalism. As many of the presentations at UKMW13 proved, museums are already doing some of these, but which others could be adapted to help museums deal with the challenges they face now and in the future?
  • It encourages participation. It invites and/or allows a response
  • It is not an inert, “us” or “them”, form of publishing
  • It encourages others to initiate debate, publish material or make suggestions. We can follow, as well as lead. We can involve others in the pre-publication processes
  • It helps form communities of joint interest around subjects, issues or individuals
  • It is open to the web and is part of it. It links to, and collaborates with, other material (including services) on the web
  • It aggregates and/or curates the work of others
  • It recognizes that journalists are not the only voices of authority, expertise and interest
  • It aspires to achieve, and reflect, diversity as well as promoting shared values
  • It recognizes that publishing can be the beginning of the journalistic process rather than the end
  • It is transparent and open to challenge – including correction, clarification and addition

The open museum isn’t necessarily tied to technology, though the affordances of digital platforms are clearly related. But perhaps its association with technology is one reason senior managers are reluctant to engage fully with digital methods?

A related question that arose from Hannah’s talk – are museums now in the media business, like it or not? And if our audiences expect museums to be media providers, how do we manage those expectations? (For an alternative model, read David Weinberger’s Library as Platform.)

Emerging themes from UKMW13

I’ve already posted my opening notes for Museums on the Web 2013: ‘Power to the people’, but I want to go back to two questions I was poking around there: ‘how can technologists share our knowledge and experience with others?’, and ‘why isn’t the innovation we know happens in museum technology reflected in reports like last week’s ‘Digital Culture: How arts and cultural organisations in England use technology’?’ (Or, indeed, in the genre of patronising articles and blog posts hectoring museums for not using technology.) This seems more relevant than I thought it would be in 2013. Last year I was wondering how to define the membership of the Museums Computer Group when everyone in museums was a bit computer-y, but maybe broad digital literacy and comfort with technology-led changes in museum practice is further off than I thought. (See also Rachel Coldicutt’s ‘I Say “Digital!”, You Say “Culture!”‘.) How do we bridge the gap? Is it just a matter of helping every museum go through the conversations necessary to create a digital strategy and come out the other side? And whose job is it to help museum staff learn how to manage public engagement, ecommerce, procurement and hiring when the digital world changes so quickly?
Another big theme was a reminder of how much is possible when you have technical expertise on hand to translate all the brilliant ideas museums have into prototypes or full products. At one point I jokingly tweeted that the museum and heritage sector would make huge leaps if we could just clone Jim O’Donnell (or the BBC’s R&D staff). Perhaps part of the ‘museums are digitally innovative’/’museums suck at digital’ paradox is that technologists can see the potential of projects and assume that a new standard has been set, but it takes a lot more time and work to get them integrated into mainstream museum practice. Part of this may be because museums struggle to hire and keep really good developers, and don’t give their developers the time or headspace to play and innovate. (Probably one reason I like hackdays – it’s rare to get time to try new things when there is more worthy work than there is developer/technologist time, and being inspired at conferences only goes so far when you can’t find a bit of server space and a free day to try something out.) This has also been a theme on the first day of MCN2013, from what I’ve seen on twitter/webcasts from afar, so it’s not only about the budget cuts in the UK. The Digital Culture report suggests that it may also be because senior management in museums don’t know how to value ‘digital experimentation’.

Other, more positive, themes emerged to link various presentations during the day. Community engagement can be hugely rewarding, but it takes resources – mostly staff time – to provide a conduit between the public and the organisation. It also takes a new mindset for content creators, whether journalists, educators or curators, to follow the crowd’s lead, but it can be rewarding, whether it’s getting help identifying images from ‘armchair archaeologists’, working with online music communities to save their memories before they pass from living memory, or representing residents’ experiences of their city. Both presenters and the audience were quick to raise questions about the ethics of participatory projects and the wider implications of content/item collecting projects and citizen history.

Constraints, scaffolding, the right-sized question or perfectly themed niche collection – whatever you call it, giving people boundaries when asking for contributions is effective. Meaningful participation is valued, and valuable.

Open content enables good things to happen. Digital platforms are great at connecting people, but in-person meetups and conversations are still special.

Finally, one way or another the audience will shape your projects to their own ends, and the audience proved it that day by taking to twitter to continue playing Curate-a-Fact between tea breaks.

We should have a proper archive of all the #UKMW13 tweets at some point, but in the meantime, here’s a quick storify for MCG’s Museums on the Web 2013: Power to the people. Oh, and thank you, thank you, thank you to all the wonderful people who helped the day come together.

Opening notes for Museums on the Web 2013: ‘Power to the people’

It’ll take me a few days to digest the wonderfulness that was MCG’s UK Museums on the Web 2013: ‘Power to the people’, so in lieu of a summary, here are my opening notes for the conference… (With the caveat that I didn’t read this out verbatim, but hopefully I still hit most of these points on the day.)

Welcome to Museums on the Web 2013! I’m Mia Ridge, Chair of the Museums Computer Group.

Hopefully the game that began at registration has helped introduce you to some people you hadn’t met before…You can vote on the game in the auditorium over the lunch break, and the winning team will be announced before the afternoon tea break. Part of being a welcoming community is welcoming others, so we tried to make it easier to start conversations. If you see someone who maybe doesn’t know other people at the event, say hi. I know that many of you can feel like you’re working alone, even within a big organisation, so use this time to connect with your peers.

This week saw the launch of a report written for Nesta, the Arts Council, and the Arts and Humanities Research Council in relation to the Digital R&D Fund for the Arts, ‘Digital Culture: How arts and cultural organisations in England use technology‘. One line in the report stood out: ‘Museums are less likely than the rest of the sector to report positive impacts from digital technologies’ – which seems counter-intuitive given what I know of museums making their websites and social media work for them, and the many exciting and effective projects we’ve heard about over the past twelve years of MCG’s UK Museums on the Web conferences (and on our active discussion list).

The key to that paradox may lie in another statement in the report: museums report ‘lower than average levels of digital expertise and empowerment from their senior management and a lower than average focus on digital experimentation, and research and development’.* (It may also be that a lot of museum work doesn’t fit into an arts model, but that’s a conversation for another day.) Today’s theme almost anticipates this – our call for papers around ‘Power to the people’ asked for responses around the rise of director-level digital posts and empowering museum staff to learn through play, as well as papers on grassroots projects and the power of embedding digital audience participation and engagement into the overall public engagement strategy for a museum.

Today we’ll be hearing about great projects from museums and a range of other organisations, but reports like this – and perhaps the wider issue of whether senior management and funders understand the potential of digital beyond new forms of broadcast and ticket sales – raise the question of whether we’re preaching to the converted. How can we help others in museums benefit from the hard-won wisdom and lessons you’ll hear today?

The Museums Computer Group has always been a platform for people working with museum technology who want to create positive change in the sector. Our motto is ‘connect, support, inspire’, and we’re always keen to hear your ideas about how we can connect, support and inspire you, but as a group we should also be asking: how can we share our knowledge and experience with others? It can be difficult to connect with and support others when you’re flat out with your own work, yet the need to scale up the kinds of education we might have done with small groups working on digital projects is becoming more urgent as audience expectations change and resources need to be spent even more carefully. Ultimately we can help each other by helping the sector get better at technology and recognise the different types of expertise already available within the heritage sector. Groups like the MCG can help bridge the gap; we need your voices to reach senior management as well as practitioners, and those who want to work with museums and will shape the sector in the future.

It’s rare to find a group so willing to share their failures alongside their successes, so willing to generously share their expertise and so keen to find lessons in other sectors. We appreciate the contributions of many of you who’ve spoken honestly about the successes and failures of your projects in the past, and applaud the spirit of constructive conversation that encourages your peers to share so openly and honestly with us. I’m looking forward to learning from you all today.

* Update to add a link to an interview with MTM’s Richard Ellis who co-authored the Nesta report, who says the ‘sheer extent of the divide between those in the know and those not’ was one of the biggest surprises working in the culture sector.

Impressions from Mona, Hobart’s Museum of Old and New Art

I went to Mona – David Walsh’s Museum of Old and New Art – in Hobart with my parents this week, and I’m quickly posting my impressions now, as my best intentions of posting a proper review later will probably be squished by the demands of my PhD and travel. I’ve also posted photos from my visit, though you may not be able to see my longer notes without clicking through to each photo.

Quick context: I’m a museum technologist and experience designer/analyst (though I’m currently a full-time PhD candidate researching digital history and crowdsourcing), we went from Melbourne to Tasmania specifically to see Mona, my parents are beyond retirement age but keep up with technology and are generally pretty active (physically and culturally). I had read various bits and pieces from other museum professionals about their visits, but didn’t discuss them with my parents beforehand because I wanted to observe their reactions. (Being observed while engaging with technology or museum experiences is an occupational hazard for my friends and family and I thank them for their patience with me!) I’d deliberately gone with very few expectations about the building and artworks, not least because one of the works I’d most wanted to see had already been removed from display and I didn’t want to be disappointed if I missed others.

The onboarding experience

Mona from the boat

Forgive the UX jargon-laden pun, but your experience of Mona begins with your journey there. Both transport options that leave from the matt black ferry terminal are called ‘Mona Roma’ (geddit? ‘Roamer’, though it probably only works with an Australian accent). The boat is painted camouflage greys and the mini bus has hot pink flames down its sides. The boat trip up the Derwent River was a nice bit of bonus sightseeing for a tourist like me, and the captain provided a brief commentary as we travelled. The passengers mostly seemed to be tourists, from backpackers to retirees, from Australia and across the world. Some people near us talked about their visit to the Guggenheim in Bilbao, others seemed to be there because Mona is on the list of things to do in Hobart. I’d love to know how many were going for the whole ‘controversial’ experience, how many to tick off one of Hobart’s sites and how many were going for the art.

When you arrive on site, you head up stairs from the landing, then a courtyard draws some visitors on to explore the grounds before entering the museum (presumably helping avoid queues when a ferry arrives). I loved Wim Delvoye’s concrete truck (not that I knew what it was at the time, because Mona doesn’t have captions – one of the reasons it’s been ‘controversial’) and the views across the suburbs and river.

You’re given a printed Visitor Guide with your ticket (including a map, though printed in elegant thin grey type on black, so almost impossible for my parents to read). The rules at the top of the stairs were clear – no food or drink, ‘no flash’ (so presumably other photography is ok). Though I’ve just seen that the Visitor Guide says you can’t put photos on ‘personal websites’ without permission – does that include social media? The guide blithely says ‘Buy a postcard’, assuming you found one of the artwork you liked in the shop, but the O page encourages you to ‘share artworks with friends via facebook and twitter’, so I’m a bit confused about what’s ok, and I take back what I said about the rules being clear!

Then it’s down the spiral staircase into the depths of the earth. You get glimpses of other galleries on the way down to the third level, interspersed with sandstone and concrete walls that still bear construction marks.

The O

Would this prompt you to save the tour?

At the bottom of the stairs, you’re given your ‘O’, or interactive guide (basically an iPod Touch in a solid case). The ‘O’ is one reason museum technologists and exhibition designers have been so curious about Mona. As the guide says:

‘We don’t have labels on the walls. We have the O. Use it to read about the art on display and to listen to interviews with the artists. It’s free.’

There are seats near the Void Bar that are also handily placed for sitting down and sorting yourself out before you start, so I took a few photos as I got started with my O. Getting started is pretty simple (and as expected, my parents had no trouble with it). It explains that you should ‘tap the O update button’ when moving between galleries to get a list of artworks nearby, then ‘tap an artwork in the list to delve further’. When you tap into an artwork, you see a thumbnail image, artwork title, date, artist name, then a brief artist bio and list of materials used in the artwork. There are options in the top right-hand corner to ‘love’ or ‘hate’ the artwork. There’s no room for neutrality, though I wonder if a shrug is possibly the worst possible response to a modern artwork and worth recording on some level? (Though they could presumably easily get a list of the works that elicited the fewest love or hate responses.)

The additional information icons for the first work I looked at were tied to the ‘Red Queen’ exhibition theme – Ruminations, Tweedledum, Jabberwocky (additional media, often audio). Others were ‘gonzo’ (David Walsh’s voice), ‘art wank’ (art historical information), ‘ideas’ (often quotes from literature, sometimes questions, but only once a clunky museum education-style question). There seemed to be a ‘Red Queen exhibition’ view that shows only nearby artworks with special interpretation (Mum discovered it accidentally but as the icon change was very subtle she didn’t realise why it wasn’t showing anything around her; with a bit more signposting it’d be a useful function for repeat visitors who want to catch up on new stuff). Rather than a traditional exhibition with ‘key messages’ and learning outcomes, the Red Queen seemed to be a group of works collected together to think about particular themes (and in a sense is probably a microcosm of Walsh’s overall collecting strategy). Intellectual concerns emerged in some of the interpretation, but there wasn’t an overall narrative, and I didn’t miss that one little bit. Mona probably showed me that I love stories at an individual level but can feel a bit lectured-at by the whole-gallery narratives I’ve encountered in other museums. I discovered some audio content while still near the entrance so went back to ask for headphones, but they weren’t handed out by default when we visited.

Saving ‘your tour’

I was curious about when and how I’d be prompted to ‘save my tour’ for viewing later. The prompt appeared to be triggered after I’d tapped through to a few artworks, but when it appeared, it didn’t really convince me to sign up – I’d love to know what their response rate is and whether they’ve tested different versions of the text. ‘All the works on display at Mona will be available to you on our website’ isn’t as informative as the text on the O page which you’ll probably only see if you’d saved your tour while onsite: ‘Saving your tour while at Mona enables you to see your entire path through the museum including a list of viewed, loved and hated works. You can read all available interpretive material, share artworks with friends via facebook and twitter, change ratings and more…’ Dad saved his tour, Mum didn’t. I did because I had a sense of what the website would offer me, but I don’t know if I would have otherwise.

What’s around you?

The O’s location awareness seemed to work pretty well (an achievement in itself), but I’d love a smarter version that knew the difference between physical proximity and physical accessibility. It’s all very well to know an artwork is two metres from me, but if there’s a gallery wall between me and the work, it’s just another thing to scroll past in search of the artworks that are actually in the same space as me. The biggest usability issue with the O (for me) was the length of the list – if it more accurately reflected the artworks visible in the space (as opposed to physically nearby) then it’d be much easier to find the work you were looking for. Perhaps it doesn’t need location at all – broadcasting a short list of the artworks in the room would be just as effective (though the list would still be quite long in some of the galleries), or electronic wall labels that can be read in low light could replace printed captions. The list view was pretty handy for working out whether you’d seen everything in a particular area, as it added ‘viewed’ to artworks you’d tapped into.
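The gap between ‘physically nearby’ and ‘actually in the same room’ is easy to sketch in code. A purely illustrative example – the room names, coordinates and functions here are all invented for the sake of argument, and have nothing to do with how the O actually works:

```python
# Hypothetical sketch: distance-only vs room-aware "nearby artworks" lists.
from dataclasses import dataclass
from math import hypot

@dataclass
class Artwork:
    title: str
    room: str   # which gallery space the work is in (invented labels)
    x: float    # position in metres on a notional floor plan
    y: float

def nearby_by_distance(works, vx, vy, radius=5.0):
    """Distance-only list: happily includes works behind a wall."""
    return [w for w in works if hypot(w.x - vx, w.y - vy) <= radius]

def nearby_by_room(works, visitor_room):
    """Room-aware list: only works in the same physical space."""
    return [w for w in works if w.room == visitor_room]

works = [
    Artwork("Work A", "B1-north", 2.0, 1.0),
    Artwork("Work B", "B1-south", 3.0, 1.5),  # ~2m away, but behind a wall
]

# Visitor standing in B1-north at (1.0, 1.0):
print([w.title for w in nearby_by_distance(works, 1.0, 1.0)])  # both works
print([w.title for w in nearby_by_room(works, "B1-north")])    # Work A only
```

The room-aware version needs a floor-plan mapping rather than just coordinates, which may be exactly why raw proximity is the easier default to ship.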

But if you couldn’t match the artwork in front of you to a picture in the list, you were out of luck. No caption, nothing. I was reminded of Mary Beard’s recent statement that ‘letting the objects speak for themselves’ usually means ‘letting the objects speak to those who know about them already’.

Overall, the O…

…kinda worked. I preferred reading about the works to listening to an audio guide (I hate having to listen to slow talkers when I could be skim-reading). Given the amount of material there was to read or listen to while you’re around the artworks, more seats would have been ace (but at least there were some around, particularly in the higher levels). And the content was great – it took me two hours to go through the lowest floor because I wanted to read or listen to everything while I could relate it to the artwork in front of me. As the O screens glow when you need to read text, the galleries themselves could be dark and as a result some of the objects were *beautifully* lit.

There are some kinks to work out – I accidentally ‘loved’ or ‘hated’ one or two works when the O bumped about as I tapped from the list to a work and hit a button, and I couldn’t undo it. It was also tricky when viewing artworks set into slits in the wall – the setting made the art feel both more monumental and intimate, but it meant scrabbling around on the O to find the right artwork while being aware that you were blocking the view for others in the meantime. That said, I’ve been wondering where friction has been deliberately left in and where it’s a bug. Does it matter that the O only registers an artwork as ‘seen’ if you’ve tapped through from the list to the caption? And if labels don’t matter, why do you have to tap through to one for a work to count as ‘seen’? Does it matter that you’re poking at a device instead of doing an emu dart in-and-back to read a caption on a wall?

But overall, I would have preferred basic captions on the walls, leaving the O for works I wanted to explore beyond a simple what/who/when caption. Being able to find out more with the O added to my experience and I loved the different voices and approaches it enabled, but I spent an awful lot of time scrolling around trying to find the entry for the artwork I was standing in front of (and I helped other people find artworks when they got stuck). The technology doesn’t exactly distract from the art, but it does get in the way a bit.

[Update: I realised a while later that they can get away with a lot with the O’s text because a) the whole set-up is iconoclastic and b) we don’t look to Walsh and his curator mates for authority. It doesn’t matter if you think they’re wrong or that they haven’t been representative and even-handed – it’s not their job. Public museums don’t have that freedom, though they could still learn something from the amount of personality the O manages to convey.]

The O website

If you give your email address to save your tour, you get an email later that day with a link to retrieve it from the website. I can’t see how to change my ratings or share artworks on twitter or facebook, and I only seem to be able to filter by ‘Works you viewed’ and ‘Works you missed’, not those I loved or hated – which would be fine if all that wasn’t promised on the front page. The timeline/map of what you saw is pretty but didn’t give me direct access to works I remember seeing at different points in my visit. Artworks don’t have permanent (indeed, any) URLs, so I can’t easily save or share the artworks I’m still thinking about.

Since it only counts an artwork as ‘viewed’ if you tapped through from the list view, it’s not really an accurate list of what you viewed or missed. I also have a feeling the O will beep if you take it out of the building, which makes ‘viewing’ some works outside the building tricky. I’d also love to be able to see pieces that aren’t on display any more, and personally I think I’d have gotten more out of my visit if I’d been able to get a sense of some of the artworks on the website before I went – I’m definitely a ‘listen to the album before going to the concert’ kinda person. That said, being able to check the name of an artist or work easily is great – I wish all museum websites made it so easy to find the objects you’ve seen.

Art wank?

The O’s ‘art wank’ label and icon

I don’t think I would have thought anything of this, except that an American friend (hi @erodley!) was a bit taken aback by it. I didn’t have to ask Mum (who is quite proper) what she thought of it as she came up to me and said she liked ‘the art thing’. She wasn’t bothered when she put on her glasses and realised the label said ‘art wank’ – she’s heard it used in Parliament – though when she realised what it was I don’t think she was too keen on the icon itself. I asked Dad later, and he thought it matched Walsh’s ‘knockabout character’, deflating people who are a bit ‘up themselves’.

Finally, the art…

I loved a few pieces, I didn’t hate any pieces though one was mildly irritating, some I would have loved to label ‘meh’. Mum made me jump on a trampoline so she could hear the bells, I lined up to experience Death with my parents, and I realised that there’s something about ‘traces of pigment’ on old statues that gets me every time. By the time I left, I felt a bit like I’d spent the day at a playground for art – partly because all my senses had been involved at some point, and partly because of the eclectic range of works I’d encountered (and maybe even because of the ‘mild peril’ hinted at in the lead up to the Death gallery experience).

Many of the artworks I liked best had a story attached, whether from the original context of their creation, from Walsh's gonzo pieces, or from something in my own life. Others were just plain beautiful or charming, or made me think – which is probably a good line on which to finish.

Update: I’ve snuck away from the PhD write-up for a minute to collate a list of other museum nerds’ reviews of Mona and the O:

Let me know of any others in the comments…

In poking around I've also found a link to a tiny snippet of Mona's art (mostly) not on display, including some of the content you probably would have seen on the O at the time.

[If I ever re-write this, I’m going to add a clickbait headline ‘3 things you’ll love about MONA and 1 you’ll hate’. Or ‘This one weird trick that really works for art history’.]

Lighting beacons: research software engineers event and related topics

I've realised it could be useful to share my reading at the intersection of research software engineering, cultural heritage technology and digital humanities, so at the end of this post I've included links to current discussions, useful reference points and interesting work.

But first: notes from last week's workshop for research software engineers, an event for people who 'not only develop the software, they also understand the research that it makes possible'. The organisers did a great job with the structure (and provided clear instructions on running a breakout session) – each unconference-style session had to appoint a scribe and report back to a plenary session, as well as posting their notes to the group's discussion list, so there's an instant archive of the event.

Discussions included:

  • How do you manage quality and standards in training – how do you make sure people are doing their work properly, and what are the core competencies and practices of an RSE?
  • How should the research community recognise the work of RSEs?
  • Sharing Research Software
  • Routes into research software development – why did you choose to be an RSE?
  • Do we need a RSE community?
  • and the closing report from the Steering Committee and group discussion on what an RSE community might be or do.

I ended up in the 'How should the research community recognise the work of RSEs?' session. I like the definition we came up with: 'research software engineers span the role of researchers and software engineers. They have the domain knowledge of researchers and the development skills to be able to represent this knowledge in code'. On the other hand, if you only work as directed, you're not an RSE – this isn't about whether you make stuff, it's about how much you're shaping what you make. The discussion also teased out different definitions of 'recognition' and how they related to people's goals and personal interests, and the impact of 'short-termism' and project funding on stable careers, software quality, training and knowledge sharing. Should people cite the software they use in their research in the methods section of any publications? How do you work out and acknowledge someone's contribution to ongoing or collaborative projects – and how do you account for double-domain expertise when recognising contributions made in code?

I’d written about the event before I went (in Beyond code monkeys: recognising technologists’ intellectual contributions, which relates it to digital humanities and cultural heritage work) but until I was there I hadn’t realised the extra challenges RSEs in science face – unlike museum technologists, science RSEs are deeply embedded in a huge variety of disciplines and can’t easily swap between them.

The event was a great chance to meet people facing similar issues in their work and careers, and showed how incredibly useful the right label can be for building a community. If you work with science+software in the UK and want to help work out what a research software engineer community might be, join in the RSE discussion.

If you’re reading this post, you might also be interested in:

In ye olden days, beacon fires were lit on hills to send signals between distant locations. These days we have blogs.

Beyond code monkeys: recognising technologists’ intellectual contributions

Two upcoming events suggest that academia is starting to recognise that specialist technologists – AKA 'research software engineers' or 'digital humanities software developers' – make intellectual contributions to research software, and further, that it is starting to realise the cost of not recognising them. In the UK, there's a 'workshop for research software engineers' on September 11; in the US there's Speaking in Code in November (which offers travel bursaries and is run by ace people, so do consider applying).

But first, who are these specialist technologists, and why does it matter? The UK Software Sustainability Institute’s ‘workshop for research software engineers’ says ‘research software engineers … not only develop the software, they also understand the research that it makes possible’. In an earlier post, The Craftsperson and the Scholar, UCL’s James Hetherington says a ‘good scientific coder combines two characters: the scholar and the craftsperson’. Research software needs people who are both scholar – ‘the archetypical researcher who is driven by a desire to understand things to their fullest capability’ and craftsperson who ‘desires to create and leave behind an artefact which reifies their efforts in a field’: ‘if you get your kicks from understanding the complex and then making a robust, clear and efficient tool, you should consider becoming a research software engineer’. A supporting piece in the Times Higher Education, ‘Save your work – give software engineers a career track‘ points out that good developers can leave for more rewarding industries, and raises one of the key issues for engineers: not everyone wants to publish academic papers on their development work, but if they don’t publish, academia doesn’t know how to judge the quality of their work.

Over in the US, and with a focus on the humanities rather than science, the Scholar’s Lab is running the ‘Speaking in Code‘ symposium to highlight ‘what is almost always tacitly expressed in our work: expert knowledge about the intellectual and interpretive dimensions of DH code-craft, and unspoken understandings about the relation of that work to ethics, scholarly method, and humanities theory’. In a related article, Devising New Roles for Scholars Who Can Code, Bethany Nowviskie of the Scholar’s Lab discussed some of the difficulties in helping developers have their work recognised as scholarship rather than ‘service work’ or just ‘building the plumbing’:

“I have spent so much of my career working with software developers who are attached to humanities projects,” she says. “Most have higher degrees in their disciplines.” Unlike their professorial peers, though, they aren’t trained to “unpack” their thinking in seminars and scholarly papers. “I’ve spent enough time working with them to understand that a lot of the intellectual codework goes unspoken,” she says.

Women at work on C-47 Douglas cargo transport.
LOC image via Serendip-o-matic

Digital humanists spend a lot of time thinking about the role of 'making things' in the digital humanities but, to cross over to my other domain of interest, I think the international Museums and the Web conference's requirement for full written papers for all presentations has helped more museum technologists translate some of their tacit knowledge into written form. Everyone who wants to present has to find a way to write up their work, even if it's painful at the time – but once it's done, their papers are published as open access well before the conference. Museum technologists also tend to blog and discuss their work on mailing lists, which provides more opportunities to tease out tacit knowledge while creating a visible community of practice.

I wasn't at Museums and the Web 2013, but one of the sessions I was most interested in was Rich Cherry and Rob Stein's 'What's a Museum Technologist today?', as they were going to report on the results of a survey they ran earlier this year to come up with 'a more current and useful description of our profession'. (If you're interested in the topic, my earlier posts on museum technologists include On 'cultural heritage technologists'; Confluence on digital channels; technologists and organisational change?; Museum technologists redux: it's not about us; and Survey results: issues facing museum technologists.) Rob's posted their slides at What is a Museum Technologist Anyway? and I'd definitely recommend you check them out. Looking through the responses, the term 'museum technologist' seems to have broadened as more museum jobs involve creating content for or publishing on digital channels (whether web sites, mobile apps, ebooks or social media). To me, though, a museum technologist isn't just someone who uses technology or social media – rather, there's a level of expertise or 'domain knowledge' across both museums and technology – and the articles above have reinforced my view that there's something unique in working so deeply across two or more disciplines. (Just to be clear, this isn't a diss of people who use social media rather than build things – there's also a world of expertise in creating content for the web and social media.) Or to paraphrase James Hetherington, 'if you get your kicks from understanding the complex and then making a robust, clear and efficient tool, you should consider becoming a museum technologist'.

To further complicate things, not everyone needs their work to reflect all their interests – some programmers and tech staff are happy to leave their other interests outside the office door, and leave engineering behind at the end of the day – and my recent experiences at One Week | One Tool reminded me that promiscuous interdisciplinarity can be tricky. Even when you revel in it, it’s hard to remember that people wear multiple hats and can swap from production-mode to critically reflecting on the product through their other disciplinary lenses, so I have some sympathy for academics who wonder why their engineer expects their views on the relevant research topic to be heard. That said, hopefully events like these will help the research community work out appropriate ways of recognising and rewarding the contributions of researcher developers.

[Update, September 2013: I've posted brief notes and links to session reports from the research software engineers event at Lighting beacons: research software engineers event and related topics.]

So we made a thing. Announcing Serendip-o-matic at One Week, One Tool

So we made a thing. And (we think) it’s kinda cool! Announcing Serendip-o-matic http://t.co/mQsHLqf4oX #OWOT
— Mia (@mia_out) August 2, 2013

Source code is on GitHub at Serendipomatic – go add your API so people can find your stuff! Check out the site at serendipomatic.org.

Update: and already we’ve had feedback that people love the experience and have found it useful – it’s so amazing to hear this, thank you all! We know it’s far from perfect, but since the aim was to make something people would use, it’s great to know we’ve managed that:

Congratulations @mia_out and the team of #OWOT for http://t.co/cNbCbEKlUf Already try it & got new sources about a Portuguese King. GREAT!!!
— Daniel Alves (@DanielAlvesFCSH) August 2, 2013

Update from Saturday morning – so this happened overnight:

Cool, Serendipmatic cloned and local dev version up and running in about 15 mins. Now to see about adding Trove to the mix. #owot
— Tim Sherratt (@wragge) August 3, 2013

And then this:

Just pushed out an update to http://t.co/uM13iWLISU — now includes Trove content! #owot
— RebeccaSuttonKoeser (@suttonkoeser) August 3, 2013

From the press release: One Week | One Tool Team Launches Serendip-o-matic

After five days and nights of intense collaboration, the One Week | One Tool digital humanities team has unveiled its web application: Serendip-o-matic <http://serendipomatic.org>. Unlike conventional search tools, this “serendipity engine” takes in any text, such as an article, song lyrics, or a bibliography. It then extracts key terms, delivering similar results from the vast online collections of the Digital Public Library of America, Europeana, and Flickr Commons. Because Serendip-o-matic asks sources to speak for themselves, users can step back and discover connections they never knew existed. The team worked to re-create that moment when a friend recommends an amazing book, or a librarian suggests a new source. It’s not search, it’s serendipity.

Serendip-o-matic works for many different users. Students looking for inspiration can use one source as a springboard to a variety of others. Scholars can pump in their bibliographies to help enliven their current research or to get ideas for a new project. Bloggers can find open access images to illustrate their posts. Librarians and museum professionals can discover a wide range of items from other institutions and build bridges that make their collections more accessible. In addition, millions of users of RRCHNM’s Zotero can easily run their personal libraries through Serendip-o-matic.

Serendip-o-matic is easy to use and freely available to the public. Software developers may expand and improve the open-source code, available on GitHub. The One Week | One Tool team has also prepared ways for additional archives, libraries, and museums to make their collections available to Serendip-o-matic.
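For the curious, the 'serendipity engine' workflow the press release describes – take any text, extract key terms, fan them out to collection APIs – can be sketched in a few lines of Python. This is purely illustrative: the stopword list, term-extraction logic and endpoint URLs below are my simplified guesses, not the project's actual code (which is on GitHub).

```python
import re
from collections import Counter

# A handful of common English stopwords; a real list would be much longer.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "in", "on", "to", "is",
             "are", "was", "were", "it", "for", "with", "as", "by", "that",
             "this", "from", "at", "be", "has", "have"}

def extract_key_terms(text, limit=5):
    """Pull the most frequent non-stopword terms from free text."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [term for term, _ in counts.most_common(limit)]

def build_query_urls(terms):
    """Fan the extracted terms out to collection search APIs.

    The URL shapes here are illustrative placeholders, not the
    endpoints Serendip-o-matic actually queries.
    """
    query = "+".join(terms)
    return {
        "dpla": "https://api.example-dpla.org/items?q=" + query,
        "europeana": "https://api.example-europeana.eu/search?query=" + query,
    }

sample = ("The lighthouse keeper kept a diary of ships "
          "passing the lighthouse at night.")
terms = extract_key_terms(sample)
print(terms)  # 'lighthouse' appears twice, so it should lead the list
print(build_query_urls(terms)["dpla"])
```

The results returned for those queries would then be shuffled together and presented back to the user – the point being discovery rather than precision, which is why such a rough-and-ready extraction step is good enough.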

Highs and lows, day four of OWOT

If you'd asked me at 6pm, I would have said I'd be way too tired to blog later, but it felt like a shame to break my streak at this point. Today was hard work and really tiring – lots to do, lots of finicky tech issues to deal with, some tricky moments to work through – but particularly after regrouping back at the hotel, the dev/design team powered through some of the issues we'd butted heads against earlier and got some great work done. Tomorrow will undoubtedly be stressful and I'll probably triage tasks like mad, but I think we'll have something good to show you.

As I left the hotel this morning I realised an intense process like this isn't just about rapid prototyping – it's also about rapid trust. When there's too much to do and barely any time for communication, let alone checking someone else's work, you just have to rely on others to get the bits they're doing right, and rely on goodwill to guide the conversation if you need to tweak things a bit. It can be tricky when you're working out, as you go, where everyone's sense of the boundaries between different areas lies, but being able to trust people in that way is a brilliant feeling. At the end of a long day, I've realised it's also very much about deciding which issues you're willing to spend time finessing, and when you're happy to hand over to others or aim for a first draft that's good enough to go out, with the intention of tweaking it if you ever get time. I'd asked in the past whether a museum's obsession with polish hinders innovation, so I can really appreciate how freeing it is to work in an environment where getting a product that works – let alone something really good – out in the time available is a major achievement.

Anyway, enough talking. Amrys has posted about today already, and I expect that Jack or Brian probably will too, so I’m going to hand over to some tweets and images to give you a sense of my day. (I’ve barely had any time to talk to or get to know the Outreach team so ironically reading their posts has been a lovely way to check in with how they’re doing.)

Our GitHub repository punch card report tells the whole story of this week – from nothing to huge levels of activity on the app code

I keep looking at the #OWOT commits and clapping my hands excitedly. I am a great. big. dork.
— Mia (@mia_out) August 1, 2013

OH at #owot ‘I just had to get the hippo out of my system’ (More seriously, so exciting to see the design work that’s coming out!)
— Mia (@mia_out) August 1, 2013

OH at #OWOT ‘I’m not sure JK Rowling approves of me’. Also, an earlier unrelated small round of applause. Progress is being made.
— Mia (@mia_out) August 1, 2013

#OWOT #owotleaks it turns out our mysterious thing works quite well with song lyrics.
— Mia (@mia_out) August 1, 2013

Halfway through. Day three of OWOT.

Crikey. Day three. Where do I start?

We've made great progress on our mysterious tool. And it has a name! Some cool design motifs are flowing from that, which in turn means we can really push the user experience design issues over the next day and a half (though we've already been making lots of design decisions on the hoof so we can keep dev moving). The Outreach team have also been doing some great communications work, including a press release, and have lots more in the pipeline. The Dev/Design team did a demo of our work for the Outreach team before dinner – there are lots of little things to fix, but the general framework of the tool works as it should. It's amazing how far we've come since lunchtime yesterday. We still need to do a full deployment (server issues, blah blah), and I'll feel a lot better once we've got that process running smoothly, so that we can keep deploying as we finish major features up to a few hours before launch rather than doing it at the end in a mad panic. I don't know how people managed code before source control – not only does GitHub manage versions, it makes pulling in code from different people so much easier.

There's lots to tackle on many different fronts, and it may still end up in a mad rush at the end, but right now the Dev/Design team is humming along. I've been so impressed with the way people have coped with some pretty intense requirements for working with unfamiliar languages or frameworks, and with high levels of uncertainty in a chaotic environment. I'm trying to keep track of things in GitHub (with Meghan and Brian as brilliant 'got my back' PMs) and keep the key current tasks on a whiteboard so that people know exactly what they need to be getting done at any time. Now that the Outreach team have worked through the key descriptive texts, name and tagline, we'll need to coordinate content production – particularly documentation and the microcopy that guides people through the process – really closely. That will probably get tricky as time is short and our tasks are many, but given the people gathered together for OWOT, I have faith that we'll make it work.

Things I have learnt today: despite two years working on a PhD in digital humanities/digital history, I still have a brain full of technical stuff – it's a relief to realise it hasn't atrophied through lack of use. I've also realised how much the work I've done designing workshops and teaching since starting my PhD has fed into how I work with teams, though it's hard right now to quantify exactly *how*. Finally, it's re-affirmed just how much I like making things – but also that it's important to make those things in the company of people who are interdisciplinary and scholarly (or at least thoughtful) about subjects beyond tech, and ideally to make things that engage the public as well as researchers. As the end of my PhD approaches, it's been really useful to step back into this world for a week, and I'll definitely draw on it when figuring out what to do after the PhD. If someone could just start a CHNM in the UK, I'd be very happy.

I still can't tell you what we're making, but I *can* tell you that one of the photos in this post contains a clue (and they definitely have nothing to do with mild lightheadedness at the end of a long day).

And so it begins: day two of OWOT

Day two of One Week, One Tool. We know what we’re making, but we’re not yet revealing exactly what it is. (Is that mean? It’s partly a way of us keeping things simple so we can focus on work.) Yesterday (see Working out what we’re doing: day one of One Week, One Tool) already feels like weeks ago, and even this morning feels like a long time ago. I can see that my posts are going to get less articulate as the week goes on, assuming I keep posting. I’m not sure how much value this will have, but I suppose it’s a record of how fast you can move in the right circumstances…

We spent the morning winnowing the ideas we'd put up for feedback overnight down from about 12 to 4, then 3, then 2, then… It's really hard killing your darlings, and it's also difficult choosing between ideas that sound equally challenging or fun or worthy. There was a moment when we literally wiped the ideas that had been ruled out from the whiteboard, and it felt oddly momentous. In the end, the two final choices both felt like approaches to the same thing – perhaps because we'd talked about them for so long that they started to merge (consciously or not), or because they both fell into a sweet spot of being accessible to a wide audience and having something to do with discovering new things about your research (which was the last thing I tweeted before we made our decision and decided to keep things in-house for a while). Finally, eventually, we had enough of a critical mass behind one idea to call it the winner.

Personally, the decision only started to feel real as we walked back from lunch. It's daunting but exciting. Once back in the room, we discussed the chosen idea a bit more and I got a bit UX/analyst-y and sketched stuff on a whiteboard. I'm always a bit obsessed with sketching as a way to make sure everyone has a more concrete picture (or shared mental model) of what the group is talking about, and for me it also served as a quick test of the technical viability of the idea. CHNM's Tom Scheinfeldt then had the unenviable task of corralling/coaxing/guiding us into project management, dev/design and outreach teams. Meghan Frazer and Brian Croxall are project managing; I'm dev/design team lead, with Scott Kleinman, Rebecca Sutton Koeser, Amy Papaelias, Eli Rose, Amanda Visconti and Scott Williams (and in the hours since then I have discovered that they all rock and bring great skills to the mix); and Jack Dougherty is leading the outreach team of Ray Palin and Amrys Williams in their tasks of marketing, community development, project outreach, grant writing and documentation. Amrys and Ray are also acting as user advocates, and they've all contributed user stories to help us clarify our goals. Lots of people will be floating between teams, chipping in where needed and helping manage communication between teams.

The Dev/Design team began with a skills audit so that we could figure out who could do what on the front- and back-end, which in turn fed into our platform decision (basically PHP or Python; Python won), then a quick list of initial tasks that would act as further reality checks on the tool and our platform choice. The team is generally working in pairs on parallel tasks so that we're always moving forward on the three main functional areas of the tool, and to make merging updates on GitHub simpler. We're also using existing JavaScript libraries and CSS grids to make the design process faster. I then popped over to the Outreach team to check in on the descriptions and potential user stories they were discussing. Meghan and Brian got everyone back together at the end of the day, and the dev/design team had a chance to feed back on the outreach team's work (which also provided a very ad hoc form of requirements elicitation, and started some important conversations that further shaped the tool). Then it was back over to the hotel lobby, where we'd planned to have a dev/design team meeting before dinner, but when two of our team were kidnapped by a shuttle driver (well, sorta) we ended up working through some of the tasks for tomorrow. We're going to have agile-style stand-up meetings twice a day, with the aim of giving people enough time to get stuck into tasks while still keeping an eye on progress, with a forum to help deal with any barriers or issues. Some ideas will inevitably fall by the wayside, but because the OWOT project is designed to run over a year, we can put ideas on a wishlist for future funded development, leave hooks for other developers to expand on, or revisit them once we're back home. In hack day mode I tend to plan so that there's enough working code that you have something to launch, then go back and expand features in the code and polish the UX with any time left. Is this the right approach here? Time will tell.

#owot dev team is hard at work. #fb pic.twitter.com/Zj5PW0Kj2a
— Brian Croxall (@briancroxall) July 31, 2013