Finding museum, digital humanities and public history projects and communities online

Every once in a while I see someone asking for sources on digital, participatory and social media projects around museums, public history, social history, etc., but I don't always have a moment to reply. To make it easier to help people, here's a quick collection of good places to get started.

I think the best source for museums and digital/social media projects is the site and community around the Museums and the Web conference, including the 'Best of the Web' nominations and awards (1997–2012) and the conference proceedings (2012, 2011, and 2010 back to 1987).

Other projects might be listed at the new Digital Humanities Awards (nominations closed on the 11th so presumably they'll publish the list of nominees soon) or the (US) National Council on Public History Awards. The Digital Humanities conferences also include some social history, public history and participatory projects e.g. DH2012, as did the first Digital Humanities Australasia conference and the MCG's UK Museums on the Web conference reports.

To start finding online communities, look for people tweeting with #dhist, #digitalhumanities, #lodlam, #drinkingaboutmuseums, #musetech (and variations) or join the Museums Computer Group or the Museum Computer Network lists (or check their archives).

I'd like to add a list of museum bloggers (whether they focus on social media, technology, education, exhibition design, audience research, etc) but don't know of any comprehensive, up-to-date lists (or delicious etc tags).  (Though since I originally posted @gretchjenn pointed me to the new 'Meet a museum blogger' series and @alexandrematos told me about Cultural blogging in Europe which includes a map of the European cultural blogging scene.) Where do you look for museum bloggers?

This is only a start, so please chip in!  Add any resources I'm missing in the comments below, or tweet @mia_out.

Keeping corridors clear of dragons (on agency and digital humanities tools)

A while ago I posted 'Reflections on teaching Neatline', which was really about growing pains in the digital humanities. I closed by asking 'how do you balance the need for fast-moving innovative work-in-progress to be a bit hacky and untidy around the edges with the desires of a wider group of digital humanities-curious scholars [for stable, easy-to-use software]? Is it ok to say 'here be dragons, enter at your own risk'?' Looking back, I started thinking about this in terms of museum technologists (in Museum technologists redux: it's not about us), but there I was largely thinking of audiences, and slightly less of colleagues within museums or academia. I'm still not sure if this is a blog post or just an extended comment on those posts, but either way, this is an instance of posting-as-thinking.

Bethany Nowviskie has problematised and contextualised some of these issues in the digital humanities far more elegantly than I could, in an invited talk at the MLA 2013 conference. You should go read the whole thing at 'resistance in the materials', but I want to quickly highlight some of her points here.

She quotes William Morris: '…you can’t have art without resistance in the material. No! The very slowness with which the pen or the brush moves over the paper, or the graver goes through the wood, has its value. And it seems to me, too, that with a machine, one’s mind would be apt to be taken off the work at whiles by the machine sticking or what not' and discusses her realisation that:

"Morris’s final, throwaway complaint is not about that positive, inherent resistance—the friction that makes art—which we happily seek within the humanities material we practice upon. It’s about resistance unhealthily and inaccessibly located in a toolset. … precisely this kind of disenfranchising resistance is the one most felt by scholars and students new to the digital humanities. Evidence of friction in the means, rather than the materials, of digital humanities inquiry is everywhere evident."

And she includes an important call to action for digital humanities technologists: "we diminish our responsibility to address this frustration by naming it the inevitable “learning curve” of the digital humanities. Instead, we might confess that among the chief barriers to entry are poorly engineered and ineptly designed research tools and social systems". Her paper is also a call for a more nuanced understanding and greater empathy from tool-builders toward those who are disenfranchised by tools they didn't create and can't hack to fit their needs. It's too easy to forget that an application or toolset that looks like something I can happily pick up and play with to make it my own may well look as unfathomable and un-interrogable as the case of a mobile phone to someone else.

Digital humanities is no longer a cosy clubhouse, which can be uncomfortable for people who'd finally found an academic space where they felt at home. But DH is also causing discomfort for other scholars as it encroaches on the wider humanities, whether it's as a funding buzzword, as a generator of tools and theory, or as a mode of dialogue. This discomfort can only be exacerbated by the speed of change, but I suspect that fear of the unknown demands of DH methods or anxiety about the mental capabilities required are even more powerful*. (And some of it is no doubt a reaction to the looming sense of yet another thing to somehow find time to figure out.) As Sharon Leon points out in 'Digital Methods for Mid-Career Avoiders?', digital historians are generally 'at home with the sense of uncomfortableness and risk of learning new methods and approaches' and can cope with 'a feeling of being at sea while figuring out something completely new', while conversely 'this kind of discomfort is simply too overwhelming for historians who are defined by being the expert in their field, being the most knowledgeable, being the person who critiques the shortfalls of the work of others'.

In reflecting on March 2012's Digital Humanities Australasia and the events and conversations I've been part of over the last year, it seems that we need ways of characterising the difference between scholars who use digital methods and materials to increase their productivity (swapping card catalogues for online libraries, or typewriters for Word) without fundamentally interrogating their new working practices, and those who charge ahead, inventing tools and methods to meet their needs. It should go without saying that any characterisations should not unfairly or pejoratively label either group (and those in-between).

Going beyond the tricky 'on-boarding' moments I talked about in 'Reflections on teaching Neatline', digital humanities must consider the effect of personal agency in relation to technology, issues in wider society that affect access to 'hack' skills and what should be done to make the tools, or the means, of DH scholarship more accessible and transparent. Growing pains are one thing, and we can probably all sympathise with an awkward teenage phase, but as digital humanities matures as a field, it's time to accept our responsibility for the environment we're creating for other scholars. Dragons are fine in the far reaches of the map where the adventurous are expecting them, but they shouldn't be encountered in the office corridor by someone who only wanted to get some work done.

* Since posting this, I've read Stephen Ramsay's 'The Hot Thing', which expresses more anxieties about DH than I've touched on here: 'Digital humanities is the hottest thing in the humanities. … So it is meet and good that we talk about this hot thing. But the question is this: Are you hot?'. But even here, do technologists and the like have an advantage? I'm used to (if not reconciled to) the idea that every few years I'll have to learn another programming language and new design paradigms just to keep up; but even I'm glad I don't have to keep up with the number of frameworks that front-end web developers do, so perhaps not?

Museums, Libraries, Archives and the Digital Humanities – get involved!

The short version: if you've got ideas on how museums, libraries and archives (i.e. GLAM) and the digital humanities can inspire and learn from each other, it's your lucky day! Go add your ideas about concrete actions the Association for Computers and the Humanities can take to bring the two communities together or suggestions for a top ten 'get started in museums and the digital humanities' list (whether conference papers, journal articles, blogs or blog posts, videos, etc) to: 'GLAM and Digital Humanities together FTW'.

Update, August 23, 2012: the document is shaping up to be largely about ‘what can be done’ – which issues are shared by GLAMs and DH, how can we reach people in each field, what kinds of activities and conversations would be beneficial, how do we explain the core concepts and benefits of each field to the other? This suggests there’d be a useful second stage in focusing on filling in the detail around each of the issues and ideas raised in this initial creative phase. In the meantime, keep adding suggestions and sharing issues at the intersection of digital humanities and memory institutions.

A note on nomenclature: the genesis of this particular conversation was among museumy people so the original title of the document reflects that; it also reflects the desire to be practical and start with a field we knew well. The acronym GLAM (galleries, libraries, archives and museums) neatly covers the field of cultural heritage and the arts, but I'm never quite sure how effective it is as a recognisable call-to-action.  There's also a lot we could learn from the field of public history, so if that's you, consider yourself invited to the party!

The longer version: in an earlier post from July's Digital Humanities conference in Hamburg I mentioned that a conversation over twitter about museums and digital humanities led to a lunch with @ericdmj, @clairey_ross, @briancroxall and @amyeetx, where we discussed simple ways to help digital humanists get a sense of what can be learnt from museums on topics like digital projects, audience outreach, education and public participation. It turns out the Digital Humanities community is also interested in working more closely with museums, as demonstrated by the votes for point 3 of the Association for Computers and the Humanities (ACH)'s 'Next Steps' document, "to explore relationships w/ DH-sympathetic orgs operating beyond the academy (Museum Computer Network, Nat'l Council on Public History, etc)". At the request of ACH's Bethany Nowviskie (@nowviskie) and Stéfan Sinclair (@sgsinclair), Eric D. M. Johnson and I had been tossing around some ideas for concrete next steps, working up to asking people at the intersection of GLAM and DH for their input.

However, last night a conversation on twitter about DH and museums (prompted by Miriam Posner's tweet asking for input on a post 'What are some challenges to doing DH in the library?') suddenly took off, so I seized the moment by throwing the outline of the document Eric and I had been tinkering with onto Google docs. It was getting late in the UK so I tweeted the link and left it open for anyone to edit. I came back the next morning to find lots of useful and interesting comments and additions, and a whole list of people who are interested in continuing the conversation. Even better, people have continued to add to it today and it's already a good resource. If you weren't online at that particular time it was easy to miss, so this post is partly to act as a more findable marker for the conversation about museums, libraries, archives and the digital humanities.

Explaining the digital humanities to GLAMs

This definition was added to the document overnight.  If you're a GLAM person, does it resonate with you or does it need tweaking?

"The broadest definition would be 1) using digital technologies to answer humanities research questions, 2) studying born digital objects as a humanist would have studied physical objects, and or 3) using digital tools to transform what scholarship is by making it more accessible on the open web."

How can you get involved?

Off the top of my head…

  • Add your name to the list of people interested in keeping up with the conversation
  • Read through the suggestions already posted; if you love an idea that's already there, say so!
  • Read and share the links already added to the document
  • Suggest specific events where GLAM and DH people can mingle and share ideas/presentations
  • Suggest specific events where a small travel bursary might help get conversations started
  • Offer to present on GLAMs and DH at an event
  • Add examples of digital projects that bridge the various worlds
  • Add examples of issues that bridge the various worlds
  • Write case studies that address some of the issues shared by GLAMs and DH
  • Spread the word via specialist mailing lists or personal contacts
  • Share links to conference papers, journal articles, videos, podcasts, books, blog posts, etc, that summarise some of the best ideas in ways that will resonate with other fields
  • Consider attending or starting something like Decoding Digital Humanities to discuss issues in DH. (If you're in or near Oxford and want to help me get one started, let me know!)
  • Something else I haven't thought of…

I'm super-excited about this because everyone wins when we have better links between museums and digital humanities. Personally, I've spent a decade working in various museums (and their associated libraries and archives) and my PhD is in Digital Humanities (or more realistically, Digital History), and my inner geek itches to find an efficient solution when I see each field asking some of the same questions, or asking questions the other field has been working to answer for a while.  This conversation has already started to help me discover useful synergies between GLAMs and DH, so I hope it helps you too.

Update, November 2012: as a result of discussions around this document/topic, the Museums Computer Group (MCG) and the Association for Computers and the Humanities (ACH) worked together to create 5 bursaries from the ACH for tickets to the MCG's UK Museums on the Web conference.

Messiness, museums and methods: thoughts from #DH2012 so far…

I'm in Hamburg for the 2012 Digital Humanities conference.  The conference only officially started last night, but after two days of workshops and conversations I already feel like my brain is full, so this post is partly a brain dump to free up some space for new ideas.

The first workshop was one I ran on ‘Learning to play like a programmer: Web mash-ups and scripting for beginners’ – I've shared my slides and notes at that link, as well as links for people to find out more about starting with basic code and computational thinking and to keep learning.

The second workshop, Here and There, Then and Now – Modelling Space and Time in the Humanities, was almost a mini-conference in itself. The wiki for the NeDIMAH Space-Time Working Group includes links to abstracts for papers presented at the workshop, which are also worth a look for pointers to interesting projects in the spatial humanities. The day also included break-out sessions on Theory, Methods, Tools and Infrastructure.

The session I chaired on Methods was a chance to think about the ways in which tools are instantiations of methods.  If the methods underlying tools aren't those of humanists, or aren't designed suitably for glorious but messy humanities data, are they suitable for humanities work? If they're not suitable, then what?  And if they're used anyway, how do humanists learn when to read a visualisation 'with a grain of salt' and distinguish the 'truthiness' of something that appears on a screen from the complex process of selecting and tidying sources that underlies it?  What are the implications of this new type of digital literacy for peer reviews of DH work (whether work that explicitly considers impact of digitality on scholarly practice, or work that uses digital content within more traditional academic frameworks)?  How can humanists learn to critique tool choice in the same way they critique choice of sources?  Humanists must be able to explain the methods behind the tools they've used, as they have such a critical impact on the outcomes. 

[Update: 'FairCite' is an attempt to create 'clear citation guidelines for digital projects that acknowledge the collaborative reality of these undertakings' for the Alliance of Digital Humanities Organizations.]

We also discussed the notion of academic publications designed so that participation and interaction is necessary to unlock the argument or narrative they represent, so that the reader is made aware of the methods behind the tools by participating in their own interpretive process.  How do we get to have 'interactive scholarly works' in academia – what needs to change to enable them?  How are they reviewed, credited, sustained?  And what can we learn from educators and museum people about active reading, participation and engagement?

Our group also came up with the idea of methods as a bridge between different experts (technologists, etc) and humanists, a place for common understanding (generated through the process of making tools?), and I got to use the phrase 'the siren's lure of the shiny tool', which was fun.  We finished on a positive note with mention of the DH Commons as a place to find a technologist or a humanist to collaborate with, but also to find reviewers for digital projects.

Having spent a few days thinking about messy data, tweets about a post on The inevitable messiness of digital metadata were perfectly timed.  The post quotes Neil Jeffries from the Bodleian Library, who points out:

we need to capture additional metadata that qualifies the data, including who made the assertion, links to differences of scholarly opinion, omissions from the collection, and the quality of the evidence. "Rather than always aiming for objective statements of truth we need to realise that a large amount of knowledge is derived via inference from a limited and imperfect evidence base, especially in the humanities," he says. "Thus we should aim to accurately represent the state of knowledge about a topic, including omissions, uncertainty and differences of opinion."

and concludes "messiness is not only the price we pay for scaling knowledge aggressively and collaboratively, it is a property of networked knowledge itself".  Hoorah!

What can the digital humanities learn from museums?

After a conversation over twitter, a few of us (@ericdmj, @clairey_ross, @briancroxall, @amyeetx) went for a chat over lunch.  Our conversation was wide-ranging, but one practical outcome was the idea of a 'top ten' list of articles, blog posts and other resources that would help digital humanists get a sense of what can be learnt from museums on topics like digital projects, audience outreach, education and public participation.  Museum practitioners are creating spaces for conversations about failures, a topic which popped up in the #DH2012 twitter stream.

So which conference papers, journal articles, blogs or blog posts, etc, would you suggest for a top ten 'get started in museums and the digital humanities' list?

[For further context, the Digital Humanities community is interested in working more closely with museums: see point 3 of the Association for Computers and the Humanities (ACH)'s 'Next Steps' document.]

Both technologist and humanist in the academic digital humanities?

I've been reading Andrew Prescott's excellent Making the Digital Human: Anxieties, Possibilities, Challenges:

…in Britain the problem is I think that the digital humanities has failed to develop its own distinctive intellectual agendas and is still to all intents and purposes a support service. The digital humanities in Britain has generally emerged from information service units and has never fully escaped these origins. Even in units which are defined as academic departments, such as my own in King’s, the assumption generally is that the leading light in the project will be an academic in a conventional academic department. The role of the digital humanities specialists in constructing this project is always at root a support one. We try and suggest that we are collaborating in new ways, but at the end of the day a unit like that at King’s is simply an XML factory for projects led by other researchers. 

Beyond the question of how and why digital people are pushed into support roles in digital humanities projects, I've also been wondering whether the academic world actually allows one to simultaneously be a technologist and a humanist.  This is partly because I'm still mulling over the interactions between different disciplines at a recent research institute and partly because of a comment about a recently advertised 'digital historian' job that called it "'Digital Historian' as slave to real thing – no tenure, no topic, no future".

The statement 'no topic' particularly stood out.  I'm not asking whether it's possible for someone to be a good historian and a good programmer (for example) because clearly some people are both, but rather whether hiring, funding, training and academic structures allow one to be both technologist and humanist.  Can one propose both a data architecture and a research question?

It may simply be that people with specialist skills are leant on heavily in a project because their skills are vital for its success, but does this mean an individual is corralled into one type of work to the exclusion of others?  If you are the only programmer-historian in a group of historians, do you only get to be a programmer, and vice versa?  Are there academic roles that truly make the most of both aspects of the humanist technologist?

And does this mean, as Prescott says, that 'intellectually, the digital humanities is always reactive'?

Catch the wind? (Re-post from Polis blog on Spatial Narratives and Deep Maps)

[This post was originally written for the Polis Center's blog.]

Our time at the NEH Institute on Spatial Narratives & Deep Maps is almost at an end.  The past fortnight feels both like it’s flown by and like we’ve been here for ages, which is possibly the right state of mind for thinking about deep maps.  After two weeks of debate, deep maps still seem definable only when glimpsed in the periphery and yet not-quite defined when examined directly.  How can we capture the almost-tangible shape of a truly deep map that we can only glimpse through social constructs, the particular contexts of creation and usage, disciplines and the models embedded in current technology?  If deep maps are an attempt to get beyond the use of location-as-index and into space-as-experience, can that currently be done more effectively on a screen, or does covering a desk in maps and documents actually allow deeper immersion in a space at a particular time?

We’ve spent the past three days working in teams to prototype different interfaces to deep maps or spatial narratives, and each group presented their interfaces today. It’s been immensely fun and productive and also quite difficult at times.  It’s helped me realise that deep maps and spatial narratives are not dichotomous but exist on a scale – where do you draw the line between curating data sources and presenting an interpreted view of them?  At present, a deep map cannot be a recreation of the world, but it can be a platform for immersive thinking about the intersection of space, time and human lives.  At what point do you move from using a deep map to construct a spatial and temporal argument to using a spatial narrative to present it?

The experience of our team (the Broadway team) reinforces Stuart’s point about the importance of the case study.  We uncovered foundational questions whilst deep in the process of constructing interfaces: is a deep map a space for personal exploration, comparison and analysis of sources, or is it a shared vision that is personalised through the process of creating a spatial narrative?  We also attempted to think through how multivocality translates into something on a screen, and how interfaces that can link one article or concept to multiple places might work in reality, and in the process re-discovered that each scholar may have different working methods, but that a clever interface can support multivocality in functionality as well as in content.

Halfway through 'deep maps and spatial narratives' summer institute

I'm a week and a bit into the NEH Institute for Advanced Topics in the Digital Humanities on 'Spatial Narrative and Deep Maps: Explorations in the Spatial Humanities', so this is a (possibly self-indulgent) post to explain why I'm over in Indianapolis and why I only seem to be tweeting with the #PolisNEH hashtag.  We're about to dive into three days of intense prototyping before wrapping things up on Friday, so I'm posting almost as a marker of my thoughts before the process of thinking-through-making makes me re-evaluate our earlier definitions.  Stuart Dunn has also blogged more usefully on Deep maps in Indy.

We spent the first week hearing from the co-directors David Bodenhamer (history, IUPUI), John Corrigan (religious studies, Florida State University), and Trevor Harris (geography, West Virginia University), from guest lecturers Ian Gregory (historical GIS and digital humanities, Lancaster University) and May Yuan (geonarratives, University of Oklahoma), and from selected speakers at the Digital Cultural Mapping: Transformative Scholarship and Teaching in the Geospatial Humanities program at UCLA. We also heard about the other participants' projects and backgrounds, and tried to define 'deep maps' and 'spatial narratives'.

It's been pointed out that as we're at the 'bleeding edge', visions for deep mapping are still highly personal. As we don't yet have a shared definition I don't want to misrepresent people's ideas by summarising them, so I'm just posting my current definition of deep maps:

A deep map contains geolocated information from multiple sources that convey their source, contingency and context of creation; it is both integrated and queryable through indexes of time and space.  

Essential characteristics: it can be a product, whether as a static snapshot map or as layers of interpretation with signposts, pre-set interactions and narrative, but it is always visibly a process.  It allows open-ended exploration (within the limitations of the data available and the curation processes and research questions behind it) and supports serendipitous discovery of content. It supports curiosity. It supports arguments but allows them to be interrogated through the mapped content. It supports layers of spatial narratives but does not require them. It should be compatible with humanities work: it's citable (e.g. it provides a URL that shows the view used to construct an argument) and provides access to its sources, whether as data downloads or citations. It can include different map layers (e.g. historic maps) as well as different data sources. It could be topological as well as cartographic.  It must be usable at different scales: e.g. in the user interface, when zoomed out it provides a sense of the density of the information within; e.g. as space, it can deal with different levels of granularity.
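As a throwaway illustration of the citability point (my sketch, not something from the Institute): the current view of a deep map could be serialised into a URL so that the exact state behind an argument can be cited in a footnote. The domain and parameter names below are invented.

```python
# Sketch: encode a deep map's view state (space, time slice, layers) as a
# citable URL. Everything here is hypothetical, for illustration only.
from urllib.parse import urlencode

view_state = {
    "lat": 39.7684, "lon": -86.1581, "zoom": 15,   # central Indianapolis
    "from": "1900-01-01", "to": "1930-12-31",      # the time slice shown
    "layers": "sanborn-1915,congregations,oral-histories",
}
citation_url = "https://deepmap.example.org/view?" + urlencode(view_state)
print(citation_url)  # paste into a footnote to cite this exact view
```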

Essential functions: it must be queryable and browseable.  It must support large, variable, complex, messy, fuzzy, multi-scalar data. It should be able to include entities such as real and imaginary people and events as well as places within spaces.  It should support both use for presentation of content and analytic use. It should be compelling – people should want to explore other places, times, relationships or sources. It should be intellectually immersive and support 'flow'.

Looking at it now, the first part is probably pretty close to how I would have defined it at the start, but my thinking about what this actually means in terms of specifications is the result of the conversations over the past week and the experience everyone brings from their own research and projects.

For me, this Institute has been a chance to hang out with ace people with similar interests and different backgrounds – it might mean we spend some time trying to negotiate discipline-specific language but it also makes for a richer experience.  It's a chance to work with wonderfully messy humanities data, and to work out how digital tools and interfaces can support ambiguous, subjective, uncertain, imprecise, rich, experiential content alongside the highly structured data GIS systems are good at.  It's also a chance to test these ideas by putting them into practice with a dataset on religion in Indianapolis and learn more about deep maps by trying to build one (albeit in three days).

As part of thinking about what I think a deep map is, I found myself going back to an embarrassingly dated post on ideas for location-linked cultural heritage projects:

I've always been fascinated with the idea of making the invisible and intangible layers of history linked to any one location visible again. Millions of lives, ordinary or notable, have been lived in London (and in your city); imagine waiting at your local bus stop and having access to the countless stories and events that happened around you over the centuries. … The nice thing about local data is that there are lots of people making content; the not nice thing about local data is that it's scattered all over the web, in all kinds of formats with all kinds of 'trustability', from museums/libraries/archives, to local councils to local enthusiasts and the occasional raving lunatic. … Location-linked data isn't only about official cultural heritage data; it could be used to display, preserve and commemorate histories that aren't 'notable' or 'historic' enough for recording officially, whether that's grime pirate radio stations in East London high-rise roofs or the sites of Turkish social clubs that are now new apartment buildings. Museums might not generate that data, but we could look at how it fits with user-generated content and with our collecting policies.

Amusingly, four years ago my obsession with 'open sourcing history' was apparently already well-developed and I was asking questions about authority and trust that eventually informed my PhD – questions I hope we can start to answer as we try to make a deep map.  Fun!

Finally, my thanks to the NEH and the Institute organisers and the support staff at the Polis Center and IUPUI for the opportunity to attend.

Slow and still dirty Digital Humanities Australasia notes: day 3

These are my very rough notes from day 3 of the inaugural Australasian Association for Digital Humanities conference (see also Quick and dirty Digital Humanities Australasia notes: day 1 and Quick and dirty Digital Humanities Australasia notes: day 2), held at the Australian National University in Canberra at the end of March.

We were welcomed to Day 3 by the ANU's Professor Marnie Hughes-Warrington (who expressed her gratitude for the methodological and social impact of digital humanities work) and Dr Katherine Bode.  The keynote was Dr Julia Flanders on 'Rethinking Collections', AKA 'in praise of collections'… [See also Axel Bruns' live blog.]

She started by asking: what do we mean by a 'collection'? What's the utility of the term? What's the cultural significance of collections? The term speaks of agency and motive, and implies the existence of a collector who creates order through selectivity. Sites like eBay, Flickr and Pinterest are responding to a weirdly deep-seated desire to reassert the ways in which things belong together. The term 'collection' implies that a certain kind of completeness may be achieved. Each item is important in itself and also in relation to other items in the collection.

There's a suite of expected activities and interactions in the genre of digital collections, projects, etc. They're deliberate aggregations of materials that bear, and demand, individual scrutiny. Attention is given to the value of scale (and distant reading), which reinforces the aggregate approach…

She discussed the value of deliberate scope, deliberate shaping of collections, not craving 'everythingness'. There might also be algorithmically gathered collections…

She discussed collections she's involved with – TAPAS, DHQ, Women Writers Online – all using flavours of TEI, the same publishing logic and component stack, providing the same functionality in the service of the same kinds of activities, though they work with different materials for different purposes.

What constitutes a collection? How are curated collections different to user-generated content or just-in-time collections? Back 'then', collections were things you wanted in your house or wanted to see in the same visit. What does the 'now' of collections look like? Decentralisation in collections 'now'… technical requirements are part of the intellectual landscape, part of larger activities of editing and design. A crucial characteristic of collections is variety of philosophical urgency they respond to.

The electronic operates under the sign of limitless storage… potentially boundless inclusiveness. Design logic is a craving for elucidation, more context, the ability for the reader to follow any line of thought they might be having and follow it to the end. Unlimited informational desire, closing in of intellectual constraints. How do boundedness and internal cohesion help define the purpose of a collection? Deliberate attempt at genre not limited by technical limitations. Boundedness helps define and reflect philosophical purpose.

What do we model when we design and build digital collections? We're modelling the agency through which the collection comes into being and is sustained through usage. Design is a collection of representational practices: item selection, item boundaries and contents. There's a homogeneity in the structure, the markup applied to items. Item-to-item interconnections – there's the collection-level 'explicit phenomena', the directly comparable metadata through which we establish cross-sectional views through the collection (e.g. by Dublin Core fields), which reveal things we already know about texts – authorship of an item, etc. There's also collection-level 'implicit phenomena' – informational commonalities, patterns that emerge or are revealed through inspection; these change shape imperceptibly through how data is modelled or through the software used [not sure I got that down right]; they're always motivated so always have a close connection with method.

Readerly knowledge – what can the collection assume about what the reader knows? A table of contents is only useful if you can recognise the thing you want to find in it – they're not always self-evident. How does the collection's modelling affect us as readers? Consider the effects of choices on the intellectual ecology of the collection, including its readers. Readerly knowledge has everything to do with what we think we're doing in digital humanities research.

The Hermeneutics of Screwing Around (pdf). Searching produces a dynamically located just-in-time collection… Search is an annoying guessing game with a passive-aggressive collection. But we prefer to ask a collection to show its hand in a useful way (i.e. browse)… Search -> browse -> explore.

What's the cultural significance of collections? She referenced Liu's Sidney's Technology… A network as flow of information via connection, perpetually ongoing contextualisation; a patchwork is understood as an assemblage, it implies a suturing together of things previously unrelated. A patchwork asserts connections by brute force. A network assumes that connections are there to be discovered, connected to. Patchwork, mosaic – connects pre-existing nodes that are acknowledged to be incommensurable.

We avow the desirability of the network, yet we're aware of the itch of edge cases, data that can't be brought under rule. What do we treat as noise and what as signal, what do we deny is the meaning of the collection? Is exceptionality or conformance to type the most significant case? On twitter, @aylewis summarised this as 'Patchworking metaphor lets us conceptualise non-conformance as signal not noise'

Pay attention to the friction in the system, rather than smoothing it over. Collections both express and support analysis. Expressing theories of genre etc in internal modelling… Patchwork – the collection articulates the scholarly interest that animated its creation but also interests of the reader… The collection is animated by agency, is modelled by it, even while it respects the agency we bring as readers. Scholarly enquiry is always a transaction involving agency on both ends.

My (not very good) notes from discussion afterwards… there was a question about digital femmage; discussion of the tension between the desire for transparency and the desire to permit many viewpoints on material while not disingenuously disavowing the roles in shaping the collection; the trend at one point for factoids rather than narratives (but people wanted the editors' view as a foundation for what they do with that material); the logic of the network – a collection as a set of parameters not as a set of items; Alan Liu's encouragement to continue with theme of human agency in understanding what collections are about (e.g. solo collectors like John Soane); crowdsourced work is important in itself regardless of whether it comes up with the 'best' outcome, by whatever metric. Flanders: 'the commitment to efficiency is worrisome to me, it puts product over people in our scale of moral assessment' [hoorah! IMO, engagement is as important as data in cultural heritage]; a question about the agency of objects, with the answer that digital surrogates are carriers of agency, the question is how to understand that in relation to object agency?

GIS and Mapping I

The first paper was 'Mapping the Past in the Present' by Andrew Wilson, a fast run-through of some lovely examples based on Sydney's geo-spatial history. He discussed the spatial turn in history, the mid-20th-century shift to broader scales and territories of shared experience, and the on-going concern with the description of space, its experience and management.

He referenced 'Deconstructing the map' (Harley, 1989): 'cartography is seldom what the cartographers say it is'. All maps are lies. All maps have to be read, closely or distantly. He referenced Grace Karskens' On the rocks and discussed the reality of maps as evidence, an expression of European expansion; the creation of maps is an exercise in power. Maps must be interpreted as evidence. He talked about deriving data from historic maps, using regressive analysis to go back in time through the sources. He also mentioned TGIS – time-enabled GIS – and the space-time composite model: when you have lots and lots of temporal changes, create polygons that describe every change in the sequence.
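To unpack that last idea a little (my gloss, not Wilson's): in a space-time composite, successive snapshots are intersected into the smallest units that never change internally, so each unit can carry its whole attribute history. A toy sketch, assuming the shapely library and invented geometries:

```python
# Toy illustration of a space-time composite: intersect two snapshots of a
# parcel into stable units, each carrying its full history of changes.
from shapely.geometry import box

parcel_1850 = box(0, 0, 10, 10)        # the whole lot as surveyed in 1850
east_half_1900 = box(5, 0, 10, 10)     # the part subdivided off by 1900

east = parcel_1850.intersection(east_half_1900)
west = parcel_1850.difference(east_half_1900)

# Each resulting unit is spatially stable, so its timeline can be stored
# as simple attributes rather than as a new polygon for every change.
composite = [
    {"geometry": west, "history": {1850: "residential", 1900: "residential"}},
    {"geometry": east, "history": {1850: "residential", 1900: "commercial"}},
]
```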

The second paper was 'Reading the Text, Walking the Terrain, Following the Map: Do We See the Same Landscape?' by Øyvind Eide. He said that viewing a document and seeing a landscape are often represented as similar activities… but seeing a landscape means moving around in it, being an active participant. Wood (2010) on the explosion of maps around 1500 – part of the development of the modern state. We look at older maps through modern eyes – maps weren't made for navigation but to establish the modern state.

He's done a case study on text vs maps in Scandinavia in the 1740s. What is lost in the process of converting text to maps? Context, vagueness, under-specification, negation, disjunction… It's a combination of too little and too much. Text has information that can't fit on a map, and text that doesn't provide enough information to make a map. Under-specification is when a verbal text describes a spatial phenomenon in a way that can be understood in two different ways by a competent reader. How do you map a negative feature of a landscape, i.e. things that are stated not to be there? 'Or' cannot be expressed on a map… Different media, different experiences – each can mediate only certain aspects of total reality (Elleström 2010).

The third paper was 'Putting Harlem on the Map' by Stephen Robertson. His article of the same name in 'Writing History in the Digital Age' is probably a good reference point, and the site itself is at Digital Harlem. The project sources were police files, newspapers and organisational archives. They were cultural historians, focussed on individual-level data and events – what it was like to live in Harlem. It was one of the first sites to employ the geospatial web rather than GIS software. Information was extracted and summarised from primary sources, [but] it wasn't a digitisation project. They presented their own maps and analysis apart from the site to keep it clear for other people to do their own work.  After assigning a geo-location it is then possible to compare it with other phenomena from the same space. They used sources that historians typically treat as ephemera, such as society or sports pages as well as the news in newspapers.

He showed a great list of event types they've gotten from the data… Legal categories disaggregate crime, so it appears more often in the list even though it was a minority of the data. Location types also offer a picture of the community.

Creating visualisations of life in the neighbourhood… When mapping at this detailed scale they were confronted with how vague most historical sources are about locations and how places relate to other places. 'Historians are satisfied in most cases to say that a place is "somewhere in Harlem".' He talked about visualisations as 'asking, but not explaining, why there?'.

I tweeted that I'd gotten a lot more from his demonstration of the site than I had from looking at it unaided in the past, which led to a discussion with @claudinec and @wragge about whether the 'search vs browse' accessibility issue applies to geospatial interfaces as well as text or images (i.e. what do you need to provide on the first screen to help people get into your data project?) and about the need for as many hooks into interfaces as possible, including narratives as interfaces.

Crowdsourcing was raised during the questions at the end of the session, but I've forgotten who I was quoting when I tweeted, 'by marginalising crowdsourcing you're marginalising voices', on the other hand, 'memories are complicated'.  I added my own point of view, 'I think of crowdsourcing as open source history, sometimes that's living memory, sometimes it's research or digitisation'.  If anything, the conference confirmed my view that crowdsourcing in cultural heritage generally involves participating in the same processes as GLAM staff and humanists, and that it shouldn't be exploitative or rely on user experience tricks to get participants (though having made crowdsourcing games for museums, I obviously don't have a problem with making the process easier to participate in).

The final paper I saw was Paul Vetch's 'Beyond the Lowest Common Denominator: Designing Effective Digital Resources'. He discussed the design tensions between: users and audiences (and 'production values'); ubiquity and trends; experimentation (and failure); and sustainability (and 'the deliverable').

In the past digital humanities has compartmentalised groups of users in a way that's convenient but not necessarily valid. But funding pressure to serve wider audiences means anticipating lots of different needs. He said people make value judgements about the quality of a resource according to how it looks.

Ubiquity and trends: understanding what users already use; designing for intuition. Established heuristics for web design turn out to be completely at odds with how users behave.

Funding bodies expect deliverables, this conditions the way they design. It's difficult to combine: experimentation and high production values [something I've posted on before, but as Vetch said, people make value judgements about the quality of a resource according to how it looks so some polish is needed]; experimentation and sustainability…

Who are you designing for? Not the academic you're collaborating with, and it's not to create something that you as a developer would use. They're moving away from user testing at the end of a project to doing it during the project. [Hoorah!]

Ubiquity and trends – challenges include a very highly mediated environment; highly volatile and experimental… Trying to use established user conventions becomes stifling. (He called useit.com 'old nonsense'!) The ludic and experiential are increasingly important elements in how we present our research back.

Mapping Medieval Chester took technology designed for delivering contextual ads and used it to deliver information in context without changing perspective (i.e. without reloading the page, from memory).  The Gough map was an experiment in delivering a large image but also in making people smile.  Experimentation and failure… Online Chopin Variorum Edition was an experiment. How is the 'work' concept challenged by the Chopin sources? Technical methodological/objectives: superimposition; juxtaposition; collation/interpolation…

He discussed coping strategies for the Digital Humanities: accept and embrace the ephemerality of web-based interfaces; focus on process and experience – the underlying content is persistent even if the interfaces don't last.  I think this was a comment from the audience: 'if a digital resource doesn't last then it breaks the principle of citation – where does that leave scholarship?'

Summary

So those are my notes.  For further reference I've put a CSV archive of #DHA2012 tweets from searchhash.com here, but note it's not on Australian time so it needs transposing to match the session times.

This was my first proper big Digital Humanities conference, and I had a great time.  It probably helped that I'm an Australian expat so I knew a sprinkling of people and had a sense of where various institutions fitted in, but the crowd was also generally approachable and friendly.

I was also struck by the repetition of phrases like 'the digital deluge', the 'tsunami of data' – I had the feeling there's a barely managed anxiety about coping with all this data. And if that's how people at a digital humanities conference felt, how must less-digital humanists feel?

I was pleasantly surprised by how much digital history content there was, and even more pleasantly surprised by how many GLAMy people were there, and consequently how much the experience and role of museums, libraries and archives was reflected in the conversations.  This might not have been as obvious if you weren't on twitter – there was a bigger disconnect between the back channel and conversations in the room than I'm used to at museum conferences.

As I mentioned in my day 1 and day 2 posts, I was struck by the statement that 'history is on a different evolutionary branch of digital humanities to literary studies', partly because even though I started my PhD just over a year ago, I've felt the title will be outdated within a few years of graduation.  I can see myself being more comfortable describing my work as 'digital history' in future.

I have to finish by thanking all the speakers, the programme committee, and in particular, Dr Paul Arthur and Dr Katherine Bode, the organisers and the aaDH committee – the whole event went so smoothly you'd never know it was the first one!

And just because I loved this quote, one final tweet from @mikejonesmelb: Sir Ken Robinson: 'Technology is not technology if it was invented before you were born'.

'…and they all turn on their computers and say "yay!"' (aka 'mapping for humanists')

I'm spending a few hours of my Sunday experimenting with 'mapping for humanists' with an art historian friend, Hannah Williams (@_hannahwill).  We're going to have a go at solving some issues she has encountered when geo-coding addresses in 17th and 18th Century Paris, and we'll post as we go to record the process and hopefully share some useful reflections on what we found as we tried different tools.

We started by working out what issues we wanted to address.  After some discussion we boiled it down to two basic goals: a) to geo-reference historical maps so they can be used to geo-locate addresses, and b) to generate maps dynamically from a list of addresses. This also means dealing with copyright and licensing issues along the way and thinking about how geospatial tools might fit into the everyday working practices of a historian.  (i.e. while a tool like Google Refine can easily generate maps, is it usable for people who are more comfortable with Word than relying on cloud-based services like Google Docs?  And if copyright is a concern, is it as easy to put points on an OpenStreetMap as on a Google Map?)

Like many historians, Hannah's use of maps fell into two main areas: maps as illustrations, and maps as analytic tools.  Maps used for illustrations (e.g. in publications) are ideally copyright-free, or can at least be used as illustrative screenshots.  Interactivity is a lower priority for now as the dataset would be private until the scholarly publication is complete (owing to concerns about the lack of an established etiquette and format for citation and credit for online projects).

Maps used for analysis would ideally support layers of geo-referenced historic maps on top of modern map services, allowing historic addresses to be visually located via contemporaneous maps and geo-located via the link to the modern map.  Hannah has been experimenting with finding location data via old maps of Paris in Hypercities, but manually locating 18th Century streets on historic maps then matching those locations to modern maps is time-consuming and she suspects there are more efficient ways to map old addresses onto modern Paris.

Based on my research interviews with historians and my own experience as a programmer, I'd also like to help humanists generate maps directly from structured data (and ideally to store their data in user-friendly tools so that it's as easy to re-use as it is to create and edit).  I'm not sure if it's possible to do this from existing tools or whether they'd always need an export step, so one of my questions is whether there are easy ways to get records stored in something like Word or Excel into an online tool and create maps from there.  Some other issues historians face in using mapping include: imprecise locations (e.g. street names without house numbers); potential changes in street layouts between historic and modern maps; incomplete datasets; using markers to visually differentiate types of information on maps; and retaining descriptive location data and other contextual information.
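(As an aside from the programmer's side of the table, here's roughly what the 'structured data to map' step could look like in code: a minimal sketch assuming Python with the pandas and folium libraries, and a hypothetical addresses.csv with name, lat, lon and notes columns. folium draws on OpenStreetMap tiles by default, which also sidesteps the Google Maps licensing worry.)

```python
# Sketch: generate a browsable map from a spreadsheet-style list of addresses.
# Assumes a hypothetical addresses.csv with columns: name, lat, lon, notes.
import pandas as pd
import folium

records = pd.read_csv("addresses.csv")  # e.g. saved out of Excel as CSV

paris = folium.Map(location=[48.8566, 2.3522], zoom_start=13)  # central Paris

for _, row in records.iterrows():
    folium.Marker(
        location=[row["lat"], row["lon"]],
        popup=f"{row['name']} – {row['notes']}",  # keep descriptive context with the point
    ).add_to(paris)

paris.save("paris_addresses.html")  # open the file in any browser to explore
```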

Because the challenge is to help the average humanist, I've assumed we should stay away from software that needs to be installed on a server, so to start with we're trying some of the web-based geo-referencing tools listed at http://help.oldmapsonline.org/georeference.

Geo-referencing tools for non-technical people

The first bump in the road was finding maps that are re-usable in technical and licensing terms so that we could link or upload them to the web tools listed at http://help.oldmapsonline.org/georeference.  We've fudged it for now by using a screenshot to try out the tools, but it's not exactly a sustainable solution.

Hannah's been trying georeferencer.org, Hypercities and Heurist (thanks to Lise Summers @morethangrass on twitter) and has written up her findings at Hacking Historical Maps… or trying to.  Thanks also to Alex Butterworth @AlxButterworth and Joseph Reeves @iknowjoseph for suggestions during the day.

Yahoo! Mapmixer's page was a 404 – I couldn't find any reference to the service being closed, but I also couldn't find a current link for it.

Next I tried MetaCarta Labs' Map Rectifier.  Any maps uploaded to this service are publicly visible, though the site says this does 'not grant a copyright license to other users' and '[t]here is no expectation of privacy or protection of data', which may be a concern for academics negotiating the line between openness and protecting work-in-progress, or for anyone dealing with sensitive data.  Many of the historians I've interviewed for my PhD research feel that some sense of control over who can view and use their data is important, though the reasons why and how this is manifested vary.

Screenshot from http://labs.metacarta.com/rectifier/rectify/7192


The site has clear instructions – 'double click on the source map… Double click on the right side to associate that point with the reference map' – but the search within the right-hand side 'source map' didn't work, and manually navigating to Paris, then to the right section of Paris, was a huge pain.  Neither of the base maps seemed to have labels, so finding the right location at the right level of zoom was too hard and eventually I gave up.  Maybe the service isn't meant to deal with that level of zoom?  We were using a very small section of map for our trials.

Inspired by Metacarta's Map Rectifier, Map Warper was written with OpenStreetMap in mind, which immediately helps us get closer to the goal of images usable in publications.  Map Warper is also used by the New York Public Library, which described it as a 'tool for digitally aligning ("rectifying") historical maps … to match today's precise maps'.  Map Warper also makes all uploaded maps public: 'By uploading images to the website, you agree that you have permission to do so, and accept that anyone else can potentially view and use them, including changing control points', but also offers 'Map visibility' options 'Public(default)' and 'Don't list the map (only you can see it)'.

Screenshot showing 'warped' historical map overlaid on OpenStreetMap at http://mapwarper.net/

Once a map is uploaded, it zooms to a 'best guess' location, presumably based on the information you provided when uploading the image.  It's a powerful tool, though I suspect it works better with larger images with more room for error.  Some of the functionality is a little obscure to the casual user – for example, the 'Rectify' view tells me '[t]his map either is not currently masked. Do you want to add or edit a mask now?' without explaining what a mask is.  However, I can live with some roughness around the edges because once you've warped your map (i.e. aligned it with a modern map), there's a handy link on the Export tab, 'View KML in Google Maps' that takes you to your map overlaid on a modern map.  Success!

Sadly not all the export options seem to be complete (they weren't working on my map, anyway) so I couldn't work out if there was a non-geek friendly way to open the map in OpenStreetMap.
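For the curious, the 'warping' that Map Warper does is conceptually simple: fit a transform from control points (pixel positions on the scanned map paired with lon/lat positions on a modern map) and then apply it everywhere. Here's a minimal sketch of the simplest, affine version, fitted by least squares; the control-point numbers are made up for illustration and real rectification uses more points and fancier transforms.

```python
# Sketch: estimate an affine pixel -> lon/lat transform from control points
# by least squares. All coordinates below are invented for illustration.
import numpy as np

pixels = np.array([[120, 340], [860, 300], [500, 900], [150, 880]])  # on the scanned map
lonlat = np.array([[2.3350, 48.8640], [2.3610, 48.8650],
                   [2.3480, 48.8540], [2.3360, 48.8545]])            # on the modern map

design = np.hstack([pixels, np.ones((len(pixels), 1))])  # rows of [x, y, 1]
coeffs, *_ = np.linalg.lstsq(design, lonlat, rcond=None)

def pixel_to_lonlat(x, y):
    """Approximate lon/lat for a pixel position on the scanned historic map."""
    return np.array([x, y, 1.0]) @ coeffs

print(pixel_to_lonlat(500, 500))  # somewhere inside the mapped area
```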

We have to stop here for now, but at this point we've met one of the goals – to geo-reference historical maps so locations from the past can be found in the present, but the other will have to wait for another day.  (But I'd probably start with openheatmap.com when we tackle it again.  Any other suggestions would be gratefully received!)

(The title quote is something I heard one non-geek friend say to another to explain what geeks get up to at hackdays. We called our experiment a 'hackday' because we were curious to see whether the format of a hackday – working to meet a challenge within set parameters within a short period of time – would work for other types of projects. While this ended up being almost an 'anti-hack', because I didn't want to write code unless we came across a need for a generic tool, the format worked quite well for getting us to concentrate solidly on a small set of problems for an afternoon.)