Catch the wind? (Re-post from Polis blog on Spatial Narratives and Deep Maps)

[This post was originally written for the Polis Center’s blog.]

Our time at the NEH Institute on Spatial Narratives & Deep Maps is almost at an end.  The past fortnight feels both like it’s flown by and like we’ve been here for ages, which is possibly the right state of mind for thinking about deep maps.  After two weeks of debate, deep maps still seem definable only when glimpsed in the periphery, yet they resist definition when examined directly.  How can we capture the almost-tangible shape of a truly deep map that we can only glimpse through social constructs, the particular contexts of creation and usage, disciplinary conventions, and the models embedded in current technology?  If deep maps are an attempt to get beyond the use of location-as-index and into space-as-experience, can that currently be done more effectively on a screen, or does covering a desk in maps and documents actually allow deeper immersion in a space at a particular time?

We’ve spent the past three days working in teams to prototype different interfaces to deep maps or spatial narratives, and each group presented their interfaces today. It’s been immensely fun and productive and also quite difficult at times.  It’s helped me realise that deep maps and spatial narratives are not dichotomous but exist on a scale – where do you draw the line between curating data sources and presenting an interpreted view of them?  At present, a deep map cannot be a recreation of the world, but it can be a platform for immersive thinking about the intersection of space, time and human lives.  At what point do you move from using a deep map to construct a spatial and temporal argument to using a spatial narrative to present it?

The experience of our group (the Broadway team) reinforces Stuart’s point about the importance of the case study.  We uncovered foundational questions whilst deep in the process of constructing interfaces: is a deep map a space for personal exploration, comparison and analysis of sources, or is it a shared vision that is personalised through the process of creating a spatial narrative?  We also attempted to think through how multivocality translates into something on a screen, and how interfaces that link one article or concept to multiple places might work in reality.  In the process we re-discovered that each scholar may have different working methods, but that a clever interface can support multivocality in functionality as well as in content.

Halfway through ‘deep maps and spatial narratives’ summer institute

I’m a week and a bit into the NEH Institute for Advanced Topics in the Digital Humanities on ‘Spatial Narrative and Deep Maps: Explorations in the Spatial Humanities’, so this is a (possibly self-indulgent) post to explain why I’m over in Indianapolis and why I only seem to be tweeting with the #PolisNEH hashtag.  We’re about to dive into three days of intense prototyping before wrapping things up on Friday, so I’m posting almost as a marker of my thoughts before the process of thinking-through-making makes me re-evaluate our earlier definitions.  Stuart Dunn has also blogged more usefully on Deep maps in Indy.

We spent the first week hearing from the co-directors David Bodenhamer (history, IUPUI), John Corrigan (religious studies, Florida State University), and Trevor Harris (geography, West Virginia University), from guest lecturers Ian Gregory (historical GIS and digital humanities, Lancaster University) and May Yuan (geonarratives, University of Oklahoma), and from selected speakers at the Digital Cultural Mapping: Transformative Scholarship and Teaching in the Geospatial Humanities event at UCLA. We also heard about the other participants’ projects and backgrounds, and tried to define ‘deep maps’ and ‘spatial narratives’.

It’s been pointed out that as we’re at the ‘bleeding edge’, visions for deep mapping are still highly personal. As we don’t yet have a shared definition I don’t want to misrepresent people’s ideas by summarising them, so I’m just posting my current definition of deep maps:

A deep map contains geolocated information from multiple sources that convey their source, contingency and context of creation; it is both integrated and queryable through indexes of time and space.  

Essential characteristics: it can be a product, whether as a static snapshot map or as layers of interpretation with signposts, pre-set interactions and narrative, but it is always visibly a process.  It allows open-ended exploration (within the limitations of the data available and the curation processes and research questions behind it) and supports serendipitous discovery of content. It supports curiosity. It supports arguments but allows them to be interrogated through the mapped content. It supports layers of spatial narratives but does not require them. It should be compatible with humanities work: it is citable (e.g. it provides a URL that reproduces the view used to construct an argument) and provides access to its sources, whether as data downloads or citations. It can include different map layers (e.g. historic maps) as well as different data sources. It could be topological as well as cartographic.  It must be usable at different scales: in the user interface (when zoomed out, it provides a sense of the density of information within) and as space (it can deal with different levels of granularity).

Essential functions: it must be queryable and browseable.  It must support large, variable, complex, messy, fuzzy, multi-scalar data. It should be able to include entities such as real and imaginary people and events as well as places within spaces.  It should support both use for presentation of content and analytic use. It should be compelling – people should want to explore other places, times, relationships or sources. It should be intellectually immersive and support ‘flow’.
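As a thought experiment rather than anything we actually built at the Institute, the core of that definition – records that carry their own source and context of creation, queryable through indexes of time and space – can be sketched as a minimal data model. All names and example data here are hypothetical:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DeepMapRecord:
    """A geolocated item that carries its own provenance."""
    lat: float
    lon: float
    start: date          # earliest date the record refers to
    end: date            # latest date the record refers to
    source: str          # archive, dataset or contributor it came from
    context: str         # notes on how and why it was created

def query(records, bbox, period):
    """Index by space (bounding box) and time (overlapping date range)."""
    (min_lat, min_lon, max_lat, max_lon) = bbox
    (t0, t1) = period
    return [r for r in records
            if min_lat <= r.lat <= max_lat
            and min_lon <= r.lon <= max_lon
            and r.start <= t1 and r.end >= t0]  # intervals overlap

# Two invented records for the same spot, from different periods
records = [
    DeepMapRecord(39.773, -86.164, date(1920, 1, 1), date(1935, 1, 1),
                  source="city directory", context="digitised 2010"),
    DeepMapRecord(39.773, -86.164, date(1960, 1, 1), date(1970, 1, 1),
                  source="oral history", context="interview transcript"),
]
hits = query(records, bbox=(39.7, -86.2, 39.8, -86.1),
             period=(date(1930, 1, 1), date(1940, 1, 1)))
```

Even this toy version makes the definitional tension visible: the provenance fields are free text a human must interpret, while the spatio-temporal index is the only part a machine can query.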

Looking at it now, the first part is probably pretty close to how I would have defined it at the start, but my thinking about what this actually means in terms of specifications is the result of the conversations over the past week and the experience everyone brings from their own research and projects.

For me, this Institute has been a chance to hang out with ace people with similar interests and different backgrounds – it might mean we spend some time trying to negotiate discipline-specific language but it also makes for a richer experience.  It’s a chance to work with wonderfully messy humanities data, and to work out how digital tools and interfaces can support ambiguous, subjective, uncertain, imprecise, rich, experiential content alongside the highly structured data GIS systems are good at.  It’s also a chance to test these ideas by putting them into practice with a dataset on religion in Indianapolis and learn more about deep maps by trying to build one (albeit in three days).

As part of thinking about what I think a deep map is, I found myself going back to an embarrassingly dated post on ideas for location-linked cultural heritage projects:

I’ve always been fascinated with the idea of making the invisible and intangible layers of history linked to any one location visible again. Millions of lives, ordinary or notable, have been lived in London (and in your city); imagine waiting at your local bus stop and having access to the countless stories and events that happened around you over the centuries. … The nice thing about local data is that there are lots of people making content; the not nice thing about local data is that it’s scattered all over the web, in all kinds of formats with all kinds of ‘trustability’, from museums/libraries/archives, to local councils to local enthusiasts and the occasional raving lunatic. … Location-linked data isn’t only about official cultural heritage data; it could be used to display, preserve and commemorate histories that aren’t ‘notable’ or ‘historic’ enough for recording officially, whether that’s grime pirate radio stations in East London high-rise roofs or the sites of Turkish social clubs that are now new apartment buildings. Museums might not generate that data, but we could look at how it fits with user-generated content and with our collecting policies.

Amusingly, four years ago my obsession with ‘open sourcing history’ was apparently already well-developed and I was asking questions about authority and trust that eventually informed my PhD – questions I hope we can start to answer as we try to make a deep map.  Fun!

Finally, my thanks to the NEH and the Institute organisers and the support staff at the Polis Center and IUPUI for the opportunity to attend.

Notes from a preview of the updated Historypin

The tl;dr version: inspiring project, great enhancements; yay!

Longer version: last night I went to the offices of We Are What We Do for a preview of the new version of Historypin. Nick Poole has already written up his notes, so I’m just supplementing them with my own notes from the event (and a bit from conversations with people there and the reading I’d already done for my PhD).

Screenshot with photo near WAWWD office (current site)

Historypin is about bridging the intergenerational divide, about mass participation and access to history, about creating social capital in neighbourhoods, conserving and opening up global archival resources (at this stage that’s photographs, not other types of records).  There’s a focus on events and activities in local communities. [It’d be great to get kids to do quick oral history interviews as they worked with older people, though I think they’re doing something like it already.]

New features will include a lovely augmented reality-style view in streetview; the ability to upload and explore video as well as images; a focus on telling stories – ‘tours’ let you bring a series of photos together into a narrative (the example was ‘the arches of New York’, most of which don’t exist anymore).  You can also create ‘collections’, which will be useful for institutions.  They’ll also be available in the mobile apps (and yes, I did ask about the possibility of working with the TourML spec for mobile tours).

The mobile apps let you explore your location, explore the map and contribute directly from your phone.  You can use the augmented reality view to overlap old photos onto your camera view so that you can take a modern version of an old photo. This means they can crowdsource better modern images than those available in streetview, as well as getting indoor shots.  This could be a great treasure hunt activity for local communities or tourists.  You can also explore collections (as slideshows?) in the app.

They’re looking to work with more museums and archives and have been working on a community history project with Reading Museum.  Their focus on inclusion is inspiring, and I’ll be interested to see how they work to get those images out into the community.  While there are quite a few ‘then and now’ projects around that focus on geo-locating old images, I think that just shows it’s an accessible way of helping people make connections between their lives and those in the past.

A quick correction to Nick’s comments – the Historypin API doesn’t exist yet, so if you have ideas for what it should do, it’s probably a good time to get in touch.  I’ll be thinking hard about how it all relates to my PhD, especially if they’re making some of the functionality available.

Spotting QR tags in the real world

One of the prototypes made for dev8D has been adapted so it can ‘splash a big QR code onto the screen‘ so people at conferences can take a shot of it and click straight through to the URL – no typing.  Super cool!

I’m excited by Semapedia, a project “which uses QR Code nodes to connect Wikipedia articles with their relevant place in physical space”. You can browse locations that have been tagged on a map or on Flickr. I get excited by things like this because it makes ‘outside the walls of the museum‘ projects seem much more feasible.

The ZKM (Centre for Art and Media, Karlsruhe) are exploring mobile tagging for their 20th anniversary: “[w]ith this new tag solution, you can communicate with the museum and use it as a platform also outside of opening hours, i.e., not bound to a certain time, and without being physically present in the museum, i.e., not bound to a certain place.” The site is in German so it’s difficult to work out exactly what you get online. Thanks to Jennifer Trant for the tip-off.

Two notes on QR tags out in the wild – Seb’s recent post linked to ‘a guerilla art installation at [Melbourne’s] Federation Square‘ which is ace on so many levels.  I love their ethos.

The image is a photo I took today – a band have put up a QR tag outside a London tube station.  It takes you to a page that links to a downloadable track and their iTunes and MySpace pages.

Tragically, I’ve even started using QR codes in the office – I often use my phone to test sites outside our network, and I’ve printed out a sheet of QR codes for sites I check often, to save typing in URLs on my phone keyboard.

Happy developers + happy museums = happy punters (my JISC dev8D talk)

This is a rough transcript of my lightning talk ‘Happy developers, happy museums’ at JISC’s dev8D ‘developer happiness’ days last week. The slides are downloadable or embedded below. The reason I’m posting this is because I’d still love to hear comments, ideas, suggestions, particularly from developers outside the museum sector – there’s a contact form on my website, or leave a comment here.

“In this talk I want to show you where museums are in terms of data and hear from you on how we can be more useful.

If you’re interested in updates I use my blog to [crap on a bit, ahem] talk about development at work, and also to call for comment on various ideas and prototypes. I’m interested in making the architecture and development process transparent, in being responsive to not only traditional museum visitors as end users, but also to developers. If you think of APIs as a UI for developers, we want ours to be both usable and useful.

I really like museums, I’ve worked in three museums (or families of museums) now over ten years. I think they can do really good things. Museums should be about delight, serendipity and answers that provoke more questions.

A recent book, ‘How does one become a scientist? : survey on the birth of a Vocation’ states that ‘60% of scientists over 30 and 40% of scientists under 30 claim, without prompting, that the Palais de la Découverte [a science museum in Paris] triggered their vocation’.

Museums can really have an impact on how people think about the world, how they think about the possibilities of their lives. I think museums also have a big responsibility – we should be curating collections for current and future audiences, but also trying to provide access to the collections that aren’t on display. We should be committed to accessibility, transparency, curation, respecting and enabling expertise.

So today I’m here because we want to share our stuff – we are already – but we want to share better.

We do a lot of audience research and know a lot about some of our users, including our specialist users, but we don’t know so much about how people might use our data, it’s a relatively new thing for us. We’re used to saying ‘here are objects in a case, interpretation in label’, we’re not used to saying ‘here’s unmediated access, access through the back door’.

Some of the challenges for museums: technology isn’t that much of a challenge for us on the whole. There are pockets of excellence – people doing amazing things on small budgets with limited resources – but there are also a lot of old-fashioned monolithic project designs with big overheads that take a long time to deliver. Lots of people mean well but don’t know what’s possible – I want to spread the news about lightweight, more manageable and responsive ways of developing things that make sense and deliver results.

We have a lot of data, but a lot of it’s crap. Some of what we have is wrong. Some of it was written 100 years ago, so it doesn’t match how we’d describe things now.

We face big institutional challenges. Some curators – (though it does depend on the museum) – fear loss of control, fear intellectual vandalism, that mistakes in user-generated content published on museum sites will cause people to lose trust in museums. We have fears of getting the IT wrong (because for a while we did). Funding and metrics are a big issue – we are paid by how many people come through our door or come to our websites. If we’re doing a mashup, how do we measure the usage of that? Are we going to cost our organisations money if we can’t measure visits and charge back to the government? [This is particularly an issue for free museums in the UK, an interesting by-product of funding structures.]

Copyright is a huge issue. We might not even own an object that appears in our collections, we might not own the rights to the image of our object, or to the reproductions of an image. We might not have asked for copyright clearance at the time when an object was donated, and the cost of tracing it might be too high, so we can’t use that object online. Until we come up with a reliable model that reduces the risk to an institution of saying ‘copyright unknown’, we’re stuck.

The following are some ways I can think of for dealing with these challenges…
Limited resources – we can’t build an interface to meet every need for every user, but we can provide the content that they’d use. Some of the semantic web talks here have discussed a ‘thin layer’ of application over data, and that’s kind of where we want to go as well.

Real examples to reduce institutional fear and to provide real examples of working agile projects. [I didn’t mean strictly ‘agile’ methodology but generally projects that deliver early and often and can respond to the changing technical and social environment]

Finding ways for the sector to reward intelligent failure. Some museums will never ever admit to making a mistake. I’ve heard over the past few days that universities can be the same. Projects that are hyped up suddenly aren’t mentioned – presumably they’ve failed – but no-one [from the project] ever talks about why, so we don’t learn from those mistakes. ‘Fail faster, succeed sooner’.
I’d like to hear suggestions from you on how we could deal with those challenges.

What are museums known for? Big buildings, full of stuff; experts; we make visitors come to us; we’re known for being fun; or for being boring.

Museum websites traditionally appear to be about where we are, when we’re open, what’s on, is there a cafe on site. Which is useful, but we can do a lot more.

Traditionally we’ve done pretty exhibition microsites, which are nice – they provide an experience of the exhibition before or after your visit. They’re quite marketing-led, they don’t necessarily provide an equivalent experience and they don’t really let you engage with the content beyond the fact that you’re viewing it.

We’re doing lots of collections online projects, and some of these have ended up being silos – sometimes to the extent that if we want to get data out of them, we have to screen-scrape our own data. These sites often aren’t as pretty, and they don’t always have the same design and usability budgets (if any).

I think we should stick to what we’re really good at – understanding the data (collections), understanding how to mediate it, how to interpret it, how to select things that are appropriate for publication, and maybe open it up to other people to do the shiny pretty things. [Sounds almost like I’m advocating doing myself out of a job!]

So we have lots of objects, images, lots of metadata; our collections databases also include people, events, dates, places, businesses and organisations, lots of qualified information around things like dates, they’re not necessarily simple fields but that means they can convey a lot more meaning. I’ve included that because people don’t always realise we have information beyond objects and object metadata. This slide [11 below] is an example of one of the challenges – this box of objects might not be catalogued as individual instruments, it might just be catalogued as a ‘box of stuff’, which doesn’t help you find the interesting objects in the box. Lots of good stuff is hidden in this way.

We’re slowly getting there. We’re opening up access. We’re using APIs internally to share data between gallery interactives and the web, we’re releasing them as data points, we’re using them to provide direct access to collections. At the moment it still tends to be quite mediated access, so you’re getting a lot of interpretation and fewer objects, because of the resources required to create really nice records and the information around them.

‘Read access’ is relatively easy, ‘write access’ is harder because that’s when we hit those institutional issues around authority, authorship. Some curators are vaguely horrified that they might have to listen to what the public have to say and actually take some of it back into their collections databases. But they also have to understand that they can’t know everything about their collections, and there are some specialist users who will know everything there is to know about a particular widget on a particular kind of train. We’d like to capture that knowledge. [London Transport Museum have had a good go at that.]

Some random URLs of cool stuff happening in museums […] – it’s still very much in small pockets, it’s still difficult for museum staff to convince people to take what seems like a leap of faith and try these non-traditional things out.

We’re taking our content to where people hang out. We’re exploring things like Flickr Commons, asking people to tag and comment. Some museums have been updating collections records with information added by the public as a result. People are geo-tagging photos for us, which means you can do ‘then and now’ mashups without a big metadata enhancement budget.

I’d like to see an end to silos. We are kinda getting there but there’s not a serious commitment to the idea that we need to let things go, that we need to make sure that collections online are shareable, that they’re interoperable, that they can mesh with other things.

Particularly for an education audience, we want to help researchers help themselves, to help developers help others. What else do we have that people might find useful?

What we can do depends on who you are. I could hope that things like enquiry-based learning, mashups, linked data, semantic web technologies, cross-collections searches and faceted browsing to make complex searches easy would be useful, and that the concept of museums as a place where information lives – a happy home for metadata mapped around objects and authority records – is useful for people here, but I wouldn’t want to put words into your mouths.

There’s a lot we can do with the technology, but if we’re investing resources we need to make sure that they’re useful. I can try things in my own time because it’s fun, but if we’re going to spend limited resources on interfaces for developers then we need to know that it’s actually going to help some group of people out there.

The philosophy that I’m working with is ‘we’ve got really cool things, but we can have even cooler things if we can share what we have with everyone else’. “The coolest thing to do with your data will be thought of by someone else”. [This quote turns out to be on the event t-shirts, via CRIG!] So that said… any ideas, comments, suggestions?”

And that, thankfully, is where I stopped blathering on. I’ll summarise the discussion and post back when I’ve checked that people are ok with me blogging their comments.

[If the slide show below has a brown face on a black background, it’s the right one – slideshare’s embed seems to have had a hiccup. If it’s not that, try viewing it online directly.]

[My slide images include the Easter Egg museum in Kolomyya, Ukraine and ‘Laughter in Odd Places’ event at the Museum of London.]

This is a quick dump of some of the text from an interview I did at the event, cos I managed to cover some stuff I didn’t quite articulate in my talk:

[On challenges for museums:] We need to change institutional priorities to acknowledge the size of the online audience and the different levels of engagement that are possible with the online experience. Having talked to people here, museums also need to do a bit of a sell job in letting people know that we’ve changed and we’re not just great big imposing buildings full of stuff.

[What are the most exciting developments in the museum sector, online?] For digital collections, going outside the walls of the museum using geo-location to place objects in their original context is amazing. It means you can overlay the streets of the city with past events and lives. Outsourcing curation and negotiating new models of expertise is exciting. Overcoming the fear of the digital surrogate as a competitor for museum visits and understanding that everything we do builds audiences, whether digital or physical.

Finding problems for QR tags to solve

QR tags (square or 2D barcodes that can hold up to 4,296 characters) are famously ‘big in Japan’. Outside of Japan they’ve often seemed a solution in search of a problem, but we’re getting closer to recognising the situations where they could be useful.
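For reference, that 4,296-character figure is the alphanumeric-mode capacity of the largest QR symbol (version 40) at the lowest error-correction level; smaller symbols or higher error correction hold far less, and alphanumeric mode only covers digits, uppercase letters and a few punctuation characters (which is why QR-encoded URLs are often uppercased). A rough sketch of picking a symbol size for a payload – the capacity figures below cover only a handful of versions at level L, taken from the published capacity tables:

```python
# Alphanumeric capacity (characters) at error-correction level L
# for a few QR versions -- figures from the QR Code capacity tables.
ALNUM_CAPACITY_L = {1: 25, 2: 47, 10: 395, 40: 4296}

def smallest_version(payload):
    """Pick the smallest listed QR version that fits the payload."""
    for version in sorted(ALNUM_CAPACITY_L):
        if len(payload) <= ALNUM_CAPACITY_L[version]:
            return version
    raise ValueError("payload too large for a QR code")

# A short uppercase URL fits the smallest (and easiest to scan) symbol
print(smallest_version("HTTP://EXAMPLE.COM/BLOG"))
```

The practical upshot: the shorter the URL, the smaller and more forgiving the printed code, which matters for stickers that will be snapped by phone cameras at odd angles.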

There’s a great idea in this blog post, Video Print:

By placing something like a QR code in the margin text at the point you want the reader to watch the video, you can provide an easy way of grabbing the video URL, and let the reader use a device that’s likely to be at hand to view the video with…

I would use this a lot myself – my laptop usually lives on my desk, but that’s not where I tend to read print media, so in the past I’ve ripped URLs out of articles or taken a photo on my phone to remind myself to look at them later, but I never get around to it. But since I always have my phone with me I’d happily snap a QR code (the Nokia barcode software is usually hidden a few menus down, but it’s worth digging out because it works incredibly well and makes a cool noise when it snaps onto a tag) and use the home wifi connection to view a video or an extended text online.

As a ‘call to action’ a QR tag may work better than a printed URL because it saves typing in a URL on a mobile keyboard.

QR tags would also work well as physical world hyperlinks, providing a visible sign that information about a particular location is available online or as a short piece of text encoded in the QR tag. They could work as well for a guerrilla campaign to make contested or forgotten histories visible again – stickers are easy to produce and can be replaced if they weather – as for official projects to take cultural heritage content outside the walls of the museum.

The Powerhouse Museum have also experimented with QR tags, creating special offer vouchers.

Here’s the obligatory sample QR – if your phone has a barcode reader you should get the URL of this blog*:


* which is totally not optimised for mobile reading as the main pages tend to be quite long but it works ok over wifi broadband.

[Update – I just came across this post about Barcode wikipedia that suggests: “People would be able to access the info by entering/scanning the barcode number. The kind of information that would be stored against the product would be things like reviews, manufacturing conditions, news stories about the product/manufacturer, farm subsidies paid to the manufacturer etc.” I’m a bit (ok, a lot) of a hippie and check product labels before I buy – I love this idea because it’s like a version of the ethical shopping guide small enough to fit inside my wap phone.]

[Update 2 – more discussion of a ‘what are QR codes good for’ ilk over at]

Art is everywhere

Described as ‘a project of awareness to stimulate the imagination through “art”‘, Art is everywhere finds some interesting pieces, including empty art frames on city walls that make the wall underneath appear as possible art, and invisible monuments. I like their statement, ‘To seek for the beautiful in the daily things it undoubtedly helps us to… live better’.

Bathcamp report

This is my quick and dirty report from BathCamp, held in Bath last weekend. In summary – it was ace, and I went to sessions on the myth of engagement, how to run an Open Space session, social learning, CakePHP, managing complexity in software, learning Chinese online, the art of espresso, and a Delicious pecha kucha. I’ve included my notes on some of the sessions I’ve attended, and some ideas for the future in this post. My #bathcamp photos are here and there’s a general pool here.

There was a dinner on Friday night for people who were already in the area, which was a good chance to meet some people who were interested but unable to make the Saturday/Sunday.

The sessions:
The myth of engagement (Jack Martin Leith)
Engagement means it’s not a message from the organisation to the audience. ‘Buy-in’ means you’re being sold something. Work with people, don’t treat them as audiences or something to speak ‘to’. It should be a conversation or a dance. It means letting go of brand so it can belong to users. Flickr are a good example of how to do it – look at the ‘you’,’your x’ in their menus.

Engagement should be a code word for: inviting participation, including, involving, joining in with, conversing with, playing with, creating with.

Commands: tell
Messages: sell, test, consult
Conversations: co-create

Shell are really good at engagement, and do lots of research, as do the army (which makes sense, because they’d really need people to be engaged and committed).

Open Space (Jack Martin Leith)
This session was on how to run Open Space events, and on the comparative strengths of barcamps and open spaces.

Open Spaces set the theme as a question.

How you invite people is central. Attention is given to welcomes, orientation on arrival. The space is very important. The facilitator doesn’t do anything unless someone tries to spoil the vibe or close the space. Put the principles on the wall to remind everyone. The circle is critical in open space.

If you host a session, you agree to write a report (or get someone else to write it). [I think this is vital – it means the ideas, conversations, learning or connections aren’t lost, and can be shared beyond the session.]

People sign up for sessions once proposed sessions have been put up on the wall. This helps with planning, space allocation and coordinating sessions.

Social Learning (Laura Dewis)
Smart profiles [?], informed network of peers.

The system now adapts to the learner. There was a slide on the OU (Open University) ecosystem – lots of different applications or sites linked together.

OU story – can tell the story of where you are with your course, how you’re coping, others can support you. Study buddies… connecting with others with same interests, recommendation ‘other people who’ve done this course also did…’

Cohere – semantic web. Deep learning.

Wider ecosystem of tools. They don’t talk to each other. Identify which make sense in a learning/teaching context, work out how they can talk to each other, and build on that.

Ecosystem of content – content partnerships.

Learning profiles can become CVs of a sort, showing what you’ve actually learnt and are interested in.

There was some discussion about online identities, overlap, professional vs private identities – I’m glad to see this acknowledged. Also discussion on the effect on brand.

Q: How much engagement from academics? A: A lot of buy-in, but also resistance to putting some content online – more so for video on YouTube than for written course materials, as it’s seen as better intellectual property. Developments the OU makes don’t always get into mainstream education; they can be seen as things the OU would do but that traditional universities wouldn’t.

According to Brian Kelly, edu-punk is over, edu-pirate is in.

CakePHP, Mike (?)
It’s an MVC framework.
Nice pre-defined validation stuff.
[I wonder how cake compares to django? And if the validation fields for things like phone numbers are internationalised?]
Scaffolding – stuff already built into framework. [controllers for table input?]
How configurable is the scaffolding? [e.g. year field on date is really long but you might want to limit the range of years].
You can use basic class methods, helpers, components if not using scaffolding.
[This was one of a few useful demos of various application frameworks, including this Django one I didn’t get to]
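As a rough point of comparison with CakePHP’s pre-defined validation, the declarative idea can be sketched in plain Python – the rule names and regexes below are invented for illustration and aren’t any framework’s actual API (including the year-range limit wondered about above):

```python
import re

# Framework-agnostic sketch of declarative field validation, loosely in
# the spirit of CakePHP's model validation arrays. All names hypothetical.
RULES = {
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "year":  re.compile(r"^(19|20)\d{2}$"),  # crude range limit on the year field
}

def validate(record: dict) -> dict:
    """Return a dict of field -> error message for each failing field."""
    errors = {}
    for field, pattern in RULES.items():
        value = str(record.get(field, ""))
        if not pattern.match(value):
            errors[field] = f"{field!r} failed validation"
    return errors

print(validate({"email": "a@b.com", "year": "2008"}))  # {}
print(validate({"email": "nope", "year": "1492"}))
```

The appeal of scaffolding is the same as here: the rules live in one declarative structure, and the framework generates the forms and error handling around them.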

Complexity in software stuff (Alex)
Why is complexity a problem? In case it’s not obvious – maintenance and debugging are harder, the cost of new staff learning the software is higher, and less complexity makes life easier for developers (most importantly!).

There’s a body of knowledge on dealing with the complexity of software. Human experience codified. Looking at different metaphors.

Learning Chinese (Chris Hall)
The potential for learning on the internet is untapped.

Examples of autodidacts – Sophie Germain, a French mathematician born in the 18th century. A hero for her learning. [And a possible modern bluestocking?] She had her theories accepted by pretending to be a man until she was famous enough to be accepted regardless. The ability to reach out to others and explore ideas with them is really important – she wrote letters, but now we have the internet to enable autodidacts. [Does this mean autodidacts become socialdidacts? Though I guess the motivation still comes from the individual, even if they can learn with others.]

For Chris, learning Chinese was a muse, a focus or lens for learning about social networking and the potential of internet too.

Some interesting bits on the differences between western and Chinese web sites – more meaningful characters (rather than letters) mean lots of information fits in just two characters, which makes layout easier – consistent length of terms in e.g. navigation items.

Chinese users don’t trust search engines, and don’t have a culture of using search – they look for lists of links. But this will probably change.

Useful examples of using delicious in a RESTful way with bookmarked dictionary and translation sites.

Then a great example of using Ubiquity with Google’s translation API for in-page translation of someone else’s web content. Ubiquity makes it easy to use web APIs.

And we learnt that the Eee’s implementation of Chinese input is phonetic – the keyboard goes by the sound of the word. I’ve always wondered how Chinese dictionaries work, and I guess they might use a similar technique.
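The phonetic input idea can be sketched as a toy lookup: type the sound, pick the character. The two-syllable dictionary below is invented for illustration and is nothing like a real input-method database:

```python
# Toy sketch of a phonetic (pinyin-style) input method: each syllable
# maps to candidate characters, and the user picks one per syllable.
CANDIDATES = {
    "ni":  ["你", "泥", "尼"],
    "hao": ["好", "号", "浩"],
}

def lookup(sound: str) -> list:
    """Return candidate characters for a phonetic syllable."""
    return CANDIDATES.get(sound.lower(), [])

def compose(syllables: list, choices: list) -> str:
    """Build a phrase by picking the chosen candidate for each syllable."""
    return "".join(lookup(s)[i] for s, i in zip(syllables, choices))

print(compose(["ni", "hao"], [0, 0]))  # 你好
```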

The Art of Espresso (Sam)
Espressos have an intense flavour, they’re not necessarily strong.
Mmm, crema.

You can get good results for reasonable money e.g. £100, but steer clear of anything below £50. The pressure ones (e.g. stove top) are ‘really nasty’ and not espresso machines (ha!). Pump machines. Semi- vs fully-auto.

Grinders – grind coffee as close to using it as possible. Don’t keep coffee in the fridge. You can keep it in vacuum flasks in the dark. Espresso needs an almost powdery grind. Burr grinders are better than blade. Decent grinder c £50.

Sam covered the basic flavours from different regions – South American coffees are nutty, chocolatey, quite sweet; African coffees are darker, smokier, stronger (?) – your classic Italian espresso; Asian Pacific coffees are citrussy, fruity, sharper.

I was way too excited about this session – I love proper coffee, and was having trouble staying awake so I really appreciated the espresso I had. I even got to have a moment of Australian-in-England coffee snobbery with a guy from Sydney (sorry, England!).

I went from this session into:
Delicious pecha kucha (Mark Ng)
The idea is that you provide your delicious username, and a script picks up your ten most recent bookmarks, and you have a certain number of seconds to explain each bookmark to the group. This was a bit scary after a fresh espresso on an empty stomach, but a fun challenge. The range of interests from a small bunch of geeks at one event is remarkable. I ended up having a great conversation about some of the challenges and big ideas in cultural heritage IT with some people in this session.
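The core of such a script might look like this sketch, which takes the most recent items from a delicious-style RSS feed. The feed XML here is a local sample; a real script would fetch the user’s feed over HTTP (the exact delicious feed URL is an assumption from memory of the era):

```python
# Sketch: pull the N most recent bookmarks from a delicious-style RSS feed.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<rss><channel>
  <item><title>Museum APIs</title><link>http://example.org/apis</link></item>
  <item><title>OpenStreetMap</title><link>http://example.org/osm</link></item>
  <item><title>Fire Eagle</title><link>http://example.org/fe</link></item>
</channel></rss>"""

def recent_bookmarks(feed_xml: str, n: int = 10) -> list:
    """Return (title, link) pairs for the n most recent items in the feed."""
    root = ET.fromstring(feed_xml)
    items = root.findall("./channel/item")[:n]
    return [(i.findtext("title"), i.findtext("link")) for i in items]

for title, link in recent_bookmarks(SAMPLE_FEED, 10):
    print(f"{title}: {link}")
```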

Later there was pizza and a pub quiz organised by Darren Beale, before we headed off to the pub and finally a burger from Schwartz’s and War Games on the projector for the night owls.

On the way up I’d realised how exciting it was to see an idea that came out of discussions at Museums and the Web in Montreal in April become reality in Bath in September. Between changing jobs and being off-line quite a lot in the lead-up, I wasn’t able to help out as much as I could have liked, so my thanks to those who actually made the event come together:
Dan Zambonini, Frankie Roberto, Laura Francis, Lisa Price, Mike Ellis, Stephen Pope, Tim Beadle. And my thanks to the sponsors who made sure we had food and drink and were generally very comfortable in the venue. And finally, it wouldn’t have worked without the friendly and engaged participants, so thank you everyone! Frankie’s put together a list of everyone’s twitter accounts to help people keep in contact. Darren’s also linked to a bunch of blog posts about bathcamp.

If I’d run a session, I think it would have been a really open conversation on ‘what can cultural heritage IT do for you?’ – a chance to explain why so many of us are excited about digital heritage, and to hear from others about what they’d like to see museums and other organisations do, what kinds of data they might use, how they might use our content, what excites them and what bores them.

I’d also like to run a session blatantly aimed at picking the brains of some of the very smart people who come to unconferences – ask everyone to pick their favourite museum, exhibition or object, check out the relevant website and come back to tell us one thing they’d improve about that website.

During the planning process the focus of Bathcamp changed from cultural heritage to a more general event for Bath/Bristol geeks, with some digital heritage ring-ins from further afield. I’m going to a spillover session for BarCampLondon5 and I’ll be interested in how that compares.

I’d still really like to see a MuseumCamp or DigitalHeritageCamp – I think it could be a good way of reaching out from the circle of cultural heritage geeks who have the same ideas about the Right Things To Do to engage with the rest of our sector (museums, galleries, libraries, archives, archaeology, even the humanities in general) – the people who would produce content, work with our audiences, sign-off on projects or push new metrics and evaluation models to sector funders. There’s also some discussion of this in the comments on Frankie’s round-up of bathcamp.

In the spirit of getting things done, I’ve created a digital heritage ning (ad hoc social network) as a central place where we can talk about organising a digital heritage barcamp – specifically in the UK to start with, but there’s no reason why it couldn’t be used to share ideas and organise events internationally. You can sign up directly on the ning if you want to be involved – it’s open to everyone, and you don’t have to be working in digital cultural heritage – an interest in how it can be done well is enough.

Introducing modern bluestocking

[Update, May 2012: I’ve tweaked this entry so it makes a little more sense.  These other posts from around the same time help put it in context: Some ideas for location-linked cultural heritage projects, Exposing the layers of history in cityscapes, and a more recent approach, ‘…and they all turn on their computers and say “yay!”’ (aka ‘mapping for humanists’). I’m also including below some content rescued from the ning site, written by Joanna:

What do historian Catharine Macauley, scientist Ada Lovelace, and photographer Julia Margaret Cameron have in common? All excelled in fields where women’s contributions were thought to be irrelevant. And they did so in ways that pushed the boundaries of those disciplines and created space for other women to succeed. And, sadly, much of their intellectual contribution and artistic intervention has been forgotten.

Inspired by the achievements and exploits of the original bluestockings, Modern Bluestockings aims to celebrate and record the accomplishments not just of women like Macauley, Lovelace and Cameron, but also of women today whose actions within their intellectual or professional fields are inspiring other women. We want to build up an interactive online resource that records these women’s stories. We want to create a feminist space where we can share, discuss, commemorate, and learn.

So if there is a woman whose writing has inspired your own, whose art has challenged the way you think about the world, or whose intellectual contribution you feel has gone unacknowledged for too long, do join us at, and make sure that her story is recorded. You’ll find lots of suggestions and ideas there for sharing content, and plenty of willing participants ready to join the discussion about your favourite bluestocking.

And more explanation from modernbluestocking on freebase:

Celebrating the lives of intellectual women from history…

Wikipedia lists bluestocking as ‘an obsolete and disparaging term for an educated, intellectual woman’.  We’d prefer to celebrate intellectual women, often feminist in intent or action, who have pushed the boundaries in their discipline or field in a way that has created space for other women to succeed within those fields.

The original impetus was a discussion at the National Portrait Gallery in London held during the exhibition ‘Brilliant Women, 18th Century Bluestockings’ ( where it was embarrassingly obvious that people couldn’t name young(ish) intellectual women they admired.  We need to find and celebrate the modern bluestockings.  Recording and celebrating the lives of women who’ve gone before us is another way of doing this.

However, at least one of the morals of this story is ‘don’t get excited about a project, then change jobs and start a part-time Masters degree’.  On the other hand, my PhD proposal was shaped by the ideas expressed here, particularly the idea of mapping as a tool for public history by e.g. using geo-located stories to place links to content in the physical location.

While my PhD has drifted away from early scientific women, I still read around the subject and occasionally add names to  If someone’s not listed in Wikipedia it’s a lot harder to add them, but I’ve realised that if you want to make a difference to the representation of intellectual women, you need to put content where people look for information – i.e. Wikipedia.

And with the launch of Google’s Knowledge Graph, getting history articles into Wikipedia and then into Freebase is even more important for the visibility of women’s history: “The Knowledge Graph is built using facts and schema from Freebase so everyone who has contributed to Freebase had a part in making this possible.” (Source: this post to the Freebase list).  I’d go so far as to say that if it’s worth writing a scholarly article on an intellectual woman, it’s worth re-using your references to create or improve their Wikipedia entry.]

Anyway. On with the original post…]

I keep meaning to find the time to write a proper post explaining one of the projects I’m working on, but in the absence of time a copy and paste job and a link will have to do…

I’ve started a project called ‘modern bluestocking’ that’s about celebrating and commemorating intellectual women activists from the past and present while reclaiming and redefining the term ‘bluestocking’.  It was inspired by the National Portrait Gallery’s exhibition, ‘Brilliant Women: 18th-Century Bluestockings’.  (See also the review, Not just a pretty face).

It will be a website of some sort, with a community of contributors and it’ll also incorporate links to other resources.

We’ve started talking about what it might contain and how it might work at (ning died, so it’s at…)

Museum application (something to make for mashed museum day?): collect feminist histories, stories, artefacts, images, locations, etc.; support the creation of new or synthesised content with content embedded and referenced from a variety of sources. Grab something, tag it, display it, share it; comment on, integrate and annotate others’ contributions. Create a collection to inspire, record, commemorate, and build on.
What, who, how should this website look? Join and help us figure it out.

Why modernbluestocking? Because knowing where you’ve come from helps you know where you’re going.

Sources could include online exhibition materials from the NPG (tricky interface to pull records from).  How can this be a geek/socially friendly project and still get stuff done?  Run a Modernbluestocking community and museum hack day to get stuff built and data collated?  Have a list of names, portraits, objects for query. Build a collection of links to existing content on other sites? Role models and heroes from current life or history. Where is relatedness stored? ‘Significance’ – a thorny issue? Personal stories cf. other more mainstream content?  Is it like a museum made up of loan objects with new interpretation? How much attribution of the person who added the link is required? Login v not? Vandalism? How do we deal with changing location or format of resources? Local copies or links? E.g. images: local copies don’t impact bandwidth, but don’t count as visits on the originating site. Remote resources might disappear – moved, permissions changed, format change, taken offline, etc. – or be replaced with different content. Examine the sources, look at their format, how they could be linked to, how stable they appear to be, whether it’s possible to contact the publisher…

Could also be interesting to make explicit, transparent, the processes of validation and canonisation.

Some ideas for location-linked cultural heritage projects

I loved the Fire Eagle presentation I saw at the WSG Findability event [my write-up] because it got me all excited again about ideas for projects that take cultural heritage outside the walls of the museum, and more importantly, it made some of those projects seem feasible.

There’s also been a lot of talk about APIs into museum data recently and hopefully the time has come for this idea. It’d be ace if it was possible to bring museum data into the everyday experience of people who would be interested in the things we know about but would never think to have ‘a museum experience’.

For example, you could be on your way to the pub in Stoke Newington, and your phone could let you know that you were passing one of Daniel Defoe‘s hang outs, or the school where Mary Wollstonecraft taught, or that you were passing a ‘Neolithic working area for axe-making’ and that you could see examples of the Neolithic axes in the Museum of London or Defoe’s headstone in Hackney Museum.

That’s a personal example, and those are some of my interests – Defoe wrote one of my favourite books (A Journal of the Plague Year), and I’ve been thinking about a project about ‘modern bluestockings’ that will collate information about early feminists like Wollstonecraft (contact me for more information) – but ideally you could tailor the information you receive to your interests, whether it’s football, music, fashion, history, literature or soap stars in Melbourne, Mumbai or Malmo. If I can get some content sources with good geo-data I might play with this at the museum hack day.

I’m still thinking about functionality, but a notification might look something like “did you know that [person/event blah] [lived/did blah/happened] around here? Find out more now/later [email me a link]; add this to your map for sharing/viewing later”.
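That template could be a one-line function; the record fields below (person, fact) are assumed for illustration, not from any real schema:

```python
# Sketch of the notification template described above.
def notify(record: dict) -> str:
    """Format a 'did you know?' notification for a geo-located story."""
    return (
        f"Did you know that {record['person']} {record['fact']} around here? "
        "Find out more now/later [email me a link]; "
        "add this to your map for sharing/viewing later."
    )

print(notify({"person": "Mary Wollstonecraft", "fact": "taught at a school"}))
```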

I’ve always been fascinated with the idea of making the invisible and intangible layers of history linked to any one location visible again. Millions of lives, ordinary or notable, have been lived in London (and in your city); imagine waiting at your local bus stop and having access to the countless stories and events that happened around you over the centuries. Wikinear is a great example, but it’s currently limited to content on Wikipedia, and this content has to pass a ‘notability’ test that doesn’t reflect local concepts of notability or ‘interestingness’. Wikipedia isn’t interested in the finds associated with an archaeological dig that happened at the end of your road in the 1970s, but with a bit of tinkering (or a nudge to me to find the time to make a better programmatic interface) you could get that information from the LAARC catalogue.
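Under the hood, the ‘what happened near me?’ part of a wikinear-style service is just a distance filter over geo-located records. A minimal sketch, with coordinates and records invented for illustration:

```python
# Sketch: filter geo-located heritage records by distance from the user.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

RECORDS = [  # invented example data
    {"name": "Defoe's headstone (Hackney Museum)", "lat": 51.552, "lon": -0.068},
    {"name": "Neolithic axe-making site", "lat": 51.560, "lon": -0.080},
    {"name": "Museum of London", "lat": 51.517, "lon": -0.097},
]

def nearby(lat, lon, radius_km=2.0):
    """Return records within radius_km of the given point."""
    return [r for r in RECORDS if haversine_km(lat, lon, r["lat"], r["lon"]) <= radius_km]

# Standing somewhere in Stoke Newington (roughly 51.558, -0.074):
for r in nearby(51.558, -0.074):
    print(r["name"])
```

A real service would draw the records from museum catalogues and authority files rather than a hard-coded list, but the query is this simple at heart.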

The nice thing about local data is that there are lots of people making content; the not nice thing about local data is that it’s scattered all over the web, in all kinds of formats with all kinds of ‘trustability’, from museums/libraries/archives, to local councils to local enthusiasts and the occasional raving lunatic. If an application developer or content editor can’t find information from trusted sources that fits the format required for their application, they’ll use whatever they can find on other encyclopaedic repositories, hack federated searches, or they’ll screen-scrape our data and generate their own set of entities (authority records) and object records. But what happens if a museum updates and republishes an incorrect record – will that change be reflected in various ad hoc data solutions? Surely it’s better to acknowledge and play with this new information environment – better for our data and better for our audiences.

Preparing the data and/or the interface is not necessarily a project that should be specific to any one museum – it’s the kind of project that would work well if it drew on resources from across the cultural heritage sector (assuming we all made our geo-located object data and authority records available and easily queryable; whether with a commonly agreed core schema or our own schemas that others could map between).

Location-linked data isn’t only about official cultural heritage data; it could be used to display, preserve and commemorate histories that aren’t ‘notable’ or ‘historic’ enough for recording officially, whether that’s grime pirate radio stations in East London high-rise roofs or the sites of Turkish social clubs that are now new apartment buildings. Museums might not generate that data, but we could look at how it fits with user-generated content and with our collecting policies.

Or getting away from traditional cultural heritage, I’d love to know when I’m passing over the site of one of London’s lost rivers, or a location that’s mentioned in a film, novel or song.

[Updated December 2008 to add – as QR tags get more mainstream, they could provide a versatile and cheap way to provide links to online content, or 250 characters of information. That’s more information than the average Blue Plaque.]