The Science Gossip project is one year old, and they’re asking their contributors to decide which periodicals they’ll work on next and to start new discussions about the documents and images they find interesting.
I’ve seen a few interesting studentships and jobs posted lately, hinting at research and projects to come. There’s a funded PhD in HCI and online civic engagement and a (now closed) studentship on Co-creating Citizen Science for Innovation.
Some of their key findings for museums (PDF) are below, interspersed with my comments. I read this section before the event, and found I didn’t really recognise the picture of museums it presented. ‘Museums’ mightn’t be the most useful grouping for a survey like this – the material that MTM London’s Ed Corn presented on the day broke the results down differently, and that made more sense. The c2,500 museums in the UK are too varied in their collections (from dinosaurs to net art), their audiences, and their local and organisational context (from tiny village museums open one afternoon a week, to historic houses, to university museums, to city museums with exhibitions that were built in the 70s, to white cube art galleries, to giants like the British Museum and Tate) to be squished together in one category. Museums tend to be quite siloed, so I’d love to know who fills out the survey, and whether they ask the whole organisation to give them data beforehand.
According to the survey, museums are significantly less likely to engage in:
email marketing (67 per cent vs. 83 per cent for the sector as a whole) – museums are missing out! Email marketing is relatively cheap, and it’s easy to write newsletters. It’s also easy to ask people to sign up when they’re visiting online sites or physical venues, and they can unsubscribe anytime they want to. Social media figures can look seductively huge, but Facebook is a frenemy for organisations as you never know how many people will actually see a post.
publish content to their own website (55 per cent vs. 72 per cent) – I wasn’t sure how to interpret this – does this mean museums don’t have their own websites? Or that they can’t update them? Or is ‘content’ a confusing term? At the event it was said that 10% of orgs have no email marketing, website or Facebook, so there are clearly still some big gaps to fill.
sell event tickets online (31 per cent vs. 45 per cent) – fair enough; how many museums sell tickets to anything that really needs to be booked in advance?
post video or audio content (31 per cent vs. 43 per cent) – for most museums, this would require an investment to create as many don’t already have filmable material or archived films to hand. Concerns about ‘polish’ might also be holding some museums back – they could try periscoping tours or sharing low-fi videos created by front of house staff or educators. Like questions about offering ‘online interactive tours of real-world spaces’ and ‘artistic projects’, this might reflect initial assumptions based on ACE’s experience with the performing arts. A question about image sharing would make more sense for museums. Similarly, the kinds of storytelling that blog posts allow can sometimes work particularly well for history and science museums (who don’t have gorgeous images of art that tell their own story).
make use of social media video advertising (18 per cent vs. 32 per cent) – again, video is a more natural format for performing arts than for museums
use crowdfunding (8 per cent vs. 19 per cent) – crowdfunding requires a significant investment of time and is often limited to specific projects rather than core business expenses, so it might be seen as too risky, but is this why museums are less likely to try it?
livestream performances (2 per cent vs. 12 per cent) – again, this is less likely to apply to museums than performing arts organisations
One of the key messages in Ed Corn’s talk was that organisations are experimenting less, evaluating the impact of digital work less, and not using data in digital decision-making. They’re also scaling back on non-core work; some are focusing on consolidation – fixing the basics like websites (and mobile-friendly sites). Barriers include lack of funding, lack of in-house time, lack of senior digital managers, slow/limited IT systems, and lack of digital suppliers. (Many of those barriers were also listed in a small-scale survey on ‘issues facing museum technologists’ I ran in 2010.)
When you consider the impact of the cuts year on year since 2010, and that ‘one in five regional museums at least part closed in 2015’, some of those continued barriers are less surprising. At one point everyone I know still in museums seemed to be doing at least one job on top of their own, as people left and weren’t replaced. The cuts might have affected some departments more deeply than others – have many museums lost learning teams? I suspect we’ve also lost two generations of museum technologists – the retiring generation who first set up mainframe computers in basements, and the first generation of web-ish developers who moved on to other industries as conditions in the sector got grimmer and good pay became more important. Fellow panellist Ros Lawler also made the point that museums have to deal with legacy systems while also trying to look to the future, and that museum projects tend to move slowly when they could be more agile.
Like many in the audience, I really wanted to know who the ‘digital leaders’ – the 10% of organisations who thought digital was important, did more digital activities and reaped the most benefits from their investment – were, and what made them so successful. What can other organisations learn from them?
It seems that we still need to find ways to share lessons learnt, and to help everyone in the arts and cultural sectors learn how to make the most of digital technologies and social media. Training that meets the right need at the right time is really hard to organise and fund, and there are already lots of pockets of expertise within organisations – we need to get people talking to each other more! As I said at the event, most technology projects are really about people. Front of house staff, social media staff, collections staff – everyone can contribute something.
If you were there, have read the report or explored the data, I’d love to know what you think. And I’ll close with a blatant plug: the MCG has two open calls for papers a year, so please keep an eye out for those calls and suggest talks or volunteer to help out!
I’m at the British Museum today for the Museums Computer Group’s annual UK ‘Museums on the Web’ conference. UKMW15 has a packed line-up full of interesting presentations. As Chair of the MCG, I briefly introduced the event. My notes are below, in part to make sure that everyone who should be thanked is thanked! You can read a more polished version written with my Programme Committee Co-Chair Danny Birchall in a Guardian Culture Professionals article, ‘How digital tech can bridge gaps between museums and audiences’.
UK Museums on the Web 2015: ‘Bridging Gaps, Making Connections’ #UKMW15
I’d like to start by thanking everyone who helped make today happen, and by asking the MCG Committee Members who are here today to stand up, so that you can chat to them, ideally even thank them, during the day. For those who don’t know us, the Museums Computer Group is a practitioner-led group who work to connect, support and inspire anyone working in museum technology. (There are lots of ways to get involved – we’re electing new committee members at our AGM at lunchtime, and we will also be asking for people to host next year’s event at their museum or help organise a regional event.)
I’d particularly like to thank Ina Pruegel and Jennifer Ross, who coordinated the event, the MCG Committee members who did lots of work on the event (Andrew, Dafydd, Danny, Ivan, Jess, Kath, Mia, Rebecca, Rosie), and the Programme Committee members who reviewed presentation proposals sent in. They were: co-chairs: Danny Birchall and Mia Ridge, with Chris Michaels (British Museum), Claire Bailey Ross (Durham University), Gill Greaves (Arts Council England), Jenny Kidd (Cardiff University), Jessica Suess (Oxford University Museums), John Stack (Science Museum Group), Kim Plowright (Mildly Diverting), Matthew Cock (Vocal Eyes), Rachel Coldicutt (Friday), Sara Wajid (National Maritime Museum), Sharna Jackson (Hopster), Suse Cairns (Baltimore Museum of Art), Zak Mensah (Bristol Museums, Galleries & Archives).
And of course I’d like to thank the speakers and session chairs, the British Museum, Matt Caines at the Guardian, and in advance I’d like to thank all the tweeters, bloggers and photographers who’ll help spread this event beyond the walls of this room.
Which brings me to the theme of the event, ‘Bridging Gaps, Making Connections’. We’ve been running UK Museums on the Web since 2001; last year our theme was ‘museums beyond the web’, in recognition that barriers between ‘web teams’ and ‘web projects’ and the rest of the organisation were breaking down. But it’s also apparent that the gap between tiny, small and even medium-sized museums and the largest, best-funded museums means that digital expertise and knowledge have not reached the entire sector. The government’s funding cuts and burnout mean that old museum hands have left, and some who replace them need time to translate their experience in other sectors into museums. Our critics and audiences are confused about what to expect, and museums are simultaneously criticised for investing too much in technologies that disrupt the traditional gallery and for being ‘dull and dusty’. Work is duplicated across museums, libraries, archives and other cultural organisations; academic and commercial projects sometimes seem to ignore the wealth of experience in the sector.
So today is about bridging those gaps, and about making new connections. (I’ve made my own steps in bridging gaps by joining the British Library as a Digital Curator.) We have a fabulous line-up representing the wealth and diversity of experience in museum technologies.
Ironically, the internet was down on the evening of Ada Lovelace Day 2015, an annual, international ‘celebration of the achievements of women in science, technology, engineering and maths (STEM)’, so I couldn’t post at the time. Belatedly, the people whose achievements I’ve admired are:
Professor Monica Grady, whose joy when the Rosetta mission’s Philae probe successfully landed on comet 67P is just about the most wonderful thing on the internet (and she worked on one of the instruments on board, which is very cool). Like New Horizons sending back images of Pluto, it’s a reminder of the awe-inspiring combination of planning, foresight, science and engineering in space that has made 2015 so interesting.
Finally, I love this image of Margaret Hamilton, lead software engineer on Project Apollo (1969), with some of the Apollo Guidance Computer (AGC) source code.
Back in September last year I blogged about the implications of advances in machine learning for cultural heritage and digital humanities crowdsourcing projects that use simple tasks as the first step in public engagement – fun, easy tasks like image tagging and text transcription could increasingly be done by computers. (Broadly speaking, ‘machine learning’ is a label for technologies that allow computers to learn from the data available to them. It means they don’t have to be specifically programmed to know how to do a task like categorising images – they can learn from the material they’re given.)
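To make that idea of ‘learning from labelled examples’ concrete, here’s a minimal sketch: a toy bag-of-words classifier in Python. The captions and categories are invented for illustration, and real projects use far more sophisticated models, but the principle is the same – no category rules are programmed, the system generalises from the examples it’s given.

```python
from collections import Counter

def features(text):
    """Turn a caption into a bag-of-words feature count."""
    return Counter(text.lower().split())

def train(labelled_examples):
    """'Learn' each category as the pooled word counts of its examples."""
    profiles = {}
    for text, label in labelled_examples:
        profiles.setdefault(label, Counter()).update(features(text))
    return profiles

def classify(text, profiles):
    """Assign the category whose word profile overlaps most with the text."""
    words = features(text)
    def overlap(profile):
        return sum(min(words[w], profile[w]) for w in words)
    return max(profiles, key=lambda label: overlap(profiles[label]))

# Invented training data: the system is never told rules for each
# category, it generalises from the labelled examples it is given.
examples = [
    ("handwritten letter with signature", "manuscript"),
    ("page of handwritten diary entries", "manuscript"),
    ("etching of a ship at sea", "illustration"),
    ("engraved plate showing botanical specimens", "illustration"),
]
profiles = train(examples)
print(classify("handwritten page from a diary", profiles))   # manuscript
print(classify("etching of botanical specimens", profiles))  # illustration
```

Crowdsourced classifications are exactly the kind of labelled examples a model like this needs, which is why the two approaches fit together so naturally.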
One reason I like crowdsourcing in cultural heritage so much is that time spent on simple tasks can provide opportunities for curiosity, help people find new research interests, and help them develop historical or scientific skills as they follow those interests. People can notice details that computers would overlook, and those moments of curiosity can drive all kinds of new inquiries. I concluded that, rather than taking the best tasks from human crowdsourcers, ‘human computation’ systems that combine the capabilities of people and machines can free up our time for the harder tasks and more interesting questions.
I’ve been thinking about ‘ecosystems’ of crowdsourcing tasks since I worked on museum metadata games back in 2010. An ecosystem of tasks – for example, classifying images into broad types and topics in one workflow so that people can find text to transcribe on subjects they’re interested in, and marking up that text with relevant subjects in a final workflow – means that each task can be smaller (and thereby faster and more enjoyable). Other workflows might validate the classifications or transcribed text, allowing participants with different interests, motivations and time constraints to make meaningful contributions to a project.
The New York Public Library’s Building Inspector is an excellent example of this – they offer five tasks (checking or fixing automatically-detected building ‘footprints’, entering street numbers, classifying colours or finding place names), each as tiny as possible, which together result in a complete set of checked and corrected building footprints and addresses. (They’ve also pre-processed the maps to find the building footprints so that most of the work has already been done before they asked people to help.)
After teaching ‘crowdsourcing cultural heritage’ at HILT over the summer, where the concept of ‘ecosystems’ of crowdsourced tasks was put into practice as we thought about combining classification-focused systems like Zooniverse’s Panoptes with full-text transcription systems, I thought it could be useful to give some specific examples of ecosystems for human computation in cultural heritage. If there are daunting data cleaning, preparation or validation tasks necessary before or after a core crowdsourcing task, computational ecosystems might be able to help. So how can computational ecosystems help pre- and post-process cultural heritage data for a better crowdsourcing experience?
While older ecosystems like Project Gutenberg and Distributed Proofreaders have been around for a while, we’re only just seeing the huge potential for combining people + machines into crowdsourcing ecosystems. The success of the Smithsonian Transcription Center points to the value of ‘niche’ mini-projects, but breaking vast repositories into smaller sets of items about particular topics, times or places also takes resources. Machines can learn to classify source material by topic, by type, by difficulty or any other system that crowdsourcers can teach it. You can improve machine learning by giving systems ‘ground truth’ datasets with (for example) a crowdsourced transcription of the text in images, and as Ted Underwood pointed out on my last post, comparing the performance of machine learning and crowdsourced transcriptions can provide useful benchmarks for the accuracy of each method. Small, easy correction tasks can help improve machine learning processes while producing cleaner data.
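As a sketch of how that benchmarking might work, here’s a minimal character error rate calculation in Python. The transcription strings are invented, and real evaluations use vetted ground-truth pages, but the principle holds: the same metric can score both a machine (HTR) attempt and a crowdsourced one against the ground truth.

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def char_error_rate(candidate, ground_truth):
    """Edit distance normalised by the length of the reference text."""
    return edit_distance(candidate, ground_truth) / len(ground_truth)

# Invented example: a vetted 'ground truth' transcription is used to
# benchmark a machine transcription and a crowdsourced one.
truth = "the quality of mercy is not strained"
htr   = "the qualify of mercy is not stained"
crowd = "the quality of mercy is not strained"
print(round(char_error_rate(htr, truth), 3))
print(round(char_error_rate(crowd, truth), 3))  # 0.0
```

The same scores feed back into training: a low-error crowdsourced page makes good ground truth for the next round of machine learning.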
Computational ecosystems might be able to provide better data validation methods. Currently, tagging tasks often rely on raw consensus counts when deciding whether a tag is valid for a particular image. This is a pretty crude measure – while three non-specialists might apply terms like ‘steering’ to a picture of a ship, a sailor might enter ‘helm’, ‘tiller’ or ‘wheelhouse’, but their terms would be discarded if no-one else enters them. Mining discipline-specific literature for relevant specialist terms, or finding other signals for subject-specific expertise, would make more of that sailor’s knowledge.
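A small Python sketch of the difference – the tags, threshold and synonym mapping are all invented for illustration:

```python
from collections import Counter

def validate_by_consensus(tags, threshold=3):
    """Naive consensus: keep a tag only if enough people entered it."""
    counts = Counter(tags)
    return {tag for tag, n in counts.items() if n >= threshold}

def validate_with_synonyms(tags, synonyms, threshold=3):
    """Fold specialist terms into a shared concept before counting, so
    expert vocabulary counts towards consensus instead of being lost."""
    counts = Counter(synonyms.get(tag, tag) for tag in tags)
    return {tag for tag, n in counts.items() if n >= threshold}

# Invented data: two non-specialists say 'steering', one sailor says 'helm'.
tags = ["steering", "steering", "helm"]
# A hypothetical mapping mined from a nautical thesaurus.
synonyms = {"helm": "steering", "tiller": "steering", "wheelhouse": "steering"}

print(validate_by_consensus(tags))             # set() – nothing reaches consensus
print(validate_with_synonyms(tags, synonyms))  # {'steering'}
```

Under raw counting, the sailor’s expertise is thrown away and no tag is validated at all; folding specialist terms into a shared concept lets every contribution count.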
Computational ecosystems can help at the personal, as well as the project, level. One really exciting development is computational assistance during crowdsourcing tasks. In Transcribing Bentham … with the help of a machine?, Tim Causer discusses TSX, a new crowdsourced transcription platform from the Transcribe Bentham and tranScriptorium projects. You can correct computationally generated transcriptions produced by handwritten text recognition (HTR), which is a big advance in itself. Most importantly, you can also request help if you get stuck transcribing a specific word. Previously, you’d have to find a friendly human to help with this task. And from here, it shouldn’t be too difficult to combine HTR with computational systems to give people individualised feedback on their transcriptions. The potential for helping people learn palaeography is huge!
Better validation techniques would also improve the participants’ experience. Providing personalised feedback on the first tasks a participant completes would help reassure them while nudging them to improve weaker skills.
Most science and heritage projects working on human computation are very mindful of the impact of their choices on the participants’ experience. However, there’s a risk that anyone who treats human computation like a computer science problem (for example, computationally assigning tasks to the people with the best skills for them) will lose sight of the ‘human’ part of the project. Individual agency is important, and learning or mastering skills is an important motivation. Non-profit crowdsourcing should never feel like homework. We’re still learning about the best ways to design crowdsourcing tasks, and that job is only going to get more interesting.
I was in London this week for the Linked Pasts event, where I presented on trends and practices for open data in cultural heritage. Linked Pasts was a colloquium on linked open data in cultural heritage organised by the Pelagios project (Leif Isaksen, Elton Barker and Rainer Simon with Pau de Soto). I really enjoyed the other papers, which included thoughtful, grounded approaches to structured data for historical periods, places and people, recognition of the importance of designing projects around audience needs (including user research), the relationship between digital tools and scholarly inquiry, visualisations as research tools, and the importance of good infrastructure for digital history.
My discussion points are based on years of conversations with other cultural heritage technologists in museums, libraries, and archives, but inevitably I’ll have blind spots. For example, I’m focusing on the English-speaking world, which means I’m not discussing the great work that Dutch and Japanese organisations are doing. I’ve undoubtedly left out brilliant specific examples in the interests of focusing on broader trends. The point is to start conversations, to bring issues out into the open so we can collectively decide how to move forward.
The good news is that more and more open cultural data is being published. Organisations have figured out that a) nothing bad is likely to happen and that b) they might get some kudos for releasing open data.
Generally, organisations are publishing the data that they have to hand – this means it’s mostly collections data. This data is often as messy, incomplete and fuzzy as you’d expect from records created by many different people using many different systems over a hundred or more years.
Copyright restrictions mean that images mightn’t be included. Furthermore, because it’s often collections data, it’s not necessarily rich in interpretative information. It’s metadata rather than data. It doesn’t capture the scholarly debates, the uncertain attributions, the biases in collecting… It certainly doesn’t capture the experience of viewing the original object.
Licensing issues are still a concern. Until cultural organisations are rewarded by their funders for releasing open data, and funders free organisations from expectations for monetising data, there will be damaging uncertainty about the opportunity cost of open data.
Non-commercial licences are also an issue – organisations and scholars might feel exploited if others who have not contributed to the work of creating their data can publish it commercially. Finally, attribution is an important currency for organisations and scholars, but most open licences aren’t designed with that in mind.
…and the unstructured
The data that’s released is often pretty unstructured. CSV files are very easy to use, so they help more people get access to information (assuming they can figure out GitHub), but a giant dump like this doesn’t provide stable URIs for each object. Records in data dumps rarely link to external identifiers like the Getty’s Thesaurus of Geographic Names, Art & Architecture Thesaurus (AAT) or Union List of Artist Names, or vernacular sources for place and people names such as GeoNames or DBpedia. And that’s fair enough, because people using a CSV file probably don’t want all the hassle of dereferencing each URI to grab the place name so they can visualise data on a map (or whatever they’re doing with the data). But it also means that it’s hard for someone to reliably look for matching artists in their database, and link these records with data from other organisations.
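As a rough illustration of the reconciliation work involved, here’s a Python sketch that attaches identifiers to rows from an invented CSV dump. The identifiers are placeholders and the exact-match lookup is deliberately naive – real reconciliation needs fuzzy matching and an authority like the Getty’s vocabularies behind it:

```python
import csv
import io

# Placeholder identifiers standing in for ULAN-style authority records;
# in practice these would come from a reconciliation service.
artist_ids = {
    "turner, joseph mallord william": "ulan:0001",
    "constable, john": "ulan:0002",
}

def normalise(name):
    """Lower-case and collapse whitespace before matching."""
    return " ".join(name.lower().split())

def reconcile(rows, lookup):
    """Attach an external identifier where the artist name matches
    exactly; real matching needs fuzzier logic than this."""
    for row in rows:
        row["artist_id"] = lookup.get(normalise(row["artist"]), "")
    return rows

# An invented fragment of a collections CSV dump.
dump = ('object,artist\n'
        '"The Fighting Temeraire","Turner, Joseph Mallord William"\n'
        '"View on the Stour","Constable,  John"\n')
rows = list(csv.DictReader(io.StringIO(dump)))
for row in reconcile(rows, artist_ids):
    print(row["object"], "->", row["artist_id"] or "no match")
```

Even this toy version shows why it matters: once each record carries a shared identifier, records from different organisations can be linked reliably rather than by guessing at name strings.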
So it’s open, but it’s often not very linked. If we’re after a ‘digital ecosystem of online open materials’, this open data is only a baby step. But it’s often where cultural organisations finish their work.
Classics > Cultural Heritage?
But many others, particularly in the classical and ancient world, have managed to overcome these issues to publish and use linked open data. So why do museums, libraries and archives seem to struggle? I’ll suggest some possible reasons as conversation starters…
Not enough time
Organisations are often busy enough keeping their internal systems up and running, dealing with the needs of visitors in their physical venues, working on ecommerce and picture library systems…
Not enough skills
Cultural heritage technologists are often generalists, and apart from being too time-stretched to learn new technologies for the fun of it, they might not have the computational or information science skills necessary to implement the full linked data stack.
Some cultural heritage technologists argue that they don’t know of any developers who can negotiate the complexities of SPARQL endpoints, so why publish it? The complexity is multiplied when complex data models are used with complex (or at least, unfamiliar) technologies. For some, SPARQL puts the ‘end’ in ‘endpoint’, and ‘RDF triples’ can seem like an abstraction too far. In these circumstances, the instruction to provide linked open data as RDF is a barrier they won’t cross.
But sometimes it feels as if some heritage technologists are unnecessarily allergic to complexity. Avoiding unnecessary complexity is useful, but progress can stall if they demand that everything remains simple enough for them to feel comfortable. Some technologists might benefit from working with people more used to thinking about structured data, such as cataloguers, registrars etc. Unfortunately, linked open data falls in the gap between the technical and the informatics silos that often exist in cultural organisations.
And organisations are also not yet using triples or structured data provided by other organisations [with the exception of identifiers for e.g. people, places and specific vocabularies]. They’re publishing data in broadcast mode; it’s not yet a dialogue with other collections.
Not enough data
In a way, this is the collections documentation version of the technical barriers. If the data doesn’t already exist, it’s hard to publish. If it needs work to pull it out of different departments, or different individuals, who’s going to resource that work? Similarly, collections staff are unlikely to have time to map their data to CIDOC-CRM unless there’s a compelling reason to do so. (And some of the examples given might use cultural heritage collections but are a better fit with the work of researchers outside the institution than the institution’s own work).
It may be easier for some types of collections than others – art collections tend to be smaller and better described; natural history collections can link into international projects for structured data, and libraries can share cataloguing data. Classicists have also been able to get a critical mass of data together. Your local records office or small museum may have more heterogeneous collections, and there are fewer widely used ontologies or vocabularies for historical collections. The nature of historical collections means that ‘small ontologies, loosely joined’, may be more effective, but creating these, or mapping collections to them, is still a large piece of work. While there are tools for mapping to data structures like Europeana’s data model, it seems the reasons for doing so haven’t been convincing enough, so far. Which brings me to…
Not enough benefits
This is an important point, and an area the community hasn’t paid enough attention to in the past. Too many conversations have jumped straight to discussion about the specific standards to use, and not enough have been about the benefits for heritage audiences, scholars and organisations.
Many technologists – who are the ones making decisions about digital standards, alongside the collections people working on digitisation – are too far removed from the consumers of linked open data to see the benefits of it unless we show them real world needs.
There’s a cost in producing data for others, so it needs to be linked to the mission and goals of an organisation. Organisations are not generally able to prioritise the potential, future audiences who might benefit from tools someone else creates with linked open data when they have so many immediate problems to solve first.
While some cultural and historical organisations have done good work with linked open data, the purpose can sometimes seem rather academic. Linked data is not always explained in a way that convinces the average, over-worked collections or digital team that the benefits outweigh the financial and intellectual investment.
No-one’s drinking their own champagne
You don’t often hear of people beating on the door of a museum, library or archive asking for linked open data, and most organisations are yet to map their data to specific, widely-used vocabularies because they need to use them in their own work. If technologists in the cultural sector are isolated from people working with collections data and/or research questions, then it’s hard for them to appreciate the value of linked data for research projects.
The classical world has benefited from small communities of scholar-technologists – so they’re not only drinking their own champagne, they’re throwing parties. Smaller, more contained collections of sources and research questions helps create stronger connections and gives people a reason to link their sources. And as we’re learning throughout the day, community really helps motivate action.
(I know it’s normally called ‘eating your own dog food’ or ‘dogfooding’ but I’m vegetarian, so there.)
Linked open data isn’t built into collections management systems
Getting linked open data into collections management systems should mean that publishing linked data is an automatic part of sharing data online.
Chicken or the egg?
So it’s all a bit ‘chicken or the egg’ – will it stay that way? Until there’s a critical mass, probably. These conversations about linked open data in cultural heritage have been going around for years, but looking back at them also shows how far we’ve come.
Modern elections are data visualisation bonanzas, and the 2015 UK General Election is no exception.
Last night seven political leaders presented their views in a televised debate. This morning the papers are full of snap polls, focus groups, body language experts, and graphs based on public social media posts describing the results. Graphs like the one below summarise masses of text using a technique called ‘sentiment analysis’, a form of computational language processing.* After a twitter conversation with @benosteen and @MLBrook I thought it was worth posting about the inherent biases in the tools that create these visualisations. Ultimately, ‘sentiment analysis’ is someone’s opinion turned into code – so whose opinion are you seeing?
This is a great time to remember that sentiment analysis – mining text to see what people are talking about and how they feel about it – is based on algorithms and software libraries that were created and configured by people who’ve made a series of small, cumulative decisions that affect what we see. You can think of sentiment analysis as a sausage factory with the text of tweets as the mince going in one end, and pretty pictures as the product coming out the other end. A healthy democracy needs the list of secret ingredients added during processing, not least because this election prominently features spin rooms and party lines.
What are those ‘ingredients’? The software used for sentiment analysis is ‘trained’ on existing text, and the type of text used affects what the software assumes about the world. For example, software trained on business articles is great at recognising company names but does not do so well on content taken from museum catalogues (unless the inventor of an object went on to found a company and so entered the trained vocabulary). The algorithms used to process text change the output, as does the length of the phrase analysed. The results are riddled with assumptions about tone, intent, the demographics of the poster and more.
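A toy Python sketch of how the training context changes the result – both lexicons and all their word weights are invented for illustration:

```python
# Two invented sentiment lexicons: the same tweet scores differently
# depending on which training corpus produced the word weights.
business_lexicon = {"cuts": +1, "efficient": +1, "spin": 0, "shambles": -2}
everyday_lexicon = {"cuts": -2, "efficient": +1, "spin": -1, "shambles": -2}

def score(text, lexicon):
    """Sum the lexicon weights of the words in the text; words the
    lexicon has never seen contribute nothing."""
    return sum(lexicon.get(word, 0) for word in text.lower().split())

tweet = "more cuts and spin from the debate"
print(score(tweet, business_lexicon))  # 1  – 'cuts' reads as cost savings
print(score(tweet, everyday_lexicon))  # -3 – the same word reads as austerity
```

Real systems are far more sophisticated than a word-weight lookup, but the underlying issue is the same: whoever chose and weighted the training data has already decided part of what the pretty graph will say.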
In the case of an election, we’d also want to know when the text used for training was created, whether it looks at previous posts by the same person, and how long the software was running over the given texts. Where was the baseline of sentiment on various topics set? Who defines what ‘neutral’ looks like to an algorithm?
We should ask the same questions about visualisations and computational analysis that we’d ask about any document. The algorithmic ‘black box’ is a human construction, and just like every other text, software is written by people. Who’s paying for it? What sources did they use? If it’s an agency promoting their tools, do they mention the weaknesses and probable error rates or gloss over it? If it’s a political party (or a company owned by someone associated with a party), have they been scrupulous in weeding out bots? Do retweets count? Are some posters weighted more heavily? Which visualisations were discarded and how did various news outlets choose the visualisations they featured? Which parties are left out?
It matters because all software has biases and, as Brandwatch say, ‘social media will have a significant role in deciding the outcome of the general election’. And finally, as always, who’s not represented in the dataset?
One thing that might stand out when we look back at 2014 is the rise of interpolated content. We’ve become used to translating around auto-correct errors in texts and emails but we seem to be at a tipping point where software is going ahead and rewriting content rather than prompting you to notice and edit things yourself.
iOS doesn’t just highlight or fix typos, it changes the words you’ve typed. To take one example, iOS users might use ‘ill’ more than they use ‘ilk’, but if I type ‘ilk’ I’m not happy when it’s replaced by an algorithmically-determined ‘ill’. As a side note, understanding the effect of auto-correct on written messages will be a challenge for future historians (much as it is for us sometimes now).
And it’s not only text. In 2014, Adobe previewed GapStop, ‘a new video technology that eases transitions and removes pauses from video automatically’. It’s not just editing out pauses, it’s creating filler images from existing images to bridge the gaps so the image doesn’t jump between cuts. It makes it a lot harder to tell when someone’s words have been edited to say something different to what they actually said – again, editing audio and video isn’t new, but making it so easy to remove the artefacts that previously provided clues to the edits is.
Photoshop has long let you edit the contrast and tone in images, but now its Content-Aware Move, Fill and Patch tools can seamlessly add, move or remove content from images, making it easy to create ‘new’ historical moments. The images on extrapolated-art.com, which uses ‘[n]ew techniques in machine learning and image processing […] to extrapolate the scene of a painting to see what the full scenery might have looked like’, show the same techniques applied to classic paintings.
But photos have been manipulated since they were first used, so what’s new? As one Google user reported in It’s Official: AIs are now re-writing history, ‘Google’s algorithms took the two similar photos and created a moment in history that never existed, one where my wife and I smiled our best (or what the algorithm determined was our best) at the exact same microsecond, in a restaurant in Normandy.’ The important difference here is that he did not create this new image himself: Google’s scripts did, without asking or specifically notifying him. In twenty years’ time, this fake image may become part of his ‘memory’ of the day. Automatically generated content like this also takes the question of intent entirely out of the process of determining ‘real’ from interpolated content. And if software starts retrospectively ‘correcting’ images, what does that mean for our personal digital archives, for collecting institutions and for future historians?
Interventions between the act of taking a photo and posting it on social media might be one of the trends of 2015. Facebook are about to start ‘auto-enhancing’ your photos, and apparently, Facebook Wants To Stop You From Uploading Drunk Pictures Of Yourself. Apparently this is to save your mum and boss seeing them; the alternative path of building a social network that doesn’t show everything you do to your mum and boss was lost long ago. Would the world be a better place if Facebook or Twitter had a ‘this looks like an ill-formed rant, are you sure you want to post it?’ function?
So 2014 seems to have brought the removal of human agency from the process of enhancing, and even creating, text and images. Algorithms writing history? Where do we go from here? How will we deal with the increase of interpolated content when looking back at this time? I’d love to hear your thoughts.
Tom Morris gave a lightning talk on ‘How to use Semantic Web data in your hack’ (aka SPARQL and semantic web stuff).
He’s since posted his links and queries – excellent links to endpoints you can test queries in.
The semantic web is often thought of as a long-promised magical elixir; he’s here to say it can be used now, showing examples of queries that can be run against semantic web services. He’ll demonstrate two different online datasets and one database that can be installed on your own machine.
First – dbpedia – scraped lots of Wikipedia, put it into a database. dbpedia isn’t like your average database; you can’t draw a UML diagram of Wikipedia. It’s done in RDF and Linked Data. Can be queried in a language that looks like SQL but isn’t: SPARQL, a W3C standard (they’re currently working on SPARQL 2).
Go to dbpedia.org/sparql – submit the query as a POST. [Really nice – I have a thing about APIs and platforms needing a really easy way to get you to ‘hello world’, and this does it pretty well.]
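As a rough sketch of what that POST body looks like (in Python; the query itself and the request parameters are my assumptions for illustration, not examples from the talk):

```python
import urllib.parse

# An illustrative SPARQL query, not one from the talk:
# fetch a few things and their labels.
query = """
SELECT ?thingy ?wotsit WHERE {
  ?thingy rdfs:label ?wotsit .
} LIMIT 5
"""

# Encode the form body you'd POST to http://dbpedia.org/sparql,
# asking for JSON results rather than the default XML.
body = urllib.parse.urlencode({
    "query": query,
    "format": "application/sparql-results+json",
})
print(body[:60])
```

You could then send `body` with `urllib.request.urlopen("http://dbpedia.org/sparql", data=body.encode())` – I haven’t hit the live endpoint here.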
[Line by line comments on the syntax of the queries might be useful, though they’re pretty readable as it is.]
‘select thingy, wotsit where [the slightly more complicated stuff]’
Can get back results in XML, also HTML, ‘spreadsheet’, JSON. Ugly but readable. Typed.
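To make the JSON flavour concrete, here’s a hedged sketch (the sample data is invented; the variable names just echo the ‘thingy, wotsit’ pseudo-query above) of the standard SPARQL JSON results format and how you might flatten it in Python:

```python
import json

# A made-up response in the SPARQL query results JSON format:
# 'head' lists the variables, 'bindings' holds one dict per result row,
# with each cell carrying a type ('uri', 'literal', ...) and a value.
sample = json.loads("""
{
  "head": {"vars": ["thingy", "wotsit"]},
  "results": {"bindings": [
    {"thingy": {"type": "uri", "value": "http://dbpedia.org/resource/London"},
     "wotsit": {"type": "literal", "value": "London"}}
  ]}
}
""")

# Flatten each binding down to a plain {variable: value} dict.
rows = [{var: cell["value"] for var, cell in binding.items()}
        for binding in sample["results"]["bindings"]]
print(rows)
```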
[Trying a query challenge set by others could be fun way to get started learning it.]
One problem – fictional places are in Wikipedia e.g. Liberty City in Grand Theft Auto.
Libris – how library websites should be
[I never used to appreciate how much most library websites suck until I started back at uni and had to use one for more than one query every few years]
Has a query interface through SPARQL
Comment from the audience: the BBC now have a SPARQL endpoint [as of the day before? Go BBC guy!].
Playing with Mulgara, an open source Java triple store. [Mulgara looks like a kinda faceted search/browse thing] Has its own query language called TQL which can do more interesting things than SPARQL. Why use it? Schemaless data storage. Is to SQL what dynamic typing is to static typing. [did he mean ‘is to SPARQL’?]
Question from the audience: how do you discover what you can query against?
Answer: the dbpedia website should list the concepts they have in there. Also some documentation of categories you can look at. [Examples and documentation are so damn important for the uptake of your API/web service.]
Coming soon [?]: SPARUL (an update language) and SPARQL 2 (new features).
[These are more (very) rough notes from the weekend’s Open Hack London event – please let me know of clarifications, questions, links or comments. My other notes from the event are tagged openhacklondon.
Quick plug: if you’re a developer interested in using cultural heritage (museums, libraries, archives, galleries, archaeology, history, science, whatever) data – a bunch of cultural heritage geeks would like to know what’s useful for you (more background here). You can comment on the #chAPI wiki, or tweet @miaridge (or @mia_out). Or if you work for a company that works with cultural heritage organisations, you can help us work better with you for better results for our users.]
There were other lightning talks on Pachube (pronounced ‘patchbay’, about trying to build the internet of things, making an API for gadgets because e.g. connecting hardware to the web is hard for small makers) and Homera (an open source 3d game engine).