The rise of interpolated content?

One thing that might stand out when we look back at 2014 is the rise of interpolated content. We've become used to reading around auto-correct errors in texts and emails, but we seem to be at a tipping point where software goes ahead and rewrites content rather than prompting you to notice and edit it yourself.

iOS doesn't just highlight or fix typos; it changes the words you've typed. To take one example, iOS users might type 'ill' more often than 'ilk', but if I type 'ilk' I'm not happy when it's replaced by an algorithmically determined 'ill'. As a side note, understanding the effect of auto-correct on written messages will be a challenge for future historians (much as it is for us sometimes now).
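Apple hasn't published how the iOS corrector works, but the basic mechanism is easy to sketch: a corrector that ranks candidate words by how often people use them will happily 'fix' a rare word into a common one, even when the rare word was deliberate. A toy illustration in Python (the word list and frequencies here are invented, not Apple's):

```python
# Toy frequency-weighted auto-correct. Among dictionary words within one
# edit of the input, pick the most frequent, even if the input itself is
# a valid word. Frequencies are invented for illustration.
WORD_FREQ = {"ill": 120_000, "ilk": 900, "all": 500_000}

def edits1(word):
    """All strings one edit (delete, transpose, replace, insert) away."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {a + b[1:] for a, b in splits if b}
    transposes = {a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1}
    replaces = {a + c + b[1:] for a, b in splits if b for c in letters}
    inserts = {a + c + b for a, b in splits for c in letters}
    return deletes | transposes | replaces | inserts

def autocorrect(word):
    candidates = {w for w in edits1(word) | {word} if w in WORD_FREQ}
    return max(candidates, key=WORD_FREQ.get) if candidates else word

print(autocorrect("ilk"))  # prints 'ill': the rarer word is 'corrected' away
```

A real keyboard also weighs context and your own typing history, but the frequency bias that swallows 'ilk' here is the same kind of silent rewriting.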

And it's not only text. In 2014, Adobe previewed GapStop, 'a new video technology that eases transitions and removes pauses from video automatically'. It's not just editing out pauses: it's creating filler images from existing images to bridge the gaps so the image doesn't jump between cuts. That makes it a lot harder to tell when someone's words have been edited to say something different to what they actually said. Again, editing audio and video isn't new, but making it so easy to remove the artefacts that previously provided clues to the edits is.
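Adobe hasn't detailed how GapStop builds those filler images. The simplest version of the idea is to generate intermediate frames between the last frame before a cut and the first frame after it; here's a naive cross-fade sketch in NumPy (real systems presumably warp pixels along estimated motion rather than just blending them):

```python
import numpy as np

def crossfade_fillers(frame_a, frame_b, n_fillers):
    """Generate n_fillers intermediate frames by linear pixel blending.

    frame_a, frame_b: uint8 arrays of shape (height, width, 3), the
    frames on either side of a cut. A naive stand-in for what GapStop
    does; production tools track motion instead of simply mixing.
    """
    a = frame_a.astype(np.float32)
    b = frame_b.astype(np.float32)
    fillers = []
    for i in range(1, n_fillers + 1):
        t = i / (n_fillers + 1)        # blend weight, strictly between 0 and 1
        blend = (1.0 - t) * a + t * b  # pixel-wise linear interpolation
        fillers.append(blend.astype(np.uint8))
    return fillers
```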

Photoshop has long let you edit the contrast and tone of images, but its Content-Aware Move, Fill and Patch tools can now seamlessly add, move or remove content from images, making it easy to create 'new' historical moments. The images on extrapolated-art.com, which uses '[n]ew techniques in machine learning and image processing […] to extrapolate the scene of a painting to see what the full scenery might have looked like', show the same techniques applied to classic paintings.
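Photoshop's tools are proprietary, but the underlying technique, inpainting, is available in open-source libraries. A rough sketch with OpenCV, using Telea's diffusion-based method (cruder than Photoshop's patch-based synthesis, and the file names below are placeholders):

```python
import cv2

# 'photo.jpg' and 'mask.png' are placeholder file names. White (255)
# pixels in the greyscale mask mark the region to remove.
img = cv2.imread("photo.jpg")
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)

# Fill the masked region from the surrounding pixels. The 'removed'
# object is replaced by plausible invented content: interpolation again.
result = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("edited.jpg", result)
```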

But photos have been manipulated since the medium was invented, so what's new? As one Google user reported in 'It’s Official: AIs are now re-writing history', 'Google’s algorithms took the two similar photos and created a moment in history that never existed, one where my wife and I smiled our best (or what the algorithm determined was our best) at the exact same microsecond, in a restaurant in Normandy.' The important difference here is that he did not create this new image himself: Google's scripts did, without asking or specifically notifying him. In twenty years' time, this fake image may become part of his 'memory' of the day. Automatically generated content like this also takes the question of intent entirely out of the process of distinguishing 'real' from interpolated content. And if software starts retrospectively 'correcting' images, what does that mean for our personal digital archives, for collecting institutions and for future historians?

Interventions between the act of taking a photo and posting it on social media might be one of the trends of 2015. Facebook are about to start 'auto-enhancing' your photos, and apparently Facebook Wants To Stop You From Uploading Drunk Pictures Of Yourself. This is supposedly to save your mum and boss from seeing them; the alternative path of building a social network that doesn't show everything you do to your mum and boss was lost long ago. Would the world be a better place if Facebook or Twitter had a 'this looks like an ill-formed rant, are you sure you want to post it?' function?

So 2014 seems to have brought the removal of human agency from the process of enhancing, and even creating, text and images. Algorithms writing history? Where do we go from here? How will we deal with the increase in interpolated content when looking back at this time? I'd love to hear your thoughts.

Comments on “The rise of interpolated content?”

  1. I hadn't thought of digital interpolation in the wider context of digital history until reading your post. I am pushing on the limits of interpolation and extrapolation with my world history model (http://www.runningreality.org). The model engine interpolates historical data so it can show any date in history, with data tagged so the engine knows the fidelity of what it is computing and displaying (http://www.runningreality.org/reference/worlds/timelines.php). While the web version isn't powerful enough to do much interpolation, the app version pushes things further: it interpolates city population values, then generates inferred street maps and buildings (and eventually people on those streets). I would love your thoughts on how best to convey to the user what the engine knows versus what it infers.

  2. Hi Garth,

    thanks for commenting! Visualising uncertainty still seems to be one of the trickier design issues in visualisation, whether in archaeological illustrations that fill in gaps in what's known about buildings or in fuzzy dates in museum collections. There's a 'truthiness' about visualisations that makes them seem more authoritative than they sometimes should. And to make things more difficult, different disciplines have different assumptions about 'true' data and different conventions for expressing uncertainty. You could use texture or variations in colour hue or saturation to express certainty (something like the sketch below), but explaining that so it's immediately clear to viewers could be tricky.

    I'd be curious to hear about your experiments as you explore the best way to designate interpolated data!

    Cheers, Mia
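    P.S. Here's the kind of thing I mean: a quick matplotlib sketch (all the numbers are invented) in which recorded values are drawn solid and interpolated values are drawn faint, so the eye reads them as less certain.

    ```python
    import matplotlib.pyplot as plt

    # Invented example: city population, with census values treated as
    # 'known' and the in-between years filled in by linear interpolation.
    known_years = [1800, 1850, 1900, 1950]
    known_pop = [24_000, 31_000, 52_000, 61_000]
    interp_years = [1825, 1875, 1925]
    interp_pop = [27_500, 41_500, 56_500]  # midpoints of neighbouring censuses

    # Full saturation for recorded data; low alpha for inferred data.
    plt.plot(known_years, known_pop, "o-", color="tab:blue", label="recorded")
    plt.plot(interp_years, interp_pop, "o", color="tab:blue", alpha=0.3,
             label="interpolated")
    plt.xlabel("year")
    plt.ylabel("population")
    plt.legend()
    plt.show()
    ```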

  3. The New York Times' Lens blog has started 'Debating the Rules and Ethics of Digital Photojournalism' after 20% of photos submitted to the World Press Photo competition were disqualified for 'excessive — and sometimes blatant — post-processing'. It's fascinating to read the range of opinions, particularly as they relate to the ease with which anyone can manipulate photos and to the limits of acceptable processing:

    …to blatantly add, move around or remove elements of a picture concerns us all, leaving many in the jury to feel we were being cheated, that they were being lied to.

    …since the advent of digital files — of RAW and the ease of Photoshop — the rules have blurred […] So is the truth of our images now more about intent when shooting?

    We are now thinking about setting up a series of video tutorials that show participating photographers what kinds of manipulation are not allowed. They will make clear that manipulation — the addition or removal of significant content, other than sensor anomalies — is not allowed, regardless of the technical process through which that addition or removal is achieved.

    All images are processed, and levels of processing are aesthetic judgments and do not by themselves violate contest rules. The only point at which processing becomes manipulation is when the toning is so great — usually by transforming significant parts of an image to opaque black or white — that it obscures substantial detail.
