Nonhuman Publishing
Friday 20 April 2018
Lectures
So in this talk I’m going to attempt to draw a clear map of nonhuman publishing as it already exists today (because it certainly exists already!) and how we might think about the subject going into the future. Through four or five examples I’m hoping to show that the human no longer holds the exclusive rights to ‘publishing’ as a practice... both in the narrow sense of books, articles, essays, etc... and in the broader sense that has been developed and hinted at in more recent conversations around social media, contemporary performance, post-internet art, and so on.
Therefore, with that in mind, I’d like to start with a quote from a prolific but little known editor...
About me: I am written using the PyWikiBot (core) framework. Most of my work centers on moving and deleting categories and updating listified pages of categories. I am an admin bot, so don't make me get all SkyNet on you.
I currently have around 5.1 million edits!
Here's a list of listified categories I update regularly (or as on-wiki changes necessitate):
— Speedy deletion candidates — Current proposed deletions — Old proposed deletions — Requests for help — Requests for unblocking (and) — Categories up for deletion
•••
This is an entry taken from the profile page of the Wikipedia user and contributor Cydebot. Interestingly, as we can see, on the Wikipedia platform there is nothing much built into the userpages of these bots that would indicate they are nonhuman – especially when the bot’s developer chooses to write the bot’s biography in the first person (as is the case for Cydebot). Occasionally a programmer will add a ‘big red button’ titled with some variant of ‘EMERGENCY BOT SHUTOFF BUTTON’ to a bot’s userpage… but for Cydebot this most definitely is not an option. In fact, the onus is on you to behave or else suffer at the hands of Cyde turning, as they write, ‘all SkyNet’.1
Wikipedia bots were introduced onto the platform in 2002, just one year after the encyclopedia’s founding. Over the last decade and a half, bots have performed various general tasks, from deleting vandalism and updating links, to much more specific functions. For example, one of Cydebot’s co-workers, ClueBot II, showcases the versatility of these users – ClueBot II not only rifles through the platform for potential vandals but has also uploaded thousands of tiny articles about asteroids using data collected by NASA – a role which has won ClueBot a whole host of prestigious awards and made him one of the most prolific publishers on the platform.2
As sweet as they may appear, these bots are, however, prone to arguing. A paper released by researchers at the University of Oxford last year showed that “although Wikipedia bots are intended to support the encyclopedia, they often undo each other’s edits... [leading to] 'fights' [which can] continue for years” or, in some instances, even decades.3 To illustrate what this means it might be useful to turn to an example given by Wired magazine: here, one bot makes an edit to redirect the page or search query ‘Ricotta al forno’ to ‘Ricotta cheese’ where previously another bot had linked it simply to ‘Ricotta’. After this edit the initial bot reverts it back, at which point the second editor, preferring its own naming convention, makes the change again… and so on, ad infinitum. In fact, the two bots involved in this example – Scepbot and Russbot – have, across 1800 different articles, collectively reverted one another’s edits just under 2000 times over a three-year period.4
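The mechanism behind such a fight is almost embarrassingly simple. The following is a minimal sketch (not Scepbot or Russbot’s actual code – the bot names and rules here are illustrative) of how two bots, each applying a fixed but conflicting convention, will revert each other forever:

```python
# Minimal sketch of a bot 'edit war': two bots, each with a fixed
# preferred redirect target, revert one another indefinitely.
# Names and rules are illustrative, not the real bots' code.

class RedirectBot:
    def __init__(self, name, preferred_target):
        self.name = name
        self.preferred_target = preferred_target

    def patrol(self, page):
        """If the redirect differs from this bot's convention, 'fix' it."""
        if page["redirect"] != self.preferred_target:
            page["redirect"] = self.preferred_target
            page["reverts"] += 1
            return True  # made an edit
        return False

page = {"title": "Ricotta al forno", "redirect": "Ricotta", "reverts": 0}
bots = [RedirectBot("BotA", "Ricotta cheese"), RedirectBot("BotB", "Ricotta")]

# Each patrol cycle, both bots check the page. Neither rule ever changes,
# so every cycle produces exactly two reverts -- one per bot, ad infinitum.
for _ in range(1000):
    for bot in bots:
        bot.patrol(page)

print(page["reverts"])  # -> 2000
```

Neither bot is malfunctioning; each is doing exactly what it was written to do, which is precisely why the fight never ends without human intervention.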
So, in the context of what we are discussing here today – publishing and distribution – these bot-to-bot editorial processes, or what we might call ‘nonhuman’ arguments, start to create a few problems.
While much of what we speak of in the ‘publishing industry’ concerns strictly anthropogenic relations, intra- and inter-social and cultural interactions – or in other words, humans ‘making things public’ for other humans – it would appear that nonhuman actors have begun to make some quite complex forays into this same territory...
...ones that start to complicate our received notions of what it means to be, or to function as, an editor or a publisher (and also what publishing is and does in the first place). To state it simply, then, we might say: smart wikibots now play a very significant part in the editing process of a platform that is read by millions of humans every single day – and the implications of this need to be understood now, not, as has been the case for some time, imagined away into a Netflix series or some silicon-inspired science fiction.
However, as dramatic as this sounds, it would be very easy to overstate the importance of these algorithms. After all, these are quite simply scripts, and as much as I or their developers might like to anthropomorphize them – their influence on the world, or what we might call their ‘reach’, remains pretty limited. There are, however, operations which are much wider reaching in their scope.
Wordsmith, for example, is a piece of software generated by Automated Insights (AI). In their own words, Wordsmith is “a natural language generation (NLG) platform that turns data into insightful narratives.”5 It takes data from a spreadsheet and outputs an article, a story, a report, whatever you want... and, beyond that, through some sophisticated ‘configuration’ Automated Insights can even stylise the voice of the Wordsmith, making it sound like a basketball team, a vineyard, or in one slightly bizarre case, a bodybuilder.
One of Automated Insights’ more prestigious clients, however, and the one most relevant for our conversation today, is the Associated Press (AP). In the AP’s case, they needed a way of automating their quarterly earnings reports (QERs) for US public companies… before working with AI, the AP could only produce 300 stories per quarter, each one meticulously handwritten and edited by a financial journalist. However, after implementing a customised version of Wordsmith in 2014, the AP were able to output 4,400 quarterly reports – in other words, the algorithm was able to produce 12 times as many stories as the AP’s previous, strictly human efforts.6
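To demystify this a little: at its core, this kind of natural language generation maps fields in a data row onto slots in a narrative template, with some conditional logic supplying the ‘voice’. The sketch below is a toy illustration in that spirit – it is not Wordsmith’s actual API, and the company name and figures are invented:

```python
# Toy illustration of template-based natural language generation,
# in the spirit of (but not the actual API of) Wordsmith:
# a row of earnings data in, a short report sentence out.

def earnings_report(row):
    # Pick a verb from the data -- the simplest form of 'stylised' output.
    change = row["eps"] - row["eps_prior"]
    direction = "rose" if change > 0 else "fell" if change < 0 else "held steady"
    return (
        f"{row['company']} reported quarterly earnings of "
        f"${row['eps']:.2f} per share, which {direction} from "
        f"${row['eps_prior']:.2f} a year earlier, on revenue of "
        f"${row['revenue_m']:.0f} million."
    )

# One spreadsheet row becomes one publishable sentence;
# 4,400 rows become 4,400 reports at essentially zero marginal cost.
row = {"company": "Acme Corp", "eps": 1.32, "eps_prior": 1.10, "revenue_m": 842}
print(earnings_report(row))
```

The real systems layer far more templates, synonyms and conditionals on top of this, but the economics are visible even here: once the template exists, output scales with the data, not with the number of journalists.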
In 2016, two years after this initial implementation, the AP insights team published a study which found that automating (and massively increasing) the output of QERs had a material effect on the amount of trading that took place within the US financial market.7 The researchers suggested “that automated coverage increases firms’ trading and liquidity around their earnings announcements.” In non-financial jargon: smaller companies that were previously left out of the AP’s earnings reports managed to make the news, and therefore received some interest they might otherwise have missed. So, here, beyond the wikibots’ rapid redirections, we can see the very material influence that these bot-human publishing relations have.
However, things get even more interesting if we begin to speculate even a little.
Staying within this financial context for a moment, it might be wise to turn our attention to the ‘flash crash’ that took place in 2016, a few months after the Brexit vote. Overnight the British pound dropped by 6% and, at the time, no one really knew why or how to fix it. Shortly after, however, the BBC published a compelling story linking the mini-freefall to algorithmic traders... or ‘algos’, as the human traders call them. Kathleen Brooks, a research director at the financial broker City Index, wrote:
“These days some algos trade on the back of news sites, and even what is trending on social media sites such as Twitter … Apparently it was a rogue algorithm that triggered [this] selloff after it picked up comments made by the French President Francois Hollande, who said if Theresa May and co. want hard Brexit, they will get hard Brexit.”8
So an ‘algo’ somewhere got the idea from a headline or a tweet – or a tweet of a headline – that a hard Brexit was on the way, and started selling off sterling as fast as it could. This in turn led all of the algo’s associates to reconsider their own sterling positions, and the value of the pound suddenly tanked. Considering our previous revelations about the Associated Press, then, it would not be stretching our imagination too far to picture a scenario in which an algorithm had written this (somewhat sensationalist) story in the first place. Here we can wind our way back through a common thread, and return to the story and fate of our wikibots… perhaps it would be wise to reconsider who their intended audience is in the first place, and what the effects of such infighting, such relationships, might actually and eventually be.
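The shape of such a news-reading trading rule can be sketched in a few lines. This is a deliberately crude, hypothetical illustration – real trading algorithms are vastly more sophisticated, and the phrases and numbers here are invented – but it shows the mechanism the quote describes: keyword match in a headline, sell order out:

```python
# Hedged sketch of a news-reading trading rule of the kind described above.
# Real 'algos' are far more sophisticated; phrases and figures are invented.

BEARISH_GBP_PHRASES = ("hard brexit", "sterling selloff")

def react_to_headline(headline, position_gbp):
    """Return a (possibly reduced) GBP position after reading one headline."""
    text = headline.lower()
    if any(phrase in text for phrase in BEARISH_GBP_PHRASES):
        return position_gbp * 0.5  # dump half the position
    return position_gbp

position = 1_000_000.0
position = react_to_headline(
    "Hollande: if May wants hard Brexit, she will get hard Brexit", position
)
print(position)  # -> 500000.0
```

And, of course, every other algo watching the same feed reacts to both the headline and the resulting price move, which is how a single keyword match can cascade into a flash crash.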
Here we can see a much stricter form – even if it remains, to a certain degree, speculative – of bot-to-bot relations. These nonhuman actors are entangled not only in messy relationships with us (as we see not only here but in more general discussions around Facebook and elections worldwide) but also with one another. Even in the wikibot case, while the intended audience of the finished article is eventually a human reader, when we consider the issue at the level of the individual, specific disagreements these bots engage in, the person, or agency, they are trying to satisfy is not a human but, in fact, the other, adversarial bot. To be slightly provocative about it, we could say that the human is left out of the picture entirely... in this context at least, human input is not really that important or required. Similarly, we can extrapolate this out to our speculative example of an algorithmic reporter generating stories for algorithmic traders – all of which is technology that currently exists, and in which the human plays an increasingly minor part.
However, in our diagram there is one obvious connection missing – human-bot relations, humans publishing for a nonhuman audience. Here I can produce only one very short connection by way of a brief example (but I imagine it’s only going to get more important as time goes on)… and this takes the form of the content that we produce on social media platforms. While we often think – and Mark Zuckerberg literally testifies to the fact – that the revenue model of these sharing platforms is predominantly provided via advertising and the selling of data, a small (but increasingly significant) part of their current revenue streams comes from the development and deployment of artificial intelligence. One way in which AIs are being trained right now (through machine learning, for example) is against the huge datasets published by humans online. The pictures, texts, videos, audio files… pretty much everything that we, as humans, post online can be, and is being, read by bots, algorithms, or what we might call nonhuman actors, so that they might educate (or maybe even entertain!) themselves, much in the same way as we might read each other’s publications.
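To make this ‘reading’ concrete in the smallest possible way: at its most basic, machine learning from published text is statistics over what humans have posted. The sketch below – with invented posts, and about as far from a modern neural network as one can get – nonetheless shows the relation in question: human publications in, machine ‘knowledge’ out:

```python
# Minimal sketch of machines 'reading' human publications as training data.
# The posts are invented; 'training' here is just word-frequency counting,
# the simplest possible statistical model of a corpus.

from collections import Counter

posts = [
    "just published my new essay on nonhuman publishing",
    "reading bots reading humans reading bots",
    "new photo from the symposium today",
]

# Training = counting: the model 'learns' which words humans use most.
model = Counter(word for post in posts for word in post.split())

# The model can now make a (very crude) statistical claim about its readers.
print(model.most_common(3))
```

Scale the corpus up from three posts to billions, and the counting up from word frequencies to deep networks, and this is the sense in which everything we publish online has a nonhuman readership.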
Here we can see what might – for some – seem like a somewhat unsettling fourth connection. But as disturbing as it may be… I think – in this sense of distribution and publishing – this now-complete diagrammatic understanding encourages us (or in some ways shocks us) into taking a unique position – a non-anthropocentric (what some might call non-correlationist) position. It also encourages us to understand this phenomenon of nonhuman publishing as contemporary… while very much incomplete, these stories are already unfolding, already becoming real – this tangled web of relations is, in many ways, already in place.
So, now we have our map of the situation – one which displaces the human from the centre of public or published relations – I’d like to use it to plot a proposition (or, perhaps, depending on the time, a couple of propositions) and finally a caveat…
Firstly, I’d like to reframe a fifth, slightly different connection, on this diagram. Not so much a publisher-audience, or writer-reader, relationship as a collaborative editor-editor one…
At present we speak about technology, algorithms and bots as if they are ‘tools’ in need of reworking, reprogramming or ‘fixing’ – especially when they produce unexpected or problematic results, as is the case with issues around data privacy, biased algorithms, and the rest… Indeed, while there is a usefulness in this way of speaking – trying to work out how we can find leverage in a situation that is problematic or troublesome – perhaps we would do better to talk about working with these tools…
To a certain degree this is already the case with Siri, Alexa and the other digital assistants – albeit we don’t think of it this way. Perhaps it was truer for Clippy than it is for Siri or Alexa: working with Clippy felt much more like a collaboration (or true assistance) than creepy data-mining or domestic snooping. This might be an interesting and useful way for any UX designers in the room to think about the ‘design’ of present and future ‘digital assistants’...
Secondly, and this is a related point… in the acknowledgements of her most recent book ‘Staying with the Trouble’, Donna Haraway writes that this is a “book [...] full of human and nonhuman critters to think and feel with” – perhaps this is the best articulation of what I am trying to say.
This formulation – human and nonhuman ‘critters’ – might also be more useful than ‘bots’, and serves to create a new terrain where our understanding can take hold. It needs its own vocabulary.
Lastly, the caveat... I think it’s important to state that – while this nonhuman publishing is all very new, flashy and exciting – it would be all too easy to overlook an inherent problem in considering the ‘nonhuman’ at all… this is a point made much more articulately than I could by Helen Hester, in a piece she wrote last year entitled ‘Towards a Theory of Thing Women’. In the section of the article I am about to quote she is examining Ian Bogost’s similar project to understand the relations of nonhuman objects (the question of what, in the internet of things, it is like to be a thing); in this case the question comes from a wider philosophical programme of Object Oriented Ontology, which some of you may be familiar with – so...
“When Bogost asks, with tender curiosity and a genuine will to understand, what it is that a microprocessor or a ribbon cable experiences, it is hard not to instinctively bristle on behalf of all the abjected human things who are not subject to the same curiosity – whose inner lives most philosophical and artistic discourses have no time to ponder. The project of object-oriented ontology insists that that “Nothing is overlooked, … nothing given priority”. As such, it might be best encapsulated by the slogan “All Things Matter.” As that slogan implies, it actually demands a fair amount of social immunity or entitlement to prioritise nothing at all over anything else, as well as a certain lack of concern with the treatment, affairs, and survival of animate beings, including other humans.” 9
While it might, for some, be important, interesting or exciting to acknowledge the importance of non-human critters, objects or actors (especially in the sense of bots, microprocessors and the rest) – it is also important to consider all the very real, human actors whose ‘published relations’ go unacknowledged on a daily basis... and, for that matter, have gone unacknowledged throughout history. This, for me, is one of the trickiest issues concerning this relatively new field of theory and understanding and perhaps it is something we can talk about in the Q&A.
Adapted from a talk given at the DISTRIBUTED symposium organised by David Blamey (Open Editions), Joshua Trees (Books From The Future) & the Royal College of Art
Footnotes
- Incidentally, it is worth adding that Cydebot is the most active bot – and by that measure user – on Wikipedia as a whole. ↩
- http://journals.plos.org/plosone/article/comments?id=10.1371/journal.pone.0171774 ↩
- https://www.wired.com/2017/03/internet-bots-fight-theyre-human/ ↩
- https://automatedinsights.com/case-studies/associated-press ↩
- https://insights.ap.org/industry-trends/study-news-automation-by-ap-increases-trading-in-financial-markets ↩
- http://www.litfmag.com/issue-4/towards-a-theory-of-thing-women/ ↩