How NTB utilised news automation in its election reporting

In May, the Immersive Automation project met up with Magnus Aabech from the Norwegian news agency NTB. Aabech is involved in NTB’s election bot project and talked to us about his experiences with news automation so far. After the parliamentary elections in Norway, he also shared some of NTB’s experiences with automated election reporting.

Magnus Aabech works as a News Developer at the Norwegian news agency NTB.

The Norwegian news agency NTB has previously focused on automating sports news, mainly football, but during the parliamentary elections in September 2017 the agency also used a simple bot to produce partially automated election news. A dedicated Slack channel alerted reporters whenever the results changed, and the bot drafted a news text about each change. The responsible reporter would then decide whether the change was worth publishing.
“Just like the Immersive Automation project, we also work with text templates. However, our system is not really using NLG,” Aabech explains.

“We have received a lot of attention within the industry, which is always a plus”

The NTB focused on three main templates, which were then tailored. All in all, the bot consisted of almost 7 000 lines of JavaScript code.
“This could surely have been done in a shorter format, but my colleague and I are not that experienced in writing code,” Aabech says and laughs a little.
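NTB’s bot itself was written in JavaScript and its code is not public. Purely as an illustration of the workflow described above, the Python sketch below shows how a change in the results could be detected, turned into a draft with a template, and posted to Slack for a reporter to review; the result format, template texts, and webhook URL are all invented.

```python
import requests  # used only for the illustrative Slack webhook call

# Hypothetical webhook URL; NTB's real Slack integration is not public.
SLACK_WEBHOOK = "https://hooks.slack.com/services/EXAMPLE"

# Invented stand-in for NTB's templates.
TEMPLATES = {
    "lead_change": "{party} has taken the lead in {district} with {share:.1f} per cent of the votes.",
}


def detect_changes(previous, current):
    """Compare two snapshots of the results and yield (template_key, values) pairs."""
    for district, result in current.items():
        old = previous.get(district, {})
        if result.get("leader") != old.get("leader"):
            yield "lead_change", {
                "party": result["leader"],
                "district": district,
                "share": result["leader_share"],
            }


def alert_reporters(previous, current):
    """Draft a text for each change and send it to Slack for a human to approve."""
    for key, values in detect_changes(previous, current):
        draft = TEMPLATES[key].format(**values)
        requests.post(SLACK_WEBHOOK, json={"text": f"Suggested story: {draft}"})


if __name__ == "__main__":
    before = {"Oslo": {"leader": "Ap", "leader_share": 31.2}}
    after = {"Oslo": {"leader": "H", "leader_share": 29.8}}
    alert_reporters(before, after)
```

The crucial design point is the last step: the bot only suggests a text, and the responsible reporter still decides what gets published.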

The election bot was created in-house, and the work involved three or four employees. One of the reporters worked full time on the bot for two months, while the others were involved part-time in addition to their regular tasks.

Aabech describes the experience as a positive one.
“The election bot provided us with plenty of valuable information, and it also illustrated to a lot of the NTB’s employees what can actually be done by news automation. Although the bot was a success in a lot of ways, we still experienced some technical difficulties in the beginning. Fortunately, we managed to solve them.”

For the NTB, the most important thing was to develop the skills of its employees and to prepare for the Norwegian municipal elections in 2019.
“And, of course, we were happy about the quality of the texts and how well the bot worked. In addition, we have received a lot of attention within the industry, which is always a plus.”

“The election bot provided us with plenty of valuable information, and it also illustrated to a lot of the NTB’s employees what can actually be done by news automation.”

Developing systems for news automation is expensive, and Aabech was therefore interested in hearing more about how our project has tried to make the Valtteri election bot a reusable system.
“Projects like these can be expensive. Automation is also something that I work with on top of my regular tasks, so the progress is quite slow.”

We compared the two structures. What makes Valtteri special is that instead of building one big black box, the IA project has created a chain of smaller black boxes; each individual box can be altered without starting from scratch and building a completely new system.
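The post does not spell out Valtteri’s internal interfaces, so the sketch below is only a schematic illustration of the “chain of smaller boxes” idea: each stage is a small function with a clear input and output, and any single stage can be swapped without rebuilding the rest. The stage names and data format are assumptions.

```python
from typing import Callable, List

# Each stage is a small "black box": a function from one intermediate
# representation to the next. Any single stage can be replaced on its own.
Stage = Callable[[dict], dict]


def select_facts(state: dict) -> dict:
    """Keep only the result rows judged newsworthy (here: the biggest vote share)."""
    rows = state["results"]
    state["facts"] = [max(rows, key=lambda r: r["share"])]
    return state


def order_facts(state: dict) -> dict:
    """Decide the order in which the facts are mentioned."""
    state["facts"].sort(key=lambda r: r["share"], reverse=True)
    return state


def realise_text(state: dict) -> dict:
    """Turn the ordered facts into sentences with a simple template."""
    state["text"] = " ".join(
        f"{f['party']} received {f['share']:.1f}% of the votes in {f['area']}."
        for f in state["facts"]
    )
    return state


def run_pipeline(stages: List[Stage], state: dict) -> dict:
    for stage in stages:
        state = stage(state)
    return state


results = [{"party": "KOK", "share": 22.4, "area": "Espoo"},
           {"party": "SDP", "share": 18.9, "area": "Espoo"}]
print(run_pipeline([select_facts, order_facts, realise_text], {"results": results})["text"])
```

Replacing, say, realise_text with a more sophisticated NLG component would leave fact selection and ordering untouched, which is exactly the advantage discussed above.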

While the IA project aims at producing news that can be published directly for the audience to read, the NTB decided to proofread the automatically produced election texts before they went out on the NTB’s newswire.
“This was a type of pilot project, but in the future we will send out the automatically produced texts directly. That is also what we do with our football texts,” Aabech says.

EDIT:
In the original text we wrote that the NTB proofreads all of its automatically produced texts. This was only the case with the election reporting.

Machine learning can detect hate speech and violence

The spread of fake news and hateful content is one of the most debated topics right now. As machine learning techniques become more and more sophisticated, numerous fields have begun to utilise them. In her PhD, text and data analyst Myriam Munezero has studied machine learning models that can detect antisocial behaviours. In this blog post she explains the possibilities of natural language processing in violence prevention.

More than a billion people use Facebook daily, and the social media platform has become one of the most influential news businesses, with an incredible ability to mobilise people. Despite community standards and encouragement to tackle hateful content more efficiently, racist and hateful material still exists on the platform.
“The words we use, as well as our writing styles, can reveal information about our preferences, thoughts, emotions, and behaviours,” Myriam Munezero says.

Natural language processing techniques have been shown to be useful in identifying harmful behaviors, such as cyberbullying, harassment, extremism, and terrorism, in text

In her research conducted at the University of Eastern Finland, Munezero and her research team developed machine learning models that can detect antisocial behaviours, such as hate speech and indications of violence, in texts. Historically, most attempts to address antisocial behaviour have come from educational, social, and psychological points of view. This new study has, however, demonstrated the potential of using natural language processing techniques to develop state-of-the-art solutions to combat antisocial behaviour in written communication.
“Natural language processing techniques have been shown to be useful in identifying similar harmful behaviors, such as cyberbullying, harassment, extremism, and terrorism in text, all with varying levels of accuracy. However, little research addresses the broader antisocial behavior, which is characterized by covert and overt hostility and intentional aggression toward others,” Munezero explains.

Munezero and her fellow researchers have created solutions that can be integrated into web forums or social media websites to automatically or semi-automatically detect potential incidents of antisocial behaviour. The high accuracy of these solutions allows fast and reliable warnings and interventions to be made before possible acts of violence are committed. In many cases, people who have committed school shootings, for instance, have indicated their intentions online prior to acting. By detecting these indications, future acts of violence could be prevented.
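Munezero’s actual models and features are described in her dissertation and are not reproduced here. As a rough illustration of the kind of supervised text classification such work builds on, here is a minimal scikit-learn baseline with a tiny invented training set; a real system would use a large annotated corpus and far richer features.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny made-up training set; real work needs a properly annotated corpus.
texts = [
    "I will hurt you if you show up tomorrow",
    "You people disgust me and deserve what is coming",
    "Looking forward to seeing everyone at the match",
    "Thanks for the help, that was really kind of you",
]
labels = [1, 1, 0, 0]  # 1 = potentially antisocial, 0 = benign

# Word n-gram TF-IDF features plus a linear classifier: a common baseline.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# Flag new posts for human review rather than acting automatically.
for post in ["See you at practice tonight", "You deserve what is coming to you"]:
    score = model.predict_proba([post])[0][1]
    if score > 0.5:
        print(f"Flag for moderator review ({score:.2f}): {post}")
```

Note that the sketch only flags posts for a moderator, mirroring the semi-automatic setup described above.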

Text and data analyst Myriam Munezero finds the results of her research encouraging.

One of the great challenges in detecting antisocial behaviour is first defining what precisely counts as antisocial behaviour and then determining how to detect such phenomena. Thus, using an exploratory and interdisciplinary approach, Munezero’s study applied natural language processing techniques to identify, extract, and utilise the linguistic features, including emotional features, pertaining to antisocial behaviour.

The study investigated emotions and their role and presence in antisocial behaviour. Literature in the fields of psychology and cognitive science shows that emotions have a direct or indirect role in instigating antisocial behaviour. The study therefore created a novel resource for analysing emotions in written language. This resource further contributes to subfields of natural language processing, such as emotion and sentiment analysis.
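The emotion resource created in the study is not reproduced in this post. Assuming, purely for illustration, a simple word-to-emotion lexicon, the sketch below shows how emotion cues can be turned into features that a classifier could use alongside other linguistic features.

```python
from collections import Counter

# Hypothetical miniature emotion lexicon; the study's actual resource is far richer.
EMOTION_LEXICON = {
    "hate": "anger", "furious": "anger", "destroy": "anger",
    "afraid": "fear", "threat": "fear",
    "happy": "joy", "glad": "joy",
}


def emotion_features(text: str) -> dict:
    """Count lexicon hits per emotion; these counts can be fed to a classifier."""
    counts = Counter(
        EMOTION_LEXICON[token]
        for token in text.lower().split()
        if token in EMOTION_LEXICON
    )
    return dict(counts)


print(emotion_features("I hate this place and I will destroy it"))
# {'anger': 2}
```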

The study also created a novel corpus of antisocial behaviour texts, allowing for a deeper insight into and understanding of how antisocial behaviour is expressed in written language.
“Finding representative corpora to study harmful behaviours is usually difficult,” Munezero says.
As the results are encouraging, Munezero finds that further progress within this topic can be made with continued research on the relationships between natural language and societal concerns.

Myriam Munezero’s PhD was approved on April 12 at the University of Eastern Finland. She also appears in an article in the newspaper Karjalainen. Munezero currently works as a researcher at the faculty of Data Science at the University of Helsinki and is a member of the Immersive Automation team.

Nine questions about news automation

The other day we received some excellent questions from Peter Carson at Edinburgh Napier University. Carson is currently collecting empirical material for his dissertation, which focuses on the impact of artificial intelligence on journalism. We decided to publish our thoughts in this blog.

Could automated journalism lead to the deskilling of journalists and the loss of jobs?
Considering the current state of the art and what those systems are capable of today, we are not too concerned about this. All new technologies come with challenges and may require new skillsets, whilst other skills may become redundant. Automation may put increased pressure on the sense-making skills of journalists, which might lead to a higher level of specialisation for reporters and the material they produce. At the same time, if the media industry is willing to invest in depth, automation can free up resources for higher-quality stories. The role of journalists will definitely change, but the gaps in sophistication between human journalists and machines are still so wide that journalists will be needed for quite some time. In the optimistic scenario, journalists will focus on the specific areas and topics that machines cannot cover, while text generators take on the more mundane routine tasks.

If AI (artificial intelligence) becomes a dominating force in journalism production, is there a potential that journalists will only be essential in an editorial capacity?
Algorithms do not currently operate in a vacuum; they need creators and managers to function properly, so these positions will still be required. The timeframe for full automation of creative material is most likely longer than one would expect, and it will probably take years before we see newsrooms consisting only of editors. We suggest looking into Dörr (2015) and Van Dalen’s (2012) respective work for more thoughts on the possible changes to the professional role of journalists.

Are there dangers in having AI-algorithms curate news on social media feeds without some sort of overseeing regulatory body?
This depends on how capable the machine learning is. Algorithms are currently as biased as their creators, meaning that just as people make mistakes and false judgements, automation can make the same errors. The example of Microsoft’s Tay shows that we need to think the steps through carefully and learn from them as we go along. It is also important to keep in mind that social media algorithms learn from our behaviours, meaning that we influence their actions.

Do we need more agreement on the ethics of data collection, when mining data on unaware individuals, to use for advertising or newsgathering purposes?
Yes. We estimate that this will be a big topic for discussion within international bodies over the coming years. We think all media needs to be overseen by an ethics committee or a similar body, and computational journalism or algorithmic text generators are no different from traditional media outlets. However, we need to invest resources in this area, since the sheer volume and potential capacity of algorithmic production can be extremely difficult to monitor.

Are personalised news feeds curated by AI drawing people into “bubbles” of information that shield them from new or challenging views?
Whether this is actually happening, and whether it is even a new phenomenon, is still being debated. While we might be able to sense that we are in a bubble, we can actively try to change it by “teaching” our social media algorithms that we also want to see other kinds of content.

Photo: Alberto Ollo

With the advent of “fake news”, is AI the answer to upholding values of truth in journalism and preventing the spread of misinformation?
Fake news is nothing new; nonetheless, it is beneficial for our society that there is an ongoing debate on the impact of the information forming our opinions. Using algorithm-driven intelligence to locate and filter false information could definitely be beneficial. At the same time, this raises questions about how it would be governed and who would have the right to make those decisions.

Do we need to establish rules about transparency and accountability when articles are written by algorithms?
Yes, ethical rules and guidelines are always beneficial. At the same time, it is not clear whether a top-down approach is the best one, as the field evolves rapidly.

How do we prevent hidden biases in AI-generated news stories?
That is the million-dollar question, as we cannot even do that in traditional human-created news. Are humans even able to be unbiased? There are examples of algorithms actually helping us expose our inherent biases, as in the case of Google Image search predominantly showing images of male CEOs.

Do we need to ensure that new journalism students have a degree of code literacy?
Based on discussions and interviews with data scientists and data journalists, we think the best results are achieved when journalists collaborate with the people who develop the technological aspects of automation (programmers and software engineers), rather than journalists trying to be programmers or programmers trying to be journalists. What is essential, however, is that journalists adopt a computational way of thinking in their professional role, in order to better understand the possibilities and added value that algorithms and computation can bring to editorial offices and newsrooms. At the same time, literacy in the foundations of journalistic norms and values is also relevant for the people who work with the technological aspects of the media industry.

Further reading on computational thinking:
http://www.cs.cmu.edu/~wing/publications/Wing08a.pdf

Peter Carson’s questions were discussed and answered by project lead Carl-Gustav Lindén, a journalism and media researcher; journalist and PhD student Stefanie Sirén-Heikel, who focuses on journalistic verification, newsroom innovations, and media management; and research assistant and journalist Laura Klingberg.

Meet our first prototype – Valtteri the election bot

After several months of intensive work, the Immersive Automation team is now ready to present its first prototype, Valtteri the election bot. Just in time for the municipal elections in Finland, Valtteri writes short news pieces based on the election results. In this blog post, you can learn more about how and why Valtteri was created.

Valtteri the Election Bot is found at vaalibotti.fi.


As the Immersive Automation project studies the automation of editorial processes, it was necessary for us to create a prototype that could illustrate the difficulties of automation and guide us along our journey towards future news ecosystems. Data analyst Leo Leppänen specialises in language technology, and he is the brains behind Valtteri. Over the past couple of months, he has programmed and developed Valtteri with the assistance of the other researchers on the IA team.
“This is our first prototype and the point is to manually create a system which can illustrate where machine learning could be most useful and profitable,” he says.

Data analyst Leo Leppänen is the brains behind Valtteri the election bot.

Valtteri utilises data from the Finnish Ministry of Justice and combines the data with templates created by the research team.
“This probably sounds very simple and easy, but for a computer it involves some major challenges. The computer does not know what useful and interesting information is, and the amount of data is massive. The human brain possesses vast amounts of knowledge, whereas the computer knows nothing beyond what we have taught it,” he further explains.
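Valtteri’s real templates and the Ministry of Justice data schema are not published in this post; the field names and templates below are invented. The sketch only illustrates the basic idea of filling language-specific templates from a single result record.

```python
# Invented result record; the real Ministry of Justice feed uses a different schema.
result = {"municipality": "Tampere", "party": "SDP", "share": 21.3, "change": 2.1}

# One deliberately simplified template per output language (Finnish, Swedish, English).
TEMPLATES = {
    "fi": "{municipality}: {party} sai {share:.1f} prosenttia äänistä ({change:+.1f} prosenttiyksikköä).",
    "sv": "{municipality}: {party} fick {share:.1f} procent av rösterna ({change:+.1f} procentenheter).",
    "en": "{municipality}: {party} received {share:.1f} per cent of the votes ({change:+.1f} percentage points).",
}

for lang, template in TEMPLATES.items():
    # Real output would also need locale-aware number formatting (decimal commas)
    # and, in Finnish, case inflection of place names: the kind of detail that
    # makes "simple" template filling harder than it looks.
    print(lang, template.format(**result))
```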

After the municipal elections, the research team will gather feedback and analyse the user experiences.
“The next step will be to find all that essential knowledge humans have and transfer it into Valtteri. Our main challenge is that a computer is a slow learner and needs plenty of examples to learn from,” Leppänen says.
He also points out that this is an experiment and a first prototype, and thus a fairly simple system.

You can find Valtteri the news bot at vaalibotti.fi. The bot works in Finnish, Swedish, and English.

Until Monday April 10, when the latest election results become available, Valtteri practices newswriting with old data from the 2012 municipal elections.

Did you try Valtteri? We are curious to know what you think!
Tweet us @vaalibotti or send us a message info @ immersiveautomation.com

 

First training for journalists and visit by Nick Diakopoulos

On March 20 and 21, the Immersive Automation project arranged its first training for journalists. During two intensive days of lectures and discussions, the participants were given an introduction to computational thinking and its applications in a newsroom setting. Among the guest lecturers was assistant professor Nick Diakopoulos from the University of Maryland, a leading scholar in algorithmic accountability and social computing in the news.

Assistant professor Nick Diakopoulos from the University of Maryland believes computational thinkers will be more effective at exploiting the capabilities of automation.

Nick Diakopoulos defines computational thinking in journalism as a praxis that brings data, modelling, simulation, and programming into journalistic norms, goals, and epistemology.
“Essentially it’s about finding and telling news stories, with, by, or about algorithms,” he says.

He is very clear about the fact that computational thinking does not mean that we should think like computers.
“Instead it’s about thinking in a way so that we can use our computers in the best way possible to solve a problem.”
And why do we need computational thinking in news automation?
“Because computational thinkers will be more effective at exploiting the capabilities of automation.”

He compares an automated writing pipeline to the process of baking a cake: you have the algorithm and the parameters. The algorithm is like the recipe for the cake, and the parameters are the ingredients, which can be altered according to our wishes and needs.
“We have the basic recipe and if we for example want to make a vegan version of the cake, we just simply substitute a few of the ingredients.”
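To make the analogy concrete, here is a toy sketch (not Diakopoulos’s own example) in which the “recipe” is a fixed generation function and the “ingredients” are parameters that change what it produces; the figures are purely illustrative.

```python
def bake_story(results, parameters):
    """The 'recipe' stays fixed; only the 'ingredients' (parameters) change."""
    top = sorted(results, key=lambda r: r["share"], reverse=True)[: parameters["n_parties"]]
    return " ".join(parameters["template"].format(**row) for row in top)


results = [{"party": "Ap", "share": 27.4}, {"party": "H", "share": 25.0}, {"party": "Sp", "share": 10.3}]

# Same algorithm, two different "cakes":
print(bake_story(results, {"n_parties": 1, "template": "{party} leads with {share:.1f} per cent."}))
print(bake_story(results, {"n_parties": 3, "template": "{party} polled {share:.1f} per cent."}))
```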

Computational thinking does not mean that we should think like computers.

Diakopoulos also led a workshop on bots, as they can be excellent at serving niche audiences and the costs of creating a bot are low. Some of the partnering media houses are also working on bots in their newsrooms, so the topic was highly relevant to the participants.

The participants also got to meet Valtteri, an election news bot and the first prototype of the Immersive Automation project. The bot is currently training with data from the 2012 municipal elections in Finland so that it will be ready for action on April 9. Data analysts Myriam Munezero and Leo Leppänen, both members of the IA research team, are the brains behind Valtteri. The rest of the research team has contributed to the creation process by considering news angles, writing templates, and analysing the linguistic capabilities of the bot.

You can follow Nick Diakopoulos on Twitter.

The Valtteri bot will be released soon.

Increasing unemployment among journalists or boosting news production? Why do we need news automation?

In recent years, automated generation of news content has arrived in editorial offices. Some people like to talk about ‘robot journalism’. While automation has conquered plenty of industries, the media appears to have fallen behind. If a robot is capable of performing surgery on human bodies, why could it not assist journalists in the newsroom as well? This is a comparison media and journalism researcher Carl-Gustav Lindén often makes during his lectures on the topic. In this blog post he presents a few key arguments as to why our newsrooms could benefit from automation.

Perhaps we should not call these systems robot journalists at all, as they do not include any mechanical parts. In fact, they consist of a snippet of code, an algorithm that creates news stories from structured – often numeric – data. The data might for instance originate from sensors detecting seismic activity, or from somebody reporting the results of a local football game.

While journalists are busy working with their more complex editorial tasks, a text generator can produce huge amounts of shorter texts for a wider audience.
“In the case of routine news of low value, I think journalists need to consider how we can reduce the amount of human labour by using smart machines that generate and distribute texts. This could enable journalists to concentrate on unique complicated stories that provide the most value to the audience, engaging content that people are willing to pay for,” Carl-Gustav Lindén says.

Automation in the newsrooms has existed for decades. Software has edited, managed, and distributed content.

Introducing news automation in an editorial office does not mean that journalists, or human involvement, will be erased from journalism. Algorithms and NLG systems are not black boxes; they are created by humans, and therefore journalists need to get involved.

Carl-Gustav Lindén sees endless possibilities with journalists working alongside sophisticated text generators and algorithms.

According to Lindén, crucial editorial decisions need to be made about what machines should do, and algorithmic authority and accountability are not minor issues. However, journalists are used to adopting new technology, so this should not be a problem.
“Automation in the newsrooms has existed for decades. Software has helped journalists with editing, managing, and distributing content. Think about Photoshop or complex CMS-systems. If you walk into a television studio you will find automation everywhere. This is only the next step.”

The automation of news production seems to fit the media industry, where commercial pressures and profit expectations have increased heavily over the past few years. News automation can also cover areas that we previously have not been able to reach. Essentially, this means that algorithms and text generators can work alongside journalists and perform tasks that humans are incapable of doing.
“I see so many applications that I do not even know where to start. One very exciting opportunity is to use sensor data monitoring human activity, in say traffic or other movements of people,” Lindén says.

Carl-Gustav Lindén’s newest article Decades of Automation in the Newsroom was recently published in Digital Journalism.

Can computers quote like human journalists, and should they?

Quotations in journalistic texts are regarded as word-for-word recollections of what an interviewee has stated. However, there is very little research on actual quoting practices. This is why journalist and scholar Lauri Haapanen decided to focus on quoting in his PhD. In this blog post he reflects on how NLG systems could benefit from knowledge of how journalists actually quote.

When a reader enjoys a story in a magazine, they have no way of knowing how the interview between the journalist and a source was conducted. Even quotations – widely considered to be verbatim repetitions of what was said in the interview – might be very accurate, but they might just as well be heavily modified, or even partially made up.

A text generator could write a story and a journalist could interview sources and add quotations in suitable places.

“For journalists, and their editors, the most important thing is of course to produce a good piece of writing. This means they might be forced to make compromises, since the quotations must serve a purpose in the story,” Lauri Haapanen explains.

The Immersive Automation project focuses on news automation and algorithmically produced news. Since human-written journalistic texts often contain quotations, automated content should also include them to meet the traditional expectations of readers.

In the development process of news automation, it is realistic to expect human journalists and machines to collaborate.
“A text generator could write a story and a journalist could interview sources and add quotations in suitable places,” says Haapanen.

At a later stage, when the algorithms that create texts become more sophisticated, Haapanen suggests software developers also include criteria regarding the selection, positioning, and text modification of quotes.

This is where Haapanen’s research within journalistic quoting practices could be useful. In his dissertation he categorised nine essential quoting strategies used by journalists when writing articles.

Computers must learn to ‘think’ like human journalists in the process of quoting, says researcher Lauri Haapanen.

Based on empirical data, Haapanen found that when journalists extract selected stretches from the interview discourse, they aim at (1) constructing the persona of the interviewee, (2) disclaiming responsibility for the content, and/or (3) adding plausibility to the article.

As such, machines should be able to mine these kinds of segments from the source data available.

When journalists then position the selected stretches into the emerging article, they aim at (4) constructing the narration and (5) pacing the structure. When journalists modify the linguistic form and meaning of the selected stretches, they aim at (6) standardising the linguistic form, although they occasionally (7) allow some vernacular aspects that serve a particular purpose in the storyline. Furthermore, journalists aim at (8) clarifying the original message and (9) sharpening the function of the quotation.

Within the scope of the Immersive Automation project we look at how these nine quoting practices can be incorporated in automated news generation.
“After all, computers must learn to ‘think’ like human journalists in the process of quoting,” Haapanen says.
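Haapanen’s strategies are editorial judgements rather than an algorithm, and the project has not published an implementation. Purely as a thought experiment, the sketch below shows one naive way criteria such as (1) constructing a persona and (2) disclaiming responsibility for an opinion could be turned into scoring heuristics for selecting and positioning a quote; the transcript and word lists are invented.

```python
# Invented candidate segments from an interview transcript.
candidates = [
    "I've coached this team for eleven years, rain or shine.",
    "We won because the other side played badly, to be honest.",
    "The final score was 3-1.",
]


def score(segment: str) -> int:
    """Very naive proxies for a few of the quoting criteria."""
    tokens = set(segment.lower().replace(",", "").replace(".", "").split())
    persona = len(tokens & {"i", "i've", "my", "we", "our"})          # criterion (1)
    opinion = len(tokens & {"honest", "think", "believe", "badly"})   # criteria (2)/(3)
    return persona + opinion


# Select the highest-scoring segment and position it after a generated lead.
best = max(candidates, key=score)
lead = "The home team took the title with a 3-1 win on Saturday."
print(f'{lead} "{best}" the coach said.')
```

A real system would of course need far richer signals, but the sketch shows where the nine strategies could plug into an automated pipeline.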

Lauri Haapanen defends his thesis at the University of Helsinki on Saturday March 11. He also appeared on YLE’s radio program Julkinen sana on Wednesday March 8. He has written a blog post for The Media Industry Research Foundation of Finland, the advocacy organisation for the Finnish media industry, and appears in an article in Suomen Lehdistö.

NLG is an essential part of the Immersive Automation research project

NLG, or natural language generation, is a subfield of Artificial Intelligence and Computational Linguistics. Since NLG technology enables the automation of routine document creation, it is an essential part of the Immersive Automation project. Mark Granroth-Wilding is a research associate at the Department of Computer Science at the University of Helsinki, as well as one of the experts on the Immersive Automation (IA) team. As he specialises in Artificial Intelligence, and in particular Natural Language Processing, he will define the basics of NLG in this blog post.

“NLG consists of techniques to automatically produce human-intelligible language, most commonly starting from data in a database. It can be thought of as a process of turning a symbolic representation of data into human language,” Mark Granroth-Wilding explains.

“The purpose of the Immersive Automation project is to take natural language generation and news automation further”, says researcher Mark Granroth-Wilding.

The essential idea of the Immersive Automation research project is to create the means to produce news in ways that humans cannot, for example hundreds or thousands of articles all at once. NLG provides the tools to produce language, or text, in such large volumes.

“You could of course just supply data to audiences in a raw format – without NLG – but we want to present information in an easier, more understandable format.”
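As a minimal illustration of this “symbolic representation to human language” step, and of the volume argument, the sketch below turns invented match records into one-sentence reports; the same generator covers every row, whether there are two or two thousand.

```python
# Invented match records standing in for a symbolic representation of data.
records = [
    {"home": "HJK", "away": "Inter Turku", "home_goals": 2, "away_goals": 1},
    {"home": "KuPS", "away": "SJK", "home_goals": 0, "away_goals": 0},
    # ...in practice, hundreds or thousands of rows
]


def realise(match: dict) -> str:
    """Turn one data record into one human-readable sentence."""
    home, away = match["home"], match["away"]
    if match["home_goals"] == match["away_goals"]:
        return f"{home} and {away} drew {match['home_goals']}-{match['away_goals']}."
    winner, loser = (home, away) if match["home_goals"] > match["away_goals"] else (away, home)
    return f"{winner} beat {loser} {match['home_goals']}-{match['away_goals']}."


# One generator handles every record, which is what makes the volume possible.
print("\n".join(realise(m) for m in records))
```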

In recent years, we have seen massive growth in the use of statistical methods and machine learning, including in NLG. However, Granroth-Wilding points out that this shift has not yet reached many of the practical applications of NLG.

“This is what makes NLG a hot topic, and this is also the reason why we are looking into it in the IA project.”

“Our focus is to work out how state-of-the-art statistical NLG methods can be incorporated into real journalistic processes.”

While some forms of news automation have been introduced into newsrooms around the world, the systems have so far been language-dependent and template-based. This means that they rely heavily on human contribution and focus mainly on languages spoken by large groups of people. One of the most widely used systems is Wordsmith, developed by Automated Insights in Durham, North Carolina. Associated Press, among others, uses the system.

Improving the state of the art

“Wordsmith would probably be the most prominent example of NLG in automated text production. However, what we want to do here is something even more sophisticated. There are currently no examples of a system capable of independently producing highly variable news texts.”

Currently, the automatically produced news is also limited to areas with large amounts of numeric data, such as sports news and earnings reports. The numeric data is easy to combine with text templates. However, the purpose of the Immersive Automation project is to take NLG and news automation even further.

“Our focus now is to work out how state-of-the-art statistical NLG methods can be incorporated into real journalistic processes. Working out how these techniques can be made intelligible to newsrooms, as well as reliable in accurately conveying their source data, is the big challenge that we’re undertaking in this project.”