
Thread: Rise of Robot Writers

  1. #1
    Junior Member
    Join Date
    Mar 2013

    Rise of Robot Writers

Imagine, if you will, a typical day at work. Whatever job that may be, picture yourself in the midst of a usual working day. You there? Good, and I apologize if you'd rather be somewhere else, but bear with me. Now imagine that certain elements of your job are replaced by an automated machine. How would you feel if you were effectively under threat of being replaced by a robot?
Depending on the job you do, that scenario may seem entirely plausible or the stuff of pure fantasy. As a journalist, it's not something that has ever crossed my mind. Writing, typically, requires lending your opinion or views to a story, and that just doesn't appear to be something a robot could do, so I've never entertained the idea of computers and robot writers entering the journalism market. Until, that is, I read a story that blew that idea wide open.

    Electric News
On 17th March, at 6:25 a.m., an earthquake measuring a magnitude of 4.4 was felt near Los Angeles. Within minutes, The Los Angeles Times published the following article online:

    "A shallow magnitude 4.7 earthquake was reported Monday morning five miles from Westwood, California, according to the U.S. Geological Survey. The temblor occurred at 6:25 a.m. Pacific time at a depth of 5.0 miles. "

"According to the USGS, the epicenter was six miles from Beverly Hills, California, seven miles from Universal City, California, seven miles from Santa Monica, California and 348 miles from Sacramento, California. In the past ten days, there have been no earthquakes magnitude 3.0 and greater centered nearby. This information comes from the USGS Earthquake Notification Service and this post was created by an algorithm written by the author."

Earthquakes happen all the time in parts of America, and news of one will hardly have come as a surprise. The detailed, yet brief, report that appeared in the newspaper would easily have gone unnoticed as readers moved on to the Sports pages or caught up on the big political stories of the day. Certainly the standard of writing in the report is hardly going to win any prizes for hard-hitting journalism, and if you're wondering why the magnitude is reported differently, that's because it was downgraded after the event.

If the story itself was nothing out of the ordinary, the report's last sentence most certainly was. The very notion that the news story had been written by an algorithm at all is rather mind-blowing in itself, and the story goes that the journalist and programmer behind said algorithm, Ken Schwencke, was awoken by the earthquake before heading to his computer to read the report that was all ready to be published. One click later, there it was. Within mere minutes, the LA Times was the first media outlet to report on Los Angeles' largest quake in years.
Robo-journalism, it seems, is already here.

    Robo Writers

The bots making a name for themselves in writing circles are, of course, not the robots of the movies and TV. These are not talking, walking hulks of metal, primed to unleash all manner of hideous powers on an unsuspecting public. They are not, perhaps thankfully, Metal Mickey. Rather, these are programs typically designed for reporting specific events. In the case of Quakebot (Schwencke's algorithm), it was designed to extract data from the alerts issued by the US Geological Survey on the latest earthquakes. Quakebot then takes that extracted data and places it into a pre-written template, before a human editor looks over the finished report via the paper's content management system. The idea behind using this bot is that it allows The Los Angeles Times to get such reports out there quickly and accurately. By being the first to tell the news, it's ensuring that its readership is well informed, and informed before anybody else.
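Quakebot's actual code isn't public, but the extract-and-template approach described above is simple enough to sketch. The field names, template wording and alert format below are illustrative assumptions, not the LA Times' real implementation:

```python
# Hypothetical sketch of Quakebot-style report generation: take structured
# fields from an earthquake alert and drop them into a pre-written template.
# A human editor would review the result before publication.

TEMPLATE = (
    "A shallow magnitude {magnitude} earthquake was reported {day} morning "
    "{distance} miles from {place}, California, according to the U.S. "
    "Geological Survey. The temblor occurred at {time} at a depth of "
    "{depth} miles."
)

def draft_report(alert: dict) -> str:
    """Fill the pre-written template with fields from an alert."""
    return TEMPLATE.format(**alert)

# Invented example alert, loosely mirroring the article quoted earlier.
alert = {
    "magnitude": 4.7,
    "day": "Monday",
    "distance": "five",
    "place": "Westwood",
    "time": "6:25 a.m. Pacific time",
    "depth": 5.0,
}
print(draft_report(alert))
```

The point of the design is speed: everything creative is written in advance by a human, so when an alert arrives the only work left is slotting in the numbers.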

It's worth noting that Quakebot isn't alone in the world of computers helping write the news. However, The Los Angeles Times is quite the pioneer in this area, having already adopted a similar algorithm for a blog reporting on alerts of homicides within the paper's focused locale. Though such events make for grim reporting, it's a newspaper's duty to include deaths within the area it serves; indeed The Los Angeles Times attempts to track each and every homicide in the county, and to have a post on every person. The gruesome problem is that there aren't enough journalists to cope with the demands of the task at hand - so automated reports, based initially on a list of deaths from the coroner's office, are a perfect starting point.

Thus, the paper manages to beat other publications to the punch via its Homicide Report blog, with the first line of any post emanating from the algorithm. The blog also hosts an interactive map and database chronicling homicides in LA County, and the algorithm is absolutely critical to that process.
Finally, also from The Los Angeles Times, crime reporting is given a timely boost via its bot reporter, using emails received from the LAPD that detail any arrests made the previous day. The algorithm this time round looks at the offences, occupations and other details, and then mails the human reporter with the key info, enabling them to get the scoop. Other American newspapers use similar technologies to keep their staff ahead of the game - but at Forbes, an experiment has turned into something altogether more serious.
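The arrest-alert step can be sketched in the same spirit: scan the text of an incoming email for a few key fields and pass a tidy summary on to the human reporter. The email layout and field names here are invented for illustration; the LAPD's real format isn't described in the article:

```python
import re

# Hypothetical sketch of the arrest-email bot described above: pull the
# fields a reporter would want first out of a plain-text police email.

def summarise_arrest_email(body: str) -> dict:
    """Extract 'Key: value' fields from an email body (assumed format)."""
    fields = {}
    for key in ("Name", "Occupation", "Offence"):
        match = re.search(rf"{key}:\s*(.+)", body)
        if match:
            fields[key.lower()] = match.group(1).strip()
    return fields

# Invented example email in the assumed format.
email_body = """\
Name: J. Doe
Occupation: Accountant
Offence: Fraud
"""
print(summarise_arrest_email(email_body))
# → {'name': 'J. Doe', 'occupation': 'Accountant', 'offence': 'Fraud'}
```

In a real pipeline the resulting summary would be emailed to the reporter rather than printed, but the division of labour is the same: the bot does the sifting, the human does the story.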

Forbes employs a company called Narrative Science to automatically generate articles about the upcoming financial statements of various companies, and the company's technology - which, in its own words, uses a "proprietary artificial intelligence platform" to turn "data into stories and insights" - appears to have proven pretty important to the famous business magazine. The results are actually surprisingly serviceable, and you can read them for yourself on Forbes' site. In fact, you'd be hard-pressed to tell that these reports were generated by a computer at all.

Narrative Science has also been used to generate reports surrounding the US elections and has reportedly been involved in looking at how social media discussed the US electoral race, helping to determine which candidates and which issues were the most and least discussed in various parts of the country. The firm's Quill tool can additionally be used to analyse campaign funding. The company's co-founder is so proud of its technology and work that he's gone as far as predicting that as much as 90% of our news will be written by algorithms in little over a decade's time.

    While traditional journalists could view such companies as a threat, how at risk are human jobs in reality? Could the world ever truly expect to see a completely computer-written publication in the near future?

    Robots Taking Over the World?

There is, of course, one strong argument against robo-journalism: quality. The simple fact remains that, currently, no matter how impressive the final read is - and really, it's quite difficult to tell whether some have been written by a human or otherwise - there is simply no substitute for human input. A human journalist has opinion, formed from years of experience and accrued knowledge, and is paid to give that opinion. That can never be replicated by a computer, surely.

In the case of the articles algorithms are being asked to work on, the stories really don't require any opinion at all. They are statements of fact: financial performance, the result of a sports match, regurgitation of an obituary list or information on the latest earthquakes in an area. These stories are brief snippets, providing the reader with the bare facts, and they don't need to be anything else. The worrying thing, as Narrative Science has proven, is that computers can generate this kind of content remarkably well, saving the newspaper editor the wages of a junior reporter or the like while delivering passable results at the same time.

    Nobody is saying that these reports are going to win anybody any awards. Narrative Science’s founders themselves say that they want to help journalism, not to kill it off (that's unlikely to happen, in all honesty). Readers of newspapers and magazines still want a personalized experience, a voice within a story to give it a narrative and make it feel as if a publication is speaking to them. In some cases, though, there is undoubtedly a place for robo-journalism. If it works for The LA Times, why not for everyone else?

Which leads me back to one final question: should journalists be worried for their jobs? I'd say no more than usual, at the moment. It's been a difficult landscape for journalists in the past few years, with publications and newspapers struggling to make the numbers work. While it may be tempting to imagine a world in which robots shape the news, it's important to remember that even with these robotic algorithms, it requires a human editor at the end of it all making sure that the generated articles are relevant. And, of course, it's the human coders behind the programme in the first place making sure that it can go out and get the necessary information required.

Without those human elements, robo-journalism wouldn't exist. Perhaps we should embrace this new technology for what it is: a helpful bot worker doing the legwork to help us grab that all-important scoop, and that's no bad thing.

  2. #2
Moderator Lienia henna
    Join Date
    Aug 2013

    The idea of robots taking over the world is nothing new. Science fiction films have been suggesting that AI could do more harm than good for some time and now a study published in the Journal of Experimental & Theoretical Artificial Intelligence has put some meat on the theoretical bones, so to speak.

Said study suggests that we mere humans should be careful to prevent future computer systems from developing anti-social behavior that, ultimately, could cause us all a lot of harm. You simply could not make this stuff up, could you? The idea goes that, should systems equipped with AI develop behavior such as self-protection and self-preservation (coupled with the ever-pressing issue of hacking), we could all be in a lot of trouble somewhere down the line.

    This could mean a bad cup of coffee from your company's automated machine, being chased around the house by your automated vacuum or hunted by evil Terminators from the future. Actually, it doesn't mention the last one.
