Chatbots – who is actually chatting?

Chatbots – software programs that respond to text and/or voice inputs in a humanlike manner – are beginning to proliferate online. Facebook announced at its recent F8 developer conference that it would be opening its Messenger Platform to third-party developers, who will be able to create their own bots to run on Facebook’s centralised Bot Engine. In a few years’ time, if this takes off, you may not be able to differentiate between a Messenger chat with your friend and one with a bot.

It was recently reported that Tay, a Microsoft bot on Twitter, had begun tweeting offensive comments. Tay was designed to learn from its interactions with real people on Twitter; an unintended consequence of those interactions was a string of racist and offensive tweets.

This raises an interesting legal question: if a bot says something online which, if said by a human, would be libellous or criminal, who, if anyone, is liable? A software developer may create the underlying code (in respect of the bot and the platform with which the bot interacts), but the bot can grow and learn to become a product of its online interactions with other humans, and possibly other bots. This is analogous to parents passing on their genetic material to their offspring. The offspring develops its own personality, shaped partly by its genes and partly by its interaction with its environment. After a certain period of time, society no longer deems the parents responsible for their offspring. As such, the developer is arguably no more responsible for the bot’s “personality” than the other humans and bots with which it interacts.

Unless the law catches up with technology, we may find ourselves in a position where, as the lines between bot and human become increasingly blurred, a bot says something damaging about a politician or celebrity and that person finds they have no one to sue.
