A regional publisher has joined forces with IBM to produce a new tool aimed at stopping safe editorial content from being “blacklisted” online.
Reach plc has teamed up with the IT multinational to launch Mantis, a “safety platform” which uses artificial intelligence and machine learning to check whether content is appropriate to appear with advertising.
At present, Reach says a “significant proportion” of news content is blacklisted to advertisers because some of the words used in stories appear on existing safety platforms’ lists of “unsafe” terms.
For example, football reports which mention someone “shooting a winning goal” can be blacklisted as the term “shooting” is considered inappropriate for advertisers by such tools because it could appear in a story about gun violence.
Reach says this has reduced revenue for publishers, and the company’s digital solutions director Terry Hornsby set about finding a solution.
Terry said: “From a brand perspective, the margin for error has simply been too great – no one wants to see their product next to an upsetting or graphic story.
“And from the publishing side, simply too much of our content was being blacklisted, even the most perfectly innocent stories.
“My starting point was making sure that a football report on someone ‘shooting a winning goal’ was recognised for what it is – a great piece of content that most brands would love to be next to, not a violent story.”
Mantis uses image recognition to flag the context of a piece instantly, alongside much more sophisticated language recognition that captures nuance and meaning.
Using these tools, a “nude lipstick” is no longer marked as salacious and a story about a new life-saving “drug” is no longer flagged as criminal.
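Mantis’s actual models are not public, but the difference between the keyword blacklists described above and a context-aware check can be sketched in a toy example. Everything here (the term lists, the “safe context” vocabulary) is invented for illustration only:

```python
# Toy illustration only -- Mantis's real models are not public.
# Contrasts a naive keyword blacklist with a crude context-aware check.

UNSAFE_TERMS = {"shooting", "drug", "nude"}  # hypothetical blacklist

# Hypothetical vocabulary that disambiguates a flagged term.
SAFE_CONTEXTS = {
    "shooting": {"goal", "winner", "penalty", "match"},
    "drug": {"treatment", "trial", "patients", "approved"},
    "nude": {"lipstick", "shade", "makeup"},
}

def naive_blacklist(text: str) -> bool:
    """Flag the article if any blacklisted word appears at all."""
    words = set(text.lower().split())
    return bool(words & UNSAFE_TERMS)

def context_aware(text: str) -> bool:
    """Flag a term only when no disambiguating context word appears."""
    words = set(text.lower().split())
    for term in words & UNSAFE_TERMS:
        if not (words & SAFE_CONTEXTS.get(term, set())):
            return True  # flagged term with no safe context nearby
    return False

report = "late shooting secured the winning goal in the match"
print(naive_blacklist(report))  # True: blocked by the keyword list
print(context_aware(report))    # False: football context recognised
```

A real system would of course use trained language and image models rather than word lists, but the principle is the same: the surrounding context, not the lone keyword, decides whether a story is safe for advertising.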
Mantis can also be used as a tool for editorial teams to see instantly if their content may be unfairly deemed unsafe because of certain words.
The product is now live at Reach and is being taken out to market for other publishers and content owners, both in the UK and abroad.