Last week saw millions of shoppers turning to the web to purchase Valentine’s Day gifts for their significant others, with total spending exceeding $17 billion. It’s no surprise that Valentine’s Day gifts range from innocent teddy bears to salacious lace teddies, but what surprised many shoppers was an offering from Sears. The retailer, known for its integrity and tact, accidentally allowed one of its suppliers to post a scandalous image of a lingerie set. The incident has fueled a debate over content control and raised the question—how could this have been prevented?
Content-control software, or censorware, is nothing new. For years, families, schools, and even workplaces have used it to block certain sites based on keywords; there are now even PHP classes that scan images for nudity. The technology may become more relevant to online retailers as they launch marketplaces built on third-party suppliers. Retailers can spell out to suppliers what content is allowed, but as the Sears debacle proves, accidents happen, and automated filters seem like the obvious safeguard.
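The keyword-based approach described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual implementation; the blocklist, scores, and threshold are invented for the example:

```python
# Minimal sketch of a keyword-based content filter, the kind of logic
# censorware uses to decide whether a page or product listing is allowed.
# The blocklist, weights, and threshold below are invented for illustration.

BLOCKLIST = {"lingerie": 2, "salacious": 3, "explicit": 5}
THRESHOLD = 4  # total keyword score at which content is blocked

def is_blocked(text: str) -> bool:
    """Return True if the text's total keyword score meets the threshold."""
    words = text.lower().split()
    score = sum(BLOCKLIST.get(word, 0) for word in words)
    return score >= THRESHOLD

print(is_blocked("classic teddy bear gift"))         # harmless listing passes
print(is_blocked("explicit salacious lingerie set")) # flagged listing is blocked
```

Real filters layer on stemming, phrase matching, and image analysis, but the core decision is the same: score the content, compare against a threshold.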
But are they effective? According to a recent survey of filters by the Department of Justice, not really. Content-control software, whether text- or image-based, generally fails in one of two ways: underblocking or overblocking. While it seems prudent to err on the side of caution, overblocking still poses a problem for merchants, who must then either check and edit the content themselves or contact the supplier. If there was nothing inappropriate about the content to begin with, that just wastes time and staff.
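Overblocking is easy to reproduce. A hypothetical example of the failure mode, sometimes called the "Scunthorpe problem": a naive substring match flags innocent text because a blocked term happens to appear inside a longer word. The terms here are chosen purely for illustration:

```python
import re

# Two versions of a text filter, to show how overblocking arises.
# The blocked terms are chosen purely for illustration.
BLOCKED_TERMS = ["sex", "ass"]

def naive_filter(text: str) -> bool:
    """Block if any term appears anywhere, even inside another word."""
    lower = text.lower()
    return any(term in lower for term in BLOCKED_TERMS)

def word_boundary_filter(text: str) -> bool:
    """Block only when a term appears as a whole word."""
    lower = text.lower()
    return any(re.search(rf"\b{re.escape(term)}\b", lower)
               for term in BLOCKED_TERMS)

# "Glass passementerie from Essex" contains "ass" and "sex" as substrings,
# so the naive filter overblocks it; matching on word boundaries does not.
print(naive_filter("Glass passementerie from Essex"))          # True (overblocked)
print(word_boundary_filter("Glass passementerie from Essex"))  # False
```

Tightening the match to whole words reduces overblocking but invites underblocking (obfuscated spellings slip through), which is exactly the trade-off the Department of Justice survey describes.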
As it stands today, most sites that include supplier-provided content employ people to check it manually. But as retailers, brands, and publishers make the transition to marketplaces, demand for reliable content-control filters will only grow.