Child pornography seems so sinister an enterprise that one might hope only a few scattered incidents exist. The Internet Watch Foundation (IWF) has proven that hope wrong: it reports that more than 68,000 URLs hosting child sexual abuse imagery were taken offline in 2015.
The simple rules of supply and demand become terrifying in this context, since they imply that a substantial number of people are seeking out these corners of the internet. The charity found that the majority of the websites hosted either explicit photographs or videos of children being abused.
Of all the abusive content discovered and removed, 69 per cent involved children who appeared to be ten or younger, and an estimated three per cent involved children younger than two. In 85 per cent of all images and videos the victims were girls. Fred Langford, the Internet Watch Foundation’s deputy CEO, told WIRED: “We’re really seeing the abuse of image hosting services increasing year-on-year…They’re posting images on image hosting sites – there are thousands of them out there – and they link to it.”
In total, the 68,092 URLs hosting child sexual abuse content were traced back to 48 countries, the IWF said in its annual report. The top-level domains .com, .net, .ru, .org and .se together accounted for 91 per cent of all webpages identified as containing the images and videos.
In the IWF’s 20-year existence, 660,000 reports have been made. In 2015, of the 112,875 reports of potentially illegal content it received, 49,334 came from the public and 63,641 were generated by the organisation actively searching the internet for abuse images. Technology has made the IWF far more adept at running its own searches, making it more effective than in previous years, when it could rely only on reports received from the public.
That technology continues to improve: the IWF is currently testing a targeted crawler that semi-automates its searches. Under that system, it can create a unique code, or ‘hash’, of each image, which can then be used to find duplicates of that image.
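The duplicate-finding idea described above can be illustrated with a minimal sketch: compute a hash of each image's bytes and group files that share a hash. This is a simplified illustration only (the function names and sample data are hypothetical, not the IWF's actual system), and a plain cryptographic hash like SHA-256 only catches byte-for-byte copies; production systems typically use perceptual hashing so that resized or re-encoded copies still match.

```python
import hashlib

def image_hash(data: bytes) -> str:
    """Return a hex digest identifying these exact image bytes."""
    return hashlib.sha256(data).hexdigest()

def find_duplicates(images: dict) -> dict:
    """Group image names by hash; groups with more than one name are duplicates."""
    groups = {}
    for name, data in images.items():
        groups.setdefault(image_hash(data), []).append(name)
    return {h: names for h, names in groups.items() if len(names) > 1}

# Hypothetical sample data standing in for image files on disk.
images = {
    "a.jpg": b"sample-image-bytes-1",
    "b.jpg": b"sample-image-bytes-1",  # byte-for-byte copy of a.jpg
    "c.jpg": b"sample-image-bytes-2",
}
dupes = find_duplicates(images)
```

Once a known abuse image has been hashed, the same lookup lets an automated crawler flag any exact copy it encounters without a human re-inspecting the image itself.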
Google is developing a similar system to automatically detect videos of child sexual abuse. Despite these improvements, Langford doesn’t believe an image and video scanning system will ever be fully automated. As he explained to WIRED, humans still need to be involved in the process to verify that flagged images are illegal rather than false positives.