Apple Inc. on Thursday said it will implement a system that checks photos on iPhones in the United States before they are uploaded to its iCloud storage services, to ensure they do not match known images of child sexual abuse.
To guard against false positives, Apple said flagged images will be reviewed by a human before an account is reported to law enforcement. The company said the system is designed to reduce the chance of incorrectly flagging an account to about one in one trillion.
Apple's new system seeks to address law enforcement demands to help stem child sexual abuse while respecting the privacy and security practices that are core to the company's brand. But some privacy advocates say the system could open the door to monitoring of political speech or other content on iPhones.
Most other major technology providers – including Alphabet Inc.'s Google and Microsoft Corp. – already check images against a database of known child sexual abuse imagery.
"With so many people using Apple products, these new safety measures have lifesaving potential for children who are being exploited online," said John Clark, executive director of the National Center for Missing & Exploited Children, in a statement. "The reality is that privacy and child protection can coexist."
Here is how Apple's system works. Law enforcement maintains a database of known child sexual abuse images and translates those images into "hashes" – numeric codes that positively identify an image but cannot be used to reconstruct it.
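As a rough illustration of that idea, the sketch below (a toy "average hash" in Python, not anything Apple ships and not NeuralHash) derives a short numeric code from an image; the code can identify the picture but cannot be used to rebuild it. The file name and the 8x8 grid size are arbitrary choices for the example.

```python
# Illustration only: a toy perceptual "average hash", not Apple's NeuralHash.
from PIL import Image


def average_hash(path: str, size: int = 8) -> int:
    """Downscale to a size x size grayscale grid, then set one bit per pixel
    depending on whether it is brighter than the mean brightness."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits  # a 64-bit integer for the default 8x8 grid


# Hypothetical usage (the file name is a placeholder):
# print(hex(average_hash("photo.jpg")))
```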
Apple is implementing that database using a technology called NeuralHash, which is designed to also catch edited images that are similar to the originals. The database is stored on iPhones.
When a user uploads an image to Apple's iCloud storage service, the iPhone creates a hash of the image to be uploaded and compares it against the database.
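The sketch below illustrates that client-side flow under simplifying assumptions: `known_hashes` stands in for the on-device database, the toy `average_hash` from the earlier sketch replaces NeuralHash, and a small Hamming distance loosely mimics NeuralHash's tolerance for edited, near-identical images. Apple's actual matching protocol is cryptographic and considerably more involved.

```python
# Sketch of the client-side check described above; reuses the toy
# average_hash() from the previous example. All names and thresholds
# here are hypothetical, not details of Apple's system.


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")


def check_before_upload(path: str, known_hashes: set[int],
                        max_distance: int = 5) -> bool:
    """Return True if the photo's hash is close to any entry in the
    on-device database; in the described design, only matching uploads
    are flagged for later human review."""
    h = average_hash(path)
    return any(hamming(h, k) <= max_distance for k in known_hashes)


# Hypothetical usage with placeholder values:
# database = {0x81C3E7FF00FF7E3C}
# flagged = check_before_upload("upload.jpg", database)
```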
Photos stored only on the phone are not checked, Apple said, and human review is intended to verify that any matches are legitimate before an account is suspended and reported to law enforcement.
Apple said users who feel their account has been unfairly suspended can appeal. The Financial Times previously reported some aspects of the program.
One feature that sets Apple's system apart is that it checks photos stored on phones before they are uploaded, rather than scanning them after they reach the company's servers.
On Twitter, some privacy and security experts expressed concern that the system could eventually be expanded to scan phones more generally for prohibited content or political speech.
Apple has "sent a very clear signal. In their (influential) opinion, it is safe to build systems that scan users' phones for prohibited content," warned Matthew Green, a security researcher at Johns Hopkins University.
"This will break the dam – governments will demand it from everyone."
Other privacy researchers, such as India McKinney and Erica Portnoy of the Electronic Frontier Foundation, wrote in a blog post that it may be impossible for outside researchers to verify whether Apple keeps its promise to check only a small set of on-device content.
"The move is a shocking about-face for users who have relied on the company's leadership in privacy and security," the pair wrote.
"At the end of the day, even a thoroughly documented, carefully thought-out and narrowly scoped backdoor is still a backdoor," McKinney and Portnoy wrote.