It is not surprising that Google would be developing AI machines to censor what we can see, read, and hear.
Google has a history of censorship. Google voluntarily acted as an agent of the Chinese dictatorship to censor the search results it published on the internet. Google only stopped doing that when it discovered that China was hacking Google's computers. But today, Google is making an effort to get back into China to again censor what the public can see and read there.
Google used to have the motto "Don't be evil" in its code of conduct. But Google has removed it, as doing evil is now its business plan.
Photos themselves have never harmed anyone. Seeing photos has never harmed anyone. Yet there is always a loud call for censorship and, when the photos are of children, just looking at them has been made a crime.
Censorship is evil. It is a tool of despots. It finds support in the US primarily among the far right and among the self-styled "progressive" far left.
It is more than dangerous enough that some individuals have the ability to censor what people can see. Now our tech industry fascists are creating artificial intelligence to censor what we can see, read, and hear. We humans will be controlled by evil machines that, on their own initiative and by their own reasoning, decide what we can see, read, and hear.
@Errol
Every image you see in your life is recorded in your subconscious and memory, to be retrieved later. The level of harm depends on the extent of your exposure, and that exposure is something you do to yourself, since you can choose to look or not.
The NY Times has done excellent reporting on this subject. I understand that most child pornography is sold/traded on the "dark web." I cannot imagine the content - and the content posted featuring animal abuse as well. No human being should be subjected to those images for their job but how else can victims be located? It is a terrible situation thanks to child predators.
The authors don't mention an approach to training the classification models that might overcome the constraints of the law: separating the images from the training.
Developing a machine learning model has several phases. One phase may be to extract features from the images for training. Other phases may be training the model to classify the images and assessing the model against the images.
If the features can be extracted so that the images cannot be reconstituted from them, then it may be possible to perform the process in separate locations.
The phases of the process that need to access the images can then be done in controlled environments - inside government agencies - and the classifier training done elsewhere.
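The split described above could look something like this minimal sketch (all names are hypothetical, and the coarse intensity histogram merely stands in for a vetted one-way transform; a real system would need a feature extractor verified to be non-invertible):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_features(image: np.ndarray) -> np.ndarray:
    """Runs inside the controlled environment (e.g., a government agency).

    Stand-in for an irreversible transform: a 16-bin intensity histogram,
    from which the original image cannot be reconstituted.
    """
    hist, _ = np.histogram(image, bins=16, range=(0, 255), density=True)
    return hist

# --- Controlled site: the images never leave this step ---
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(200, 64, 64))   # placeholder images
labels = rng.integers(0, 2, size=200)               # placeholder labels
features = np.stack([extract_features(im) for im in images])

# Only (features, labels) are exported; the images stay behind.

# --- External site: classifier trained on exported features only ---
clf = LogisticRegression(max_iter=1000)
clf.fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```

The key design point is the boundary: everything above the export line runs where the images are legally held, and only derived feature vectors cross to the training environment.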
It shouldn't be too difficult to develop an approved research protocol. When we were developing drug detection equipment we had pounds of cocaine and of heroin on-site, and it was kept in the controlled property locker and periodically inventoried. When we were developing explosive detection equipment (a related technology) we had all kinds of explosives on-site. All this was done with the knowledge and approval of DEA and BATF (who supplied the materials).
Given that the images have to be shown to the AI for it to learn, it would seem that a research exception by the DOJ (I am assuming it is DOJ) should not be too difficult to accomplish.
The problem with any automated system is being able to determine context. Is a picture of a child in underwear pornography, or an ad for Underoos?
People can easily make such a distinction; designing and implementing AI systems to do the same is a daunting challenge, if it is possible at all. The legal restrictions mentioned in the article are yet another hurdle that needs to be cleared.
We need to apply common sense to the problem. Someone has to look at it! Are we going to put people in jail without letting a jury see the evidence?
"At the same time, efforts by companies to scan users’ files, including photos and videos, for child sexual abuse material are seen by many customers as intrusive and overreaching."
That is quite perplexing. One would have to be living under a rock today not to know that anything posted can be viewed by anyone. So if one is worried about "intrusive," then stop posting photos on the internet and go back to carrying the photo album of summer vacation in your purse or your wallet. That will help by reducing the volume of image data that has to be scrolled through. Let the people who have the most horrendous job of filtering these images do their increasingly difficult job of protecting children.
I’m reminded of a former coworker. He received child porn in his email inbox and wanted to report it. We were working adjunct to local law enforcement and my coworker, in particular, had many law enforcement connections.
He contacted someone in the FBI who he knew personally. There were several encrypted emails back and forth before my coworker sent him a copy of the unsolicited email he had received.
Throughout the process my coworker was determined to do the right thing, but anxious and unusually cautious. It seems law enforcement's first reaction to a person reporting child porn is to assume that the person reporting it is a consumer of it.
We need to stop people from posting child sexual abuse photos and videos: last year, more than 45 million such photos and videos were reported by tech companies to the federal clearinghouse for child abuse. Google and Facebook should have a rule that anyone who posts child abuse material is permanently banned from their services and referred to law enforcement for arrest.