SAN FRANCISCO, Sept. 8 (Reuters) – In September last year, Google’s cloud unit looked into using artificial intelligence to help a financial firm decide whom to lend money to.
It turned down the client’s idea after weeks of internal discussions, deeming the project too ethically dicey because the AI technology could perpetuate biases like those around race and gender.
Since early last year, Google has also blocked new AI features analysing emotions, fearing cultural insensitivity, while Microsoft restricted software mimicking voices and IBM (IBM.N) rejected a client request for an advanced facial-recognition system.
All these technologies were curbed by panels of executives or other leaders, according to interviews with AI ethics chiefs at the three U.S. technology giants.
Reported here for the first time, their vetoes and the deliberations that led to them reflect a nascent industry-wide drive to balance the pursuit of lucrative AI systems with a greater consideration of social responsibility.
“There are opportunities and harms, and our job is to maximize opportunities and minimize harms,” said Tracy Pizzo Frey, who sits on two ethics committees at Google Cloud as its managing director for Responsible AI.
Judgments can be difficult.
Microsoft, for instance, had to weigh the benefit of its voice-mimicry technology, which could restore impaired people’s speech, against risks such as its misuse, said Natasha Crampton, the company’s chief responsible AI officer.
Rights advocates say such decisions should not be made internally alone. They argue that ethics committees cannot be truly independent and that their public transparency is limited by competitive pressures.
Jascha Galaski, advocacy officer at the Civil Liberties Union for Europe, views external oversight as the way forward, and U.S. and European authorities are indeed drawing up rules for the fledgling field.
“If the companies’ AI ethics committees really become transparent and independent – and this is all very utopian – then this could be even better than any other solution, but I don’t think it’s realistic,” Galaski said.
The companies said they would welcome clear rules on the use of AI, and that this was essential for both customer and public confidence, akin to car safety rules. They said it was also in their financial interests.
They are keen, though, for any rules to be flexible enough to keep up with innovation and the new problems it creates.
Among the complex considerations to come, IBM told Reuters its AI Ethics Board has begun discussing how to police an emerging frontier: implants and wearables that wire computers to brains.
Such neurodevices could help impaired people control their movements, but they raise concerns such as the prospect of hackers manipulating thoughts, said IBM Chief Privacy Officer Christina Montgomery.
AI CAN SEE YOUR SORROW
Just five years ago, tech companies were launching AI services such as chatbots and photo-tagging with few ethical safeguards, tackling misuse or biased results only in subsequent updates.
But as political and public scrutiny of AI failings grew, Microsoft in 2017 and Google and IBM in 2018 established ethics committees to review new services from the start.
Google said it was plunged into the lending dilemma last September, when a financial services company figured AI could better assess people’s creditworthiness.
The project appeared well-suited for Google Cloud, whose expertise in developing AI tools that help in areas such as detecting abnormal transactions has attracted clients like Deutsche Bank (DBKGn.DE), HSBC (HSBA.L) and BNY Mellon (BK.N).
Google’s unit anticipated that AI-based credit scoring could become a market worth billions of dollars a year, and it wanted a foothold.
However, its ethics committee of about 20 managers, social scientists and engineers unanimously voted against the project at an October meeting, Pizzo Frey said.
The committee concluded that the AI system would need to learn from past data and patterns, and thus risked repeating discriminatory practices from around the world against people of color and other marginalized groups.
The committee, known internally as “Lemonaid,” enacted a policy to skip all financial services deals related to creditworthiness until such concerns could be addressed.
Lemonaid had rejected three similar proposals over the prior year, including from a credit card company and a business lender, and Pizzo Frey and her counterpart in sales had been eager for a broader ruling on the issue.
Google also said its second Cloud ethics committee, known as “Iced Tea,” this year placed under review a service released in 2015 for categorizing photos of people by four expressions: joy, sorrow, anger and surprise.
The move followed a ruling last year by Google’s company-wide ethics panel, the Advanced Technology Review Council (ATRC), to hold back new services related to reading emotion.
The ATRC – over a dozen top executives and engineers – determined that inferring emotions could be insensitive because, among other reasons, facial cues are associated differently with feelings across cultures, said Jen Gennai, founder and lead of Google’s Responsible Innovation team.
Iced Tea has since blocked 13 planned emotions for the Cloud tool, including embarrassment and contentment, and could soon drop the service altogether in favor of a new system that would describe movements such as frowning and smiling without seeking to interpret them, Gennai and Pizzo Frey said.
VOICES AND FACES
Microsoft, meanwhile, developed software that can reproduce someone’s voice from a short sample, but the company’s Sensitive Uses panel spent more than two years debating the ethics around its use and consulted company President Brad Smith, Crampton told Reuters.
She said the panel – specialists in fields such as human rights, data science and engineering – eventually gave the green light for the full release of Custom Neural Voice in February this year. But it placed restrictions on its use, including that subjects’ consent is verified and that a team of “Responsible AI Champs” trained on corporate policy approve purchases.
IBM’s AI board, comprising about 20 department leaders, examined a client request early in the COVID-19 pandemic to customize facial-recognition technology to spot fevers and face coverings.
Montgomery said the board declined the invitation, concluding that manual checks would suffice with less intrusion on privacy, because photos would not be retained for any AI database.
Six months later, IBM announced it was discontinuing its face-recognition service.
In an attempt to protect privacy and other freedoms, lawmakers in the European Union and the United States are pursuing far-reaching controls on AI systems.
The EU’s Artificial Intelligence Act, on track to be passed next year, would bar real-time face recognition in public spaces and require tech companies to vet high-risk applications, such as those used in hiring, credit scoring and law enforcement.
U.S. Congressman Bill Foster, who has held hearings on how algorithms carry forward discrimination in financial services and housing, said new laws to govern AI would ensure an even field for vendors.
“When you ask a company to take a hit in profits to accomplish societal goals, they say, ‘What about our shareholders and our competitors?’ That’s why you need sophisticated regulation,” said the Illinois Democrat.
There may be some areas so sensitive that tech firms will deliberately stay out until there are clear rules of the road.
Indeed, some AI advances may simply be on hold until companies can counter ethical risks without dedicating enormous engineering resources.
After Google Cloud turned down the request for custom financial AI last October, the Lemonaid committee told the sales team that the unit aims to start developing credit-related applications someday.
First, research into combating unfair biases must catch up with Google Cloud’s ambitions to increase financial inclusion through the “highly sensitive” technology, the committee said in the policy circulated to staff.
“Until that time, we are not in a position to deploy solutions.”
Reporting by Paresh Dave and Jeffrey Dastin; Editing by Kenneth Li and Pravin Char
Our Standards: The Thomson Reuters Trust Principles.