Has Google failed the ethical challenge of dealing with artificial intelligence?

Artificial intelligence applications are expanding constantly, in every sector and at every level, opening new horizons for humanity and new ways of working and living. The innovative solutions this technology provides point toward a digitally connected future in which machines and people work together to achieve impressive results that were not possible before.

But reaching this prosperous future first requires an ethical strategy for dealing with artificial intelligence, one that maximizes the benefits and minimizes the harms that can accompany the application of this advanced technology.

In fact, developing an ethical strategy for dealing with artificial intelligence is one thing, and putting that strategy into practice on the ground is quite another. Major technology companies have hired leading scientists for exactly this purpose, yet when it came to real-world application the situation was different, and these companies did not stick to the strategies they had set for themselves. Perhaps the clearest example of this is Google.

Google has worked for many years to present itself as an ethically responsible organization (Reuters)

Google between theory and practice

Google has worked for many years to present itself as an institution that handles artificial intelligence responsibly and ethically, taking into account the interests of its customers around the world. It has employed leading scientists and academics in its research centers and laboratories to study ethical approaches to artificial intelligence, and it has participated in the largest international conferences specialized in this field.

Despite all that, the company’s reputation has been irreversibly damaged in recent times, and it is now struggling to convince people and governments of its good-faith, “ethical” handling of the vast amount of data it holds, according to the specialist technology site The Verge.

The company’s decision to fire Timnit Gebru and Margaret Mitchell, two of its top researchers in the field of AI ethics who had studied the drawbacks of the company’s popular search engine, sparked huge waves of protest inside and outside the giant company.

Scientists and academics registered their strong dissatisfaction with this arbitrary decision in many ways: two of them withdrew from a workshop organized by Google, a third scientist refused a $60,000 grant from the company, and a fourth vowed to accept no funding from it in the future.

Two of the company’s top engineers also resigned in protest at the treatment of Gebru and Mitchell, and they were recently followed by a senior Google AI figure, Samy Bengio, a research director who oversaw hundreds of the company’s employees in this field, according to an earlier report by The Verge.

Google’s dismissal of Gebru and Mitchell prompted thousands of company employees to protest. The two scientists had earlier called for more diversity and inclusion among Google’s research staff, and had expressed concern that the company was beginning to censor papers critical of its products.

“What has happened makes me deeply concerned about Google’s commitment to ethics and diversity within the company. What’s even more worrying is that they have shown a willingness to suppress science that does not align with their business interests,” says Scott Niekum, an assistant professor at the University of Texas who works on robotics and machine learning.

“It definitely undermines their credibility in the area of fairness and AI ethics,” said Deb Raji, a fellow at the Mozilla Foundation who works on AI ethics.

Many questions have been raised in the past about the ethics of Google’s handling of the enormous amount of data it collects from billions of people around the world: how it gathers that data, and how it uses it to make billions of dollars in profit every year at users’ expense. This is in addition to the many monopoly and abuse-of-power cases brought against the company in countries across the world.

All of this feeds into the question of ethics in dealing with the artificial intelligence that Google and the other technology giants use to gain ever more power and influence.

On paper, at least, Google has a comprehensive system for managing the ethics of dealing with artificial intelligence, and it was one of the first major companies in the world to adopt such a system, establishing a division dedicated to this goal in 2018, as reported by the American newspaper The Washington Post.

Google has a comprehensive system for managing the ethics of dealing with artificial intelligence (Shutterstock)

Google has set out a list of objectives that it says guide its work with artificial intelligence. We reproduce them here as stated on the company’s website:

Being socially beneficial

The expanding reach of new technologies increasingly touches society as a whole, and advances in artificial intelligence will have impacts across a wide range of fields, including healthcare, security, energy, transportation, manufacturing and entertainment.

In weighing the potential uses of AI technology, the company says it will take a broad range of social and economic factors into account, and will proceed where it believes the expected benefits substantially outweigh the potential risks and downsides. It adds that it will strive to provide high-quality, accurate information using artificial intelligence, while continuing to respect cultural, social and legal norms in the countries in which it operates.

Avoiding unfair bias

AI algorithms and datasets can reflect, reinforce or reduce unfair biases. The company says it recognizes that distinguishing fair from unfair bias is not always simple and differs across cultures and societies, and that it will seek to avoid unjust impacts on people, particularly those tied to sensitive characteristics such as race, gender, nationality, income, sexual orientation, ability, and political or religious belief.
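
To see what checking for unfair bias can look like in practice, consider a minimal sketch. The Python below is purely illustrative, not a tool Google is known to use: it compares a model’s rate of favourable decisions across two groups defined by a sensitive attribute, with invented data and an invented 20% tolerance.

```python
# Illustrative fairness check: compare positive-decision rates by group.
# Data, group labels and the 0.2 tolerance are invented for this sketch.
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Share of positive (1) predictions for each group label."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

predictions = [1, 0, 1, 1, 0, 0, 1, 0]             # 1 = favourable decision
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]  # sensitive attribute

rates = positive_rate_by_group(predictions, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)                                       # {'a': 0.75, 'b': 0.25}
if gap > 0.2:  # illustrative tolerance, not a legal or industry standard
    print(f"Demographic-parity gap of {gap:.2f}: audit the model and data.")
```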

Safety and security

Google says it will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm. It pledges to design its AI systems to be appropriately cautious, to develop them in line with best practices in AI safety research, and to test AI technologies in constrained environments and monitor their operation after deployment.
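
That last clause, testing in constrained environments and monitoring systems in operation, is easy to sketch. The Python below is a hypothetical wrapper, not Google infrastructure: it logs every model call and withholds any output outside an expected range, with a stand-in model and [0, 1] bounds assumed for the example.

```python
# Hypothetical "sandboxed" model call: log everything, fail closed on
# implausible output. Model and score bounds are assumptions for the sketch.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-sandbox")

def sandboxed_predict(model, features, low=0.0, high=1.0):
    """Run the model, log the call, and refuse to release suspect output."""
    score = model(features)
    log.info("input=%s score=%.3f", features, score)
    if not low <= score <= high:
        log.warning("score outside expected range; withholding result")
        return None  # fail closed rather than pass on a dubious answer
    return score

toy_model = lambda xs: sum(xs) / len(xs)  # stand-in for a real model
print(sandboxed_predict(toy_model, [0.2, 0.4, 0.9]))  # 0.5
```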

Responsibility towards people

Google says it will design AI systems that give people appropriate opportunities for feedback, relevant explanations and appeal, and that its AI technologies will remain subject to appropriate human direction and control.

Privacy Guarantee

The company says it will incorporate its privacy principles into the development and use of its AI technologies: it will give users the opportunity to consent to data collection, respect their privacy, and provide appropriate transparency and control over the use of the data collected.
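
In engineering terms, letting users agree to data collection usually means an opt-in gate in front of any logging, with withdrawal honoured retroactively. The sketch below illustrates that pattern with hypothetical names and in-memory storage; it is not any real Google API.

```python
# Illustrative opt-in collector: store events only for consenting users,
# and delete a user's stored events if consent is withdrawn.
class ConsentAwareCollector:
    def __init__(self):
        self._consented = set()
        self._events = []

    def grant(self, user_id):
        self._consented.add(user_id)

    def revoke(self, user_id):
        self._consented.discard(user_id)
        self._events = [e for e in self._events if e[0] != user_id]

    def record(self, user_id, event):
        if user_id in self._consented:  # collect only with consent
            self._events.append((user_id, event))

collector = ConsentAwareCollector()
collector.grant("u1")
collector.record("u1", "search")
collector.record("u2", "search")  # silently dropped: no consent
collector.revoke("u1")            # also erases u1's stored events
print(collector._events)          # []
```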

Responsible AI requires building systems that comply with basic guidelines distinguishing permitted from prohibited uses (Getty Images)

Adherence to the highest standards of scientific excellence

Technological innovation is rooted in a commitment to the scientific method, intellectual rigor, integrity and collaboration, and AI tools have the potential to open up new areas of scientific research in vitally important fields such as biology, chemistry, medicine and environmental science. Google says it aspires to high standards of scientific excellence in its work on AI development, and that it will share the knowledge it acquires responsibly by publishing educational materials, best practices and research that enable more people to develop useful AI applications.

Availability for beneficial uses

Many technologies have multiple uses. Google says it will work to limit applications of its AI that could be harmful or abusive, and will evaluate the likely uses of its various AI technologies to ensure they benefit users.

Steps to build an ethical strategy for dealing with artificial intelligence

All of the above is welcome, and no one could say otherwise. But when it came to implementation, the result was often different, so the question becomes: how can a company ensure that its AI stays aligned with its business model and the core values it professes to follow?

Responsible AI means building systems that comply with basic guidelines distinguishing permitted from prohibited uses. To count as responsible, AI systems must be transparent, human-centered, interpretable and socially beneficial.
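
Of those requirements, “interpretable” has the most concrete engineering reading: a system should be able to say why it produced a given output. One minimal illustration, with feature names and weights invented for the example, is a linear scorer that reports each feature’s contribution alongside the total.

```python
# Invented weights for an illustrative, self-explaining linear scorer.
WEIGHTS = {"account_age": 0.3, "activity_level": 0.5, "abuse_reports": -0.8}

def score_with_explanation(features):
    """Return the total score plus each feature's contribution to it."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

total, why = score_with_explanation(
    {"account_age": 2.0, "activity_level": 1.0, "abuse_reports": 1.0})
print(f"score = {total:+.2f}")  # score = +0.30
for name, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {contribution:+.2f}")
```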

Researcher and author Prangya Pandab identifies five basic steps for building and implementing an ethical strategy for dealing with artificial intelligence, in an article published on the EnterpriseTalk platform:

Start at the top

Many corporate executives still do not know how to build and implement responsible AI within their companies and organizations. Leaders need to be educated in the principles of trustworthy AI so that they can take a clear position on its ethics and ensure compliance with applicable laws and regulations.

Risk assessment

It is essential to understand the risks that can accompany applications of this new technology. Because artificial intelligence is still emerging, the laws, regulations and standards for dealing with it have not been definitively settled in many countries of the world, owing to the difficulty of pinning down the risks and threats it may pose. Continuous assessment of the risks created by applying this technology is therefore essential and critical.

Define a baseline

Trustworthy AI processes need to be integrated into the company’s management system, and company policies must be updated to ensure that the use of artificial intelligence at work has no negative consequences for human rights inside or outside the organization, and to resolve any issues that may arise in this context in the future. That requires adopting a reliable compliance policy that combines technical and non-technical safeguards to ensure the best results.

Raise awareness at company level

Companies need to educate their employees about the legal, social and ethical implications of working with AI, explain the risks it involves, and show how to reduce them. In this context, holding training workshops on the ethics of dealing with artificial intelligence will do more good than the rigid compliance rulebooks companies hand out to their staff.

Third parties

An artificial intelligence system is rarely built by a single company or organization; other parties are almost always involved in the process. It is therefore vital that these external parties and organizations adhere to the company’s ethical strategy for dealing with artificial intelligence, and that there be reciprocal links between the various institutions working on the system to guarantee the technology’s reliability. That includes auditing providers on how they handle potential adverse human rights impacts.
