Face recognition technology … the "Wild West" of the digital world

Jeddah – Nermin Al-Sayed – This came to light yesterday when it was revealed that major Australian retail chains Bunnings, The Good Guys and Kmart are using facial recognition technology to identify customers.

A CHOICE investigation found that the companies "capture sensitive data from their customers" and that 76% of shoppers did not know the technology was in use in Australian stores.

Simon McDowell, Bunnings’ chief operating officer, said signs notifying customers that the software was in use were placed at store entrances, and that the technology was needed to identify “people of interest” and keep stores safe.

Peter Lewis, of the Australia Institute’s Centre for Responsible Technology, says that some retailers may have good intentions in using the technology, but the trend is nevertheless worrying: “What we do know is that this technology is the Wild West of the digital world.”

He continued, “We know that there are companies that take these images, compile them and sell them to governments and other companies. They have even been used in the war in Ukraine.”

Peter Lewis says Australians need to care about how their data is used, according to ABC News.

He called on the federal government to adopt the Human Rights Commission’s 2021 recommendation for a moratorium on the use of facial recognition technology in “high-risk situations” until stronger safeguards are in place.

Moratoriums are already in force in some jurisdictions around the world, said Edward Santow, a professor at the University of Technology Sydney and former Human Rights Commissioner.

These include San Francisco, where there is a ban on police and city agencies using facial recognition technology.

In Australia, there is no specific law regarding the use of facial recognition, but there is some protection under privacy laws.

Going forward, Santow says Australia must follow the European Union’s lead in taking a “more cautious approach” to how the technology is used.

The EU’s proposed bill would ban “harmful” uses of the technology and strengthen privacy protections while allowing low-risk uses.

“There are significant loopholes in Australian law, especially in terms of how people can be protected from mass fraud and error,” Santow said.

Experts have renewed calls for federal guidelines on the use of face recognition technology, arguing that some legitimate uses of the technology exist and should be encouraged, but that red lines should be drawn around the most harmful uses.

Former Human Rights Commissioner Edward Santow says face recognition technology is less accurate at recognizing dark-skinned faces, and distinguishes three broad types of face recognition technology with varying degrees of accuracy.

The first is face verification, which Santow calls “the least complex”; it is the kind used in smartphones. The second is face identification, a “more sophisticated” form that can be used to pick people out of crowds and is “error-prone”.

Facial analysis is the third and most contentious form, often dismissed as “junk science”, according to Santow.

It attempts to judge factors such as a person’s mood, age, gender and behavior from their face and expressions.

Santow said this form is “really dangerous” because it can give users such as law enforcement a false sense of confidence.

Studies have also shown that facial recognition software is less accurate for people of color and for women.

Edward Santow says there are loopholes in Australia’s laws relating to face recognition.

He cited an overseas example, revealed in 2019, in which face recognition technology used by London’s Metropolitan Police was wrong in 96% of the cases where it flagged someone as a suspect.

The Office of the Australian Information Commissioner (OAIC) found in 2021 that New York-based Clearview AI had breached privacy by scraping Australians’ sensitive information from the internet and disclosing it through a face recognition tool.

Clearview AI collected the data without consent and did not take reasonable steps to notify the people whose data was scraped.

The OAIC also found that the Australian Federal Police had failed to comply with its privacy obligations when it trialled the Clearview AI tool between November 2019 and January 2020.

Regarding how widespread the use of face recognition technology is in Australia, Santow says: “The short answer is we do not know.”

He said the CHOICE investigation reveals how widespread the practice is in the country, and that consent is one of the most pressing issues.

“One of the things I (as a human rights commissioner) have advocated is that there should be more transparency about how facial recognition is used,” he said.

He continued, “There’s a big difference between a kind of fine print … and a meaningful exchange with someone … so they have the opportunity to opt out.”
