Deepfakes: the secrets of one of the most dangerous technologies


16/07 02:53

During the four years between the 2016 and 2020 U.S. presidential elections, many cyber experts and analysts were concerned that Russia or other nation-states would escalate their efforts to manipulate the perceptions of U.S. voters, moving from social media memes to deepfakes.

This emerging technology uses computer-manipulated images and video to discredit candidates, thereby undermining their electoral prospects.

Meanwhile, the RAND Corporation has released a new report, “Artificial Intelligence, Deep Fakes, and Disinformation: A Primer”, which concludes that the potential for chaos due to deepfakes has not yet been realized.

Some commentators, for example, expressed confidence that the 2020 election would be targeted, and possibly upended, by a deepfake video. Although no such deepfake materialized, the report notes, that does not eliminate the risk to future elections.

The report identifies several reasons why deepfake technology has so far fallen short of its fearsome reputation, chief among them that well-crafted deepfakes require sophisticated computing resources, time, money, and skill.

Dr. Matthew Stamm, Assistant Professor of Electrical and Computer Engineering at Drexel University, has worked in media forensics for about 15 years and has contributed to DARPA programs on detection algorithms. He agrees that producing a high-quality deepfake is neither easy nor cheap, but he is less sure how much that matters to its effectiveness.


“Over time, they will get better,” says Stamm. “But we also have to contend with another factor: we tend to believe things that confirm our prior beliefs.”

Stamm cites, as an example, a video of Nancy Pelosi that went viral on social media during the summer of 2020 and made her speech appear slurred. It was not a deepfake, he says: the clip’s audio had simply been slowed down to make the Speaker sound impaired. Yet no matter how many times fact-checkers debunked it or social media platforms flagged it, many people accepted it as fact.

“You can often get the same result with ‘cheap imitations,’” Stamm said. “You have to ask yourself: what is your goal? You can spend a lot of time and money creating a very good deepfake that will hold up for a long time, but is that your goal?”
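To see how little a “cheap imitation” can cost, the sketch below slows a speech clip’s tempo with off-the-shelf open-source tools. This is only an assumed reconstruction of the general technique described above, not the actual edit used in the Pelosi clip; the file names and the 0.75 rate are illustrative, and the librosa and soundfile libraries are assumed to be installed.

```python
# A minimal sketch of a "cheapfake" tempo edit, assuming the librosa and
# soundfile libraries; file names and the 0.75 rate are illustrative.
import librosa
import soundfile as sf

# Load the original clip at its native sample rate (hypothetical path).
y, sr = librosa.load("speech.wav", sr=None)

# rate < 1.0 stretches the clip in time, slowing the speech.
y_slow = librosa.effects.time_stretch(y, rate=0.75)

sf.write("speech_slowed.wav", y_slow, sr)
```

A few lines like these, versus the compute, money, and skill a convincing deepfake demands, is exactly the asymmetry Stamm describes.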

While videos have received by far the most attention, the RAND report identifies other types of deepfakes that are easier to produce and have already demonstrated their capacity for harm. Audio cloning is one such method.

In one example, the CEO of a UK-based energy company reported receiving a phone call from someone who sounded exactly like his boss at the parent company, allegedly the product of a voice-cloning program. Following the voice’s instructions over the phone, he transferred $243,000 to the bank account of a Hungarian supplier.

Another tactic is the creation of deepfake images, which can serve as profile photos for fake personas across social media. In 2019, one such deepfake image was linked to a small but influential network of accounts, including that of a Trump administration official who was in office at the time.

The value of creating such deepfake images, the report notes, is that they cannot be caught by a reverse image search, which looks for matches against original, verified photos.
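To illustrate why, here is a minimal sketch of the matching step behind a reverse image search, using perceptual hashing with the Pillow and imagehash libraries; the file paths and distance threshold are illustrative assumptions, not details from the report.

```python
# A minimal sketch of perceptual-hash matching, the core of a reverse
# image search. Paths and the distance threshold are hypothetical.
from PIL import Image
import imagehash

def looks_like_known_photo(candidate_path, known_paths, max_distance=8):
    """Return the first known image whose perceptual hash is within
    max_distance (Hamming) of the candidate's, or None if none match."""
    candidate_hash = imagehash.phash(Image.open(candidate_path))
    for path in known_paths:
        if candidate_hash - imagehash.phash(Image.open(path)) <= max_distance:
            return path
    return None

# A GAN-generated face is synthesized from scratch rather than copied,
# so no indexed photograph hashes near it and the search returns None.
```

Because the fake face was never copied from a real photo, there is simply nothing in any index for the search to land on.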

The fourth form of deepfake identified in the report is generative text: the use of natural-language models and artificial intelligence to create false but human-sounding text. Bad actors can use it to mass-produce fake news stories that flood social networks.
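As a rough illustration of what generative text involves, here is a minimal sketch assuming the Hugging Face transformers library and the small public “gpt2” checkpoint; the prompt and generation settings are illustrative only.

```python
# A minimal sketch of generative text, assuming the Hugging Face
# transformers library and the public "gpt2" checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# A single prompt can be expanded into many fluent variants at once,
# which is what makes mass-produced synthetic text so cheap.
prompt = "Local officials confirmed today that"
for out in generator(prompt, max_new_tokens=50,
                     num_return_sequences=3, do_sample=True):
    print(out["generated_text"])
```

Scaled across thousands of prompts, the same loop can churn out article-shaped text far faster than humans can fact-check it.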

The report lists four main ways in which adversaries or bad actors can weaponize deepfakes: manipulating elections; exacerbating social divisions; weakening trust in government institutions and authorities; and undermining journalism and other reliable sources of information.

Stamm notes that these are large-scale societal threats, but others are economic. “There have already been [cases of] deepfake audio fraud,” he said. “There are many, many criminal opportunities, and any of these deepfake methods could be used.”

The report identifies several approaches to mitigating the threat that all kinds of deepfakes pose to information integrity: detection, provenance, regulatory initiatives, open-source intelligence techniques, journalistic approaches, and civic media literacy.

Detection often gets the most attention, and most detection efforts center on automated tools. DARPA has invested heavily in this area, first with its Media Forensics program, which ended in 2021, and now with its Semantic Forensics program.
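To make “automated tools” concrete, here is a minimal sketch of frame-level video screening in Python with OpenCV and PyTorch. The detector checkpoint, input size, and scoring scheme are hypothetical assumptions for illustration; they are not drawn from DARPA’s programs or the RAND report.

```python
# A minimal sketch of automated frame-level deepfake screening.
# "detector.pt" is a hypothetical pretrained binary classifier that
# outputs a single logit per 224x224 RGB frame.
import cv2
import torch

model = torch.jit.load("detector.pt").eval()  # hypothetical checkpoint

def frame_scores(video_path, every_n=30):
    """Score every n-th frame; higher scores mean 'likely manipulated'."""
    scores, idx = [], 0
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            rgb = cv2.cvtColor(cv2.resize(frame, (224, 224)),
                               cv2.COLOR_BGR2RGB)
            x = torch.from_numpy(rgb).permute(2, 0, 1).float()
            x = x.unsqueeze(0) / 255.0
            with torch.no_grad():
                scores.append(torch.sigmoid(model(x)).item())
        idx += 1
    cap.release()
    return scores

# A video is typically flagged when the average score crosses a threshold.
```

In practice, forensic systems combine many such cues rather than trusting a single score.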

While detection capabilities have improved significantly over the past few years, deepfake generation has advanced as well, and the result, the report says, is an arms race that currently favors those who create deepfake content. Published detection research reveals the telltale artifacts associated with deepfake videos, and those lessons are quickly assimilated into the tools that generate the next round of deepfake content.

Provenance is already being used in certain ways. If you notice a small circled “i” on a photo, it means the photographer used a secure mode on their smartphone to take it, a mode that embeds important provenance information in the photo’s digital metadata.
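As a rough illustration of what reading that metadata means in practice, here is a minimal sketch that dumps a photo’s EXIF fields with the Pillow library. The file path is hypothetical, and a real provenance check verifies cryptographically signed claims (as in the C2PA standard) rather than trusting plain EXIF tags.

```python
# A minimal sketch of inspecting embedded photo metadata with Pillow.
# Plain EXIF tags are easy to forge; a true provenance system verifies
# cryptographically signed claims instead. The file path is hypothetical.
from PIL import Image
from PIL.ExifTags import TAGS

def dump_exif(path):
    """Print every EXIF tag embedded in the image at `path`."""
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")

dump_exif("photo.jpg")
```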
