Deepfake problem studied in EU; Africa not immune

Horizon Europe, the European Union's research and innovation funding program, has awarded University of Amsterdam (UvA) Professor Federica Russo a €2.6 million (US$2.81 million) grant to lead a study of the political risks of misinformation and deepfakes.

The Solaris study is scheduled to start in February.

“We will analyze political risks associated with these technologies to prevent negative implications for EU democracies. We want to establish regulatory innovations to detect and mitigate deepfake risks,” according to a UvA marketing brief.

Solaris will also assess value-based generative adversarial networks (GANs) as tools for improving citizen engagement by raising awareness of key global topics such as climate change, the gender dimension and migration.
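
The brief does not explain how GANs work. For readers new to the term, below is a minimal, self-contained sketch of the adversarial setup: a generator and a discriminator trained against each other. The toy dimensions and the stand-in "real" data are illustrative assumptions, not anything drawn from the Solaris project.

```python
# Minimal sketch of a generative adversarial network (GAN) in PyTorch.
# All dimensions and the toy "real data" source are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2

# Generator maps random noise to fake samples; discriminator scores realness.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, data_dim) + 3.0  # stand-in for real data
    fake = G(torch.randn(64, latent_dim))

    # Discriminator: label real samples 1, generated samples 0.
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator score its fakes as real.
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The two losses pull in opposite directions; as training proceeds, the generator's samples become progressively harder to distinguish from real data, which is exactly what makes GAN-produced media hard to spot.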

Deepfakes under scrutiny in Africa

Deepfakes are worrying academics in Africa, too, according to a Forbes article published this week.

Professor Johan Steyn, a research fellow at the School of Data Science and Computational Thinking at Stellenbosch University in South Africa, says deepfakes pose legal and policy issues.

“How do you present evidence to a court of law when you cannot confirm if a video or voice is authentic? There’s almost no way of proving deepfakes are authentic,” Steyn tells Forbes.

In fact, he says, AI will increase the need for philosophers and ethicists.

“If you’re a critical thinker, fake news should be relatively easy to pick up. Deepfakes are more serious. What happens when a bank, for example, accepts voice as a proof of identity?”

Meanwhile, deepfake-connected crime already prowls Africa (and elsewhere), says Vladislav Tushkanov, a lead data scientist with Kaspersky Lab, a cybersecurity firm based in Russia.

Talking to Forbes, Tushkanov says tools exist to spot at least some deepfakes, and observant people can catch rudimentary forgeries themselves: watch for jerky movement, shifts in lighting from one frame to the next, unnatural blinking and poorly synced lips.
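
Some of those cues can be roughed out in code. The sketch below, purely illustrative and not a Kaspersky tool, flags abrupt frame-to-frame brightness jumps of the kind Tushkanov associates with shifting lighting; the input file name and threshold are assumptions.

```python
# Rough heuristic sketch: flag abrupt frame-to-frame brightness shifts,
# one of the lighting inconsistencies described above. The threshold and
# input path are illustrative assumptions, not a vetted detector.
import cv2
import numpy as np

cap = cv2.VideoCapture("suspect_clip.mp4")  # hypothetical input file
prev_mean = None
frame_idx = 0
JUMP_THRESHOLD = 15.0  # mean gray-level jump considered suspicious (assumed)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mean = float(np.mean(gray))
    if prev_mean is not None and abs(mean - prev_mean) > JUMP_THRESHOLD:
        print(f"Frame {frame_idx}: brightness jumped {abs(mean - prev_mean):.1f}")
    prev_mean = mean
    frame_idx += 1

cap.release()
```

Real detectors combine many such signals with learned models; a single heuristic like this will produce false alarms on legitimate cuts and scene changes.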

Experts explain ways they spot video, audio deepfakes

A podcast by The Economist this week picked up the detection thread, too.

On it, University of Florida professor Patrick Traynor talked about a novel method to expose audio generated by artificial intelligence.
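
The episode gives few implementation details, so the snippet below is only a generic stand-in for how such detectors are typically fed: it extracts spectral features (MFCCs) that a trained classifier could score as real or synthetic. It is not Traynor's method, and the file name is hypothetical.

```python
# Illustrative front end for audio-deepfake analysis: extract MFCC features
# that a trained classifier could score. This is a generic sketch, not the
# method discussed on the podcast; the file name is assumed.
import librosa
import numpy as np

signal, sr = librosa.load("suspect_voice.wav", sr=16000)  # hypothetical clip
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=20)   # shape (20, n_frames)

# Summarize per-coefficient statistics as a fixed-length feature vector.
features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
print(features.shape)  # (40,) -- input to a downstream real/fake classifier
```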

Also during the show, Intel senior research scientist Ilke Demir explained how to spot visual fakery by analyzing facial color changes. Wendy Betts of eyeWitness to Atrocities, a part of the International Bar Association, discussed how the organization fends off AI adulteration of its digital evidence.
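
The color-change idea can be sketched in a few lines: skin in genuine video shows faint, periodic color fluctuations driven by blood flow, so one crude check is whether a face region's green channel carries energy at plausible heart rates. The fixed face patch, file name and frequency band below are simplifying assumptions; Intel's production system is far more elaborate.

```python
# Sketch of the idea behind pulse-based checks like the one Demir describes:
# skin in genuine video shows faint periodic color changes from blood flow.
# The fixed face patch, input path and frequency band are assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture("face_clip.mp4")  # hypothetical input
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
greens = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[100:200, 100:200, 1]  # green channel of an assumed face patch
    greens.append(float(np.mean(roi)))
cap.release()

signal = np.array(greens) - np.mean(greens)
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)

# A live face tends to show energy near the heart rate (~0.7-3 Hz).
band = (freqs > 0.7) & (freqs < 3.0)
print("Pulse-band energy fraction:", spectrum[band].sum() / (spectrum.sum() + 1e-9))
```

A coherent peak in that band suggests a live pulse; its absence is one signal, among many, that the face on screen may be synthetic.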