well-temperedforum.groupee.net
We're not ready for deepfakes

This topic can be found at:
https://well-temperedforum.groupee.net/eve/forums/a/tpc/f/9130004433/m/2913964497

01 June 2020, 12:11 PM
wtg
We're not ready for deepfakes
quote:
Last month during ESPN’s hit documentary series The Last Dance, State Farm debuted a TV commercial that has become one of the most widely discussed ads in recent memory. It appeared to show footage from 1998 of an ESPN analyst making shockingly accurate predictions about the year 2020.

As it turned out, the clip was not genuine: it was generated using cutting-edge AI. The commercial surprised, amused and delighted viewers.

What viewers should have felt, though, was deep concern.

The State Farm ad was a benign example of an important and dangerous new phenomenon in AI: deepfakes. Deepfake technology enables anyone with a computer and an Internet connection to create realistic-looking photos and videos of people saying and doing things that they did not actually say or do.


https://www.forbes.com/sites/r...epared/#4ab4bb3f7494


--------------------------------
When the world wearies and society ceases to satisfy, there is always the garden - Minnie Aumônier

01 June 2020, 04:48 PM
Amanda
This has been worrying me for a long time - ever since I became aware that this technology existed and was being deployed.

I only question the thread title.
If, as things stand, there is apparently no way to distinguish deepfakes from real videos and other representations (even the false-blink test has been bypassed), when WILL we be "ready" for them?

I'm having trouble imagining a future in which we will be protected from the effects of deepfakes. It looks like the people with the expertise to design litmus tests to debunk them are out there (first) designing them - and (worse still) finding ways to bypass the identification methods.

(Perhaps as mercenaries, because they are often the same people! Just as those manipulating election outcomes are working on both sides. Likewise, armaments designers' products are sold to both sides in battle.)

How can we successfully persuade such tech-savvy AI people that it's - WRONG - to manipulate public opinion? (That Right and Wrong exist and MATTER!) At present it seems that being able to get away with something (something that pays well) justifies tapping that market.

(As affected by the moral Influencer in Chief! Mad)


--------------------------------
The most dangerous word in the language is "obvious"