Adobe Research and UC Berkeley create AI that can find and undo portrait manipulations

Photo: dpreview.com

Researchers at Adobe Research and UC Berkeley are jointly developing a method for identifying photo edits made with Photoshop's Face Aware Liquify tool. The work is sponsored by DARPA's MediFor program, which funds researchers working to 'level the digital imagery playing field' by developing technology that assesses the 'integrity' of an image.

Both DARPA and Adobe highlight the issue of readily available image manipulation technologies, including tools offered by Adobe's own software. The company says that despite being 'proud of the impact' these tools have had, it also recognizes 'the ethical implications of our technology.'

Adobe said in a blog post on Friday:

Trust in what we see is increasingly important in a world where image editing has become ubiquitous – fake content is a serious and increasingly pressing issue. Adobe is firmly committed to finding the most useful and responsible ways to bring new technologies to life – continually exploring using new technologies, such as artificial intelligence (AI), to increase trust and authority in digital media.

To that end, Adobe Research and UC Berkeley researchers have published a new study detailing a method for detecting image-warping edits applied to photos of human faces. The technique uses a Convolutional Neural Network (CNN) trained on manipulated images created by scripting Photoshop and its Face Aware Liquify tool.
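The announcement doesn't include a turnkey training script, but the setup described, a CNN trained to tell scripted Face Aware Liquify edits from untouched originals, can be sketched roughly as follows. The ResNet-18 backbone, the 'data/train/real' and 'data/train/warped' folder layout, and the hyperparameters are illustrative assumptions, not the researchers' actual configuration.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed layout: data/train/real/ holds originals, data/train/warped/ holds
# images altered by scripting Photoshop's Face Aware Liquify tool.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Generic ResNet-18 with a two-way head (real vs. warped); the study's
# actual architecture may differ.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()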

To ensure the method can detect the types of manipulations performed by humans, the image dataset used to train the AI also included some images that were altered by a human artist. 'This element of human creativity broadened the range of alterations and techniques used for the test set beyond those synthetically generated images,' the study explains.

To test the deep learning method's assessment skills, the researchers used image pairs consisting of the original, unedited image and an altered version. Humans shown these pairs could identify the altered image with only 53% accuracy, barely better than chance, whereas the neural network picked out the manipulated image with accuracy as high as 99%.
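That pairwise test amounts to showing both versions of a photo and asking which one looks manipulated. A rough sketch of how such a two-alternative evaluation could be scored is below; it assumes a classifier like the one sketched above, whose second output is the 'warped' class, and a hypothetical list of (original, edited) file-path pairs.

import torch
import torch.nn.functional as F
from PIL import Image

def warped_score(model, transform, path):
    # Probability the model assigns to the "warped" class (assumed index 1).
    img = transform(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return F.softmax(model(img), dim=1)[0, 1].item()

def pairwise_accuracy(model, transform, pairs):
    # pairs: hypothetical list of (original_path, edited_path) tuples.
    correct = 0
    for original_path, edited_path in pairs:
        # The pair counts as correct when the edited image scores higher.
        if warped_score(model, transform, edited_path) > warped_score(model, transform, original_path):
            correct += 1
    return correct / len(pairs)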

In addition, and unlike the average Photoshop user, the technology can pinpoint the specific areas of a face that have been warped, identify which warping methods were used, and calculate how to revert the image as closely as possible to its original state.
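Reverting the edit rests on predicting, for every pixel, where it sat before the warp and then resampling the image along that field. The sketch below shows only the resampling step, using OpenCV's remap function; the flow field is assumed to come from a trained model, and estimating it is the substance of the research, which is not reproduced here.

import cv2
import numpy as np

def unwarp(image, flow):
    # image: H x W x 3 uint8 array (the manipulated photo).
    # flow:  H x W x 2 float32 array; flow[y, x] is an assumed predicted
    #        (dx, dy) offset pointing back toward the pixel's pre-warp location.
    h, w = image.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    # Pull each output pixel from its estimated original position.
    return cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR)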

Adobe researcher Richard Zhang explained, 'The idea of a magic universal 'undo' button to revert image edits is still far from reality. But we live in a world where it’s becoming harder to trust the digital information we consume, and I look forward to further exploring this area of research.'

The research is described as still in its 'early stages,' and is only one part of Adobe's body of work on image integrity and authenticity. The results come amid the growing sophistication of artificial intelligence technologies capable of generating highly realistic portraits and performing complex edits to images.
