When Adobe Photoshop was originally released in 1990, it was a major step toward democratizing creativity and expression. Since then, Photoshop has had a profound impact on creativity, and even more broadly, on our visual culture.
While we are proud of the impact that Photoshop and Adobe’s other creative tools have made on the world, we also recognize the ethical implications of our technology. Trust in what we see matters in a world where image editing has become ubiquitous, and fake content is a serious and increasingly pressing issue. Adobe is firmly committed to finding the most useful and responsible ways to bring new technologies to life – continually exploring how new technologies, such as artificial intelligence (AI), can increase trust in digital media.
With this current landscape in mind, Adobe researchers Richard Zhang and Oliver Wang, along with their UC Berkeley collaborators, Sheng-Yu Wang, Dr. Andrew Owens, and Professor Alexei A. Efros, developed a method for detecting edits to images that were made using Photoshop’s Face Aware Liquify feature. The work was sponsored by the DARPA MediFor program. While still in its early stages, this collaboration between Adobe Research and UC Berkeley is a step toward democratizing image forensics, the science of uncovering and analyzing changes to digital images.
This new research is part of a broader effort across Adobe to better detect image, video, audio and document manipulations. Past Adobe research focused on detecting image manipulations such as splicing, cloning, and removal, whereas this effort focuses on the Face Aware Liquify feature in Photoshop because it’s popular for adjusting facial features, including making adjustments to facial expressions. The feature’s effects can be delicate, which made it an intriguing test case for detecting both drastic and subtle alterations to faces.
The new research is framed around three basic questions:
- Can you create a tool that can identify manipulated faces more reliably than humans?
- Can that tool decode the specific changes made to the image?
- Can you then undo those changes to see the original?
By training a Convolutional Neural Network (CNN), a form of deep learning, the researchers built a tool that can recognize altered images of faces. They created an extensive training set of images by scripting Photoshop to apply Face Aware Liquify to thousands of pictures scraped from the Internet. A subset of those photos was randomly chosen for training. In addition, an artist was hired to alter images that were mixed into the data set. This element of human creativity broadened the range of alterations and techniques in the test set beyond the synthetically generated images.
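The scripted data-generation step can be sketched in outline. The snippet below is a hedged, simplified stand-in: it applies a generic random smooth warp to an image array and emits labeled (original, altered) pairs, since the internals of Face Aware Liquify and of Adobe's actual pipeline are not public.

```python
import numpy as np

def random_smooth_warp(image, strength=3.0, grid=4, seed=None):
    """Warp an image with a random displacement field.

    A generic stand-in for a Liquify-style edit; Photoshop's actual
    Face Aware Liquify parameters are not reproduced here.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    # Random (dy, dx) displacements on a coarse grid, upsampled
    # (nearest-neighbor) to full image size
    coarse = rng.uniform(-strength, strength, size=(2, grid, grid))
    tile = np.ones((h // grid + 1, w // grid + 1))
    dy = np.kron(coarse[0], tile)[:h, :w]
    dx = np.kron(coarse[1], tile)[:h, :w]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip((ys + dy).round().astype(int), 0, h - 1)
    src_x = np.clip((xs + dx).round().astype(int), 0, w - 1)
    return image[src_y, src_x], np.stack([dy, dx])

def make_labeled_pair(image, seed=None):
    """Return [(original, label 0), (warped, label 1)] plus the flow field."""
    warped, flow = random_smooth_warp(image, seed=seed)
    return [(image, 0), (warped, 1)], flow
```

A classifier trained on many such pairs learns to separate label 0 from label 1; in the actual research the warps came from scripting Photoshop itself rather than from a synthetic field like this one.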
“We started by showing image pairs (an original and an alteration) to people who knew that one of the faces was altered,” Oliver says. “For this approach to be useful, it should be able to perform significantly better than the human eye at identifying edited faces.”
Those human eyes identified the altered face 53% of the time, only a little better than chance. But in a series of experiments, the neural network tool achieved accuracy as high as 99%.
The tool also identified specific areas and methods of facial warping. In the experiment, the tool reverted altered images to its estimate of their original state, with results that impressed even the researchers.
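Conceptually, "undoing" an edit means estimating the displacement field the warp applied and resampling the image back. A minimal sketch of that final resampling step is below; it assumes the forward flow is already known (in the research it is predicted by the network), and the inversion is only approximate, valid for small, smooth warps.

```python
import numpy as np

def unwarp(warped, flow):
    """Approximately invert a warp given its forward displacement field.

    `flow[0]`, `flow[1]` hold per-pixel (dy, dx) displacements: the warp
    produced warped[y, x] = original[y + dy, x + dx]. Shifting back by
    (-dy, -dx) approximately recovers the original for small, smooth flows.
    """
    h, w = warped.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Reverse the resampling: look up where each pixel came from
    src_y = np.clip((ys - flow[0]).round().astype(int), 0, h - 1)
    src_x = np.clip((xs - flow[1]).round().astype(int), 0, w - 1)
    return warped[src_y, src_x]
```

The hard part, of course, is predicting `flow` from the warped image alone, which is what the trained network does.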
“It might sound impossible because there are so many variations of facial geometry possible,” says Professor Alexei A. Efros, UC Berkeley. “But, in this case, because deep learning can look at a combination of low-level image data, such as warping artifacts, as well as higher level cues such as layout, it seems to work.”
“The idea of a magic universal ‘undo’ button to revert image edits is still far from reality,” Richard adds. “But we live in a world where it’s becoming harder to trust the digital information we consume, and I look forward to further exploring this area of research.”
What’s ahead? At Adobe, our mission is to serve the creator and respect the consumer. We strive to unleash the imagination of our customers by giving them tools that let them bring their ideas to life. At the same time, we’re working across numerous Adobe Research projects to help verify the authenticity of digital media created with our products and to identify and discourage misuse.
“This is an important step in being able to detect certain types of image editing, and the undo capability works surprisingly well,” says head of Adobe Research, Gavin Miller. “Beyond technologies like this, the best defense will be a sophisticated public who know that content can be manipulated — often to delight them, but sometimes to mislead them.”
The issue of content authenticity is an industry topic that we will continue to explore with our customers, partners and community. We welcome the ongoing discussion on safeguards that could be implemented while allowing creativity and storytelling to flourish across the ever-evolving digital canvas.