Enlarge / The researchers write, "In this image, the person on the left (Scarlett Johansson) is real, while the person on the right is AI-generated. Their eyeballs are pictured beneath their faces. The reflections in the eyes are consistent for the real person, but incorrect (from a physics point of view) for the fake one."
In 2024, it’s almost trivial to create realistic images of people using AI, raising concerns about how these fake images will be detected. Researchers at the University of Hull recently unveiled a new way to detect AI-generated deepfake images by analyzing the reflections of the human eye. The technique, presented at the Royal Astronomical Society’s National Astronomy Meeting last week, applies a tool that astronomers use to study galaxies to scrutinize the consistency of the eye’s light reflections.
Adejumoke Owolabi, a Master's student at the University of Hull, led the research under the supervision of Kevin Pimbblet, Professor of Astrophysics.
Their detection technique is based on a simple principle: a pair of eyes illuminated by the same set of light sources will typically show a similarly shaped set of reflections in each eye. Many AI-generated images do not accurately model ocular reflections, so the simulated reflections are often inconsistent between the two eyes.
Enlarge / A series of real eyes showing largely consistent reflections in both eyes.
In some ways, astronomy isn't strictly necessary for this kind of deepfake detection: a quick glance at a pair of eyes in a photo can reveal reflection inconsistencies, something portrait artists know well. But applying astronomy tools to automatically measure and quantify eye reflections in deepfakes is a novel development.
Automated Detection
In a blog post for the Royal Astronomical Society, Pimbblet explained that Owolabi developed a technique to automatically detect eye reflections, then used morphological features of those reflections as a metric to compare the left and right eyes. Their findings revealed that deepfakes often show differences between the two.
The team applied a technique from astronomy to quantify and compare the eyes' reflections. They used the Gini coefficient, a statistic typically used to measure how light is distributed in an image of a galaxy, to assess the uniformity of light across a reflection's pixels. A Gini value near 0 indicates that the light is evenly distributed, while a value near 1 indicates that the light is concentrated in a single pixel.
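As a rough illustration of the idea (not the authors' code), the Gini coefficient over a patch of pixel intensities can be computed as below. The function name `gini` and the sample arrays are illustrative; a real detector would first have to locate and crop the reflection region from each eye.

```python
import numpy as np

def gini(pixels):
    """Gini coefficient of pixel intensities: ~0 when light is spread
    evenly across the patch, ~1 when it is concentrated in one pixel."""
    x = np.sort(np.asarray(pixels, dtype=float).ravel())
    n = x.size
    total = x.sum()
    if n == 0 or total == 0:
        return 0.0
    ranks = np.arange(1, n + 1)
    # Standard rank-based formula for the Gini coefficient.
    return float(2.0 * np.sum(ranks * x) / (n * total) - (n + 1) / n)

# An evenly lit patch vs. a single bright speck:
even = np.full(100, 10.0)               # Gini ~ 0
speck = np.zeros(100); speck[0] = 10.0  # Gini ~ 1

# Comparing the two eyes: a large difference flags a possible fake.
left_eye, right_eye = even, speck
mismatch = abs(gini(left_eye) - gini(right_eye))
```

Note that astronomers usually compute the Gini statistic over a galaxy's segmented pixels (as in Lotz et al.'s morphology work); the rank formula above is the equivalent general form for non-negative values.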
Enlarge / A series of deepfake eyes showing inconsistent reflections between the two eyes.
In a post for the Royal Astronomical Society, Pimbblet compared his method of measuring eye-reflection shape to a more common way of measuring galaxy shape in telescope images: "To measure a galaxy's shape, we analyse whether it has a compact centre, whether it's symmetrical, and how smooth it is; we analyse the distribution of light."
The researchers also explored using CAS parameters (Concentration, Asymmetry, Smoothness), another astronomy tool for measuring the distribution of light in galaxies, but this method proved less effective at identifying fake eyes.
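To give a flavor of one of the three CAS parameters, here is a minimal sketch of rotational asymmetry, the "A" term: rotate the patch 180 degrees about its centre and measure how much it differs from itself. This is an assumption-laden simplification (real CAS measurements also handle centering and background noise), not the study's implementation.

```python
import numpy as np

def asymmetry(img):
    """Rotational asymmetry (the 'A' in CAS): 0 for a patch that is
    identical under a 180-degree rotation, larger for lopsided light."""
    img = np.asarray(img, dtype=float)
    rotated = np.rot90(img, 2)  # rotate 180 degrees about the centre
    denom = 2.0 * np.abs(img).sum()
    if denom == 0:
        return 0.0
    return float(np.abs(img - rotated).sum() / denom)

# A uniform patch is perfectly symmetric; an off-centre highlight is not.
symmetric = np.ones((5, 5))
lopsided = np.zeros((5, 5)); lopsided[0, 0] = 1.0
```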
Detection Arms Race
While eye-reflection analysis offers a potential way to detect AI-generated images, the method may stop working if AI models evolve to incorporate physically accurate eye reflections, perhaps applied as a post-processing step after image generation. The technique also requires a clear, close-up view of the eyes to work.
The approach also risks false positives, since eye reflections can mismatch even in genuine photos because of varied lighting conditions or post-processing. Still, analyzing eye reflections may prove a useful tool in a larger deepfake-detection toolset that also considers factors such as hair texture, anatomical structure, skin details, and background consistency.
Although the technique shows promise in the short term, Pimbblet warned that it is not perfect: "There will be false positives and false negatives, and it won't catch everything," he told the Royal Astronomical Society. "But this method provides a basis, a plan of attack, in the arms race to detect deepfakes."