Artificial intelligence (AI) imagery has made it easy for anyone to alter a photo or video with startlingly realistic results. This creates issues for many industries, from insurance claim verification to news and media outlets. In the past, a person needed to use computer-generated imagery (CGI) or have a mastery of Photoshop – and a lot of time – to modify pictures or video convincingly.
Today, off-the-shelf software is available to anyone who wants to create deceptive footage and photos that are surprisingly realistic. These fabricated videos and photos are called “deepfakes”. They make it much easier to falsify claims-related images, increasing workload and potential costs for many organizations, including insurance carriers. There is growing awareness of the issue, along with new technological developments that can better prepare people to determine whether an image is authentic or a manipulated creation.
There are two main ways forensics experts identify false photographs. The first is to look for evidence of modification, determining whether pixels or metadata have been altered. They look for reflections that don’t follow the laws of physics, for example, or shadows that don’t make sense. They can also check how many times an image has been compressed, which reveals whether it has been saved multiple times.
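As a simplified illustration of the metadata side of that analysis, the sketch below flags a few common signs of re-editing. The field names, sample values and thresholds are hypothetical examples, not the schema of any real forensic toolkit:

```python
# Minimal sketch of metadata-based tamper checks. The metadata dict
# and its field names are hypothetical, for illustration only.

def metadata_red_flags(meta: dict) -> list[str]:
    """Return a list of reasons the photo's metadata looks suspicious."""
    flags = []
    # Editing software recorded in the metadata suggests a re-save.
    software = meta.get("software", "").lower()
    if any(tool in software for tool in ("photoshop", "gimp")):
        flags.append(f"edited with {meta['software']}")
    # A modification time later than the capture time implies the file
    # was changed after the photo was taken.
    if meta.get("modified_at", 0) > meta.get("captured_at", 0):
        flags.append("file modified after capture")
    # Repeated JPEG compression leaves multiple save generations.
    if meta.get("save_count", 1) > 1:
        flags.append(f"saved {meta['save_count']} times")
    return flags

suspect = {
    "software": "Adobe Photoshop 24.0",
    "captured_at": 1_700_000_000,   # Unix timestamps (seconds)
    "modified_at": 1_700_003_600,   # one hour after capture
    "save_count": 3,
}
print(metadata_red_flags(suspect))
```

A real forensic workflow would extract these values from the file’s Exif data and combine them with the pixel-level checks described above; no single flag is proof of tampering on its own.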
The second, newer approach is to verify an image’s integrity the moment it is taken. AI can perform dozens of checks to validate a photograph, such as confirming the camera’s location data and timestamp. It can quickly determine whether the camera’s coordinates, time zone and altitude line up with nearby Wi-Fi network information. It can also tell whether a camera captured an original photo or merely photographed another two-dimensional image. These new apps can even spot if a single pixel is out of place.
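One of those capture-time checks – that a photo’s GPS longitude roughly agrees with its timestamp’s time zone – can be sketched as follows. The 15-degrees-per-hour rule and the tolerance value are simplifying assumptions for illustration, not how any particular verification app works:

```python
# Sketch of one capture-time plausibility check: does the camera's
# reported longitude roughly match the timestamp's UTC offset?
# Real time zones follow political borders; the 15-degrees-per-hour
# rule used here is a deliberate simplification.

def longitude_matches_offset(longitude_deg: float,
                             utc_offset_hours: float,
                             tolerance_hours: float = 1.5) -> bool:
    """Solar time shifts about one hour per 15 degrees of longitude."""
    expected_offset = longitude_deg / 15.0
    return abs(expected_offset - utc_offset_hours) <= tolerance_hours

# New York sits near longitude -74, and Eastern Time is UTC-5.
print(longitude_matches_offset(-74.0, -5.0))   # plausible
# The same coordinates with a Tokyo timestamp (UTC+9) is a red flag.
print(longitude_matches_offset(-74.0, 9.0))    # implausible
```

A production system would cross-reference many more signals – altitude, nearby Wi-Fi networks, cell towers – so that faking all of them consistently becomes much harder.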
This new technology is promising, especially given that more than two billion photos are uploaded to the internet every day. This type of technology could be used to filter deepfake images at scale. As an added layer of trust, photos and metadata are being stored using a blockchain, a technology that combines cryptography and distributed networking to securely store and track information.
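At its core, that blockchain layer works by chaining cryptographic hashes, so that altering any stored photo record invalidates every record after it. A minimal sketch of the idea, using SHA-256 and an in-memory list in place of a real distributed ledger:

```python
import hashlib

# Minimal sketch of hash-chained photo records. A real blockchain adds
# distributed consensus across many machines; here a plain Python list
# stands in for the ledger.

def record_hash(prev_hash: str, photo_bytes: bytes, metadata: str) -> str:
    """Hash this record together with the previous record's hash."""
    h = hashlib.sha256()
    h.update(prev_hash.encode())
    h.update(photo_bytes)
    h.update(metadata.encode())
    return h.hexdigest()

def append_record(chain: list, photo_bytes: bytes, metadata: str) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({
        "metadata": metadata,
        "hash": record_hash(prev_hash, photo_bytes, metadata),
    })

def chain_is_valid(chain: list, photos: list) -> bool:
    """Recompute every hash; any tampered record breaks the chain."""
    prev_hash = "0" * 64
    for record, photo in zip(chain, photos):
        if record["hash"] != record_hash(prev_hash, photo, record["metadata"]):
            return False
        prev_hash = record["hash"]
    return True

photos = [b"jpeg-bytes-1", b"jpeg-bytes-2"]
chain = []
append_record(chain, photos[0], "photo 1, captured 2024-01-01")
append_record(chain, photos[1], "photo 2, captured 2024-01-02")

print(chain_is_valid(chain, photos))                     # intact chain
print(chain_is_valid(chain, [b"tampered!", photos[1]]))  # altered photo
```

Because each record’s hash depends on the previous one, swapping in a doctored photo after the fact would require recomputing the entire chain, which a distributed network of verifiers would reject.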
Software image verification kits are now available on the market. They are especially important for documenting high-stakes scenarios, including crime scenes, human rights violations and insurance investigations. In the future, this type of technology is expected to be used by default as a preventative measure. Since most content viewed today is captured with mobile devices, integrating verification technology into their operating systems, as well as into digital cameras, would go a long way toward refuting deepfake photos and videos.
Social media platforms such as Facebook, Twitter and Snapchat are looking into installing verification technology so that an unaltered image would automatically be identified and marked as authentic, encouraging more transparency and trustworthiness.
Despite these advances, much remains to be done to fully prepare for the proliferation of deepfakes. It’s especially important that companies have a way to demonstrate they are transparent about their processes and work with trusted technology. That can help maintain consumer trust and keep deception at bay.
The trusted experts at Marsh & McLennan Agency (MMA) go further than most insurance agents and brokers to understand your organization. We deliver results that reduce risk and insurance costs while improving our clients’ operations and profitability. Contact us to learn more.