Human society owes its excellence to a host of different factors, and yet none has contributed more to our current reality than a willingness to grow on a consistent basis. This unwavering commitment to getting better, no matter the situation, has brought the world some huge milestones, with technology emerging as a major member of that group. The reason we hold technology in such high regard is, by and large, predicated upon its skill-set, which has guided us towards a reality that nobody could have imagined otherwise. Nevertheless, if we look beyond the surface for a second, it becomes abundantly clear that the whole endeavor was just as much inspired by the way we applied those skills in a real-world environment. The latter component, in fact, did a lot to give the creation a spectrum-wide presence, and as a result, initiated a full-blown tech revolution. Of course, the next thing the revolution did was scale up the human experience through some genuinely unique avenues, but even after achieving a feat so notable, technology continues to bring forth the right goods. The same has grown more and more evident in recent times, and assuming one new discovery ends up having the desired impact, it will only put that trend on a higher pedestal moving forward.
A research team at the University at Buffalo has developed new deepfake detection algorithms designed to make the technology less biased. To understand the significance of such a development, we must look at how, as per recent studies, error rates in deepfake detection can differ by as much as a disturbing 10.7% between racial groups. On a more specific level, these studies revealed that some algorithms were generally better at judging the authenticity of lighter-skinned subjects than darker-skinned ones. Now, what makes such disparities a massive problem is that they can put certain groups at a higher risk of having their real image pegged as a fake, or worse, a doctored image of them presented as real, thus opening the individual up to serious exploitation. However, according to Siwei Lyu, Ph.D., co-director of the UB Center for Information Integrity and lead author of the study, the component at fault isn't exactly the algorithms but the data on which they are so rigorously trained. You see, if you comb through the photo and video datasets used to train these detectors, you'd easily notice that middle-aged white men are often heavily overrepresented in them, so the algorithms grow better at recognizing them than, say, members of an underrepresented group.
“Say one demographic group has 10,000 samples in the dataset and the other only has 100. The algorithm will sacrifice accuracy on the smaller group in order to minimize errors on the larger group. So it reduces overall errors, but at the expense of the smaller group,” said Lyu.
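To make that effect concrete, here is a toy calculation (ours, not the study's) using the group sizes from Lyu's example; the per-group error rates are made-up numbers chosen only to show how an overall figure can hide the gap:

```python
# A minimal numeric sketch (not the study's code) of how an imbalanced
# dataset lets overall error hide poor accuracy on a small group.
# Group sizes mirror Lyu's example: 10,000 vs. 100 samples.

n_large, n_small = 10_000, 100

# Hypothetical per-group error rates for a detector tuned to minimize
# overall error: it fits the large group well and the small group poorly.
err_large, err_small = 0.03, 0.25

overall_error = (n_large * err_large + n_small * err_small) / (n_large + n_small)
print(f"overall error: {overall_error:.4f}")           # ~0.0322, dominated by the large group
print(f"error gap:     {err_small - err_large:.4f}")   # a 0.22 disparity stays invisible
```

Because the larger group dominates the average, a detector can report a low overall error while misclassifying the smaller group several times more often.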
Mind you, the world has already seen a fair few efforts geared towards making the databases in question more demographically balanced, but they have all proven to be time-consuming. The new discovery doesn't run into the same problem because it is the first attempt anyone has made to improve the fairness of the detection algorithms themselves. The feat is achieved using two specialized machine learning methods: one makes the algorithm aware of demographics, whereas the other leaves it blind to them. This isn't just likely to reduce disparities in accuracy across races and genders; it should also improve the algorithm's overall accuracy. Taking a more granular view of the picture, the demographic-aware method kicks things off by supplying algorithms with datasets that label subjects' gender (male or female) and race (white, Black, Asian, or other), and then issues explicit instructions to minimize errors on the less represented groups. Having said that, most datasets aren't designed to hold race or gender labels, which is why the team's demographic-agnostic method doesn't classify deepfake videos based on the subjects' demographics. Instead, it does so based on features in the video that are not immediately visible to the human eye.
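To picture what the demographic-aware idea could look like in practice, here is a minimal PyTorch-style sketch; the function name, the worst-group penalty, and the fairness_weight parameter are our illustrative assumptions, not the paper's actual formulation:

```python
import torch
import torch.nn.functional as F

def demographic_aware_loss(logits, labels, group_ids, fairness_weight=1.0):
    """Hypothetical demographic-aware objective: standard cross-entropy
    plus a penalty on how far the worst-served group's loss falls behind
    the overall loss. A sketch only, not the paper's exact method.

    logits:    (N, 2) real/fake scores
    labels:    (N,)   0 = real, 1 = fake
    group_ids: (N,)   demographic label per sample (e.g. race or gender)
    """
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    overall = per_sample.mean()

    # Average loss within each demographic group.
    group_losses = [per_sample[group_ids == g].mean() for g in torch.unique(group_ids)]
    worst_group = torch.stack(group_losses).max()

    # Penalize the gap between the worst group and the overall average.
    return overall + fairness_weight * torch.relu(worst_group - overall)
```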
“We’re essentially telling the algorithms that we care about overall performance, but we also want to guarantee that the performance of every group meets certain thresholds, or at least is only so much below the overall performance,” said Lyu. “Maybe a group of videos in the dataset corresponds to a particular demographic group or maybe it corresponds with some other feature of the video, but we don’t need demographic information to identify them. This way, we do not have to handpick which groups should be emphasized. It’s all automated based on which groups make up that middle slice of data.”
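One way such label-free grouping could be wired up is sketched below, assuming a simple k-means clustering over per-video features; the helper name agnostic_groups, the choice of clustering, and n_groups are hypothetical stand-ins, not the team's method:

```python
import numpy as np
from sklearn.cluster import KMeans

def agnostic_groups(features, n_groups=4):
    """Hypothetical demographic-agnostic grouping: cluster samples by
    detector features instead of demographic labels, so a per-group
    fairness constraint can be applied without knowing race or gender."""
    return KMeans(n_clusters=n_groups, n_init=10).fit_predict(features)

# Example: pseudo-group IDs derived purely from video features.
features = np.random.rand(500, 128)    # stand-in for per-video embeddings
group_ids = agnostic_groups(features)  # can replace demographic labels in the loss above
```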
The researchers have already tested their new methods with the popular FaceForensics++ dataset and the state-of-the-art Xception detection algorithm. Going by the available details, the methods improved all of the algorithm's fairness metrics, such as equal false positive rates among races, with the demographic-aware method performing best on each of them. Another thing these tests revealed was that the methods increased the algorithm's overall detection accuracy, from 91.49% to as high as 94.17%. The researchers did, however, observe a limitation: when the Xception algorithm was paired with different datasets, or the FF++ dataset with different detection algorithms, the approach still improved most fairness metrics but suffered from reduced overall detection accuracy.
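For reference, one fairness metric of the kind mentioned above, the spread in false positive rates across groups, can be computed along these lines; the helper and the random toy data are illustrative only and do not reproduce the study's numbers:

```python
import numpy as np

def fpr_gap(y_true, y_pred, groups):
    """Spread of the false positive rate (real videos flagged as fake)
    across groups; a smaller gap means more equal treatment."""
    fprs = []
    for g in np.unique(groups):
        mask = (groups == g) & (y_true == 0)           # real videos in this group
        if mask.any():
            fprs.append((y_pred[mask] == 1).mean())    # fraction wrongly flagged as fake
    return max(fprs) - min(fprs)

# Toy check with random predictions (synthetic data, not the study's results).
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
groups = rng.integers(0, 4, 1000)
print(f"FPR gap across groups: {fpr_gap(y_true, y_pred, groups):.3f}")
```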
“There can be a small tradeoff between performance and fairness, but we can guarantee that the performance degradation is limited,” said Lyu. “Of course, the fundamental solution to the bias problem is improving the quality of the datasets, but for now, we should incorporate fairness into the algorithms themselves.”