Last year, in the midst of the Black Lives Matter protests, Twitter was called out for its “racist” image-cropping algorithm. Users had discovered that the algorithm was automatically focusing on white faces over black ones. Unsurprisingly, critics pounced, accusing Twitter of algorithmic bias.
The problem lay in how the algorithm cropped images so that multiple pictures could be shown in the same tweet. It dated back to 2017, when Twitter scrapped its face-detection algorithm in favour of a saliency-detection model trained to identify, and display, the most important part of an image.
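Twitter has not published this exact logic, but the core idea of saliency cropping can be sketched simply: compute a saliency map for the image, then centre the crop window on the most salient point. A minimal NumPy sketch, where the function name and toy saliency map are illustrative assumptions, not Twitter's code:

```python
import numpy as np

def saliency_crop(saliency: np.ndarray, crop_h: int, crop_w: int):
    """Return the (top, left) corner of a crop_h x crop_w window centred
    on the most salient pixel, clamped to stay inside the image."""
    h, w = saliency.shape
    # Coordinates of the single most salient pixel.
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    # Centre the window on that pixel, clamping at the image edges.
    top = int(min(max(y - crop_h // 2, 0), h - crop_h))
    left = int(min(max(x - crop_w // 2, 0), w - crop_w))
    return top, left

# Toy saliency map: a single bright spot near the top-right corner.
sal = np.zeros((100, 100))
sal[10, 80] = 1.0
print(saliency_crop(sal, 50, 50))  # (0, 50) - the crop hugs the hotspot
```

Whatever the model judges most salient wins the frame, which is exactly why a biased saliency model produces biased crops.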
One user discovered that the algorithm would consistently crop an image of US Senator Mitch McConnell and Barack Obama to hide the former president.
There were also complaints of objectification bias, with claims that the algorithm cropped pictures of women to focus on their chests or legs, although Twitter's investigation did not uphold this.
After investigating the allegations, Twitter found a 4% difference from demographic parity in favour of white individuals and an 8% difference from demographic parity in favour of women.
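Demographic parity here means the favourable outcome (say, a face surviving the crop) occurs at equal rates across groups, so a "4% difference" is simply the gap between two selection rates. A hypothetical helper, `parity_difference`, illustrating how such a gap is computed (a sketch, not Twitter's actual methodology):

```python
def parity_difference(outcomes_a, outcomes_b):
    """Difference in favourable-outcome rates between two groups.

    Each list holds 1 if the favourable outcome occurred (e.g. the
    person's face was kept in the crop) and 0 otherwise. A result of
    0.0 means demographic parity; positive values favour group A.
    """
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return rate_a - rate_b

# Group A kept in 52 of 100 crops, group B in 48 of 100:
print(parity_difference([1] * 52 + [0] * 48,
                        [1] * 48 + [0] * 52))  # ~0.04, i.e. a 4% gap
```

A perfectly fair cropper would score 0.0 on this metric; Twitter's reported 4% and 8% figures are deviations of exactly this kind.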
Twitter’s director of software engineering, Rumman Chowdhury, admitted “that not everything on Twitter is a good candidate for an algorithm, and in this case, how to crop an image is a decision best made by people.”
In March, Twitter tested a new way to display standard aspect ratio photos in full on iOS and Android — giving users more control over how their images appear and also improving the experience of people seeing the images in their timeline.
The update also included a true preview of the image in the tweet composer, so users know what a tweet will look like before they post it (although on Twitter for the web, image previews are still cropped).
In addition to this change, in April Twitter formed the ML Ethics, Transparency and Accountability (META) team: a dedicated group of engineers, researchers, and data scientists who collaborate across the company to assess harms in the algorithms it uses and to help Twitter prioritise which issues to tackle first.
Now, in an added effort to improve that much-maligned algorithm, Twitter is releasing the code on GitHub and META is inviting coders and the hacker community to give it their best shot.
Twitter is proudly describing it as “the industry’s first algorithmic bias bounty competition”.
Competitors will have to submit a description of what issues they’ve found using the current algorithm as well as a dataset that will then be run through Twitter’s system to demonstrate the fault.
But although coders and hackers will be doing the hard work in a very sensitive area for the tech giant, the payday is a modest one.
Winners will receive cash prizes ranging from US$500 to US$3,500, which is a little underwhelming given that Twitter CEO Jack Dorsey's other company, Square, this week bought Afterpay for US$29 billion in stock.
Still, winners might just get a free trip to Las Vegas: the chosen ones will be invited to present their work at a Twitter-hosted event at DEF CON, one of the world's largest hacker conferences, in Sin City next month.