Oasis Labs has built technology that it claims will allow Meta to collect racial data on Instagram users without infringing on their privacy.
Meta is partnering with privacy-focused company Oasis Labs to gather data on the race and ethnicity of Instagram users in a bid to increase ‘fairness’ in its AI models.
Over the next few months, some Instagram users in the US will be given the option to disclose their race within the app, according to Axios.
To protect user privacy, Oasis Labs has built a cloud platform that uses a technology known as secure multi-party computation (SMPC), which will enable Meta to collect data only at the ‘aggregate level’.
Oasis Labs said the joint project will “advance fairness measurement in AI models” so that Meta can ensure its products operate fairly across racial lines.
“This first-of-its-kind platform will play a major role in an initiative that is an important step towards identifying whether an AI model is fair and allowing for appropriate mitigation,” the company wrote in a statement announcing the partnership yesterday (28 July).
“The data, collected by a third-party survey provider, will be secret-shared with third-party facilitators in a way such that the user’s survey responses cannot be learned by either the facilitators or Meta.”
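Neither company has published the protocol’s details, but additive secret sharing, a standard building block of SMPC, illustrates how a survey answer can be split so that no single party learns it while an aggregate can still be computed. The following Python sketch is a hypothetical illustration of that general technique, not the partners’ actual code; the number of facilitators and the response encoding are assumptions for the example.

```python
import secrets

# Modulus for the additive shares; any value larger than the maximum
# possible aggregate works for this toy example.
MOD = 2**61 - 1

def share(value: int, num_parties: int) -> list[int]:
    """Split one response into random shares that sum to the value mod MOD."""
    shares = [secrets.randbelow(MOD) for _ in range(num_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

# Hypothetical responses: 1 if a respondent selected a given category, else 0.
responses = [1, 0, 1, 1, 0]
NUM_FACILITATORS = 3  # assumed number of share-holding parties

# Each response is split across the facilitators, so no single facilitator
# (and neither the survey provider nor Meta) sees an individual answer.
per_party_totals = [0] * NUM_FACILITATORS
for response in responses:
    for i, s in enumerate(share(response, NUM_FACILITATORS)):
        per_party_totals[i] = (per_party_totals[i] + s) % MOD

# Combining each party's running total reconstructs only the aggregate count.
aggregate = sum(per_party_totals) % MOD
print(aggregate)  # prints 3: the category count, with no individual answer revealed
```

In a construction like this, each facilitator holds only uniformly random shares, so an individual response could be recovered only if every facilitator colluded; the trade-off is that only aggregate statistics such as counts and proportions can be computed.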
By enlisting third parties, Meta hopes users will be more comfortable sharing their data with the company, given that it has come under fire over data privacy before.
According to Axios, Oasis is working with YouGov for data collection, and the collected information will be split among several partners, including Northeastern University, the University of Central Florida and Texas Southern University.
None of the parties involved will have access to individual Instagram users’ responses about their racial identity.
“For a long time, we’ve been working to better understand and improve the experiences that people from marginalised communities are having on our apps,” Roy Austin, VP of civil rights at Meta, told Axios.
“But since it’s always difficult to address something without measuring it first, we’ve partnered with leading researchers, civil rights and academic experts and universities that serve these communities to do exactly that.”
Before Facebook became Meta, the social network had long been criticised for allowing advertisers in key categories such as housing, employment and finance to show messages only to people of a certain race, gender or age group.
This changed in 2019, when Facebook COO Sheryl Sandberg – who is leaving Meta this autumn – announced that the social network would halt advertising practices that allowed advertisers to discriminate against minorities.
“There is a long history of discrimination in the areas of housing, employment and credit, and this harmful behaviour should not happen through Facebook ads,” Sandberg said at the time.