Beyond Data Leaks, Is Facebook Also Digitally Redlining Its Users? Read On To Find Out.
Opiotennione v. Bozzuto Mgmt. is only one of numerous discrimination cases filed against Facebook. We already know how dangerous these advertisements can be, from spying on us and harvesting our data to deepening partisan divides. But there’s something else wrong with internet advertisements. Let’s take a closer look at what’s actually going on.
Firstly, What Is Digital Redlining?
Digital redlining refers to unethical ad-targeting techniques. According to the ACLU, internet ad targeting can amplify existing societal disadvantages, excluding people from historically underrepresented groups from housing, jobs, and credit. In a January press release, the ACLU stated, “In today’s digital world, digital redlining has become the new frontier of discrimination, as social media platforms like Facebook and online advertisers have increasingly used personal data to target ads based on race, gender, and other protected traits.”
Although this type of online discrimination is harmful and disproportionately affects people of color, women, and other marginalized groups, courts have ruled that platforms like Facebook and online advertisers cannot be held liable for withholding ads for jobs, housing, and credit from certain users. And although Facebook agreed to make significant improvements to its ad network, its digital redlining persists. Many activists think that, while Facebook has taken steps to address its ad-discrimination issues since a ProPublica investigation in 2016, not enough has been done.
But Are These Steps Enough?
Despite the work Facebook has done, Morgan Williams, general counsel of the National Fair Housing Alliance, told Mashable that there are other areas of Facebook’s platform that warrant scrutiny and concern. Research from October 2021, drawing on public voting records in North Carolina, found that the effects of Facebook’s ad delivery are discriminatory.
Special ad audiences enable marketers to provide a seed audience to Facebook, and then Facebook picks other Facebook users that resemble that seed audience. So advertisers aren’t saying, “display this ad to 27-year-old gay people in Brooklyn,” they’re saying, “show this to individuals like Christianna Silva” — and Christianna Silva is a 27-year-old LGBTQ person in Brooklyn.
Facebook’s ad-delivery algorithm then decides which of the individuals who meet those requirements will actually see the advertisements, based on predictions drawn from a slew of user data: who they are, where they reside, what they like or post, and what groups they join. While this may appear innocuous, it can lead to discrimination, since data about who we are, where we live, what we like and post, and what organizations we join are all proxies for our protected characteristics.
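The dynamic described above can be sketched in a few lines of code. This is a toy illustration, not Facebook’s actual system: a hypothetical lookalike selector that matches users to a seed audience by cosine similarity over facially neutral signals (pages liked, groups joined). Because those signals correlate with a hidden group attribute in the synthetic data, the selected audience reproduces the seed’s demographic skew even though the attribute is never consulted.

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Synthetic population: (user_id, feature_vector, hidden_group).
# Features [likes_page_A, likes_page_B, in_group_C] look neutral,
# but in this made-up data they correlate with the hidden group.
population = [
    ("u1", [1, 0, 1], "X"), ("u2", [1, 0, 1], "X"),
    ("u3", [1, 1, 1], "X"), ("u4", [1, 0, 0], "X"),
    ("u5", [0, 1, 0], "Y"), ("u6", [0, 1, 0], "Y"),
    ("u7", [0, 1, 1], "Y"),
]

def lookalike(seed_ids, k):
    # Average the seed audience's features into a centroid,
    # then return the k most similar non-seed users.
    seed = [u for u in population if u[0] in seed_ids]
    dim = len(seed[0][1])
    centroid = [sum(u[1][i] for u in seed) / len(seed) for i in range(dim)]
    rest = [u for u in population if u[0] not in seed_ids]
    rest.sort(key=lambda u: cosine(u[1], centroid), reverse=True)
    return rest[:k]

# A seed audience drawn entirely from group X yields a lookalike
# audience that is also entirely group X -- no protected attribute
# was ever passed to the selector.
matches = lookalike({"u1", "u2"}, k=2)
print([(uid, group) for uid, _, group in matches])  # → [('u3', 'X'), ('u4', 'X')]
```

The point of the sketch is that removing protected attributes from the inputs does not remove discrimination from the outputs: as long as the remaining features are correlated with those attributes, similarity-based audience expansion can quietly recreate the original skew.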
To be clear, targeting advertising based on protected characteristics is against the law. Despite this, a 2021 study of discrimination in job-ad delivery on Facebook and LinkedIn, conducted by independent researchers at the University of Southern California, found that Facebook’s ad-delivery system showed different employment ads to men and women, even though the jobs required the same qualifications and the targeting parameters selected by the advertiser included all genders. According to Sherwin, a senior staff attorney with the ACLU Women’s Rights Project, recognizing the problem is only the first step; what matters is that Facebook has shown no real desire to change the ad-delivery algorithm. After all, advertising accounts for the vast majority of Facebook’s revenue: in the three months ending in June 2021, the company generated $29 billion in ad revenue. Until Facebook’s appetite changes, much of the work will fall to activists and policymakers.