
Can Meta prevent its algorithm from facilitating a communal bloodbath in India?


Image: Getty Images

It is hard to imagine that the land synonymous with Gandhi’s non-violence is on track to commit genocide against Muslims.

This may seem overblown, but Gregory Stanton, the founder of Genocide Watch, one of the world’s foremost scholars on the topic, and the man who warned of the Rwandan genocide in 1993, doesn’t think so. Stanton says that, on its current communal trajectory, India is well on track to spark one.

Genocide is more of a process than an event, say experts. For one to happen, at least two conditions must be met. One, representatives of the majority population must issue outright or tacit directives to engage in mass slaughter. Two, there must be an unrelenting outpouring of hate and propaganda, delivered via a platform of great reach, directed against a community that is thereby dehumanised.

Both conditions are now easily fulfilled in India, which is what has Stanton worried.

Especially concerning was the call last month, from some of the highest Hindu clergy in India, for the slaughter of Muslims. Delivered as speeches, some of those calls dwelt on the kinds of weapons that would be ideal for dispatching Muslims; others stipulated a minimum sum to be spent on a personal armoury. One of the most senior clergy advocated a swift ethnic cleansing like Myanmar’s.

And while half of the second condition mentioned above, the existence of hate and propaganda against Muslims, has long since been satisfied, it is only recently that a platform for incessantly hammering home the message that the minority is inhuman has existed.

In Nazi Germany, propaganda minister Joseph Goebbels used a variety of platforms — posters, art, radio, film, pamphlets, and loudspeakers mounted on trucks — to convince Germans that Jews needed to be wiped out. In 1993, Hutu extremists in Rwanda used radio to constantly hammer home the message that Tutsis were “cockroaches” that needed to be exterminated. In India, the platform of choice for Hindu supremacists and hate-mongers is Facebook, whose parent company has since rebranded as Meta, at least according to a whole host of company insiders, whistleblowers, internal communications, and critics of the social media giant.

With two major state elections around the corner and public hate-mongering against minorities the established pattern for wooing votes, all eyes are now on whether Facebook will continue to burnish its reputation as the chief spreader of hate or finally act against it.

FACEBOOK’S ‘DETERMINING ROLE’ IN GENOCIDE

Facebook, after all, already has one genocide credited to its name.

The genocide in question happened in Myanmar — formerly Burma — in 2017, when over 7,000 minority Muslims from the Rohingya community were murdered in a coordinated attack by the Buddhist-majority military and complicit civilians.

Ultimately, close to a million Rohingya Muslims fled into neighbouring Bangladesh, where the majority of them now live in deplorable conditions.

But how did the genocide process even start?

A majority of Burmese had come online in 2013 thanks to Facebook’s “Free Basics” program, which meant the first page many Burmese civilians encountered on the internet was Facebook’s. From then on, for four years, extremist Buddhists and the military set about dehumanising the Rohingya on the platform.

A relentless torrent of posts branded the Rohingya as criminals, foreign invaders, rapists, pigs, dogs, and various other epithets.

“Facebook executives were fully aware that posts ordering hits by the Myanmar government on the minority Muslim Rohingya were spreading wildly on Facebook,” said a former Meta employee in the recent $150 billion lawsuits filed against the company for its role in the Rohingya genocide.

“It couldn’t have been presented to them more clearly, and they didn’t take the necessary steps,” added David Madden, a tech entrepreneur who had urged Facebook to take immediate action when he visited the company’s offices.

In response to those claims, Meta CEO Mark Zuckerberg has said that documents released by whistleblower Frances Haugen relating to the Rohingya genocide have created a “false picture” of the company. In 2018, however, Meta conceded that its Facebook platform had been used to foment division and incite offline violence — although this admission only came after the Rohingya genocide had already happened.

The head of the United Nations fact-finding mission into the genocide, Marzuki Darusman, said that Facebook played a “determining role” in the massacre.

SPREADING HATE IN INDIA

But could India really follow in Myanmar’s footsteps?

Genocides are usually sanctioned, or at least tacitly encouraged, by a country’s top leaders. On that score, India’s leaders have done an exemplary job: in the seven years since coming to power, Hindu nationalist Prime Minister Narendra Modi and his lieutenant, Home Minister Amit Shah, have never once condemned the perpetrators of the growing violence against Muslims.

This silence has continued even after the aforementioned call in Haridwar by Hindu religious leaders for a genocide of Muslims.

Instead, Modi’s Bharatiya Janata Party (BJP) government has passed laws that critics say target Muslims, engineered a brutal crackdown in Kashmir, orchestrated a way to strip close to two million Indian Muslims of their citizenship, sanctioned the incarceration of large numbers of innocent Muslim men, and presided over riots, lynchings, and a torrent of hate speech against the Muslim community without so much as a passing reference to any of it.

So it’s no surprise that hateful propaganda has been able to thrive on Facebook in India, with the company choosing to do nothing about it. According to the Wall Street Journal, this vitriol swelled by 300% in late 2019 and 2020 into a virtual torrent. The exposé charted how Facebook’s inaction fed into riots in Delhi, the nation’s capital, in which 53 people, most of them Muslim, were killed.

The Wall Street Journal’s revelations carried another bombshell: Ankhi Das, Facebook’s head of public policy in India at the time, had prevented the removal of hate posts from BJP-affiliated accounts. These included posts from Raja Singh, a BJP politician from the state of Telangana, who had expressed a great desire to shoot Bangladeshi and Rohingya refugees, whom he called “Muslim traitors”, and who had called for their mosques to be burned.

Das, who worked closely with Zuckerberg, had apparently explained to fellow Facebook employees that the company couldn’t ban BJP politicians because doing so would hurt its business prospects. The BJP spends seven times more than its rivals on ads placed on Facebook.

In response to this allegation, Facebook spokesperson Andy Stone said: “Das had raised concerns about the political fallout that would result from designating Singh a dangerous individual, but said her opposition wasn’t the sole factor in the company’s decision to let Singh remain on the platform”.

After Facebook’s efforts to quell criticism, however, a post made by Das surfaced online in which she called Muslims “a degenerate community”. The post also revealed that Das had helped Modi with his debut prime ministerial campaign. Das eventually left Facebook, but by then the damage had long been done.

Digging deeper, Facebook would like people to believe that its backing of Das was a reflection of Zuckerberg’s public stance on content: that it should be unfettered and free.

But critics say that’s all a misleading sideshow. According to them, Zuckerberg and Meta are consummate opportunists who ally with the world’s power structures as they see fit — free speech be damned. In Vietnam, as the Washington Post has detailed, the ruling Communist Party reportedly told Meta that it had to throttle anti-government posts or be booted out of the country. With $1 billion in annual revenue on the line, Meta, a little ahead of the party’s congress in January 2021, began blocking anti-government posts at triple the previous rate.

AVOIDING THE SHIELD OF SECTION 230

Observers of the company say that Zuckerberg’s chutzpah is fuelled less by the idea of free speech and more by a clause enshrined in the 1996 Communications Decency Act (CDA), which has acted as the company’s guardian angel for many years.

Carved out in 1996, when the internet was still new, the law was created to protect a nascent industry and stalwarts like CompuServe and Prodigy from liability for anything posted on their pages. Section 230 ring-fenced them from such pesky problems.

Since then, social media companies have enjoyed little to no liability despite selling advertisements against user content — something that should logically put them in the same category as newspaper publishers.

As we know, however, privately funded platforms do not always serve the public good.

If Section 230 were altered even a little, Meta would suddenly find itself fighting several legal battles and, worse, would be forced to change its cash cow of a business model.

Recently, Zuckerberg threw a curveball by appearing all too eager for a modification of the law under which companies would be held responsible for illegal content unless they have systems in place to identify it. Critics have been quick to point out that such a change would shut out smaller companies and competitors that lack Facebook-like resources to implement such a costly mechanism.

Despite the difficulty of finding grounds to sue Meta, let alone win, Rohingya refugees around the world have filed two $150 billion class action lawsuits — one in the US and another in the UK — with arguments planned specifically around bypassing the Section 230 shield.

The lawsuits do this by structuring their arguments around something else at the core of how Facebook makes its money: its algorithms.

There are two kinds of Facebook algorithms at issue. One is the company’s content-ranking algorithm, which critics say is heavily geared toward popularising content that provokes extreme emotions over more rational or objective material.
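To see why critics make that claim, here is a minimal, entirely hypothetical sketch of engagement-weighted ranking in Python. The Post fields, the weights, and the engagement_score and rank_feed functions are illustrative assumptions, not Meta’s actual code; the point is simply that if ranking optimises for raw engagement, posts that provoke strong reactions rise to the top.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    # Comments and shares tend to signal stronger reactions than likes,
    # so weight them more heavily. The weights here are illustrative only.
    return post.likes + 5 * post.comments + 10 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Rank purely by engagement: an inflammatory post that draws angry
    # comments and rapid shares outranks a calmer, factual one.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("measured policy analysis", likes=900, comments=40, shares=20),
    Post("inflammatory rumour", likes=300, comments=500, shares=400),
])
print([p.text for p in feed])  # the rumour ranks first
```

Under these assumed weights, the calm post scores 1,300 while the rumour scores 6,800: no one has to intend to promote hate for it to dominate the feed.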

Then there is the second kind: its hate-speech detection algorithms. According to Time, before India’s 2019 national elections and the Delhi riots that followed, Facebook’s hate-speech algorithms covered only four of the 22 official languages spoken by India’s 1.4 billion people.

Whistleblower Frances Haugen says that at the time there was a total “lack of Hindi and Bengali classifiers”, along with classifiers for other languages. Take Assamese, spoken in a northeastern Indian state of 22.5 million people: no code at all was in place to monitor hate speech in it.
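A short, hypothetical sketch shows what such a coverage gap means in practice. Nothing here reflects Facebook’s real moderation pipeline; the moderate function, the classifier table, and the threshold are all assumptions for illustration. The structural point is that a post in a language with no trained classifier is never scored at all.

```python
def english_classifier(text: str) -> float:
    # Stand-in for a trained model: returns a hate-speech probability.
    slurs = {"parasites", "rats"}  # toy keyword list, illustrative only
    return 1.0 if any(word in text.lower() for word in slurs) else 0.0

# Only a handful of languages get classifiers; per the article, Assamese
# (and, per Haugen, Hindi and Bengali) had none at the time.
CLASSIFIERS = {"english": english_classifier}

def moderate(text: str, language: str) -> str:
    classifier = CLASSIFIERS.get(language)
    if classifier is None:
        # No classifier for this language: the post sails through unexamined.
        return "published (unscreened)"
    return "flagged" if classifier(text) > 0.8 else "published"

print(moderate("they are parasites", "english"))   # -> flagged
print(moderate("they are parasites", "assamese"))  # -> published (unscreened)
```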

Combine this absence of classifiers with the region’s historical tensions, rooted in Assamese Hindu resentment of Bangladeshi Muslim refugees pouring across the border, and by 2019 Facebook had become a cesspool of posts calling Bengali Muslims “parasites”, “rats”, and “rapists”, each drawing millions of views.

The Rohingya lawsuits have thus targeted the algorithms rather than going down the Section 230 route, knowing the latter would be destined to fail. They claim that Meta’s Facebook product is defective and that the company acted negligently by prioritising user engagement and growth over safety.

“At the core of this complaint is the realisation that Facebook was willing to trade the lives of the Rohingya people for better market penetration in a small country in Southeast Asia,” the lawsuit’s originating complaint outlines [PDF].

Could it work?

“The timing of these announcements shows the lawsuit is a wake-up call,” Alternative ASEAN Network on Burma founder Debbie Stothard said in a report. 

“Strategic litigation like this — you never know where it can go. In recent times we have seen climate-change litigation becoming more commonplace and getting some wins,” she added. 

“Based on the precedents, this case should lose,” said Eric Goldman, a professor of law at Santa Clara University School of Law. “But you’ve got so much antipathy towards Facebook nowadays — anything is possible.”

When the lawsuit was filed last month, Mark Farmaner, director of the NGO Burma Campaign UK, said many of the accounts linked to the military were still up and active, years after the genocide.

“I was in discussions with Facebook only a few weeks ago about these pages. They were still adamantly refusing to take them down,” he said. “They’ve known all along that these pages were used to promote military companies, to raise money to help fund the crimes the military commit.”

When the lawsuit was officially filed, however, the pages disappeared.  

Shortly afterwards, Meta came out with the following statement: “We’ve built a dedicated team of Burmese speakers, banned the Tatmadaw (Myanmar military), disrupted networks manipulating public debate and taken action on harmful misinformation to help keep people safe. We’ve also invested in Burmese-language technology to reduce the prevalence of violating content.”

WILL FACEBOOK CHANGE?

India is in crisis. Assaults, murders, public humiliations, intimidation, incarcerations, and unending waves of hate speech are battering the country on a scale not seen since partition.

And yet, in this complex post-colonial nation with 22 languages, thousands of dialects, castes, linguistic identities and religious fissures carved by British rule, Meta has plied its trade with disastrous consequences.

All the while, Meta’s chieftains, from spokesperson Lena Pietsch to vice president of global affairs Nick Clegg to Zuckerberg himself, have repeatedly assured us that Facebook is on top of things.

Knowing what happened in Myanmar, it’s hard to take Facebook at its word.
