New social media rules hold users accountable for behavior offline

New rules from tech companies are making it harder for users with sketchy reputations in the real world to become famous online.

Driving the news: Twitch, the Amazon-owned livestream platform used primarily by gamers, on Wednesday unveiled a new policy to address “severe misconduct” that happens off its platform, but that may still impact its online community.

Twitch says it will only act in cases where it has “verifiable evidence” of off-platform activities such as deadly violence, terrorist activities or recruiting, credible threats of mass violence, sexual exploitation of children, sexual assault or membership in a known hate group.

The company says that to ensure its probes of off-platform behavior “are thorough and efficient,” it has partnered with a third-party law firm with expertise in sensitive investigations.

Be smart: Twitch isn’t the first company to create this type of policy, and it certainly won’t be the last.

Snapchat became one of the first social media companies to take action against President Trump’s account for things he said off its platform last summer. The action came after Trump tweeted comments that some suggested glorified violence amid the 2020 racial justice protests.

Months later, following the Capitol insurrection in January, a slew of online platforms and servers removed or suspended Trump’s account or accounts affiliated with pro-Trump violence, conspiracies and hate groups, in an attempt to curb the spread of misinformation and hate speech.
The big picture: This more holistic approach may help tech companies protect themselves against criticism for hosting potentially harmful people or groups, but the policies may also put companies in a bind when it comes to upholding free speech values.

Spotify debuted a “hateful conduct and content policy” in 2018 that limited the active promotion of music by R. Kelly and rapper XXXTentacion, who died shortly after the policy was created. The company discontinued the policy less than a month later following censorship backlash.

Be smart: For some of these platforms, evaluating offline behavior isn’t totally unprecedented, but it’s becoming more important as lawmakers pressure Big Tech firms to take responsibility for more nuanced content moderation problems, such as hate speech and harassment.

Twitch, for example, has previously banned users who were accused of things like sexual misconduct. Its new policy expands the scope of off-platform activities it will ban users for, such as belonging to a hate group.

A wave of #MeToo accounts in June 2020 led to a reckoning in the video games industry and at several companies, including at Twitch and Facebook Gaming, which subsequently banned multiple streamers accused of sexual misconduct.
What to watch: For now, most platforms are evaluating extreme behavior offline that is either illegal or borderline illegal, like terrorist activity, sexual misconduct or belonging to a recognized hate group. The challenge for these companies will be whether and how they draw the line around activity that is harder to define as explicitly illegal, like bullying.

Source: Axios