Sept. 2 (Reuters) – Amazon.com Inc (AMZN.O) plans to take a more proactive approach to determining what types of content violate its cloud service policies, such as rules against promoting violence, and to enforcing their removal, according to two sources, a move that could reignite debate over how much power tech companies should have to restrict free speech.
Over the next few months, Amazon will hire a small group of people in its Amazon Web Services (AWS) division to develop expertise and work with outside researchers to monitor future threats, one of the sources familiar with the matter said.
This could make Amazon, the world’s largest cloud service provider with 40% market share according to research firm Gartner, one of the world’s most powerful arbiters of allowable content on the Internet, experts say.
Amazon made headlines in the Washington Post last week for shutting down a website hosted on AWS that featured Islamic State propaganda celebrating the suicide bombing that killed around 170 Afghans and 13 U.S. soldiers in Kabul last Thursday. It did so after the news outlet contacted Amazon, according to the Post.
The proactive approach to content comes after Amazon banned the social media app Parler from its cloud service shortly after the Jan. 6 Capitol riot for allowing content promoting violence.
“AWS Trust & Safety strives to protect AWS customers, partners, and Internet users from bad actors who attempt to use our services for abusive or illegal purposes. When AWS Trust & Safety becomes aware of abusive or illegal behavior on AWS services, they act promptly to investigate and engage with customers to take appropriate action,” AWS said in a statement.
“AWS Trust & Safety does not pre-review content hosted by our customers. As AWS continues to grow, we expect this team to continue to grow,” the statement added.
Activists and human rights groups are increasingly holding accountable not only websites and apps for harmful content, but also the underlying technological infrastructure that allows those sites to operate, while conservative politicians decry what they see as restrictions on free speech.
AWS already prohibits the use of its services in various ways, such as for illegal or fraudulent activity, to incite or threaten violence, or to promote the sexual exploitation and abuse of children, according to its acceptable use policy.
Amazon first asks customers to remove content that violates its policies, or to put a system in place to moderate it. If Amazon cannot reach an acceptable agreement with the customer, it may take the website down.
Amazon aims to develop an approach to the content issues that it and other cloud providers face more frequently, such as determining when misinformation on a company’s website reaches a scale that requires AWS action, the source said.
The new team at AWS does not plan to sift through the vast amounts of content that businesses host in the cloud, but will aim to stay ahead of future threats, such as emerging extremist groups whose content could end up on the AWS cloud, the source added.
Amazon is currently recruiting a global policy manager to join the AWS Trust & Safety team, which is responsible for “protecting AWS from a wide variety of abuse,” according to a job posting on its website.
AWS’s offerings include cloud storage and virtual servers, and it counts large companies such as Netflix (NFLX.O), Coca-Cola (KO.N) and Capital One (COF.N) as customers, according to its website.
Better preparedness against certain types of content could help Amazon avoid legal and PR risks.
“If (Amazon) can proactively eliminate some of this content before it is discovered and becomes big news, that helps it avoid reputational damage,” said Melissa Ryan, founder of CARD Strategies, a consulting firm that helps organizations understand threats of extremism and toxicity online.
Cloud services such as AWS, and other entities such as domain registrars, are considered the “backbone of the Internet,” but have historically been politically neutral services, according to a 2019 report by Joan Donovan, a Harvard researcher who studies online extremism and disinformation campaigns.
But cloud service providers have removed content before, such as in the aftermath of the 2017 alt-right rally in Charlottesville, Virginia, which helped slow the ability of alt-right groups to organize, Donovan wrote.
“Most of these companies understandably didn’t want to get into the content and didn’t want to be the arbiter of thought,” Ryan said. “But when you talk about hate and extremism, you have to take a stand.”
Reporting by Sheila Dang in Dallas; Editing by Kenneth Li, Lisa Shumaker and Sandra Maler
Our standards: Thomson Reuters Trust Principles.