
Thursday, April 8, 2021

Today I learned about Intel's AI sliders that filter online gaming abuse - The Verge


Last month, during its virtual GDC presentation, Intel announced Bleep, a new AI-powered tool that it hopes will cut down on the amount of toxicity gamers have to experience in voice chat. According to Intel, the app “uses AI to detect and redact audio based on user preferences.” The filter works on incoming audio, acting as an additional user-controlled layer of moderation on top of what a platform or service already offers.

It’s a noble effort, but there’s something bleakly funny about Bleep’s interface, which lists in minute detail all of the different categories of abuse that people might encounter online, paired with sliders to control the quantity of mistreatment users want to hear. Categories range anywhere from “Aggression” to “LGBTQ+ Hate,” “Misogyny,” “Racism and Xenophobia,” and “White nationalism.” There’s even a toggle for the N-word. Bleep’s page notes that it has yet to enter public beta, so all of this is subject to change.

Filters include “Aggression,” “Misogyny,” and a toggle for the “N-word.” (Images: Intel)

With the majority of these categories, Bleep appears to give users a choice: would you like none, some, most, or all of this offensive language to be filtered out? Like choosing from a buffet of toxic internet slurry, Intel’s interface gives players the option of sprinkling a light serving of aggression or name-calling into their online gaming.
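Intel hasn’t published how those slider positions map onto its detection pipeline, but a minimal sketch can illustrate the idea described above: each abuse category gets a user-chosen level (none, some, most, or all), and a flagged utterance is redacted only when it crosses that threshold. All names here (FilterLevel, BleepSettings, should_redact) and the severity scoring are hypothetical, not Intel’s actual API.

```python
# Hypothetical sketch of per-category filter settings; none of these names
# come from Intel's software.
from enum import IntEnum
from dataclasses import dataclass, field

class FilterLevel(IntEnum):
    NONE = 0   # filter nothing in this category
    SOME = 1   # filter only the most severe content
    MOST = 2
    ALL = 3    # filter everything flagged in this category

@dataclass
class BleepSettings:
    # Per-category slider positions; categories not set default to NONE.
    levels: dict[str, FilterLevel] = field(default_factory=dict)

    def should_redact(self, category: str, severity: int) -> bool:
        """Decide whether to redact an utterance flagged by an assumed
        upstream detector that scores severity from 1 (mild) to 3 (severe)."""
        level = self.levels.get(category, FilterLevel.NONE)
        if level == FilterLevel.NONE:
            return False
        # Higher slider positions catch progressively milder content.
        return severity >= (FilterLevel.ALL - level) + 1

settings = BleepSettings(levels={"Aggression": FilterLevel.SOME,
                                 "Misogyny": FilterLevel.ALL})
print(settings.should_redact("Aggression", severity=3))  # True: severe enough
print(settings.should_redact("Aggression", severity=1))  # False: below threshold
print(settings.should_redact("Misogyny", severity=1))    # True: "all" filters everything
```

The point of the sketch is simply that the filtering decision happens client-side against the user’s own thresholds, which is what makes the “how much misogyny would you like?” framing of the interface feel so strange.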

Bleep has been in the works for a couple of years now — PCMag notes that Intel talked about this initiative way back at GDC 2019 — and it’s working with AI moderation specialists Spirit AI on the software. But moderating online spaces using artificial intelligence is no easy feat, as platforms like Facebook and YouTube have shown. Although automated systems can identify straightforwardly offensive words, they often fail to consider the context and nuance of certain insults and threats. Online toxicity comes in many, constantly evolving forms that can be difficult for even the most advanced AI moderation systems to spot.

“While we recognize that solutions like Bleep don’t erase the problem, we believe it’s a step in the right direction, giving gamers a tool to control their experience,” Intel’s Roger Chandler said during its GDC demonstration. Intel says it hopes to release Bleep later this year, and adds that the technology relies on its hardware-accelerated AI speech detection, suggesting that the software may require Intel hardware to run.

