Microsoft is launching a new AI-powered moderation service that it says is designed to foster safer online environments and communities.

Called Azure AI Content Safety, the new offering, available through the Azure AI product platform, offers a range of AI models trained to detect "inappropriate" content across images and text. The models, which can understand text in English, Spanish, German, French, Japanese, Portuguese, Italian and Chinese, assign a severity score to flagged content, indicating to moderators what content requires action. (A sketch of what calling such a service might look like appears at the end of this piece.)

"Microsoft has been working on solutions in response to the challenge of harmful content appearing in online communities for over two years. We recognized that existing systems weren't effectively taking into account context or able to work in multiple languages," a Microsoft spokesperson said via email. "New models are able to understand content and cultural context so much better. They are multilingual from the start … and they provide clear and understandable explanations, allowing users to understand why content was flagged or removed."

During a demo at Microsoft's annual Build conference, Sarah Bird, Microsoft's responsible AI lead, explained that Azure AI Content Safety is a productized version of the safety system powering Microsoft's chatbot in Bing and Copilot, GitHub's AI-powered code-generating service.

"We're now launching it as a product that third-party customers can use," Bird said in a statement.

Presumably, the tech behind Azure AI Content Safety has improved since it first launched for Bing Chat in early February. Bing Chat went off the rails when it first rolled out in preview; our coverage found the chatbot spouting vaccine misinformation and writing a hateful screed from the perspective of Adolf Hitler. Other reporters got it to make threats and even shame them for admonishing it.

In another knock against Microsoft, the company just a few months ago laid off the ethics and society team within its larger AI organization. The move left Microsoft without a dedicated team to ensure its AI principles are closely tied to product design.

Setting all that aside for a moment, Azure AI Content Safety, which protects against biased, sexist, racist, hateful, violent and self-harm content, according to Microsoft, is integrated into Azure OpenAI Service, Microsoft's fully managed, corporate-focused product intended to give businesses access to OpenAI's technologies with added governance and compliance features. But Azure AI Content Safety can also be applied to non-AI systems, such as online communities and gaming platforms.

Pricing starts at $1.50 per 1,000 images and $0.75 per 1,000 text records.

Azure AI Content Safety is similar to other AI-powered toxicity detection services, including Perspective, maintained by Google's Counter Abuse Technology Team and Jigsaw, and it succeeds Microsoft's own Content Moderator tool. (No word on whether it was built on Microsoft's acquisition of Two Hat, a content moderation provider, in 2021.) Those services, like Azure AI Content Safety, offer a score from zero to 100 on how similar new comments and images are to others previously identified as toxic.

But there's reason to be skeptical of them. Beyond Bing Chat's early stumbles and Microsoft's poorly targeted layoffs, studies have shown that AI toxicity detection tech still struggles to overcome challenges, including biases against specific subsets of users.

Several years ago, a team at Penn State found that posts on social media about people with disabilities could be flagged as more negative or toxic by commonly used public sentiment and toxicity detection models.
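For developers wondering what consuming a severity-scoring moderation service like this looks like in practice, here is a minimal sketch of a text-analysis call. The endpoint path, API version, request and response shapes, and the flagging threshold below are assumptions modeled on Azure's general Cognitive Services conventions, not confirmed details of the Azure AI Content Safety API.

```python
# Hypothetical sketch of calling an Azure-style content-safety endpoint.
# The URL path, api-version, and response schema are assumptions, not the
# documented Azure AI Content Safety contract.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
API_KEY = "<your-key>"  # placeholder credential

def analyze_text(text: str) -> list[dict]:
    """Send text for analysis and return per-category severity results."""
    resp = requests.post(
        f"{ENDPOINT}/contentsafety/text:analyze",
        params={"api-version": "2023-04-30-preview"},  # assumed version string
        headers={"Ocp-Apim-Subscription-Key": API_KEY},
        json={"text": text},
        timeout=10,
    )
    resp.raise_for_status()
    # Assumed response shape: a list of {category, severity} entries,
    # one per harm category (e.g. hate, violence, self-harm, sexual).
    return resp.json().get("categoriesAnalysis", [])

def should_flag(results: list[dict], threshold: int = 2) -> bool:
    """Flag content whose severity in any category meets a hypothetical threshold."""
    return any(r.get("severity", 0) >= threshold for r in results)

if __name__ == "__main__":
    results = analyze_text("Example comment to screen before posting.")
    for r in results:
        print(r.get("category"), r.get("severity"))
    print("Flag for review:", should_flag(results))
```

In a real moderation pipeline, the threshold would likely be tuned per category and per community, and flagged items would typically be routed to human reviewers rather than removed automatically.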