Responsible AI is top of mind for every organization, large or small, and for the AI practitioners who work for them. One challenge is assessing and identifying the responsible AI risks or harms an AI system can inflict on people or society. Even once the risks are discovered, the next dilemma is how to mitigate them. The solutions are not always straightforward, and sometimes tradeoffs need to be made.
✨ Join the #MarchResponsibly challenge by learning about the responsible AI tools and services available to you.
Check out the best-practice guide from Mihaela Vorvoreanu, a responsible AI researcher at Microsoft, on the tips, tools, and practices organizations and developers can use to reach responsible AI maturity. The article discusses a responsible AI red teaming guide on how to identify potential adversarial attacks when testing for security vulnerabilities. After harms are identified, it explains the importance of examining their frequency, putting mitigations in place, and continuously monitoring for the emergence of new harms. Since all of this requires team collaboration and leadership buy-in, the article recommends great interactive tools and frameworks teams can leverage to reach higher responsible AI maturity stages.
The article covers the following practical responsible AI tools & guides:
- Red teaming guide
- Making complex AI harm trade-offs
- Human-AI experience (HAX) toolkit
- Human-AI Interaction guide
- Reaching maturity levels
👉🏽 Check out Mihaela Vorvoreanu's article: https://aka.ms/march-rai/guide-to-advance-ai
🎉Happy Learning :)