
Mitigating Bias and Risk in AI Systems
Artificial intelligence is one of the most promising and complex technologies in history. Leveraged responsibly, it can be a powerful tool to bolster public safety and even enhance procedurally just outcomes. However, given the speed of innovation in this space, much remains to be understood, and many community members understandably have concerns about AI, particularly around bias in these systems. This presentation will offer a real-world methodology already in use in the United States to vet criminal justice AI solutions for accuracy, mitigate bias before deployment, and safeguard community expectations of equitable outcomes. Finally, the session will highlight the role human users play in both the success and the risks of AI technology programs.
- Participants will be introduced to the NIST AI Risk Management Framework and better understand where the three primary sources of bias in AI systems originate.
- Participants will be presented with real-world applications of the concepts in the NIST AI Risk Management Framework, where bias and risk in algorithmically assisted criminal justice processes are being proactively mitigated.
- Participants will understand the basic functionality of generative AI systems and the notion of "inscrutability," and how such issues must be accounted for to maintain trust and prevent inequitable outcomes.