Navigating The AI Risk Labyrinth

Dean Wampler

You have built a proof-of-concept for an AI system using multiple AI models, some off-the-shelf and some tuned in-house. It is showing great results, and now you want to move to production.

AI systems come with plenty of potential for embarrassing or unsafe behavior. Even without a dedicated risk officer requiring sign-off, can you catch problems beforehand and do something to address them? Do you know how to address the risk of the system being inaccurate (the purpose of the PoC, after all)? What about the system being biased, or generating abusive text? If the system was fine-tuned on your own data, what is the possibility of data leakage? Was permission obtained for all the data the models were trained on, even the third-party models?

This is the AI Risk Labyrinth, a major obstacle preventing many large companies from effectively adopting Generative AI, as they struggle to find a clear path forward.

Risk Atlas Nexus

To tackle these challenges, IBM Research is pleased to introduce Risk Atlas Nexus, a collaborative effort to structure and mitigate AI risks. Given the rapidly evolving landscape of capabilities and governance policies, no single product or company can serve as the sole source of truth on all dimensions. As a result, we need to foster a community-driven approach to creating, linking, and curating these resources in a way that helps end users operationalize these safeguards as part of their processes. We are releasing Risk Atlas Nexus as a first step toward creating and fostering such a community-based effort.

One Combined Vocabulary: To get a clear view of a model’s risks, we need to combine data from disparate structured and unstructured sources. This data is often derived from textual documentation that is highly interconnected and multidisciplinary, ranging from F1 scores of models on specific benchmarks, to regulations about a model’s CO2 emissions, to the specific regulations that apply to the proposed use case in the proposed geography.

Knowledge graphs provide a unified structure that can link and contextualize data that is complex, ambiguous, and spans multiple domains. Putting that data, and specifically its risks, in context helps AI system designers manage those risks using a common, open ontology defining the important entities and relationships.

We have been working on just such an ontology, which we are releasing as part of this open source project. The ontology uses and links multiple risk taxonomies, including NIST, OWASP, and in particular IBM’s Risk Atlas. (See this starter notebook.)
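To make the linking idea concrete, here is a minimal plain-Python stand-in for such a knowledge graph, representing facts as subject–predicate–object triples. The identifiers (`atlas:data-leakage`, `nexus:relatedTo`, and so on) are illustrative only, not the project’s actual ontology IRIs; the real project ships a proper ontology, but the traversal pattern is the same.

```python
# A tiny in-memory stand-in for a linked risk ontology: a set of
# (subject, predicate, object) triples. All identifiers are hypothetical.
triples = {
    ("atlas:data-leakage", "rdf:type", "nexus:Risk"),
    ("atlas:data-leakage", "rdfs:label", "Data leakage"),
    # Cross-taxonomy links: the same concern as catalogued by NIST and OWASP.
    ("atlas:data-leakage", "nexus:relatedTo", "nist:data-privacy"),
    ("atlas:data-leakage", "nexus:relatedTo", "owasp:llm06-sensitive-info"),
    # A link from the risk to a candidate technical mitigation.
    ("atlas:data-leakage", "nexus:hasMitigation", "mit:output-filtering"),
}

def objects(subject: str, predicate: str) -> list[str]:
    """All objects reachable from `subject` via `predicate`."""
    return sorted(o for s, p, o in triples if s == subject and p == predicate)

print(objects("atlas:data-leakage", "nexus:relatedTo"))
# ['nist:data-privacy', 'owasp:llm06-sensitive-info']
```

Because every taxonomy entry becomes a node and every relationship an edge, adding a new taxonomy is just adding more triples; no existing entries need to change.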

AI-Assisted Information Gathering: Questionnaires are one way to start putting structure onto fragmented data, and they create a governance trail for projects. We help users by suggesting answers to questionnaires such as the Stanford Transparency Index, or to questions relating to the EU AI Act, taking into account free-text use-case descriptions and default policies defined through examples. (Example auto-question notebook.)

What risks do I need to care about? First, we need to know which risks matter most to your use case; we help you judge which ones to start thinking about. (Example risk identification notebook.)
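As a rough sketch of what risk identification from a free-text use-case description looks like, here is a deliberately simple keyword-matching stand-in. The real project uses LLM inference over its taxonomy for this step; the risk names and cue words below are invented for illustration.

```python
# Hypothetical cue words per risk, for illustration only.
RISK_CUES = {
    "data-leakage": {"fine-tuned", "internal", "customer data", "pii"},
    "harmful-output": {"chatbot", "user-facing", "free-text"},
    "bias": {"hiring", "lending", "scoring"},
}

def identify_risks(use_case: str) -> list[str]:
    """Return risks whose cue words appear in the use-case description."""
    text = use_case.lower()
    return sorted(
        risk for risk, cues in RISK_CUES.items()
        if any(cue in text for cue in cues)
    )

print(identify_risks(
    "A user-facing chatbot fine-tuned on internal support tickets"))
# ['data-leakage', 'harmful-output']
```

An LLM-based identifier replaces the keyword match with a semantic one, but the contract is the same: free text in, a ranked subset of taxonomy risks out.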

How can I measure them? Once we know which risks matter, we can start gathering the appropriate datasets, metrics, and benchmarks. We help there by connecting risks to benchmarks and to mitigations such as Granite Guardian.

What actions do I need to take? Not all risks can be addressed by technical mitigations; some call for collecting additional documentation, while others may just mean checking in with stakeholders. We mined these suggestions from NIST guidance, so you can now go from risks to recommended actions.
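The last two steps can be sketched together: once risks are identified, each one maps to benchmarks (how to measure), mitigations (technical safeguards), and actions (process steps). The records below are illustrative placeholders, not the project’s actual catalog entries.

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    benchmarks: list[str] = field(default_factory=list)   # how to measure
    mitigations: list[str] = field(default_factory=list)  # technical safeguards
    actions: list[str] = field(default_factory=list)      # process steps

# Hypothetical catalog entries for illustration only.
CATALOG = {
    "harmful-output": RiskEntry(
        benchmarks=["toxicity-eval-set"],
        mitigations=["granite-guardian-filter"],
        actions=["document residual risk", "review with stakeholders"],
    ),
    "data-leakage": RiskEntry(
        benchmarks=["pii-extraction-probe"],
        mitigations=["output-redaction"],
        actions=["confirm data-use permissions"],
    ),
}

def plan_for(risks: list[str]) -> dict[str, RiskEntry]:
    """Collect measurements, safeguards, and process steps for known risks."""
    return {r: CATALOG[r] for r in risks if r in CATALOG}

plan = plan_for(["harmful-output", "unknown-risk"])
print(sorted(plan))  # ['harmful-output']
```

Note that unknown risks simply fall through; in practice a governance tool would flag them for human review rather than silently dropping them.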

Just the start… We started this open source project after prototyping a system that lowers the barrier to entry for providing AI governance of a system. Watch our demo of the Usage Governance Advisor, to be presented at AAAI’25, and read more here about our vision for AI-assisted governance, which will be presented at the AAAI’25 Workshop on AI Governance: Alignment, Morality and Law.

We are releasing Risk Atlas Nexus as part of the AI Alliance Trust and Safety Evaluation initiative.

Risk Atlas Nexus is just the beginning. We invite the community to join us in shaping the future of AI governance.

See our GitHub repo for more information: https://github.com/IBM/risk-atlas-nexus
