# AI Vulnerability Database
Link to AVID docs: https://avidml.gitbook.io/doc/
## Getting started
As the first open-source, extensible knowledge base of failures across the AI ecosystem (e.g. datasets, models, systems), AVID aims to:
- encompass coordinates of responsible ML such as security, ethics, and performance
- build out a taxonomy of potential harms across these coordinates
- house full-fidelity information (e.g. metadata, measurements, benchmarks) on evaluation use cases of a harm (sub)category, as in the sketch after this list
- evaluate models and datasets that are either open-source or accessible through APIs
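To make the idea of a knowledge-base entry concrete, here is a minimal sketch in Python. All names in it (`VulnerabilityRecord`, `risk_domain`, `harm_category`, the metric, and the artifact identifier) are illustrative placeholders, not AVID's actual report schema; see the linked docs for the real format.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of a knowledge-base entry; field names are
# illustrative placeholders, not AVID's actual report schema.
@dataclass
class VulnerabilityRecord:
    affected_artifact: str   # e.g. a model or dataset identifier
    risk_domain: str         # coordinate: "security", "ethics", or "performance"
    harm_category: str       # taxonomy (sub)category for the failure
    description: str         # what failed and how it was observed
    metrics: List[dict] = field(default_factory=list)    # measurements / benchmark results
    references: List[str] = field(default_factory=list)  # supporting links and papers

# Example: recording an evaluation of an open-source model
record = VulnerabilityRecord(
    affected_artifact="example-org/example-model",        # placeholder identifier
    risk_domain="ethics",
    harm_category="representation bias",                  # placeholder subcategory
    description="Model produces skewed outputs on a demographic benchmark.",
    metrics=[{"name": "disparity_score", "value": 0.31}], # hypothetical metric
)
```

The point of the sketch is the shape of the data: each failure is pinned to an artifact, placed at a coordinate in the taxonomy, and backed by full-fidelity measurements and references rather than a bare label.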
This site contains information to get you started with different components of AVID.
In future iterations, we plan to extend evaluation support across ethics, security, and other performance measures.