AI Vulnerability Database (AVID)
AVID is an AI Vulnerability Database for reporting and escalating reports of vulnerabilities. AVID houses full-fidelity information (model metadata, harm metrics, measurements, benchmarks, and mitigation techniques if any) on evaluation examples of harm (sub)categories defined by the taxonomy.
The aim here is transparent and reproducible evaluations. AVID is building the database to be both an extension of, and a bridge between, the classic security-related vulnerabilities of the National Vulnerability Database (NVD), case studies of adversarial attacks housed in the MITRE ATLAS, and incidents recorded in the AI Incident Database (AIID) to provide a comprehensive view into the AI Risk landscape.
Quickstart:
- Upload one dataset at a time when creating a report
- We support reports for the detection of a potential vulnerability to start, there are more statuses available here that are not integrated yet, https://avidml.gitbook.io/doc/developer-tools/python-sdk/datamodels#auxiliary-classes
- We take your report and save it as a json file to a set path after evaluations has been ran.
- Run AVID reports as apart of the evaluation process programmatically using ModsysML python package.
- Info on how to run an evaluation are here, (your variable file will act as the entire dataset reported) https://modsys.vercel.app/docs/params
- Info on how to setup a programmatic evaluation are here, https://github.com/modsysML/modsysML/blob/main/docs/simple-import/main.py
Code Snippet Details here
def evaluate_and_report():
sdk = Modsys()
sdk.use("google_perspective:analyze", google_perspective_api_key="#API-KEY")
eval = sdk.evaluate(
[
{
"item": "This is hate speech",
"__expected": {"TOXICITY": {"value": "0.78"}}, # expected label (using google)
"__trend": "lower",
},
{
"item": "You suck at this game.",
"__expected": {"TOXICITY": {"value": "0.50"}},
"__trend": "higher",
},
],
"community_id_can_be_None",
)
# connect to the avid database
sdk.connect("avid")
# send report based on output accuracy
return sdk.create_report(
"provider_name",
"ai-model",
"dataset_name",
"link_to_data_set",
"summary",
"./path_to_json_file.json",
)
After creating a report do the following to publish the report:
- An Editor maps inputs to a Report datamodel and, then publishes it as a JSON file for review,
- The Editor checks and edits report as needed, assigns taxonomy categories, then moves it to the database as reports/20XX/AVID-20XX-RXXXX.json,
- The Editor converts report to a new vuln or merges with an existing vuln, saves it in the database as vulnerabilities/20XX/AVID-20XX-VXXX.json,
- Webmaster renders new reports and vulns to markdown files in the website source.
Understanding that the post report creation events are manual, we will push a interface version into our downstream product Apollo ModsysML Console.