Awesome Machine Learning Interpretability
A curated and actively maintained list of practical responsible machine learning resources.
If you want to contribute to this list (and please do!), read over the contribution guidelines, send a pull request, or file an issue.
If something you contributed or found here is missing after our September 2023 reboot, please check the archive.
Contents
- Community and Official Guidance Resources
- Education Resources
- Miscellaneous Resources
- Technical Resources
- Citing Awesome Machine Learning Interpretability
Community and Official Guidance Resources
Community Frameworks and Guidance
This section is for responsible ML guidance put forward by organizations or individuals, not for official government guidance.
- 8 Principles of Responsible ML
- A Brief Overview of AI Governance for Responsible Machine Learning Systems
- Acceptable Use Policies for Foundation Models
- Access Now, Regulatory Mapping on Artificial Intelligence in Latin America: Regional AI Public Policy Report
- Ada Lovelace Institute, Code and Conduct: How to Create Third-Party Auditing Regimes for AI Systems
- Adversarial ML Threat Matrix
- AI Governance Needs Sociotechnical Expertise: Why the Humanities and Social Sciences Are Critical to Government Efforts
- AI Verify
- AI Snake Oil
- The Alan Turing Institute, AI Ethics and Governance in Practice
- The Alan Turing Institute, Responsible Data Stewardship in Practice
- AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models
- Andreessen Horowitz (a16z) AI Canon
- Anthropic's Responsible Scaling Policy
- AuditBoard: 5 AI Auditing Frameworks to Encourage Accountability
- Auditing machine learning algorithms: A white paper for public auditors
- AWS Data Privacy FAQ
- AWS Privacy Notice
- AWS, What is Data Governance?
- Berryville Institute of Machine Learning, Architectural Risk Analysis of Large Language Models (requires free account login)
- BIML Interactive Machine Learning Risk Framework
- Boston University AI Task Force Report on Generative AI in Education and Research
- Brendan Bycroft's LLM Visualization
- Brown University, How Can We Tackle AI-Fueled Misinformation and Disinformation in Public Health?
- Casey Flores, AIGP Study Guide
- Center for Security and Emerging Technology (CSET):
  - CSET's Harm Taxonomy for the AI Incident Database
  - CSET Publications
  - Adding Structure to AI Harm: An Introduction to CSET's AI Harm Framework
  - AI Incident Collection: An Observational Study of the Great AI Experiment
  - Repurposing the Wheel: Lessons for AI Standards
  - Translating AI Risk Management Into Practice
  - Understanding AI Harms: An Overview
- Censius: AI Audit
- Center for AI and Digital Policy Reports
- Center for Democracy and Technology (CDT), Applying Sociotechnical Approaches to AI Governance in Practice
- CivAI, GenAI Toolkit for the NIST AI Risk Management Framework: Thinking Through the Risks of a GenAI Chatbot
- Coalition for Content Provenance and Authenticity (C2PA)
- Crowe LLP: Internal auditor's AI safety checklist
- Data Provenance Explorer
- Data & Society, AI Red-Teaming Is Not a One-Stop Solution to AI Harms: Recommendations for Using Red-Teaming for AI Accountability
- Dealing with Bias and Fairness in AI/ML/Data Science Systems
- Debugging Machine Learning Models (ICLR workshop proceedings)
- Decision Points in AI Governance
- Demos, AI – Trustworthy By Design: How to build trust in AI systems, the institutions that create them and the communities that use them
- Digital Policy Alert, The Anatomy of AI Rules: A systematic comparison of AI rules across the globe
- Distill
- Dominique Shelton Leipzig, Countries With Draft AI Legislation or Frameworks
- Ethical and social risks of harm from Language Models
- Ethics for people who work in tech
- EU Digital Partners, U.S. A.I. Laws: A State-by-State Study
- Evaluating LLMs is a minefield
- Fairly's Global AI Regulations Map
- FATML Principles and Best Practices
- Federation of American Scientists, A NIST Foundation To Support The Agency’s AI Mandate
- ForHumanity Body of Knowledge (BOK)
- The Foundation Model Transparency Index
- From Principles to Practice: An interdisciplinary framework to operationalise AI ethics
- The Future Society
- Gage Repeatability and Reproducibility
- Georgetown University Library's Artificial Intelligence (Generative) Resources
- Google:
  - Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing
  - The Data Cards Playbook
  - Data governance in the cloud - part 1 - People and processes
  - Data Governance in the Cloud - part 2 - Tools
  - Evaluating social and ethical risks from generative AI
  - Generative AI Prohibited Use Policy
  - Perspectives on Issues in AI Governance
  - Principles and best practices for data governance in the cloud
  - Responsible AI Framework
  - Responsible AI practices
  - Testing and Debugging in Machine Learning
- H2O.ai Algorithms
- HackerOne Blog
- Haptic Networks: How to Perform an AI Audit for UK Organisations
- Hogan Lovells, The AI Act is coming: EU reaches political agreement on comprehensive regulation of artificial intelligence
- Hugging Face, The Landscape of ML Documentation Tools
- IAPP, Global AI Governance Law and Policy: Canada, EU, Singapore, UK and US
- ICT Institute: A checklist for auditing AI systems
- IEEE:
  - Independent Audit of AI Systems
- Identifying and Overcoming Common Data Mining Mistakes
- [Infocomm Media Development