Governance, Ethics & Risks


Safety Evaluations and Red Teaming

Safety evaluations and red teaming proactively test AI systems to prevent harm, ensure fairness, and protect vulnerable groups, especially in high-stakes social innovation and international development contexts.
Secure Enclaves and Trusted Execution

Secure enclaves and trusted execution environments protect sensitive data during computation, enabling privacy-preserving AI and data analysis in cloud systems critical for health, education, and humanitarian sectors.
Toxicity and Content Moderation

Toxicity detection and content moderation combine automated AI classifiers with human review to identify and manage harmful content, protecting communities and supporting safe, inclusive digital spaces across sectors.
Transparency Reporting

Transparency reporting builds accountability and trust by openly sharing how AI systems are designed, deployed, and governed, especially for mission-driven organizations in health, education, and humanitarian sectors.