Introduction

Completed

AI security controls are the measures and protocols implemented to protect artificial intelligence systems from threats, vulnerabilities, and unauthorized access. While traditional security controls (such as network security, access management, and encryption) still apply, AI systems require additional, specialized controls that address the unique risks introduced by natural language interfaces, model behavior, and agent capabilities.

This module provides an overview of the security controls you can implement in AI systems to strengthen the security posture of AI environments. You explore controls across several areas: supply chain security for AI libraries, content filtering, data security, system prompt design, grounding, application security best practices, and ongoing monitoring.

Diagram showing the seven AI security control areas covered in this module.

Learning objectives

By the end of this module, you're able to:

  • Evaluate open-source AI libraries for security risks
  • Describe content filtering capabilities and how to configure them effectively
  • Explain AI data security principles, including agent identity and access control
  • Design effective metaprompts (system prompts) as a security control
  • Describe how grounding reduces inaccurate AI-generated content and security risks
  • Apply application security best practices to AI-enabled applications
  • Describe monitoring strategies for detecting AI-specific threats

Prerequisites

  • Familiarity with basic security concepts (for example, authentication, access control, encryption)
  • Familiarity with basic artificial intelligence concepts (for example, models, training, inference)
  • Completion of the Fundamentals of AI security module or equivalent knowledge