This browser is no longer supported.
Upgrade to Microsoft Edge to take advantage of the latest features, security updates, and technical support.
Choose the best response for each question.
What steps can you take to improve the data security of an AI-enabled application?
Ensure the AI system only accesses data that the user it's acting on behalf of is authorized to see
Keep your AI-enabled application isolated from the rest of your IT environment
Run your AI-enabled application on premises rather than in the cloud
What type of AI security issue does a metaprompt (system prompt) help mitigate?
Model poisoning
Jailbreaks and harmful content generation
Network-level denial of service attacks
You want to prevent your AI application from returning harmful content. What should you implement?
Metaprompts only
Content safety filters as part of a defense-in-depth approach
Application security best practices alone
Why is grounding an important security control for AI systems?
It prevents all types of prompt injection attacks
It reduces fabricated outputs by constraining responses to verified data sources
It encrypts the model's training data
What is an AI-specific supply chain risk when adopting open-source AI libraries?
Open-source licenses are always incompatible with commercial use
Pre-trained models included in libraries may contain backdoors or biased behavior that's hard to detect through code review
Open-source AI libraries can't be updated after deployment
You must answer all questions before checking your work.
Was this page helpful?
Need help with this topic?
Want to try using Ask Learn to clarify or guide you through this topic?