Amazon Bedrock’s Guardrails Update: ApplyGuardrail API and Contextual Grounding Check

Amazon Bedrock has rolled out an update to Guardrails for Amazon Bedrock, offering customers enhanced safeguards for their generative AI applications. Guardrails for Amazon Bedrock provides additional customizable protections on top of the native safeguards offered by foundation models (FMs), capabilities that AWS positions among the industry's leading safety features.

The latest update introduces two new capabilities: contextual grounding checks and the ApplyGuardrail API. The contextual grounding check is a new policy type designed to detect hallucinations in model responses, verifying that a response is grounded in the enterprise data supplied as a source and relevant to the user's query. This safeguard aims to improve response quality in use cases such as Retrieval Augmented Generation (RAG), summarization, and information extraction.
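As a minimal sketch of how a contextual grounding check might be configured, the snippet below creates a guardrail with grounding and relevance filters using boto3. The guardrail name, blocked messages, and thresholds are illustrative placeholders, not values from the article.

```python
import boto3

# Control-plane client used to create and manage guardrails.
bedrock = boto3.client("bedrock", region_name="us-east-1")

# Illustrative configuration: thresholds between 0 and 1 control how strictly
# responses must be grounded in the source text and relevant to the query.
response = bedrock.create_guardrail(
    name="contextual-grounding-demo",                          # placeholder name
    blockedInputMessaging="Sorry, I can't answer that request.",
    blockedOutputsMessaging="Sorry, I can't provide that response.",
    contextualGroundingPolicyConfig={
        "filtersConfig": [
            # GROUNDING: is the response supported by the supplied source text?
            {"type": "GROUNDING", "threshold": 0.75},
            # RELEVANCE: does the response actually address the user's query?
            {"type": "RELEVANCE", "threshold": 0.75},
        ]
    },
)

print(response["guardrailId"], response["version"])
```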

The ApplyGuardrail API, meanwhile, lets users evaluate input prompts and model responses against a guardrail for any FM, including custom and third-party models, enabling centralized governance across all generative AI applications. Because the API assesses text independently of a specific model invocation, it provides consistent safeguards for applications built on self-managed or third-party FMs, regardless of the underlying infrastructure.
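The following sketch shows one way the ApplyGuardrail API can be called from boto3 to check a model response against a contextual grounding policy. The guardrail ID, version, and example texts are placeholders; in practice the response text could come from any FM, hosted on Bedrock or elsewhere.

```python
import boto3

# Runtime client: ApplyGuardrail operates on text, so the model that produced
# the response can be any FM, including self-managed or third-party models.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

guardrail_id = "your-guardrail-id"   # placeholder: ID of an existing guardrail
guardrail_version = "1"              # placeholder: published guardrail version

response = bedrock_runtime.apply_guardrail(
    guardrailIdentifier=guardrail_id,
    guardrailVersion=guardrail_version,
    source="OUTPUT",  # evaluate a model response; use "INPUT" for user prompts
    content=[
        # Reference text the response should be grounded in.
        {"text": {"text": "Our refund policy allows returns within 30 days.",
                  "qualifiers": ["grounding_source"]}},
        # The user's query.
        {"text": {"text": "How long do I have to return an item?",
                  "qualifiers": ["query"]}},
        # The model response to be checked.
        {"text": {"text": "You can return items within 30 days of purchase.",
                  "qualifiers": ["guard_content"]}},
    ],
)

print(response["action"])       # "GUARDRAIL_INTERVENED" or "NONE"
print(response["assessments"])  # per-policy findings, including grounding scores
```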

MAPFRE, the largest insurance company in Spain, has already implemented Guardrails for Amazon Bedrock to align with its corporate security policies and responsible AI practices. According to Andres Hevia Vega, Deputy Director of Architecture at MAPFRE, Guardrails have helped minimize architectural errors and simplify API selection processes, proving to be invaluable tools in their journey towards more efficient, innovative, secure, and responsible development practices.

The introduction of contextual grounding checks and the ApplyGuardrail API marks a significant step forward for Amazon Bedrock, offering customers enhanced safeguards and governance for their generative AI applications. These new capabilities are now available in all AWS Regions where Guardrails for Amazon Bedrock is available, providing users with the opportunity to explore and integrate these features into their applications.

To learn more about Guardrails, visit the Guardrails for Amazon Bedrock product page, and see the Amazon Bedrock pricing page for the costs associated with guardrail policies. Users can also visit the community.aws site for deep-dive technical content and to see how builder communities are using Amazon Bedrock in their solutions.

The latest update from Amazon Bedrock demonstrates the platform’s commitment to providing advanced safeguards and governance for generative AI applications, empowering businesses to develop innovative, secure, and responsible AI practices.

Original story: Guardrails for Amazon Bedrock can now detect hallucinations and safeguard apps built using custom or third-party FMs | AWS News Blog