Photo: https://www.pexels.com/photo/brown-primate-on-railings-1589942/
This is another blog on the OpenAI Agent SDK. Here we focus on agentic flows with input guardrails. The Python code implements input guardrail patterns in agentic systems, demonstrating how to protect agents from inappropriate or harmful requests using the OpenAI Agent SDK with Azure AI Foundry integration.
Input guardrail patterns enable proactive filtering and validation of user inputs before they reach the main agent processing logic. This approach is crucial for building safe, compliant systems that can identify and block potentially harmful, inappropriate, or policy-violating requests.
Guardrails prevent agents from processing requests that violate platform policies or legal regulations, and they block requests that could lead to harmful outcomes or inappropriate responses. Compliance is maintained by filtering requests that might violate industry regulations or organizational policies.
Additionally, guardrails provide an extra layer of protection against misuse of AI agents, supporting responsible deployment in production environments.
Key Concepts
Input Guardrail
A protective mechanism that analyzes incoming user requests and determines whether they should be processed by the main agent or blocked based on predefined safety criteria. In our use case, we prevent users from asking about drug purchases.
Guardrail Agent
A specialized agent that evaluates user inputs for policy violations, inappropriate content, or potentially harmful requests before allowing them to proceed to the main processing agent.
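As a concrete illustration, the guardrail agent for the drug-purchase use case can be a small classifier agent that returns a structured verdict. The snippet below is a minimal sketch using the OpenAI Agent SDK; the `DrugPurchaseCheck` model and the agent name are illustrative, and `llm_model` is assumed to be the Azure AI Foundry-backed model configured elsewhere in the project.

```python
from pydantic import BaseModel
from agents import Agent

# Structured verdict the guardrail agent returns (illustrative name).
class DrugPurchaseCheck(BaseModel):
    is_drug_purchase_request: bool
    reasoning: str

# Small classifier agent whose only job is to decide whether the
# incoming input is a drug-purchase request.
guardrail_agent = Agent(
    name="Drug purchase check",
    instructions=(
        "Check whether the user is trying to buy, order, or otherwise "
        "obtain drugs. Return a structured verdict with your reasoning."
    ),
    output_type=DrugPurchaseCheck,
    model=llm_model,  # assumed: Azure AI Foundry model configured elsewhere
)
```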
Tripwire Mechanism
The system that triggers when a guardrail detects a violation, preventing the main agent from processing the request and providing appropriate fallback responses.
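Building on the classifier agent sketched above, the tripwire is typically implemented as an `@input_guardrail` function that runs the guardrail agent on the raw user input and reports whether the check fired. This is a hedged sketch of what `drug_purchase_guardrail`, referenced in the configuration below, could look like:

```python
from agents import (
    Agent,
    GuardrailFunctionOutput,
    RunContextWrapper,
    Runner,
    TResponseInputItem,
    input_guardrail,
)

@input_guardrail
async def drug_purchase_guardrail(
    ctx: RunContextWrapper[None],
    agent: Agent,
    input: str | list[TResponseInputItem],
) -> GuardrailFunctionOutput:
    # guardrail_agent is the classifier agent from the previous sketch.
    result = await Runner.run(guardrail_agent, input, context=ctx.context)
    # Trip the wire when the classifier flags a drug-purchase request.
    return GuardrailFunctionOutput(
        output_info=result.final_output,
        tripwire_triggered=result.final_output.is_drug_purchase_request,
    )
```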
Architecture
The architecture is straightforward: every user request is first evaluated by the guardrail agent, and only requests that pass the check reach the main customer support agent; flagged requests trip the guardrail before the main agent ever runs.
The main agent includes the guardrail in its configuration:
```python
agent = Agent(
    name="Customer support agent",
    instructions=(
        "You are a medical customer support agent. Answer user questions "
        "about medical products and services, but do not assist with drug "
        "purchases."
    ),
    # Attach the input guardrail so every request is screened before the agent runs.
    input_guardrails=[drug_purchase_guardrail],
    model=llm_model,
)
```
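When a request trips the guardrail, the SDK raises an exception instead of running the main agent, which gives the caller a natural place to return a fallback response. Here is a minimal usage sketch; the sample question and fallback message are illustrative:

```python
import asyncio
from agents import InputGuardrailTripwireTriggered, Runner

async def main() -> None:
    try:
        result = await Runner.run(agent, "Can I buy oxycodone without a prescription?")
        print(result.final_output)
    except InputGuardrailTripwireTriggered:
        # The tripwire fired: the main agent never ran, so return a safe fallback.
        print("Sorry, I can't help with drug purchases.")

if __name__ == "__main__":
    asyncio.run(main())
```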