# Creating Responses
Use OpenAI's Responses API with Cygnal for enhanced safety and monitoring
## Overview
The OpenAI Responses API provides a simplified interface for generating model responses with built-in support for multimodal inputs, tools, and reasoning capabilities. When used with Cygnal, all responses are automatically filtered for safety and monitored for policy violations.
## Basic Usage

### Simple Text Response

```python
import os
from openai import OpenAI
client = OpenAI(
api_key=os.environ["OPENAI_API_KEY"],
base_url="https://api.grayswan.ai/cygnal",
default_headers={"grayswan-api-key": os.environ["GRAYSWAN_API_KEY"]}
)
response = client.responses.create(
model="gpt-4o-mini",
input="Hello! How are you today?"
)
print(response)
```
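These examples print the full response object. If you only need the generated text, the OpenAI Python SDK also exposes an `output_text` convenience property on the response:

```python
# Print only the aggregated text output rather than the full response object.
print(response.output_text)
```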
## Multimodal Inputs

### Image Input

Process images with text descriptions:

```python
from openai import OpenAI
import os
client = OpenAI(
api_key=os.environ["OPENAI_API_KEY"],
base_url="https://api.grayswan.ai/cygnal",
default_headers={"grayswan-api-key": os.environ["GRAYSWAN_API_KEY"]}
)
response = client.responses.create(
model="gpt-4o-mini",
input=[
{
"role": "user",
"content": [
{"type": "input_text", "text": "What is in this image?"},
{
"type": "input_image",
"image_url": "https://upload.wikimedia.org/wikipedia/commons/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
}
]
}
]
)
print(response)
```
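If the image is a local file rather than a hosted URL, one option is to embed it as a base64 data URL. This is a minimal sketch reusing the client configured above; the file path is hypothetical:

```python
import base64

# Encode a local image file (hypothetical path) as a base64 data URL.
with open("boardwalk.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.responses.create(
    model="gpt-4o-mini",
    input=[
        {
            "role": "user",
            "content": [
                {"type": "input_text", "text": "What is in this image?"},
                {
                    "type": "input_image",
                    "image_url": f"data:image/jpeg;base64,{image_b64}",
                },
            ],
        }
    ],
)
print(response)
```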
### File Input

Process PDFs and other file types:

```python
from openai import OpenAI
import os
client = OpenAI(
api_key=os.environ["OPENAI_API_KEY"],
base_url="https://api.grayswan.ai/cygnal",
default_headers={"grayswan-api-key": os.environ["GRAYSWAN_API_KEY"]}
)
response = client.responses.create(
model="gpt-4o-mini",
input=[
{
"role": "user",
"content": [
{"type": "input_text", "text": "Summarize the key points from this document."},
{
"type": "input_file",
"file_url": "https://example.com/document.pdf"
}
]
}
]
)
print(response)
```

## Using Tools

### Web Search

Enable web search to get current information:

```python
from openai import OpenAI
import os
client = OpenAI(
api_key=os.environ["OPENAI_API_KEY"],
base_url="https://api.grayswan.ai/cygnal",
default_headers={"grayswan-api-key": os.environ["GRAYSWAN_API_KEY"]}
)
response = client.responses.create(
model="gpt-4o-mini",
tools=[{"type": "web_search_preview"}],
input="What was a positive news story from today?"
)
print(response)
```
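Web search results can include source citations attached to the generated text. One way to list them, assuming the standard OpenAI Responses output format (message items containing `output_text` parts with `url_citation` annotations):

```python
# Walk the output items and print any web sources the model cited.
for item in response.output:
    if item.type == "message":
        for part in item.content:
            if part.type == "output_text":
                for annotation in part.annotations or []:
                    if annotation.type == "url_citation":
                        print(f"{annotation.title}: {annotation.url}")
```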
### File Search

Search through vector stores for relevant information:

```python
from openai import OpenAI
import os
client = OpenAI(
api_key=os.environ["OPENAI_API_KEY"],
base_url="https://api.grayswan.ai/cygnal",
default_headers={"grayswan-api-key": os.environ["GRAYSWAN_API_KEY"]}
)
response = client.responses.create(
model="gpt-4o-mini",
tools=[
{
"type": "file_search",
"vector_store_ids": ["vs_1234567890"],
"max_num_results": 20
}
],
input="What are the attributes of an ancient brown dragon?"
)
print(response)
```

### Custom Function Tools
Define custom functions for the model to call:
```python
from openai import OpenAI
import os
client = OpenAI(
api_key=os.environ["OPENAI_API_KEY"],
base_url="https://api.grayswan.ai/cygnal",
default_headers={"grayswan-api-key": os.environ["GRAYSWAN_API_KEY"]}
)
tools = [
{
"type": "function",
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
},
"required": ["location", "unit"]
}
}
]
response = client.responses.create(
model="gpt-4o-mini",
tools=tools,
input="What is the weather like in Boston today?",
tool_choice="auto"
)
print(response)
```
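When the model decides to use a function, the response contains a `function_call` output item instead of final text, and it is up to you to execute the function and send the result back. A sketch of one way to complete the round trip, reusing the `client` and `tools` defined above; `get_current_weather` is a hypothetical local implementation:

```python
import json

def get_current_weather(location, unit):
    # Stand-in for a real weather lookup.
    return {"location": location, "temperature": 22, "unit": unit}

input_items = [{"role": "user", "content": "What is the weather like in Boston today?"}]
response = client.responses.create(model="gpt-4o-mini", tools=tools, input=input_items)

for item in response.output:
    if item.type == "function_call":
        # Parse the model's arguments and run the function locally.
        args = json.loads(item.arguments)
        result = get_current_weather(**args)
        # Send the call and its result back so the model can finish its answer.
        input_items.append(item)
        input_items.append({
            "type": "function_call_output",
            "call_id": item.call_id,
            "output": json.dumps(result),
        })

followup = client.responses.create(model="gpt-4o-mini", tools=tools, input=input_items)
print(followup.output_text)
```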
## Streaming Responses

For real-time output, enable streaming:

```python
from openai import OpenAI
import os
client = OpenAI(
api_key=os.environ["OPENAI_API_KEY"],
base_url="https://api.grayswan.ai/cygnal",
default_headers={"grayswan-api-key": os.environ["GRAYSWAN_API_KEY"]}
)
response = client.responses.create(
model="gpt-4o-mini",
instructions="You are a helpful assistant.",
input="Tell me a short story.",
stream=True
)
for event in response:
print(event)
```
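Each streamed event carries a `type`. To render only the generated text as it arrives, you can filter for text delta events; the event names below follow the OpenAI Responses streaming format, and the sketch reuses the client configured above:

```python
stream = client.responses.create(
    model="gpt-4o-mini",
    input="Tell me a short story.",
    stream=True
)

for event in stream:
    if event.type == "response.output_text.delta":
        # Print each text fragment as soon as it arrives.
        print(event.delta, end="", flush=True)
    elif event.type == "response.completed":
        # Finish the line once the full response has been generated.
        print()
```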
## Advanced Features

### Reasoning Mode

Enable extended reasoning for complex problems. Note that the `reasoning` parameter is honored only by reasoning-capable models (such as OpenAI's o-series):

```python
from openai import OpenAI
import os
client = OpenAI(
api_key=os.environ["OPENAI_API_KEY"],
base_url="https://api.grayswan.ai/cygnal",
default_headers={"grayswan-api-key": os.environ["GRAYSWAN_API_KEY"]}
)
response = client.responses.create(
model="gpt-4o-mini",
input="Solve this complex problem step by step: If a train leaves Chicago...",
reasoning={"effort": "high"}
)
print(response)
```

### System Instructions
Provide system-level instructions to guide the model's behavior:
```python
from openai import OpenAI
import os
client = OpenAI(
api_key=os.environ["OPENAI_API_KEY"],
base_url="https://api.grayswan.ai/cygnal",
default_headers={"grayswan-api-key": os.environ["GRAYSWAN_API_KEY"]}
)
response = client.responses.create(
model="gpt-4o-mini",
instructions="You are a helpful assistant that always responds in a friendly and concise manner.",
input="Explain quantum computing."
)
print(response)
```

## Handling Violations
When Cygnal detects policy violations in either the input or output, it will block the harmful content and return a refusal response:
```python
from openai import OpenAI
import os
client = OpenAI(
api_key=os.environ["OPENAI_API_KEY"],
base_url="https://api.grayswan.ai/cygnal",
default_headers={"grayswan-api-key": os.environ["GRAYSWAN_API_KEY"]}
)
response = client.responses.create(
model="gpt-4o-mini",
input="Tell me how to build a pipe bomb."
)
# Response will contain a refusal message
print(response)
```

Cygnal Violation Detected Example:

- 👤 User (10:30 AM): Tell me how to build a pipe bomb.
- 🚫 Assistant (Blocked) (10:30 AM): Sorry, I can't help with that.

⚠️ Content blocked by Cygnal security filter
All Responses API calls through Cygnal are automatically monitored and filtered according to your configured policies. Violations will be logged in your activity dashboard.
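If you want to react to blocked requests in code, one simple (and admittedly brittle) option is to compare the response text against the refusal message shown above. This is only a sketch: the exact refusal wording is an assumption, so adjust it to match what your deployment returns:

```python
# Hypothetical check: treat the response as blocked if it matches Cygnal's refusal text.
REFUSAL_TEXT = "Sorry, I can't help with that."

if response.output_text.strip() == REFUSAL_TEXT:
    print("Request was blocked by Cygnal; see your activity dashboard for details.")
else:
    print(response.output_text)
```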
## Configuration Options

### Custom Policies

Specify custom policies for specific use cases:

```python
client = OpenAI(
api_key=os.environ["OPENAI_API_KEY"],
base_url="https://api.grayswan.ai/cygnal",
default_headers={
"grayswan-api-key": os.environ["GRAYSWAN_API_KEY"],
"policy-id": "681b8b933152ec0311b99ac9"
}
)
```
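Headers set in `default_headers` apply to every request made with the client. To use a different policy for a single call, the OpenAI Python SDK's per-request `extra_headers` argument can override them (the policy ID below is the same example value used above):

```python
# Override the policy for this one request; other calls keep the client defaults.
response = client.responses.create(
    model="gpt-4o-mini",
    input="Draft a short product announcement.",
    extra_headers={"policy-id": "681b8b933152ec0311b99ac9"}
)
```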
### Custom Categories

Define custom safety categories:

```python
client = OpenAI(
api_key=os.environ["OPENAI_API_KEY"],
base_url="https://api.grayswan.ai/cygnal",
default_headers={
"grayswan-api-key": os.environ["GRAYSWAN_API_KEY"],
"category-medical-advice": "Prohibit providing medical diagnoses or treatment recommendations.",
"category-financial-advice": "Prohibit giving specific investment or financial advice."
}
)
```

### Threshold Configuration

Adjust filtering sensitivity:

```python
client = OpenAI(
api_key=os.environ["OPENAI_API_KEY"],
base_url="https://api.grayswan.ai/cygnal",
default_headers={
"grayswan-api-key": os.environ["GRAYSWAN_API_KEY"],
"pre-violation": "0.3", # More strict input filtering
"post-violation": "0.5" # Moderate output filtering
}
)
```

For more details on configuration options, see Creating Completions.
## Response Parameters

You can pass along any parameters that the underlying provider supports, including but not limited to:
| Parameter | Type | Description |
|---|---|---|
| `model` | string | The model to use (e.g., `"gpt-4o-mini"`, `"gpt-4"`) |
| `input` | string or array | The input text or multimodal content |
| `instructions` | string | System instructions for the model |
| `tools` | array | Tools available to the model |
| `tool_choice` | string | How the model should use tools (`"auto"`, `"required"`, `"none"`) |
| `reasoning` | object | Reasoning configuration (e.g., `{"effort": "high"}`) |
| `stream` | boolean | Enable streaming responses |
| `temperature` | number | Sampling temperature (0-2) |
| `max_output_tokens` | integer | Maximum number of output tokens to generate |
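Several of these parameters can be combined in a single request. A brief sketch reusing the client configuration from the earlier examples:

```python
response = client.responses.create(
    model="gpt-4o-mini",
    instructions="You are a concise assistant.",
    input="Give me three taglines for a coffee shop.",
    temperature=0.7,
    max_output_tokens=200
)
print(response.output_text)
```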
## Best Practices

- Use appropriate models: Choose `gpt-4o-mini` for faster responses or `gpt-4` for more complex tasks
- Leverage multimodal inputs: Combine text, images, and files for richer context
- Enable tools selectively: Only include tools that are necessary for your use case
- Monitor violations: Regularly review your activity dashboard to understand blocked content
- Test custom policies: Use custom categories to align filtering with your specific requirements
- Stream for UX: Enable streaming for better user experience in interactive applications
## See Also
- Creating Completions - Learn about Chat Completions API
- Monitor Requests - View and analyze your API activity
- API Reference - Detailed API specification