Overview
The Suada Python SDK provides a Pythonic interface to interact with the Suada API. Built with type safety in mind using Pydantic, it offers a robust and intuitive way to integrate Suada’s capabilities into your Python applications.
Prerequisites
- Python 3.8 or higher
- A Suada API key
- pip package manager
Installation
Install the Suada Python SDK using pip (the commands below assume the package is published under the name suada, matching the import name):
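```bash
pip install suada
```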
Or using poetry:
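```bash
poetry add suada
```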
Quick Start
Here’s a basic example to get you started:
```python
from suada import Suada, SuadaConfig

# Initialize the client
suada = Suada(
    config=SuadaConfig(
        api_key="your-api-key"
    )
)

# Send a chat message
try:
    response = suada.chat(
        message="What insights can you provide about our recent performance?"
    )
    print(response.answer)
except Exception as e:
    print(f"Error: {str(e)}")
```
Authentication
Setting up your API Key
We recommend storing your API key in environment variables:
```bash
# .env
SUADA_API_KEY=your-api-key
```
Then load it in your application:
```python
import os

from dotenv import load_dotenv
from suada import Suada, SuadaConfig

# Load environment variables
load_dotenv()

# Initialize the client
suada = Suada(
    config=SuadaConfig(
        api_key=os.getenv("SUADA_API_KEY")
    )
)
```
Core Concepts
Chat Messages
The chat endpoint is the primary way to interact with Suada. Each chat request can include:
- A message (required)
- Chat history (optional)
- Configuration options (optional)
```python
response = suada.chat(
    message="How's our business performing?",
    chat_history=previous_messages,  # Optional
    privacy_mode=True  # Optional
)
```
Chat responses include several key components:
```python
from typing import Dict, List, Optional

from pydantic import BaseModel


class SuadaResponse(BaseModel):
    # The main response text
    answer: str
    # Internal reasoning process (optional)
    thoughts: Optional[str] = None
    # Actions taken during processing (optional)
    actions: Optional[List[Dict[str, str]]] = None
    # Suggested follow-up question (optional)
    follow_up_question: Optional[str] = None
    # Reasoning behind the response (optional)
    reasoning: Optional[str] = None
    # Reference sources (optional)
    sources: Optional[List[str]] = None
    # Conversation tracking ID (optional)
    conversation_id: Optional[str] = None
    # Response timestamp
    timestamp: int
```
Advanced Features
Passthrough Mode
Passthrough mode allows you to associate Suada conversations with your application’s user system:
```python
suada = Suada(
    config=SuadaConfig(
        api_key="your-api-key",
        passthrough_mode=True
    )
)

# When using passthrough mode, include external_user_identifier
response = suada.chat(
    message="What's our revenue trend?",
    external_user_identifier="user-123"  # Required in passthrough mode
)
```
LangChain Integration
The SDK provides seamless integration with LangChain, allowing you to use Suada’s capabilities within your LangChain applications:
```python
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder

from suada import Suada, SuadaConfig

# Initialize Suada
suada = Suada(
    config=SuadaConfig(
        api_key="your-api-key"
    )
)

# Create a Suada tool for LangChain
suada_tool = suada.create_tool(
    name="business_analyst",
    description="Use this tool to get business insights and analysis",
    external_user_identifier="user-123"  # Required if using passthrough mode
)

# Create an OpenAI chat model and register the Suada tool
model = ChatOpenAI(temperature=0)
tools = [suada_tool]

# Create a prompt template. Note that create_openai_functions_agent requires
# an agent_scratchpad placeholder for intermediate tool calls.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that uses Suada's business analyst capabilities."),
    MessagesPlaceholder(variable_name="chat_history", optional=True),
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

# Create the agent
agent = create_openai_functions_agent(
    llm=model,
    tools=tools,
    prompt=prompt
)

# Optional: add conversation memory
memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True
)

# Create the executor
executor = AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=tools,
    memory=memory,
    verbose=True
)

# Use the agent
result = executor.invoke({
    "input": "What's our revenue trend for the last quarter?"
})

print(result["output"])
```
LangChain Best Practices
- Tool Configuration
  - Write clear, descriptive tool descriptions
  - Set an appropriate temperature for your use case
  - Implement tool-specific error handling
  - Consider adding custom tool validation
- Agent Setup
  - Use structured prompts for consistent behavior
  - Implement conversation memory when needed
  - Consider using different agent types based on your needs
  - Test agent behavior with various input types
- Error Handling
  - Implement proper error handling for both Suada and LangChain
  - Add retry logic for transient failures (see the sketch after this list)
  - Log agent actions for debugging
  - Consider implementing fallback mechanisms
- Memory Management
  - Choose appropriate memory types for your use case
  - Implement memory cleanup when needed
  - Consider memory persistence for long-running conversations
  - Handle memory size limitations
- Performance Optimization
  - Reuse agent instances when possible
  - Implement appropriate timeouts
  - Consider caching for frequently used data
  - Monitor memory usage in long-running applications
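The retry suggestion above can be a thin wrapper around executor.invoke. Below is a minimal sketch, assuming SuadaAPIError (covered under Error Handling) is what transient API failures raise; invoke_with_retry is a hypothetical helper, not part of the SDK:

```python
import time

from suada import SuadaAPIError


def invoke_with_retry(executor, payload, max_attempts=3, base_delay=1.0):
    """Invoke a LangChain executor, retrying transient Suada API failures."""
    for attempt in range(1, max_attempts + 1):
        try:
            return executor.invoke(payload)
        except SuadaAPIError:
            if attempt == max_attempts:
                raise
            # Exponential backoff before the next attempt
            time.sleep(base_delay * 2 ** (attempt - 1))


result = invoke_with_retry(executor, {"input": "What's our revenue trend for the last quarter?"})
print(result["output"])
```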
Privacy Mode
Enable privacy mode to ensure sensitive information is handled with additional security:
```python
response = suada.chat(
    message="Analyze our financial data",
    privacy_mode=True
)
```
Error Handling
The SDK provides robust error handling with descriptive exceptions:
```python
from suada import Suada, SuadaConfig, SuadaError, SuadaAPIError

suada = Suada(config=SuadaConfig(api_key="your-api-key"))

try:
    response = suada.chat(
        message="What's our revenue?"
    )
except SuadaAPIError as e:
    # Handle API-specific errors
    print(f"API Error: {e.message}, Status: {e.status}, Code: {e.code}")
except SuadaError as e:
    # Handle general SDK errors
    print(f"SDK Error: {e.message}")
except Exception as e:
    # Handle unexpected errors
    print(f"Unexpected error: {str(e)}")
```
Best Practices
- Environment Variables
  - Store API keys and sensitive configuration in environment variables
  - Use python-dotenv for environment variable management
  - Never commit API keys to version control
- Error Handling
  - Implement try-except blocks around API calls
  - Use specific exception types for better error handling
  - Log errors appropriately for debugging
- Type Safety
  - Take advantage of Pydantic models and type hints
  - Use mypy for static type checking
  - Enable strict type checking in your development environment
- Response Processing
  - Always check for the presence of optional fields (see the sketch after this list)
  - Handle missing data gracefully
  - Implement proper error handling for data parsing
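As an example of response processing, optional fields can be guarded before use. A minimal sketch based on the SuadaResponse model shown under Core Concepts:

```python
response = suada.chat(message="How's our business performing?")

# answer is always present; the remaining fields are optional
print(response.answer)

if response.sources:
    print("Sources:")
    for source in response.sources:
        print(f"  - {source}")

if response.follow_up_question:
    print(f"Suggested follow-up: {response.follow_up_question}")
```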
Configuration Options
| Option | Type | Required | Description |
|---|---|---|---|
| api_key | str | Yes | Your Suada API key |
| base_url | str | No | Custom API endpoint (defaults to https://suada.ai/api/public) |
| passthrough_mode | bool | No | Enable user-specific resources |
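For illustration, here is how those options map onto SuadaConfig; the base_url value shown is simply the documented default:

```python
from suada import Suada, SuadaConfig

suada = Suada(
    config=SuadaConfig(
        api_key="your-api-key",
        base_url="https://suada.ai/api/public",  # default endpoint, shown for illustration
        passthrough_mode=True
    )
)
```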
FAQ
How do I handle rate limiting?
The SDK automatically implements exponential backoff for rate limits. You can customize the retry behavior:
```python
from suada import Suada, SuadaConfig

suada = Suada(
    config=SuadaConfig(
        api_key="your-api-key",
        max_retries=3,
        retry_delay=1.0
    )
)
```
How do I maintain conversation context?
Use the chat_history parameter to maintain conversation context:
```python
messages = []

response1 = suada.chat(message="How's our revenue?")
messages.append({"role": "user", "content": "How's our revenue?"})
messages.append({"role": "assistant", "content": response1.answer})

response2 = suada.chat(
    message="Compare that to last year",
    chat_history=messages
)
```
How can I enable debug logging?
Use Python’s built-in logging module:
```python
import logging

# Enable debug output globally, then scope to the SDK's "suada" logger
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("suada")
logger.setLevel(logging.DEBUG)
```
Can I use the SDK in async/await code?
Yes, the SDK provides async support:
```python
import asyncio

from suada import AsyncSuada, SuadaConfig


async def main():
    async_suada = AsyncSuada(
        config=SuadaConfig(api_key="your-api-key")
    )
    response = await async_suada.chat(
        message="What's our performance?"
    )
    print(response.answer)


asyncio.run(main())
```
Development Setup
For contributors and developers:
```bash
# Install development dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Run type checking
mypy suada

# Run linting
flake8 suada

# Run formatting
black suada
isort suada
```
Support