
Overview

The admesh-weave-python SDK is a backend Python package that fetches personalized recommendations from AdMesh. Use it to retrieve recommendations that your LLM can naturally weave into responses.
When to use this SDK:
  • You want to embed recommendations directly in LLM responses (Weave Ad Format)
  • You need backend control over recommendation fetching
  • You’re building a custom LLM integration with Python
When NOT to use this SDK:
  • You only need frontend recommendations (use admesh-ui-sdk instead)
  • You want a separate recommendations panel (use Tail/Product Format with admesh-ui-sdk)

Quick Start

Install the package:
pip install admesh-weave-python
Initialize the client:
from admesh_weave import AdMeshClient

client = AdMeshClient(api_key="your-api-key")
Fetch recommendations:
result = await client.get_recommendations_for_weave(
    session_id=session_id,
    message_id=message_id,
    query=user_query
)

if result["found"]:
    context = "\n".join([
        f"- {r['product_title']}: {r['click_url']}"
        for r in result["recommendations"]
    ])

Requirements

  • Python 3.8 or higher
  • API key from AdMesh dashboard
The SDK ships with full type hints and works with FastAPI, Flask, Django, and other Python frameworks.

Installation Methods

pip (recommended):
pip install admesh-weave-python
Poetry:
poetry add admesh-weave-python
pipenv:
pipenv install admesh-weave-python

Core Concepts

AdMeshClient

The main client for fetching recommendations. Initialize once and reuse across your application.
from admesh_weave import AdMeshClient

client = AdMeshClient(api_key="your-api-key")
Configuration options:
  • api_key (required): Your AdMesh API key from the dashboard
  • api_base_url (optional): Custom API endpoint (defaults to production)

Session and Message IDs

AdMesh uses IDs to track user interactions:
  • session_id: Unique identifier for a user’s conversation session
  • message_id: Unique identifier for each individual message/query
import uuid

# Your application generates these IDs
session_id = str(uuid.uuid4())  # Generate once per conversation
message_id = str(uuid.uuid4())  # Generate for each message
Your application is responsible for generating and managing session and message IDs; typically the frontend generates them and sends them to your backend with each request. The SDK accepts these IDs but never generates them.

Basic Usage

from admesh_weave import AdMeshClient

client = AdMeshClient(api_key="your-api-key")

async def handle_user_query(user_query: str, session_id: str, message_id: str):
    # Fetch recommendations
    result = await client.get_recommendations_for_weave(
        session_id=session_id,  # Required: Must be provided by frontend
        message_id=message_id,  # Required: Must be provided by frontend
        query=user_query,       # Required
        latency_budget_ms=10000  # Optional: 10 second latency budget for auction processing
    )
    
    if result["found"]:
        recommendations = result["recommendations"]
        # Pass to your LLM
        return format_llm_response(user_query, recommendations)
    else:
        # No recommendations available
        return format_llm_response(user_query, [])
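The format_llm_response helper above is application code, not part of the SDK. A minimal sketch of what it might do, assuming each recommendation exposes the title and click_url fields used in the examples on this page and that you want a prompt string to pass to your LLM:

```python
def format_llm_response(user_query: str, recommendations: list) -> str:
    """Build an LLM prompt that asks the model to weave recommendations in.

    Hypothetical helper: your real implementation would likely call your
    LLM with this prompt and return the model's answer instead.
    """
    if not recommendations:
        # No recommendations: just answer the question normally.
        return f"Answer the user's question:\n{user_query}"

    # One bullet per recommendation, keeping the tracked click URLs intact.
    context = "\n".join(
        f"- {r['title']}: {r['click_url']}" for r in recommendations
    )
    return (
        f"Answer the user's question:\n{user_query}\n\n"
        "Where relevant, naturally mention these sponsored options "
        f"and keep their links intact:\n{context}"
    )
```

In the handler above you would feed the returned prompt to your model of choice; the exact prompt wording is yours to tune.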

Synchronous Usage

from admesh_weave import AdMeshClient

client = AdMeshClient(api_key="your-api-key")

def handle_user_query(user_query: str, session_id: str, message_id: str):
    # Fetch recommendations (sync)
    result = client.get_recommendations_for_weave_sync(
        session_id=session_id,
        message_id=message_id,
        query=user_query
    )
    
    if result["found"]:
        recommendations = result["recommendations"]
        return format_llm_response(user_query, recommendations)
    else:
        # No recommendations available
        return format_llm_response(user_query, [])

Integration Examples

FastAPI Example

from fastapi import FastAPI, HTTPException
from admesh_weave import AdMeshClient
from pydantic import BaseModel

app = FastAPI()
client = AdMeshClient(api_key="your-api-key")

class ChatRequest(BaseModel):
    session_id: str
    message_id: str
    query: str

@app.post("/api/chat")
async def chat(request: ChatRequest):
    try:
        result = await client.get_recommendations_for_weave(
            session_id=request.session_id,
            message_id=request.message_id,
            query=request.query
        )
        
        return {
            "found": result["found"],
            "recommendations": result.get("recommendations", [])
        }
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))

Flask Example

from flask import Flask, request, jsonify
from admesh_weave import AdMeshClient

app = Flask(__name__)
client = AdMeshClient(api_key="your-api-key")

@app.route('/api/chat', methods=['POST'])
def chat():
    data = request.json
    
    result = client.get_recommendations_for_weave_sync(
        session_id=data['session_id'],
        message_id=data['message_id'],
        query=data['query']
    )
    
    return jsonify({
        "found": result["found"],
        "recommendations": result.get("recommendations", [])
    })

Environment Variables

Store your API key securely using environment variables:
# .env
ADMESH_API_KEY=your-api-key-here
import os
from admesh_weave import AdMeshClient

client = AdMeshClient(api_key=os.environ["ADMESH_API_KEY"])

API Methods

get_recommendations_for_weave()

Fetches recommendations for a given query that can be woven into LLM responses.
result = await client.get_recommendations_for_weave(
    session_id: str,           # Required: Must be provided by frontend
    message_id: str,           # Required: Must be provided by frontend
    query: str,                # Required: User query for contextual recommendations
    latency_budget_ms: int = None,  # Optional: Latency budget for auction processing (milliseconds)
    messages: List[dict] = None,   # Optional: Conversation history
    locale: str = None,            # Optional: User language in BCP 47 format (e.g., "en-US")
    geo: str = None,              # Optional: User country code in ISO 3166-1 alpha-2 format (e.g., "US")
    user_id: str = None,          # Optional: Anonymous hashed user ID
    model: str = None,            # Optional: AI model identifier (e.g., "gpt-4o")
    platform_id: str = None,      # Optional: Platform identifier
    platform_surface: str = None, # Optional: Platform surface (e.g., "web")
    timeout_ms: int = None        # Optional: Max wait time (default: calculated from latency_budget_ms or 30000ms)
)
Note: The HTTP timeout is calculated automatically from latency_budget_ms when provided: max(latency_budget_ms * 3, 30000). This ensures the HTTP request does not time out before the auction completes. If latency_budget_ms is not provided, the timeout is timeout_ms if specified, otherwise 30 seconds (30000 ms).
Format filtering: This method only returns recommendations with the “weave” format. If the recommendation format is not “weave”, the method returns {"found": False, "error": "Preferred format is not weave"}.
Example:
result = await client.get_recommendations_for_weave(
    session_id='session-abc123',  # Required: Must be provided by frontend
    message_id='msg-xyz789',      # Required: Must be provided by frontend
    query='best project management tools',  # Required
    latency_budget_ms=10000  # Optional: 10 second latency budget for auction processing
)

if result["found"]:
    print(f"Found {len(result['recommendations'])} recommendations")
    for rec in result["recommendations"]:
        print(f"- {rec['title']}: {rec['click_url']}")
        weave_summary = rec.get('weave_summary') or rec.get('creative_input', {}).get('short_description')
        print(f"  Summary: {weave_summary}")
else:
    print('No recommendations found:', result.get('error'))
Returns:
{
    "found": bool,                         # Whether recommendations were found
    "recommendations": List[dict],         # Array of recommendations (if found)
    "query": str,                          # Original query
    "request_id": str,                     # Request ID
    "error": str                           # Error message if not found
}
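Since recommendations is only present when found is True and error only when it is False, it can help to normalize the result before using it. A small defensive helper (hypothetical, not part of the SDK) under that assumption:

```python
from typing import Dict, List


def extract_links(result: Dict) -> List[str]:
    """Return the click URLs from a weave result, or [] when nothing was found.

    Defensive against missing keys: per the return shape above,
    "recommendations" may be absent when "found" is False.
    """
    if not result.get("found"):
        return []
    return [
        r["click_url"]
        for r in result.get("recommendations", [])
        if "click_url" in r
    ]
```

This keeps the not-found and malformed cases from raising KeyError in your response path.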

get_recommendations_for_weave_sync()

Synchronous version of get_recommendations_for_weave(). Same parameters and return type.

Troubleshooting

No recommendations returned

Possible causes:
  • Query is too generic (try more specific queries)
  • No active campaigns match the query
  • API key is invalid
  • Format is not “weave” (SDK only returns “weave” format recommendations)
Solution:
  • Use more specific queries (e.g., “best CRM for startups” instead of “software”)
  • Check that your AdMesh account has active campaigns
  • Verify API key in environment variables
  • Note: If format is not “weave”, the SDK returns {"found": False, "error": "Preferred format is not weave"}

API key errors

Check:
  • ADMESH_API_KEY is set in environment variables
  • API key is valid (check dashboard)
  • No extra whitespace in the key value
Example:
import os
print(f"API Key: {os.environ.get('ADMESH_API_KEY', 'NOT SET')}")

Type errors

Solution:
  • Ensure Python 3.8 or higher
  • Install type stubs if using mypy: pip install types-httpx
  • Check that all required parameters are provided

Network/timeout errors

Check:
  • Server has internet access
  • No firewall blocking outbound requests
  • Network is stable
Solution:
  • Increase timeout: timeout_ms=60000
  • Implement retry logic with exponential backoff
  • Check server network configuration
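This page does not document any built-in retry behavior, so one option is a small wrapper with exponential backoff. A generic sketch, which assumes you decide which exceptions are retryable (catching bare Exception here is for brevity only):

```python
import asyncio
import random


async def with_retries(coro_factory, max_attempts=3, base_delay=0.5):
    """Retry an async call with exponential backoff and jitter.

    coro_factory: zero-argument callable returning a fresh coroutine per
    attempt, e.g. lambda: client.get_recommendations_for_weave(...).
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return await coro_factory()
        except Exception:
            if attempt == max_attempts:
                raise
            # Exponential backoff: base_delay, 2x, 4x, ... plus a little jitter.
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.1)
            await asyncio.sleep(delay)
```

Usage: result = await with_retries(lambda: client.get_recommendations_for_weave(session_id=session_id, message_id=message_id, query=user_query)).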

Next Steps

  • Weave Ad Format Guide: Complete integration guide for embedding recommendations in LLM responses - /platforms/weave-ad-format
  • Frontend SDK: Install admesh-ui-sdk to detect and track embedded links on the frontend - /ui-sdk/installation
You’re ready to start integrating.
Install admesh-weave-python, fetch recommendations, and pass them to your LLM for natural weaving into responses.