
Overview

The Weave Ad Format embeds AdMesh links directly into your LLM responses using an event-driven architecture. Your backend weaves recommendations into the response, and the frontend automatically detects them, adds transparency labels, and tracks engagement. What you get:
  • ✅ Event-driven link detection (no race conditions)
  • ✅ Automatic exposure tracking when links are detected
  • ✅ Transparency labels ([Ad] added automatically)
  • ✅ “Why this ad?” tooltips on hover
  • ✅ Fallback recommendations if no links detected
  • ✅ Zero duplicate API calls
Setup time: 15-20 minutes | Code complexity: Moderate

How It Works

The Weave Ad Format uses an event-driven architecture to eliminate race conditions and ensure accurate link detection:

The Flow

  1. Backend Integration → Your backend fetches recommendations using the backend SDK (admesh-weave-node or admesh-weave-python) and passes them to your LLM
  2. LLM Weaving → Your LLM naturally weaves AdMesh links into the response text
  3. Streaming Starts → Your chat component dispatches streamingStart event with assistant message ID
  4. Response Streams → LLM response chunks stream to frontend (may or may not contain AdMesh links)
  5. Streaming Completes → Your chat component dispatches streamingComplete event
  6. Link Detection → WeaveAdFormatContainer waits for the streamingComplete event, then scans for AdMesh links
  7. Conditional Rendering:
    • Links found → Adds [Ad] labels, fires exposure tracking, shows tooltips (no fallback)
    • No links found → Renders fallback recommendations (tail or product format)
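The steps above can be sketched as a small state machine. This is an illustrative sketch, not the SDK's actual internals: a detector arms on streamingStart and runs exactly one scan, only after streamingComplete arrives for the same message ID.

```typescript
// Illustrative sketch of the event-driven detection flow (not the SDK's
// real implementation). A detector arms on streamingStart and scans
// exactly once, after streamingComplete for the same message ID.
type Listener = (messageId: string) => void;

class StreamingEvents {
  private listeners: Record<string, Listener[]> = {};
  on(event: string, fn: Listener) {
    (this.listeners[event] ??= []).push(fn);
  }
  emit(event: string, messageId: string) {
    for (const fn of this.listeners[event] ?? []) fn(messageId);
  }
}

class LinkDetector {
  scans = 0;
  private armedFor: string | null = null;

  constructor(bus: StreamingEvents, private readonly messageId: string) {
    bus.on('streamingStart', (id) => {
      if (id === this.messageId) this.armedFor = id; // arm, but don't scan yet
    });
    bus.on('streamingComplete', (id) => {
      // Single detection cycle: scan only once streaming is done.
      if (id === this.armedFor) {
        this.scans += 1;
        this.armedFor = null;
      }
    });
  }
}

const bus = new StreamingEvents();
const detector = new LinkDetector(bus, 'msg-1');

bus.emit('streamingStart', 'msg-1');    // arms the detector; no scan yet
bus.emit('streamingComplete', 'msg-1'); // triggers the single scan
bus.emit('streamingComplete', 'msg-1'); // ignored: already scanned
```

Because the scan is gated on the completion event rather than a timer, there is no window in which a timeout can fire early and misreport "no links".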

Why Event-Driven?

Traditional timeout-based detection causes race conditions:
  • ❌ Timeout expires before streaming completes → false negative (shows fallback when links exist)
  • ❌ Multiple detection cycles → duplicate API calls
  • ❌ Unpredictable timing → inconsistent behavior
Event-driven detection solves this:
  • ✅ Waits for streaming to complete before detecting links
  • ✅ Single detection cycle per message
  • ✅ Predictable, reliable behavior
  • ✅ Zero duplicate API calls

Component: WeaveAdFormatContainer

The WeaveAdFormatContainer component wraps your LLM response content and uses event-driven detection to handle AdMesh links. Use this component if:
  • ✅ You embed AdMesh links directly in LLM responses
  • ✅ You want automatic link detection with event-driven timing
  • ✅ You want fallback recommendations if no links present
  • ✅ You want automatic tracking and transparency labels
Don’t use this component if:
  • ❌ Your LLM responses never contain AdMesh links (render recommendations with a standalone ad format instead)
  • ❌ Your chat component cannot dispatch streaming events

Installation

# Frontend (React)
npm install admesh-ui-sdk@latest

# Backend (Node.js)
npm install @admesh/weave-node@latest

Backend Integration

Your backend is responsible for fetching recommendations and passing them to your LLM. The LLM then weaves these recommendations into the response text.

Step 1: Install Backend SDK

npm install @admesh/weave-node@latest

Step 2: Fetch Recommendations and Pass to LLM

Use AdMeshClient to fetch recommendations before calling your LLM:
import { AdMeshClient } from '@admesh/weave-node';

const client = new AdMeshClient({
  apiKey: process.env.ADMESH_API_KEY
});

async function generateLLMResponse(userQuery: string, sessionId: string, messageId: string) {
  // Step 1: Fetch AdMesh recommendations
  const result = await client.getRecommendationsForWeave({
    sessionId: sessionId,
    messageId: messageId,
    query: userQuery  // Required: User's search query
  });

  // Step 2: Format recommendations for your LLM
  let recommendationsContext = '';
  if (result.found) {
    recommendationsContext = result.recommendations
      .map(r => `- ${r.product_title}: ${r.click_url}`)
      .join('\n');
  }

  // Step 3: Pass to LLM with recommendations
  const llmResponse = await callYourLLM(
    userQuery + '\n\nRecommendations:\n' + recommendationsContext
  );

  // Step 4: Return response (LLM has woven AdMesh links into the text)
  return llmResponse;
}
What happens:
  • ✅ Backend fetches recommendations from AdMesh
  • ✅ Backend passes recommendations to your LLM as context
  • ✅ LLM naturally weaves them into the response as links
  • ✅ Response contains AdMesh tracking links (e.g., http://localhost:8000/click/r/abc123...)
See the Node.js SDK documentation or Python SDK documentation for complete backend integration details.
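The recommendation-formatting step above can be factored into a small pure helper. This is a sketch using the field names shown in the example (`found`, `recommendations`, `product_title`, `click_url`); the exact prompt wording is up to you.

```typescript
// Sketch: build the LLM context block from a weave result. Field names
// follow the example above; the prompt wording is illustrative, not
// prescribed by the SDK.
interface WeaveRecommendation {
  product_title: string;
  click_url: string;
}

interface WeaveResult {
  found: boolean;
  recommendations: WeaveRecommendation[];
}

function buildWeavePrompt(userQuery: string, result: WeaveResult): string {
  if (!result.found || result.recommendations.length === 0) {
    return userQuery; // no recommendations: pass the query through unchanged
  }
  const context = result.recommendations
    .map((r) => `- ${r.product_title}: ${r.click_url}`)
    .join('\n');
  return `${userQuery}\n\nRecommendations:\n${context}`;
}
```

Keeping this as a pure function makes it easy to unit-test the prompt shape without calling AdMesh or your LLM.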

Frontend Integration (admesh-ui-sdk)

The frontend integration has three parts:
  1. Wrap your app with AdMeshProvider
  2. Wrap LLM response content with WeaveAdFormatContainer
  3. Dispatch streaming events from your chat component

Step 1: Wrap Your App with AdMeshProvider

import { AdMeshProvider } from 'admesh-ui-sdk';

<AdMeshProvider apiKey="your-api-key" sessionId={sessionId}>
  <YourChatComponent />
</AdMeshProvider>

Step 2: Wrap LLM Response Content with WeaveAdFormatContainer

In your message rendering component (e.g., MessageBox.tsx):
import { WeaveAdFormatContainer } from 'admesh-ui-sdk';

// For each assistant message
<WeaveAdFormatContainer
  messageId={message.messageId}  // Assistant message ID from backend
  query={userQuery}              // User's query that prompted this response
  fallbackFormat="tail"      // or "product"
>
  {/* Your LLM response content - use any markdown renderer or plain HTML */}
  <Markdown>{message.content}</Markdown>
</WeaveAdFormatContainer>
Required props:
  • messageId: The assistant message ID (from backend, not user message ID)
  • query: The user’s query that prompted this response
  • fallbackFormat: "tail" or "product" (format for fallback recommendations)
Optional follow-up props:
  • followups_container_id: DOM element ID where follow-ups will be rendered
  • onExecuteQuery: Callback when a follow-up is clicked (required for follow-up functionality)
  • isContainerReady: Signal when the follow-up container is ready in DOM

Step 3: Dispatch Streaming Events from Chat Component

In your chat component (e.g., ChatWindow.tsx), dispatch events during the streaming flow:
import {
  dispatchStreamingStartEvent,
  dispatchStreamingCompleteEvent
} from 'admesh-ui-sdk';

async function sendMessage(userQuery: string) {
  let assistantMessageId = '';
  let streamingStartDispatched = false;

  // Call your backend API
  const response = await fetch('/api/chat', {
    method: 'POST',
    body: JSON.stringify({ query: userQuery, sessionId, messageId })
  });

  const reader = response.body.getReader();
  const decoder = new TextDecoder();

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    // Simplified: assumes each chunk arrives as one complete JSON line.
    // Production code should buffer partial lines before parsing, as in
    // the complete end-to-end example later in this guide.
    const chunk = decoder.decode(value);
    const data = JSON.parse(chunk);

    // Capture assistant message ID from backend
    if (data.messageId) {
      assistantMessageId = data.messageId;

      // Dispatch streamingStart event when you first get the assistant message ID
      if (!streamingStartDispatched && assistantMessageId) {
        dispatchStreamingStartEvent(assistantMessageId, sessionId);
        streamingStartDispatched = true;
      }
    }

    // ... handle streaming chunks ...
  }

  // Dispatch streamingComplete event when streaming finishes
  if (assistantMessageId) {
    dispatchStreamingCompleteEvent(assistantMessageId, sessionId);
  }
}
Critical: Use the Assistant Message ID
The events MUST use the assistant message ID (from the backend), not the user message ID:
// ❌ WRONG - Using user message ID (generated in the frontend)
const userMessageId = crypto.randomUUID();
dispatchStreamingStartEvent(userMessageId, sessionId);

// ✅ CORRECT - Using assistant message ID from backend
const assistantMessageId = data.messageId; // From backend response
dispatchStreamingStartEvent(assistantMessageId, sessionId);
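A network chunk can end in the middle of a JSON line, so production streaming code should buffer partial lines before calling JSON.parse. A minimal sketch of an incremental parser for the newline-delimited JSON framing used in these examples:

```typescript
// Sketch: incremental NDJSON parser. feed() accepts raw text chunks that
// may end mid-line; complete lines are parsed, and the partial tail is
// kept until the next chunk arrives.
class NdjsonBuffer {
  private buffer = '';

  feed(chunk: string): unknown[] {
    this.buffer += chunk;
    const lines = this.buffer.split('\n');
    this.buffer = lines.pop() ?? ''; // keep the incomplete tail
    return lines.filter((l) => l.trim()).map((l) => JSON.parse(l));
  }
}
```

In the streaming loop, you would call `feed(decoder.decode(value, { stream: true }))` and iterate the returned objects instead of parsing whole chunks directly.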

What Happens Automatically

Once you’ve completed the integration, WeaveAdFormatContainer automatically:
  1. Waits for streamingComplete event (no premature detection)
  2. Scans for AdMesh links in the LLM response
  3. If links found:
    • Adds [Ad] labels next to links
    • Fires exposure tracking pixels
    • Shows “Why this ad?” tooltips on hover
    • Does NOT render fallback recommendations
  4. If no links found:
    • Renders fallback recommendations (tail or product format)
    • Makes single API call to fetch recommendations
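The scan step can be pictured as a search for AdMesh click URLs in the rendered response. This is a hedged sketch: the `/click/r/<id>` pattern is inferred from the example tracking link earlier on this page, and the SDK's actual matcher may differ.

```typescript
// Sketch: decide between the "links found" and "fallback" paths. The
// /click/r/<id> URL shape is inferred from the example tracking link in
// this guide; the SDK's real detection logic may differ.
function findAdMeshLinks(text: string): string[] {
  const pattern = /https?:\/\/[^\s"'<>]*\/click\/r\/[A-Za-z0-9]+/g;
  return text.match(pattern) ?? [];
}

function chooseRenderPath(text: string): 'label-links' | 'fallback' {
  // Links present: add [Ad] labels and fire exposure tracking.
  // No links: render fallback recommendations instead.
  return findAdMeshLinks(text).length > 0 ? 'label-links' : 'fallback';
}
```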

Best Practices

DO:
  • Dispatch streamingStart event when you receive assistant message ID from backend
  • Dispatch streamingComplete event when streaming finishes
  • Use assistant message ID (from backend) in events, not user message ID
  • Wrap each assistant message with WeaveAdFormatContainer
  • Provide the user’s query in the query prop
  • Keep AdMesh links intact in your LLM response
  • Let the SDK handle tracking automatically
DON’T:
  • Use user message ID in streaming events (must use assistant message ID)
  • Dispatch events before you have the assistant message ID
  • Modify or remove AdMesh tracking links
  • Manually fire tracking pixels
  • Remove [Ad] labels added by the SDK
  • Create new sessions for every message

Complete End-to-End Example

This example shows the complete event-driven flow based on the Perplexica reference implementation.

Backend

import { AdMeshClient } from '@admesh/weave-node';

const client = new AdMeshClient({
  apiKey: process.env.ADMESH_API_KEY
});

// Streaming API endpoint
app.post('/api/chat', async (req, res) => {
  const { query, sessionId, messageId } = req.body;

  // Set up streaming response
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');

  try {
    // Step 1: Fetch AdMesh recommendations
    const result = await client.getRecommendationsForWeave({
      sessionId: sessionId,
      messageId: messageId,
      query: query  // Required
    });

    // Step 2: Format recommendations for LLM
    let recommendationsContext = '';
    if (result.found) {
      recommendationsContext = result.recommendations
        .map(r => `- ${r.product_title}: ${r.click_url}`)
        .join('\n');
    }

    // Step 3: Stream LLM response with recommendations
    const llmStream = await callYourLLMStreaming(
      query + '\n\nRecommendations:\n' + recommendationsContext
    );

    // Generate assistant message ID
    const assistantMessageId = generateMessageId();

    // Send message ID first
    res.write(JSON.stringify({
      type: 'messageId',
      messageId: assistantMessageId
    }) + '\n');

    // Stream LLM chunks
    for await (const chunk of llmStream) {
      res.write(JSON.stringify({
        type: 'message',
        data: chunk,
        messageId: assistantMessageId
      }) + '\n');
    }

    res.end();
  } catch (error) {
    // Headers are sent as soon as streaming starts, so a 500 can only be
    // issued before the first write; otherwise just end the stream.
    if (res.headersSent) {
      res.end();
    } else {
      res.status(500).json({ error: error.message });
    }
  }
});

Frontend - Chat Component (ChatWindow.tsx)

import React, { useState } from 'react';
import {
  AdMeshProvider,
  dispatchStreamingStartEvent,
  dispatchStreamingCompleteEvent
} from 'admesh-ui-sdk';
import MessageBox from './MessageBox';

function ChatWindow() {
  const [messages, setMessages] = useState([]);
  const sessionId = 'user-session-123';

  const sendMessage = async (userQuery: string) => {
    // Add user message
    const userMessageId = crypto.randomUUID(); // Web Crypto API; Node's crypto.randomBytes is not available in browsers
    setMessages(prev => [...prev, {
      messageId: userMessageId,
      role: 'user',
      content: userQuery
    }]);

    // Track assistant message ID and streaming state
    let assistantMessageId = '';
    let streamingStartDispatched = false;

    try {
      // Call backend API
      const response = await fetch('/api/chat', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          query: userQuery,
          sessionId: sessionId,
          messageId: userMessageId
        })
      });

      const reader = response.body.getReader();
      const decoder = new TextDecoder();
      let buffer = '';

      while (true) {
        const { done, value } = await reader.read();
        if (done) break;

        buffer += decoder.decode(value, { stream: true });
        const lines = buffer.split('\n');
        buffer = lines.pop() || '';

        for (const line of lines) {
          if (!line.trim()) continue;

          const data = JSON.parse(line);

          // Capture assistant message ID from backend
          if (data.messageId) {
            assistantMessageId = data.messageId;

            // Dispatch streamingStart event when we first get the assistant message ID
            if (!streamingStartDispatched && assistantMessageId) {
              console.log('[ChatWindow] 📢 Dispatching streamingStart:', assistantMessageId);
              dispatchStreamingStartEvent(assistantMessageId, sessionId);
              streamingStartDispatched = true;
            }
          }

          // Handle message chunks
          if (data.type === 'message') {
            setMessages(prev => {
              const existing = prev.find(m => m.messageId === assistantMessageId);
              if (existing) {
                return prev.map(m =>
                  m.messageId === assistantMessageId
                    ? { ...m, content: m.content + data.data }
                    : m
                );
              } else {
                return [...prev, {
                  messageId: assistantMessageId,
                  role: 'assistant',
                  content: data.data,
                  userQuery: userQuery // Store user query for WeaveAdFormatContainer
                }];
              }
            });
          }
        }
      }

      // Dispatch streamingComplete event when streaming finishes
      if (assistantMessageId) {
        console.log('[ChatWindow] 📢 Dispatching streamingComplete:', assistantMessageId);
        dispatchStreamingCompleteEvent(assistantMessageId, sessionId);
      }
    } catch (error) {
      console.error('Error:', error);
    }
  };

  return (
    <AdMeshProvider apiKey={process.env.REACT_APP_ADMESH_API_KEY} sessionId={sessionId}>
      <div className="chat-container">
        {messages.map((msg) => (
          <MessageBox key={msg.messageId} message={msg} sendMessage={sendMessage} />
        ))}
      </div>
    </AdMeshProvider>
  );
}

export default ChatWindow;

Frontend - Message Component (MessageBox.tsx)

import React from 'react';
import { WeaveAdFormatContainer } from 'admesh-ui-sdk';
import Markdown from 'markdown-to-jsx';

function MessageBox({ message, sendMessage, loading }) {
  if (message.role === 'user') {
    return <div className="user-message">{message.content}</div>;
  }

  // For assistant messages, wrap with WeaveAdFormatContainer
  return (
    <>
      <WeaveAdFormatContainer
        messageId={message.messageId}  // Assistant message ID from backend
        query={message.userQuery}      // User query stored with assistant message
        fallbackFormat="tail"      // or "product"
        followups_container_id={`admesh-followups-${message.messageId}`}
        onExecuteQuery={(query) => {
          sendMessage(query);
        }}
        isContainerReady={!loading}
      >
        <Markdown>{message.content}</Markdown>
      </WeaveAdFormatContainer>

      {/* Existing "Related" section - AdMesh injects sponsored follow-ups here */}
      {message.role === 'assistant' && !loading && (
        <div>
          <h3>Related</h3>
          {/* Container for SDK-managed follow-ups */}
          <div id={`admesh-followups-${message.messageId}`} />
          {/* Your platform's existing suggestions (optional) */}
        </div>
      )}
    </>
  );
}

export default MessageBox;

Optional Follow-Up Recommendations

AdMesh can inject sponsored follow-up queries into your existing follow-up suggestions UI when you use WeaveAdFormatContainer. Follow-ups work in both scenarios, whether AdMesh links are detected in the LLM response or fallback recommendations are displayed, as long as the fetched recommendations contain followup_query.

Setting Up Follow-Up Recommendations

If your platform already has a follow-up suggestions section (e.g., “Related Questions”, “Suggested Queries”, or similar), AdMesh can add sponsored follow-ups directly into that existing container.

Step 1: Identify your existing follow-up container (or create one if you don’t have one):
{/* Your existing "Related" or "Suggestions" section */}
<div>
  <h3>Related</h3>
  {/* Your platform's follow-up suggestions container */}
  <div id={`admesh-followups-${message.messageId}`}>
    {/* Your existing suggestions can go here too */}
    {message.suggestions?.map(suggestion => (
      <div key={suggestion.id}>{suggestion.text}</div>
    ))}
  </div>
</div>
Step 2: Pass the container ID to WeaveAdFormatContainer:
<WeaveAdFormatContainer
  messageId={message.messageId}
  query={message.userQuery}
  fallbackFormat="tail"
  followups_container_id={`admesh-followups-${message.messageId}`}
  onExecuteQuery={(query) => {
    // Execute the sponsored follow-up query when user clicks it
    // This continues the conversation with the sponsored query
    sendMessage(query);
  }}
  isContainerReady={!loading}  // Optional: signal when container is ready in DOM
>
  <Markdown>{message.content}</Markdown>
</WeaveAdFormatContainer>
When recommendations fetched for link detection include a followup_query, the SDK will automatically inject the sponsored follow-up into your container using React portals. It will appear alongside your existing suggestions, seamlessly integrated into your UI, regardless of whether links were detected or fallback recommendations are shown. The SDK automatically:
  • Detects follow-up queries from recommendations (works for both link-detected and fallback scenarios)
  • Renders the sponsored follow-up in your existing container
  • Handles engagement tracking when users interact with follow-ups
  • Calls your onExecuteQuery callback when a user clicks the sponsored follow-up
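The detection-and-callback handshake can be sketched as: find the first recommendation carrying a followup_query, fire its engagement tracking on click, then hand the query to the host app's callback. The recommendation shape below follows this page's prose; `fireTracking` is a hypothetical stand-in for the SDK's internal pixel call.

```typescript
// Sketch: pick the sponsored follow-up out of fetched recommendations and
// wire a click through tracking to the host app's callback. The field
// names (followup_query, followup_engagement_url) follow this guide;
// fireTracking is a hypothetical stand-in for the SDK's internal call.
interface Recommendation {
  followup_query?: string;
  followup_engagement_url?: string;
}

function handleFollowupClick(
  recs: Recommendation[],
  onExecuteQuery: (q: string) => void,
  fireTracking: (url: string) => void,
): boolean {
  const sponsored = recs.find((r) => r.followup_query);
  if (!sponsored?.followup_query) return false; // nothing to render
  if (sponsored.followup_engagement_url) {
    fireTracking(sponsored.followup_engagement_url); // the SDK fires this itself
  }
  onExecuteQuery(sponsored.followup_query); // continue the conversation
  return true;
}
```

In the real integration you only supply `onExecuteQuery`; tracking and rendering are handled by the SDK.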

Complete Example

Here’s how to integrate follow-ups with WeaveAdFormatContainer:
function MessageComponent({ message, sendMessage, loading }) {
  return (
    <div>
      {/* LLM response wrapped in WeaveAdFormatContainer */}
      <WeaveAdFormatContainer
        messageId={message.messageId}
        query={message.userQuery}
        fallbackFormat="tail"
        followups_container_id={`admesh-followups-${message.messageId}`}
        onExecuteQuery={(query) => {
          sendMessage(query);
        }}
        isContainerReady={!loading}
      >
        <Markdown>{message.content}</Markdown>
      </WeaveAdFormatContainer>

      {/* Existing "Related" section with follow-up container */}
      {message.role === 'assistant' && !loading && (
        <div>
          <h3>Related</h3>
          {/* Existing container where platform suggestions appear */}
          {/* AdMesh will inject sponsored follow-ups into this container */}
          <div id={`admesh-followups-${message.messageId}`}>
            {/* Your platform's existing suggestions (optional) */}
            {message.suggestions?.map((suggestion, i) => (
              <div key={i} onClick={() => sendMessage(suggestion)}>
                {suggestion}
              </div>
            ))}
          </div>
        </div>
      )}
    </div>
  );
}

Props Reference

  • followups_container_id (string, optional): DOM element ID where the SDK should render follow-ups. When provided, the SDK uses portal rendering.
  • onExecuteQuery ((query: string) => void | Promise<void>, optional): Callback invoked when a user clicks a follow-up. Required for follow-up functionality; typically executes the query to continue the conversation.
  • onFollowupDetected ((followupQuery: string, engagementUrl: string, recommendationId: string) => void, optional): Callback invoked when a sponsored follow-up is detected. Use this for custom integrations if you prefer to handle rendering yourself (advanced use case).
  • isContainerReady (boolean, optional): Signals whether the follow-up container is ready in the DOM. Useful for streaming or delayed rendering scenarios.

How It Works

  1. Detection: When recommendations fetched by WeaveAdFormatContainer include a followup_query, the SDK detects it automatically.
  2. Rendering: When followups_container_id is provided, the SDK injects the sponsored follow-up into your existing container using React portals. The follow-up appears alongside your existing suggestions, matching your platform’s styling.
  3. Click Handling: When a user clicks a follow-up:
    • The SDK automatically fires engagement tracking (followup_engagement_url)
    • Your onExecuteQuery callback is invoked with the follow-up query
    • You execute the query to continue the conversation (e.g., via sendMessage())

Notes

  • Follow-ups are displayed if recommendations include followup_query from the backend, regardless of whether links are detected or fallback is shown.
  • The SDK handles all engagement tracking automatically—you only need to provide onExecuteQuery to continue the conversation.
  • Use isContainerReady when rendering containers conditionally or after streaming completes.
  • Follow-ups reuse the recommendations already fetched for link detection; no separate API calls are required.

Troubleshooting

Duplicate API calls
Cause: Multiple detection cycles or timeout-based detection still running.
Solution:
  • Ensure you’re using the latest version of admesh-ui-sdk (v1.0.7+)
  • Verify the streamingComplete event is dispatched only once per message
  • Check console logs for multiple “Setting up listener” messages
Links not detected (fallback shown even though links exist)
Check:
  • streamingStart event is dispatched when you receive assistant message ID
  • streamingComplete event is dispatched when streaming finishes
  • Both events use the same messageId (assistant message ID)
  • Both events use the same sessionId
  • Events are dispatched BEFORE the component unmounts
[Ad] labels not appearing
Check:
  • AdMesh links are present in the LLM response
  • Links are being detected (check console logs)
  • WeaveResponseProcessor is initialized correctly
  • No CSS conflicts hiding the labels
Follow-ups not appearing
If you’re using followups_container_id but follow-ups aren’t appearing, check:
  • Container element with the specified ID exists in the DOM
  • onExecuteQuery callback is provided (required for follow-up functionality)
  • Recommendations from backend include followup_query field
  • Container is ready before SDK tries to render (use isContainerReady if rendering is delayed)
  • Follow-ups work for both link-detected and fallback scenarios
Common issues:
// ❌ WRONG - Container doesn't exist yet
<WeaveAdFormatContainer
  messageId={message.messageId}
  query={message.userQuery}
  followups_container_id="followups-container"  // Container not in DOM yet
/>

// ✅ CORRECT - Container exists and onExecuteQuery provided
<div id="followups-container" />  {/* Container in DOM */}
<WeaveAdFormatContainer
  messageId={message.messageId}
  query={message.userQuery}
  followups_container_id="followups-container"
  onExecuteQuery={(query) => sendMessage(query)}
  isContainerReady={!loading}  // Signal when container is ready
/>

Key Takeaways

Event-Driven Architecture
  • Eliminates race conditions and duplicate API calls
  • Waits for streaming to complete before detecting links
  • Predictable, reliable behavior
Two-Part Integration
  • Backend: Fetch recommendations with admesh-weave-node and pass to LLM
  • Frontend: Wrap responses with WeaveAdFormatContainer and dispatch events
Critical: Use Assistant Message ID
  • Events MUST use assistant message ID (from backend)
  • NOT user message ID (generated in frontend)
  • Must match the messageId prop in WeaveAdFormatContainer
Automatic Handling
  • Link detection happens automatically after streamingComplete event
  • Exposure tracking fires automatically when links detected
  • Fallback recommendations render automatically when no links found
  • Zero manual tracking required