
Overview

The Weave Ad Format embeds AdMesh links directly into your LLM responses using an event-driven architecture. Your backend weaves recommendations into the response, and the frontend automatically detects them, adds transparency labels, and tracks engagement. What you get:
  • ✅ Event-driven link detection (no race conditions)
  • ✅ Automatic exposure tracking when links are detected
  • ✅ Transparency labels ([Ad] added automatically)
  • ✅ “Why this ad?” tooltips on hover
  • ✅ Fallback recommendations if no links detected
  • ✅ Zero duplicate API calls
Setup time: 15-20 minutes | Code complexity: Moderate

How It Works

The Weave Ad Format uses an event-driven architecture to eliminate race conditions and ensure accurate link detection:

The Flow

  1. Backend Integration → Your backend fetches recommendations using the backend SDK (admesh-weave-node or admesh-weave-python) and passes them to your LLM
  2. LLM Weaving → Your LLM naturally weaves AdMesh links into the response text
  3. Streaming Starts → Your chat component dispatches streamingStart event with assistant message ID
  4. Response Streams → LLM response chunks stream to frontend (may or may not contain AdMesh links)
  5. Streaming Completes → Your chat component dispatches streamingComplete event
  6. Link Detection → WeaveAdFormatContainer waits for the streamingComplete event, then scans the response for AdMesh links
  7. Conditional Rendering:
    • Links found → Adds [Ad] labels, fires exposure tracking, shows tooltips (no fallback)
    • No links found → Renders fallback recommendations (citation or product format)
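
The single-detection-cycle behavior described in steps 3–7 can be sketched independently of the SDK. This is illustrative only — the real WeaveAdFormatContainer manages this internally, and the registry and function names here are hypothetical:

```typescript
// Hypothetical event registry mirroring the documented behavior: a container
// registers interest in one assistant message ID, and the completion event
// triggers exactly one detection cycle for that message.
type Listener = (messageId: string) => void;
const listeners = new Map<string, Listener>();
const detections: string[] = [];

// Container mounts: wait for streaming to complete for this message
function waitForStreamingComplete(messageId: string, onComplete: Listener) {
  listeners.set(messageId, onComplete);
}

// Chat component: streaming finished for this message
function dispatchStreamingComplete(messageId: string) {
  const onComplete = listeners.get(messageId);
  if (!onComplete) return;      // no container waiting for this message
  listeners.delete(messageId);  // guarantees a single detection cycle
  onComplete(messageId);
}

waitForStreamingComplete('assistant-msg-1', (id) => detections.push(id));
dispatchStreamingComplete('assistant-msg-1'); // triggers link detection once
dispatchStreamingComplete('assistant-msg-1'); // no-op: listener already consumed
```

Because the listener is removed on first use, a stray second completion event cannot cause a duplicate API call.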

Why Event-Driven?

Traditional timeout-based detection causes race conditions:
  • ❌ Timeout expires before streaming completes → false negative (shows fallback when links exist)
  • ❌ Multiple detection cycles → duplicate API calls
  • ❌ Unpredictable timing → inconsistent behavior
Event-driven detection solves this:
  • ✅ Waits for streaming to complete before detecting links
  • ✅ Single detection cycle per message
  • ✅ Predictable, reliable behavior
  • ✅ Zero duplicate API calls

Component: WeaveAdFormatContainer

The WeaveAdFormatContainer component wraps your LLM response content and uses event-driven detection to handle AdMesh links. Use this component if:
  • ✅ You embed AdMesh links directly in LLM responses
  • ✅ You want automatic link detection with event-driven timing
  • ✅ You want fallback recommendations if no links present
  • ✅ You want automatic tracking and transparency labels
Don’t use this component if:
  • ❌ You render recommendations separately from the LLM response (use a standalone recommendation component instead)
  • ❌ You need full manual control over link detection, tracking, and labeling

Installation

# Frontend (React)
npm install admesh-ui-sdk@latest

# Backend (Node.js)
npm install @admesh/weave-node@latest

Backend Integration

Your backend is responsible for fetching recommendations and passing them to your LLM. The LLM then weaves these recommendations into the response text.

Step 1: Install Backend SDK

npm install @admesh/weave-node@latest

Step 2: Fetch Recommendations and Pass to LLM

Use AdMeshClient to fetch recommendations before calling your LLM:
import { AdMeshClient } from '@admesh/weave-node';

const client = new AdMeshClient({
  apiKey: process.env.ADMESH_API_KEY
});

async function generateLLMResponse(userQuery: string, sessionId: string, messageId: string) {
  // Step 1: Fetch AdMesh recommendations
  const result = await client.getRecommendationsForWeave({
    sessionId: sessionId,
    messageId: messageId,
    query: userQuery  // Required: User's search query
  });

  // Step 2: Format recommendations for your LLM
  let recommendationsContext = '';
  if (result.found) {
    recommendationsContext = result.recommendations
      .map(r => `- ${r.product_title}: ${r.click_url}`)
      .join('\n');
  }

  // Step 3: Pass to LLM with recommendations
  const llmResponse = await callYourLLM(
    userQuery + '\n\nRecommendations:\n' + recommendationsContext
  );

  // Step 4: Return response (LLM has woven AdMesh links into the text)
  return llmResponse;
}
What happens:
  • ✅ Backend fetches recommendations from AdMesh
  • ✅ Backend passes recommendations to your LLM as context
  • ✅ LLM naturally weaves them into the response as links
  • ✅ Response contains AdMesh tracking links (e.g., http://localhost:8000/click/r/abc123...)
See the Node.js SDK documentation or Python SDK documentation for complete backend integration details.
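
How you hand recommendations to the LLM is up to you; one possibility is a prompt template that instructs the model to keep the tracking URLs intact. buildWeavePrompt and its instruction wording are assumptions, not part of the AdMesh SDK — adapt them to your own system prompt:

```typescript
// Hypothetical prompt builder — the instruction wording is an assumption,
// not an SDK requirement.
function buildWeavePrompt(userQuery: string, recommendationsContext: string): string {
  if (!recommendationsContext) return userQuery; // no recommendations: plain query
  return [
    userQuery,
    '',
    'If relevant, weave the following recommendations naturally into your answer.',
    'Keep each URL exactly as given (they are tracking links) and do not invent new ones:',
    recommendationsContext,
  ].join('\n');
}

const prompt = buildWeavePrompt(
  'What CRM should I use?',
  '- Acme CRM: http://localhost:8000/click/r/abc123'
);
```

Explicitly telling the model not to alter the URLs matters: a paraphrased or truncated link breaks click tracking.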

Frontend Integration (admesh-ui-sdk)

The frontend integration has three parts:
  1. Wrap your app with AdMeshProvider
  2. Wrap LLM response content with WeaveAdFormatContainer
  3. Dispatch streaming events from your chat component

Step 1: Wrap Your App with AdMeshProvider

import { AdMeshProvider } from 'admesh-ui-sdk';

<AdMeshProvider apiKey="your-api-key" sessionId={sessionId}>
  <YourChatComponent />
</AdMeshProvider>

Step 2: Wrap LLM Response Content with WeaveAdFormatContainer

In your message rendering component (e.g., MessageBox.tsx):
import { WeaveAdFormatContainer } from 'admesh-ui-sdk';

// For each assistant message
<WeaveAdFormatContainer
  messageId={message.messageId}  // Assistant message ID from backend
  query={userQuery}              // User's query that prompted this response
  fallbackFormat="citation"      // or "product"
>
  {/* Your LLM response content - use any markdown renderer or plain HTML */}
  <Markdown>{message.content}</Markdown>
</WeaveAdFormatContainer>
Required props:
  • messageId: The assistant message ID (from backend, not user message ID)
  • query: The user’s query that prompted this response
  • fallbackFormat: "citation" or "product" (format for fallback recommendations)
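
Based on the required props above, the component's prop shape is roughly the following. This is an inferred sketch — consult the TypeScript definitions shipped with admesh-ui-sdk for the authoritative interface:

```typescript
// Inferred prop shape; the SDK's actual interface may include more optional props.
interface WeaveAdFormatContainerProps {
  messageId: string;                       // assistant message ID from the backend
  query: string;                           // user query that prompted this response
  fallbackFormat: 'citation' | 'product';  // format used when no links are detected
  children: unknown;                       // React.ReactNode in practice
}

const exampleProps: WeaveAdFormatContainerProps = {
  messageId: 'assistant-msg-1',
  query: 'What CRM should I use?',
  fallbackFormat: 'citation',
  children: null,
};
```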

Step 3: Dispatch Streaming Events from Chat Component

In your chat component (e.g., ChatWindow.tsx), dispatch events during the streaming flow:
import {
  dispatchStreamingStartEvent,
  dispatchStreamingCompleteEvent
} from 'admesh-ui-sdk';

async function sendMessage(userQuery: string) {
  let assistantMessageId = '';
  let streamingStartDispatched = false;

  // Call your backend API
  const response = await fetch('/api/chat', {
    method: 'POST',
    body: JSON.stringify({ query: userQuery, sessionId, messageId })
  });

  const reader = response.body.getReader();
  const decoder = new TextDecoder();

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    // Note: simplified — assumes each chunk is one complete JSON object.
    // See the end-to-end example below for proper line buffering.
    const chunk = decoder.decode(value);
    const data = JSON.parse(chunk);

    // Capture assistant message ID from backend
    if (data.messageId) {
      assistantMessageId = data.messageId;

      // Dispatch streamingStart event when you first get the assistant message ID
      if (!streamingStartDispatched && assistantMessageId) {
        dispatchStreamingStartEvent(assistantMessageId, sessionId);
        streamingStartDispatched = true;
      }
    }

    // ... handle streaming chunks ...
  }

  // Dispatch streamingComplete event when streaming finishes
  if (assistantMessageId) {
    dispatchStreamingCompleteEvent(assistantMessageId, sessionId);
  }
}
Critical: Use the Assistant Message ID
The events MUST use the assistant message ID (from the backend), not the user message ID:
// ❌ WRONG - Using user message ID
const userMessageId = crypto.randomUUID(); // generated in the frontend
dispatchStreamingStartEvent(userMessageId, sessionId);

// ✅ CORRECT - Using assistant message ID from backend
const assistantMessageId = data.messageId; // From backend response
dispatchStreamingStartEvent(assistantMessageId, sessionId);

What Happens Automatically

Once you’ve completed the integration, WeaveAdFormatContainer automatically:
  1. Waits for streamingComplete event (no premature detection)
  2. Scans for AdMesh links in the LLM response
  3. If links found:
    • Adds [Ad] labels next to links
    • Fires exposure tracking pixels
    • Shows “Why this ad?” tooltips on hover
    • Does NOT render fallback recommendations
  4. If no links found:
    • Renders fallback recommendations (citation or product format)
    • Makes single API call to fetch recommendations
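
For intuition, the detection step amounts to scanning the rendered text for click URLs of the shape shown earlier (e.g. .../click/r/abc123). The pattern below is illustrative only — the SDK's actual matching rules may differ:

```typescript
// Illustrative scan for AdMesh-style click URLs; not the SDK's real matcher.
const ADMESH_LINK_PATTERN = /https?:\/\/[^\s)"'>]+\/click\/r\/[A-Za-z0-9]+/g;

function findAdMeshLinks(text: string): string[] {
  return text.match(ADMESH_LINK_PATTERN) ?? [];
}

const links = findAdMeshLinks(
  'Try Acme CRM (http://localhost:8000/click/r/abc123) for small teams.'
);
```

If the scan comes back empty, the container falls back to fetching recommendations; otherwise it decorates the links in place.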

Best Practices

DO:
  • Dispatch streamingStart event when you receive assistant message ID from backend
  • Dispatch streamingComplete event when streaming finishes
  • Use assistant message ID (from backend) in events, not user message ID
  • Wrap each assistant message with WeaveAdFormatContainer
  • Provide the user’s query in the query prop
  • Keep AdMesh links intact in your LLM response
  • Let the SDK handle tracking automatically
DON’T:
  • Use user message ID in streaming events (must use assistant message ID)
  • Dispatch events before you have the assistant message ID
  • Modify or remove AdMesh tracking links
  • Manually fire tracking pixels
  • Remove [Ad] labels added by the SDK
  • Create new sessions for every message
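
The last point — reusing one session across messages — can be handled with a small helper. This sketch assumes localStorage-style storage; the key name admesh-session-id and the ID format are arbitrary choices, not SDK requirements:

```typescript
// Hypothetical helper: create a session ID once and reuse it for every message.
type KVStore = {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
};

function getOrCreateSessionId(storage: KVStore): string {
  const existing = storage.getItem('admesh-session-id');
  if (existing) return existing; // reuse the session across messages
  const fresh = `session-${Date.now()}-${Math.random().toString(36).slice(2, 10)}`;
  storage.setItem('admesh-session-id', fresh);
  return fresh;
}

// In the browser you would pass window.localStorage.
```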

Complete End-to-End Example

This example shows the complete event-driven flow based on the Perplexica reference implementation.

Backend

import { AdMeshClient } from '@admesh/weave-node';

const client = new AdMeshClient({
  apiKey: process.env.ADMESH_API_KEY
});

// Streaming API endpoint
app.post('/api/chat', async (req, res) => {
  const { query, sessionId, messageId } = req.body;

  // Set up streaming response
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');

  try {
    // Step 1: Fetch AdMesh recommendations
    const result = await client.getRecommendationsForWeave({
      sessionId: sessionId,
      messageId: messageId,
      query: query  // Required
    });

    // Step 2: Format recommendations for LLM
    let recommendationsContext = '';
    if (result.found) {
      recommendationsContext = result.recommendations
        .map(r => `- ${r.product_title}: ${r.click_url}`)
        .join('\n');
    }

    // Step 3: Stream LLM response with recommendations
    const llmStream = await callYourLLMStreaming(
      query + '\n\nRecommendations:\n' + recommendationsContext
    );

    // Generate assistant message ID
    const assistantMessageId = generateMessageId();

    // Send message ID first
    res.write(JSON.stringify({
      type: 'messageId',
      messageId: assistantMessageId
    }) + '\n');

    // Stream LLM chunks
    for await (const chunk of llmStream) {
      res.write(JSON.stringify({
        type: 'message',
        data: chunk,
        messageId: assistantMessageId
      }) + '\n');
    }

    res.end();
  } catch (error) {
    // Headers go out on the first res.write(), so a JSON error response
    // is only possible if streaming has not started yet
    if (!res.headersSent) {
      res.status(500).json({ error: error.message });
    } else {
      res.end();
    }
  }
});
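
The example above calls generateMessageId() without defining it. Any scheme that produces unique IDs works; one possible Node.js implementation (an assumption, not an SDK requirement):

```typescript
import { randomBytes } from 'node:crypto';

// One possible message ID generator — 7 random bytes rendered as
// 14 hex characters, matching the ID style used elsewhere in this guide.
function generateMessageId(): string {
  return randomBytes(7).toString('hex');
}

const id = generateMessageId();
```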

Frontend - Chat Component (ChatWindow.tsx)

import React, { useState } from 'react';
import {
  AdMeshProvider,
  dispatchStreamingStartEvent,
  dispatchStreamingCompleteEvent
} from 'admesh-ui-sdk';
import MessageBox from './MessageBox';

function ChatWindow() {
  const [messages, setMessages] = useState([]);
  const sessionId = 'user-session-123';

  const sendMessage = async (userQuery: string) => {
    // Add user message
    const userMessageId = crypto.randomUUID();
    setMessages(prev => [...prev, {
      messageId: userMessageId,
      role: 'user',
      content: userQuery
    }]);

    // Track assistant message ID and streaming state
    let assistantMessageId = '';
    let streamingStartDispatched = false;

    try {
      // Call backend API
      const response = await fetch('/api/chat', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          query: userQuery,
          sessionId: sessionId,
          messageId: userMessageId
        })
      });

      const reader = response.body.getReader();
      const decoder = new TextDecoder();
      let buffer = '';

      while (true) {
        const { done, value } = await reader.read();
        if (done) break;

        buffer += decoder.decode(value, { stream: true });
        const lines = buffer.split('\n');
        buffer = lines.pop() || '';

        for (const line of lines) {
          if (!line.trim()) continue;

          const data = JSON.parse(line);

          // Capture assistant message ID from backend
          if (data.messageId) {
            assistantMessageId = data.messageId;

            // Dispatch streamingStart event when we first get the assistant message ID
            if (!streamingStartDispatched && assistantMessageId) {
              console.log('[ChatWindow] 📢 Dispatching streamingStart:', assistantMessageId);
              dispatchStreamingStartEvent(assistantMessageId, sessionId);
              streamingStartDispatched = true;
            }
          }

          // Handle message chunks
          if (data.type === 'message') {
            setMessages(prev => {
              const existing = prev.find(m => m.messageId === assistantMessageId);
              if (existing) {
                return prev.map(m =>
                  m.messageId === assistantMessageId
                    ? { ...m, content: m.content + data.data }
                    : m
                );
              } else {
                return [...prev, {
                  messageId: assistantMessageId,
                  role: 'assistant',
                  content: data.data,
                  userQuery: userQuery // Store user query for WeaveAdFormatContainer
                }];
              }
            });
          }
        }
      }

      // Dispatch streamingComplete event when streaming finishes
      if (assistantMessageId) {
        console.log('[ChatWindow] 📢 Dispatching streamingComplete:', assistantMessageId);
        dispatchStreamingCompleteEvent(assistantMessageId, sessionId);
      }
    } catch (error) {
      console.error('Error:', error);
    }
  };

  return (
    <AdMeshProvider apiKey={process.env.REACT_APP_ADMESH_API_KEY} sessionId={sessionId}>
      <div className="chat-container">
        {messages.map((msg) => (
          <MessageBox key={msg.messageId} message={msg} />
        ))}
      </div>
    </AdMeshProvider>
  );
}

export default ChatWindow;

Frontend - Message Component (MessageBox.tsx)

import React from 'react';
import { WeaveAdFormatContainer } from 'admesh-ui-sdk';
import Markdown from 'markdown-to-jsx';

function MessageBox({ message }) {
  if (message.role === 'user') {
    return <div className="user-message">{message.content}</div>;
  }

  // For assistant messages, wrap with WeaveAdFormatContainer
  return (
    <WeaveAdFormatContainer
      messageId={message.messageId}  // Assistant message ID from backend
      query={message.userQuery}      // User query stored with assistant message
      fallbackFormat="citation"      // or "product"
    >
      <Markdown>{message.content}</Markdown>
    </WeaveAdFormatContainer>
  );
}

export default MessageBox;
Reference Implementation: This example is based on the Perplexica integration. See:
  • perplexica-backend/src/routes/chat.ts - Backend streaming implementation
  • perplexica/src/components/ChatWindow.tsx - Event dispatching
  • perplexica/src/components/MessageBox.tsx - WeaveAdFormatContainer usage

Troubleshooting

Duplicate API calls
Cause: Multiple detection cycles or timeout-based detection still running. Solution:
  • Ensure you’re using the latest version of admesh-ui-sdk (v1.0.7+)
  • Verify the streamingComplete event is dispatched only once per message
  • Check console logs for multiple “Setting up listener” messages

Fallback recommendations appear even though links are present
Check:
  • streamingStart event is dispatched when you receive the assistant message ID
  • streamingComplete event is dispatched when streaming finishes
  • Both events use the same messageId (assistant message ID)
  • Both events use the same sessionId
  • Events are dispatched BEFORE the component unmounts

[Ad] labels are not showing
Check:
  • AdMesh links are present in the LLM response
  • Links are being detected (check console logs)
  • WeaveResponseProcessor is initialized correctly
  • No CSS conflicts hiding the labels
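
To verify the event checks above, you can temporarily wrap the SDK's dispatch functions in a logging helper during development. withDispatchLog is a hypothetical debugging utility, not part of the SDK:

```typescript
// Hypothetical debug wrapper: logs every dispatch and warns on duplicates
// for the same message ID (which would indicate a double-dispatch bug).
type DispatchFn = (messageId: string, sessionId: string) => void;

function withDispatchLog(name: string, dispatch: DispatchFn): DispatchFn {
  const seen = new Set<string>();
  return (messageId, sessionId) => {
    if (seen.has(messageId)) {
      console.warn(`[AdMesh debug] ${name} dispatched twice for ${messageId}`);
    }
    seen.add(messageId);
    console.log(`[AdMesh debug] ${name}`, { messageId, sessionId });
    dispatch(messageId, sessionId); // forward to the real dispatcher
  };
}

// Usage sketch:
// const logged = withDispatchLog('streamingComplete', dispatchStreamingCompleteEvent);
// logged(assistantMessageId, sessionId);
```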

Key Takeaways

Event-Driven Architecture
  • Eliminates race conditions and duplicate API calls
  • Waits for streaming to complete before detecting links
  • Predictable, reliable behavior
Two-Part Integration
  • Backend: Fetch recommendations with admesh-weave-node and pass to LLM
  • Frontend: Wrap responses with WeaveAdFormatContainer and dispatch events
Critical: Use Assistant Message ID
  • Events MUST use assistant message ID (from backend)
  • NOT user message ID (generated in frontend)
  • Must match the messageId prop in WeaveAdFormatContainer
Automatic Handling
  • Link detection happens automatically after streamingComplete event
  • Exposure tracking fires automatically when links detected
  • Fallback recommendations render automatically when no links found
  • Zero manual tracking required

Next Steps