AI · 7 min read · By Paul Lefizelier

Vercel Chat SDK: Deploy Your AI Agent on Slack, Discord and WhatsApp in a Few Lines — and Monetize Its Wait Time With Idlen

Vercel launches Chat SDK, an open-source TypeScript library to deploy one AI agent on Slack, Discord, WhatsApp and 5 more platforms from a single codebase.

On March 19, 2026, Vercel launches Chat SDK — an open-source TypeScript library that solves a problem every AI agent developer knows: deploying the same agent on Slack, Discord, WhatsApp, Teams, Telegram, GitHub, Linear and Google Chat without rewriting a single line of business logic. One codebase, eight platforms, zero rewrites.

The Origin: An Internal Challenge Turned Product

It all starts in January 2026 at Vercel. The team sets itself a challenge: every employee must multiply their output with AI agents. A realization quickly emerges — agents are only useful if users find them. Keeping them inside a dedicated web interface isn't enough.

"Instead of asking people to come to agents, we needed to deliver agents to the places they were already working."

That conviction gives birth to Chat SDK. Instead of forcing users to come to agents, agents need to go where users already work: Slack, Discord, WhatsApp.

Write Once, Deploy Everywhere: The Adapter Architecture

Chat SDK's core is built on an adapter system. Each messaging platform has its own adapter that translates the SDK's universal components into the target platform's native format.

In practice, switching from a Slack deployment to Discord means changing a single line of code — the adapter import. The agent's logic, prompt, connected tools: everything stays the same.
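The pattern behind this is worth seeing in code. Here is a minimal sketch of an adapter system in plain TypeScript — the `PlatformAdapter` interface and both adapter classes are hypothetical illustrations of the idea, not Chat SDK's actual API:

```typescript
// Hypothetical sketch of the adapter pattern the article describes --
// not Chat SDK's real API. Each adapter translates one universal
// message shape into a platform-native payload.
interface PlatformAdapter {
  render(text: string): string;
}

class SlackAdapter implements PlatformAdapter {
  // Slack expects Block Kit-style JSON
  render(text: string): string {
    return JSON.stringify({ blocks: [{ type: "section", text }] });
  }
}

class DiscordAdapter implements PlatformAdapter {
  // Discord expects embed-style JSON
  render(text: string): string {
    return JSON.stringify({ embeds: [{ description: text }] });
  }
}

// The agent logic never changes -- only the adapter passed in does.
function replyWithSummary(adapter: PlatformAdapter, summary: string): string {
  return adapter.render(summary);
}
```

Switching platforms then really is a one-line change at the call site: pass `new DiscordAdapter()` instead of `new SlackAdapter()` and the same agent logic ships to a different channel.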

Platforms Supported at Launch

| Platform | Specifics |
| --- | --- |
| Slack | Native chatStream API |
| Discord | Built-in rate-limit throttle |
| WhatsApp | Partial streaming; images, voice, stickers, location, reply buttons |
| Microsoft Teams | Full integration |
| Telegram | Bot API |
| GitHub | Issues, PRs, discussions |
| Linear | Native project management |
| Google Chat | Post + edit |

WhatsApp is the most anticipated addition. The adapter natively handles images, voice messages, stickers, location and reply buttons — formats most frameworks ignore.
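Handling that many incoming formats is where TypeScript helps. A discriminated union makes the handler exhaustive and type-safe — note that none of these type names come from Chat SDK itself; they are an illustrative sketch of how rich WhatsApp messages might be modeled:

```typescript
// Hypothetical shapes for WhatsApp's rich message types -- a
// discriminated union lets the compiler verify every kind is handled.
// These type names are illustrative, not Chat SDK's actual API.
type WhatsAppMessage =
  | { kind: "text"; body: string }
  | { kind: "voice"; durationSec: number }
  | { kind: "location"; lat: number; lon: number };

function describe(msg: WhatsAppMessage): string {
  switch (msg.kind) {
    case "text":
      return `text: ${msg.body}`;
    case "voice":
      return `voice note (${msg.durationSec}s)`;
    case "location":
      return `location at ${msg.lat},${msg.lon}`;
  }
}
```

If a new message kind is added to the union, every switch that forgets to handle it becomes a compile error rather than a silent runtime gap.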

Native LLM Streaming: Responses Write Themselves in Real Time

Chat SDK ships with native LLM streaming. When a user mentions the agent in Slack, the response starts appearing immediately — token by token — instead of waiting for full generation.

The update interval is configurable via streamingUpdateIntervalMs (500 ms by default). On Discord, the SDK automatically handles throttling to respect API rate limits without dropping tokens.
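The mechanics behind interval-based streaming can be sketched simply: instead of one API call per token, tokens are buffered and the platform message is edited in batches. Chat SDK reportedly batches by time (streamingUpdateIntervalMs); the sketch below batches by token count so the behavior is deterministic, and the function names are assumptions for illustration:

```typescript
// Fake token source standing in for an LLM stream.
async function* fakeTokens(): AsyncGenerator<string> {
  for (const t of ["The", " thread", " discusses", " the", " launch", "."]) {
    yield t;
  }
}

// Buffer tokens and edit the platform message once per batch,
// rather than once per token -- the same trade a rate-limited
// Discord adapter has to make.
async function streamWithBatching(
  tokens: AsyncIterable<string>,
  batchSize: number,
  postEdit: (textSoFar: string) => void,
): Promise<string> {
  let full = "";
  let buffered = 0;
  for await (const token of tokens) {
    full += token;
    buffered++;
    if (buffered >= batchSize) {
      postEdit(full); // one edit per batch, not per token
      buffered = 0;
    }
  }
  if (buffered > 0) postEdit(full); // flush the tail
  return full;
}
```

With six tokens and a batch size of three, the platform sees two message edits instead of six — fewer API calls, no dropped tokens.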

Compatibility covers all major LLM providers: OpenAI, Anthropic Claude, Google Gemini and any model accessible through Vercel's AI SDK.

Universal Components: Tables, Cards, Buttons Everywhere

Chat SDK offers a set of universal UI components — Tables, Cards, Buttons — that automatically render in each platform's native format. A table becomes a Block Kit in Slack, an embed in Discord, a formatted message in WhatsApp.

The time savings are significant. No more maintaining platform-specific templates. The developer describes the interface once, Chat SDK renders it everywhere.
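A sketch of how "describe once, render everywhere" might work for a table component — the `UniversalTable` shape and both renderers are illustrative assumptions, not Chat SDK's real component API:

```typescript
// Hypothetical universal table description -- declared once,
// rendered differently per platform.
interface UniversalTable {
  headers: string[];
  rows: string[][];
}

// WhatsApp has no rich tables, so a sensible fallback is plain text.
function renderAsText(table: UniversalTable): string {
  const lines = [table.headers.join(" | ")];
  for (const row of table.rows) lines.push(row.join(" | "));
  return lines.join("\n");
}

// Slack can take Block Kit-style sections instead.
function renderAsBlocks(table: UniversalTable): object[] {
  return table.rows.map((row) => ({
    type: "section",
    fields: row.map((cell, i) => ({ label: table.headers[i], value: cell })),
  }));
}
```

The developer only ever touches the `UniversalTable` value; which renderer runs is the adapter's decision.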

State Management: Redis or Postgres, Zero Friction

State persistence is built in. Chat SDK natively supports Redis and Postgres as state management backends. Conversation history, user context, preferences — everything is persisted automatically, with no extra configuration.

For agents that need to maintain complex state between messages (multi-step workflows, conversational forms, validation pipelines), this is a critical component that saves developers from reinventing the wheel.
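The pattern that makes the backend swappable is a small storage interface. In the sketch below, the `StateStore` interface and `MemoryStore` class are hypothetical — an actual Redis or Postgres adapter would implement the same contract:

```typescript
// Hypothetical pluggable conversation state, mirroring the pattern
// the article describes. The interface is an assumption, not
// Chat SDK's real API.
interface StateStore {
  get(key: string): Promise<string[] | undefined>;
  set(key: string, value: string[]): Promise<void>;
}

// In-memory stand-in for demonstration; a Redis- or Postgres-backed
// store would implement the same two methods.
class MemoryStore implements StateStore {
  private data = new Map<string, string[]>();
  async get(key: string) { return this.data.get(key); }
  async set(key: string, value: string[]) { this.data.set(key, value); }
}

// Append a message to a thread's history, whatever the backend.
async function remember(store: StateStore, threadId: string, msg: string) {
  const history = (await store.get(threadId)) ?? [];
  history.push(msg);
  await store.set(threadId, history);
  return history;
}
```

Multi-step workflows then read and write through `remember`-style helpers without caring whether the state lives in memory, Redis, or Postgres.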

Code Example: A Slack Agent in 6 Lines

Here's a Slack agent that summarizes a thread with Claude — the core of the "write once, deploy everywhere" promise:

import { streamText } from "ai"; // Vercel AI SDK

// Slack agent that summarizes a thread with Claude
bot.onNewMention(async (thread) => {
  const result = await streamText({
    model: "anthropic/claude-sonnet-4",
    prompt: "Summarize what's happening in this thread.",
  });
  await thread.post(result.textStream);
});

A few lines. Swap the Slack adapter for the Discord adapter, and this same code runs on Discord. That's the power of the adapter model: the logic never changes, only the distribution channel does.

Wait Time Is Dead Time — Unless You Monetize It

Chat SDK brilliantly solves multi-platform deployment. But it also creates a reality few developers anticipate: the agent's think time.

When an LLM generates a complex response — thread summary, code analysis, product recommendation — there's a delay. A few seconds, sometimes more. The user waits. Streaming shows progress, but attention is captive. It's idle time: the user is engaged, the channel is open, and nothing is happening on the monetization side.

This is exactly the gap that @idlen/chat-sdk fills. The Idlen SDK plugs in as a complementary adapter to Chat SDK: while the agent thinks, a contextual native ad appears in the conversation — sponsored recommendation, CTA card, inline mention. The format adapts to context and platform.

The integration fits in three lines:

import { IdlenChatAds } from "@idlen/chat-sdk";

const idlen = new IdlenChatAds({ publisherId: "your-id" });
// Native ads display during the agent's think time

For developers shipping agents on Slack or Discord via Chat SDK, it's a way to turn every conversation into passive revenue — without degrading user experience. The model is transparent: displayed CPM, 70% rev share, monthly payouts. Details and integration guide at idlen.io/publishers.

Chat SDK vs Alternatives: Where Does Vercel Stand?

The market for conversational agent frameworks is taking shape fast. Here's how Chat SDK compares to existing alternatives.

| Criteria | Vercel Chat SDK | LangChain | Microsoft Bot Framework | Botpress |
| --- | --- | --- | --- | --- |
| Multi-platform | 8 native platforms | Via integrations | Teams + Web | Multi via connectors |
| LLM streaming | ✅ native | Partial | — | — |
| Language | TypeScript-first | Python-first | C#/.NET | Low-code |
| UI components | Universal | — | Adaptive Cards | Visual |
| State management | Redis / Postgres | Custom | Azure | Built-in |
| Open source | ✅ | ✅ | Partially | — |

Chat SDK isn't trying to replace LangChain or agent orchestrators. It focuses on the distribution layer — the last mile between the agent and the end user. It's complementary, not competitive.

What This Changes for AI Agent Developers

Chat SDK arrives at a pivotal moment. AI agents are no longer prototypes — they're in production, inside companies, used daily. But their distribution remains fragmented. Each platform imposes its own SDK, formats and limitations.

Vercel unifies this layer. A developer can now write an agent once and push it simultaneously to the eight platforms where their users work. Streaming, components, state — everything is handled.

For the ecosystem, it's a strong signal. After AI SDK unified LLM access, Chat SDK unifies agent distribution. Vercel is methodically building the complete AI developer infrastructure — from model to message.


Key Takeaways

  • Chat SDK is an open-source TypeScript library from Vercel for deploying one AI agent on 8 platforms from a single codebase.
  • The adapter system lets you switch from Slack to Discord by changing a single line of code.
  • Native LLM streaming displays responses token by token across all supported platforms.
  • WhatsApp is supported with images, voice, stickers, location and reply buttons.
  • Built-in state management (Redis / Postgres) persists conversation state with no configuration.
  • Agent think time can be monetized via @idlen/chat-sdk, which displays contextual native ads during the wait.

Chat SDK turns every AI agent into a multi-platform application. But the real question isn't technical — it's economic: how does an independent developer who ships a useful agent on Slack make a living from it? Streaming created a new attention window. The question is who captures it. Full documentation at chat-sdk.dev.

#vercel #chat-sdk #ai-agents #slack #discord #whatsapp #typescript #idlen #monetization #developer-tools