
AI Tool Calling Pattern

Tool-calling pattern: schema definition, dispatch, error handling

This page details the tool-calling pattern in the AI layer, focusing on how tool schemas are defined for the LLM, how tool calls are dispatched, and how errors are handled during execution. The implementation is consistent across both the WhatsApp intake agent (src/lib/ai-intake.ts) and the dashboard AI chat (src/app/api/ai/chat/route.ts).

Schema Definition

Tool schemas are defined in the TOOLS array in both src/lib/ai-intake.ts and src/app/api/ai/chat/route.ts. Each tool conforms to Groq.Chat.Completions.ChatCompletionTool and includes a name, description, and parameters specified via JSON Schema.

The description field is critical for guiding the LLM's decision-making and often includes explicit usage instructions. For example, in src/lib/ai-intake.ts, the get_available_slots tool includes the directive: "Use SEMPRE antes de sugerir horários." ("ALWAYS use before suggesting times.")

{
  type: "function",
  function: {
    name: "get_available_slots",
    description: "Retorna horários disponíveis para agendamento em uma data específica. Use SEMPRE antes de sugerir horários.",
    parameters: {
      type: "object",
      properties: {
        date: { type: "string", description: "Data no formato YYYY-MM-DD" },
      },
      required: ["date"],
    },
  },
}

Similarly, in src/app/api/ai/chat/route.ts, the get_stats tool specifies: "Use para: resumo do mês, taxa de no-show, receita, comparação entre meses, evolução ao longo do tempo." ("Use for: monthly summary, no-show rate, revenue, comparison between months, evolution over time.")
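For comparison, the get_stats entry plausibly follows the same Groq.Chat.Completions.ChatCompletionTool shape. The sketch below is a hedged reconstruction: only the name and description are taken from the source, and the months parameter is a hypothetical illustration.

```typescript
// Hypothetical sketch of the get_stats tool schema. The `months`
// parameter is an assumption, not confirmed from the source.
const getStatsTool = {
  type: "function" as const,
  function: {
    name: "get_stats",
    description:
      "Retorna estatísticas de agendamentos. Use para: resumo do mês, " +
      "taxa de no-show, receita, comparação entre meses, evolução ao longo do tempo.",
    parameters: {
      type: "object",
      properties: {
        months: {
          type: "number",
          description: "Quantos meses para trás agregar (hipotético; padrão: 6)",
        },
      },
      required: [],
    },
  },
};
```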

Tool Implementations

Each tool in the TOOLS array has a corresponding async function that executes the business logic. These functions perform database queries and return structured data.

In src/lib/ai-intake.ts, the getAvailableSlots function:

  • Validates the input date format
  • Queries professionals and availability tables
  • Computes open time slots based on session duration and existing appointments
  • Returns a structured object with available, slots, date, and optional message

async function getAvailableSlots(professionalId: string, dateStr: string) {
  // ... validation and DB queries
  return {
    available: slots.length > 0,
    slots,
    date: formatSaoPauloDate(saoPauloDate(dateStr, "12:00")),
    message: slots.length === 0 ? "Não há horários disponíveis nesta data." : undefined,
  }
}

In src/app/api/ai/chat/route.ts, getStats aggregates appointment data over the last six months and returns both global and per-month summaries, including revenue and no-show rates.
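The per-month aggregation can be sketched as follows. This is a simplified stand-in, not the actual implementation: the appointment field names (date, status, price) and the status values are assumptions about the underlying rows.

```typescript
// Hedged sketch of the aggregation getStats performs. Field names and
// status values are assumptions, not taken from the source.
type Appointment = {
  date: string; // "YYYY-MM-DD"
  status: "completed" | "no_show" | "cancelled";
  price: number;
};

function aggregateStats(appointments: Appointment[]) {
  const byMonth = new Map<string, { revenue: number; total: number; noShows: number }>();
  for (const a of appointments) {
    const month = a.date.slice(0, 7); // bucket by "YYYY-MM"
    const m = byMonth.get(month) ?? { revenue: 0, total: 0, noShows: 0 };
    m.total += 1;
    if (a.status === "completed") m.revenue += a.price;
    if (a.status === "no_show") m.noShows += 1;
    byMonth.set(month, m);
  }
  // Per-month summaries: revenue and no-show rate.
  const perMonth = Array.from(byMonth.entries()).map(([month, m]) => ({
    month,
    revenue: m.revenue,
    noShowRate: m.total > 0 ? m.noShows / m.total : 0,
  }));
  // Global summary across the whole window.
  const global = {
    revenue: perMonth.reduce((sum, m) => sum + m.revenue, 0),
    appointments: appointments.length,
  };
  return { global, perMonth };
}

const stats = aggregateStats([
  { date: "2024-05-02", status: "completed", price: 150 },
  { date: "2024-05-09", status: "no_show", price: 150 },
  { date: "2024-06-01", status: "completed", price: 200 },
]);
```

Returning both levels lets the LLM answer either a single-month question or a trend question from one tool result.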

Tool Dispatch

Tool dispatch follows a two-pass pattern, meaning two LLM calls with tool execution in between:

  1. First LLM call: The LLM receives the user message and the TOOLS array. If it decides to use a tool, it returns a tool_call with the function name and arguments.
  2. Tool execution: The application parses the tool_call, invokes the corresponding function, and captures the result.
  3. Second LLM call: The result is sent back as a tool role message, and the LLM generates a final natural language response.

The dispatch logic is implemented via a for loop over choice.tool_calls in both files. Each tool_call is matched using an if-else chain that checks call.function.name and invokes the correct implementation.
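The dispatch step can be sketched as below. The tool_call shape mirrors the Groq/OpenAI chat-completions format; the tool implementations here are hypothetical stand-ins, and the surrounding LLM calls are omitted.

```typescript
// Sketch of the dispatch step only; the two LLM calls around it are
// omitted. Tool implementations are hypothetical stand-ins.
type ToolCall = { id: string; function: { name: string; arguments: string } };

async function getAvailableSlots(args: { date: string }) {
  // Stand-in for the real DB-backed implementation.
  return { available: true, slots: ["09:00", "10:00"], date: args.date };
}

async function bookAppointment(args: { date: string; time: string }) {
  // Stand-in for the real booking logic.
  return { success: true };
}

async function dispatch(call: ToolCall): Promise<unknown> {
  const args = JSON.parse(call.function.arguments);
  if (call.function.name === "get_available_slots") {
    return getAvailableSlots(args);
  } else if (call.function.name === "book_appointment") {
    return bookAppointment(args);
  } else {
    // Unknown tools return a structured error instead of throwing,
    // matching the error-handling pattern described below.
    return { error: "Ferramenta desconhecida" };
  }
}

// Each result would then be appended as a `tool` role message before the
// second LLM call, e.g.:
// messages.push({ role: "tool", tool_call_id: call.id, content: JSON.stringify(result) });
```

An if-else chain keyed on call.function.name is sufficient here because the tool set is small and fixed; a lookup map would be the natural refactor if it grew.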

In src/lib/ai-intake.ts, the dispatch includes side effects:

  • On successful book_appointment, conversation.context.appointmentCreated is set to true
  • If client_name is provided, it updates conversation.clientName

Error Handling

Tool implementations return structured error objects instead of throwing exceptions for expected failures. This allows the LLM to interpret and respond to errors naturally.

In src/lib/ai-intake.ts, getAvailableSlots returns:

{ error: "Data inválida. Use o formato YYYY-MM-DD." }

("Invalid date. Use the format YYYY-MM-DD.") for malformed dates, and:

{ error: "Profissional não encontrado" }

("Professional not found") if the professional does not exist.

Similarly, bookAppointment returns:

{ success: false, reason: "Este horário acabou de ser preenchido. Tente outro." }

("This time slot has just been filled. Try another.") on conflict, rather than throwing.
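A minimal sketch of that conflict check, assuming for illustration that the existing appointments are passed in rather than queried from the database:

```typescript
// Hedged sketch of bookAppointment's conflict handling. The
// existingAppointments parameter stands in for the real DB query.
async function bookAppointment(
  date: string,
  time: string,
  existingAppointments: { date: string; time: string }[],
): Promise<{ success: true } | { success: false; reason: string }> {
  const taken = existingAppointments.some((a) => a.date === date && a.time === time);
  if (taken) {
    // Expected failure: returned as data, not thrown, so the LLM can
    // relay it to the user and offer an alternative slot.
    return { success: false, reason: "Este horário acabou de ser preenchido. Tente outro." };
  }
  // ... insert the appointment (omitted)
  return { success: true };
}
```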

The dispatch layer handles unknown tools by returning:

{ error: "Ferramenta desconhecida" }

("Unknown tool"). This pattern ensures that all outcomes, success or failure, are surfaced to the LLM in a consistent format, enabling coherent user responses without interrupting the flow.

Why this shape

The two-pass tool-calling pattern decouples intent recognition from data retrieval and action execution. By defining tools with precise JSON Schema and descriptive prompts, the LLM can reliably select and parameterize functions. The direct mapping from tool_call.name to implementation via if-else is simple and auditable, suitable for a small, fixed set of tools. Returning structured errors instead of throwing keeps the LLM in control of the conversation flow, turning validation and business logic failures into dialogue opportunities rather than crashes. This design prioritizes resilience and user experience in a production AI agent.
