LLM Node

LLM Nodes allow you to add AI intelligence to any workflow. Whether you're summarizing data, classifying support tickets, or automating complex actions based on intent, the LLM node acts as the "brain" of your automation. Under the hood, the LLM Node uses the EbbotGPT API.


Configuration Guide

The LLM node features several sections that allow you to fine-tune how the AI behaves and what data it can access.

Prompt

The Prompt is the core instruction set where you specify how the model should behave.

  • Markdown Support: You can use Markdown (headers, lists, bold text) to structure your instructions. This helps the model distinguish between different parts of your prompt, such as "Context," "Constraints," and "Goal."

  • Injecting Variables: To pull data from a previous node in your workflow, use the keyboard shortcut Ctrl + + (or Cmd + + on Mac).

    Note: This shortcut is the primary way to access the variable picker and link your workflow data to the prompt.
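A structured prompt might look like the sketch below. The headings and the `{{ticket.text}}` placeholder are purely illustrative; in practice, insert real workflow variables via the variable picker shortcut described above.

```markdown
## Context
You are a support assistant for an e-commerce store.

## Goal
Classify the following ticket: {{ticket.text}}

## Constraints
- Answer with a single category name.
- Do not invent categories.
```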

Tools

The Tools section grants the LLM the ability to perform actions using your installed integrations. By selecting a tool here, the LLM can decide to "call" that action based on your instructions.

For example, in a ticket classification workflow:

  1. Give the LLM access to the "List Categories" and "Set Category" tools.

  2. In the Prompt, instruct it:

    "Find the correct category ID using the List Categories tool, then apply it to the ticket using the Set Category tool."

This lets the AI navigate the logic for you, making your workflows more resilient and easier to maintain by reducing the number of condition nodes and plain action nodes they need.

  • Execution Flow: If the LLM decides to use a tool, it executes the action first, gathers the result, and then completes its generation. The final output of the LLM node will include the results of these actions.
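The execution flow above can be sketched as a simple loop. This is an illustration, not the platform's actual API: the tool functions and the classification rule below are stand-ins for the real "List Categories" and "Set Category" integrations.

```python
def list_categories():
    """Stand-in for the 'List Categories' tool (hypothetical data)."""
    return {"billing": 1, "technical": 2}

def set_category(ticket, category_id):
    """Stand-in for the 'Set Category' tool."""
    ticket["category_id"] = category_id
    return ticket

def run_llm_node(ticket):
    # 1. The model decides to call a tool and gathers its result...
    categories = list_categories()
    # 2. ...uses that result to decide the next action (a toy rule here;
    #    the real model reasons over your prompt)...
    chosen = categories["technical"] if "error" in ticket["text"] else categories["billing"]
    set_category(ticket, chosen)
    # 3. ...then completes its generation, including the tool results.
    return {"text": f"Ticket classified as category {ticket['category_id']}."}

result = run_llm_node({"text": "I was charged twice", "category_id": None})
```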

Model Config

This is where you define the technical parameters of the AI.

  • Model Selection: Choose from various available LLMs.

  • Max Tokens: Caps the length of the generated response, measured in tokens (fragments of words) rather than exact words or characters.

  • Temperature: Controls the "creativity" or "randomness" of the response. Lower values (0–0.3) are best for factual, consistent tasks; higher values (0.7+) are better for creative writing.
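Taken together, a model configuration might look like the sketch below. The parameter names mirror the fields described above; the model identifier is hypothetical.

```python
# Illustrative model configuration (not the platform's exact format):
model_config = {
    "model": "ebbotgpt-default",  # hypothetical model name
    "max_tokens": 512,            # cap on generated tokens
    "temperature": 0.2,           # low value: factual, consistent output
}
```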

RAG (Retrieval-Augmented Generation)

RAG allows the LLM to draw on your uploaded knowledge articles when generating a response. You can read more about the embedding models here.

  • Automatic Injection: You don't need to add special placeholders or variables to your prompt. The system automatically retrieves relevant information from your sources and provides it to the LLM behind the scenes.

  • Where to configure: You configure your knowledge in the Ebbot Chat Platform under "EbbotGPT" -> "Knowledge". You can read more about the various source types here.

Structured Output

By default, LLMs return a block of text. Structured Output allows you to force the model to follow a specific JSON schema.

This is ideal for classification or extracting specific data points. For example, you can require the model to return a category_id and a priority_score.

Important: When you enable Structured Output, the standard text output is disabled in the workflow builder to reflect your new data structure. Subsequent nodes will only be able to see and use your custom JSON fields.
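A schema for the classification example above might look like the following sketch. The field names (`category_id`, `priority_score`) come from the example; the exact schema dialect the platform accepts is an assumption.

```python
import json

# Hypothetical JSON schema requiring two fields from the model:
schema = {
    "type": "object",
    "properties": {
        "category_id": {"type": "integer"},
        "priority_score": {"type": "number", "minimum": 0, "maximum": 1},
    },
    "required": ["category_id", "priority_score"],
}

# With Structured Output enabled, the node returns JSON matching the
# schema instead of free text, e.g.:
model_response = json.loads('{"category_id": 2, "priority_score": 0.85}')

# Downstream nodes can rely on the required keys being present:
missing = [k for k in schema["required"] if k not in model_response]
```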


Node Outputs

The data available to "downstream" nodes changes depending on your configuration:

  • text (Standard Mode only): The plain-text response from the AI.

  • structuredOutput (Structured Mode only): The JSON object containing the keys you defined in your schema.

  • sources (Always available): A list of any documents or links the LLM referenced (requires RAG enabled).
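The mode-dependent outputs can be sketched as follows. The key names match the list above; the dictionary shapes and sample values are illustrative, not the platform's exact payloads.

```python
def node_output(mode):
    """Sketch of what downstream nodes see from the LLM node."""
    if mode == "standard":
        return {"text": "Your ticket has been categorized.", "sources": []}
    elif mode == "structured":
        return {"structuredOutput": {"category_id": 2}, "sources": []}

standard = node_output("standard")
structured = node_output("structured")

# In Structured Mode the plain-text key is absent:
has_text = "text" in structured
```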
