# LLM Nodes

## **LLM nodes** <a href="#id-3.-llm-nodes" id="id-3.-llm-nodes"></a>

LLM nodes allow you to add AI to any workflow. Whether you’re summarizing data, classifying support tickets, or automating complex actions based on intent, the LLM node acts as the "brain" of your automation. Under the hood, the LLM node uses the [EbbotGPT API](/ebbot-docs/developer-resources/ebbotgpt/ebbotgpt-api.md).

<p align="center"><img src="https://docs.ebbot.ai/ebbot-docs/~gitbook/image?url=https%3A%2F%2F2927500783-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252F-MQWB_yIsohr3UeVmprb%252Fuploads%252Fq8XtX4Swnp6ys36tTcuY%252Fimage.png%3Falt%3Dmedia%26token%3Dd1789563-9cd1-4b3e-8129-2977fcf1a2e1&#x26;width=768&#x26;dpr=3&#x26;quality=100&#x26;sign=a51ab650&#x26;sv=2" alt="" data-size="original"></p>

**Purpose**: Process data using Large Language Models (AI)

**Characteristics:**

* Send prompts to EbbotGPT or other AI models
* Support **structured output** (JSON schemas)
* Can use **RAG** (Retrieval-Augmented Generation) for context
* Support **tool calling** for function execution


***

## Configuration guide

The LLM node features several sections that allow you to fine-tune how the AI behaves and what data it can access.

## Prompt

The **Prompt** is the core instruction set where you specify how the model should behave.

* **Markdown Support:** You can use Markdown (headers, lists, bold text) to structure your instructions. This helps the model distinguish between different parts of your prompt, such as "Context," "Constraints," and "Goal."
* **Injecting Variables:** To pull data from a previous node in your workflow, use the keyboard shortcut **Ctrl + +** (or **Cmd + +** on Mac).

  > **Note:** This shortcut is the primary way to access the variable picker and link your workflow data to the prompt.

## Tools

The **Tools** section grants the LLM the ability to perform actions using your installed integrations. By selecting a tool here, the LLM can decide to "call" that action based on your instructions.

For example, in a ticket classification workflow:

1. Give the LLM access to the **"List Categories"** and **"Set Category"** tools.
2. In the **Prompt**, instruct it:

   > "Find the correct category ID using the List Categories tool, then apply it to the ticket using the Set Category tool."

This lets the AI navigate the logic for you, making your workflows more resilient and easier to maintain by reducing the number of condition nodes and action nodes you need.

* **Execution Flow:** If the LLM decides to use a tool, it executes the action first, gathers the result, and then completes its generation. The final output of the LLM node will include the results of these actions.

> **Note on Rate Limits:** When using tools, the LLM is subject to the rate limits of the systems those tools are connected to (e.g., Zendesk, TOPdesk).
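
The ticket classification example above can be sketched as a generic tool-calling loop. This is an illustrative Python sketch, not the actual Ebbot implementation; the tool functions (`list_categories`, `set_category`), their return shapes, and the naive keyword matching are all hypothetical stand-ins for what the LLM does when it decides to call your configured tools.

```python
# Illustrative sketch of a tool-calling flow (hypothetical tool names and
# logic, not the actual Ebbot implementation).

def list_categories():
    """Hypothetical "List Categories" tool: returns available categories."""
    return {"billing": 1, "technical": 2, "other": 3}

def set_category(ticket, category_id):
    """Hypothetical "Set Category" tool: applies a category to a ticket."""
    ticket["category_id"] = category_id
    return ticket

def classify_ticket(ticket):
    # 1. The model calls "List Categories" to discover the valid IDs.
    categories = list_categories()
    # 2. It picks one based on the ticket text (here: a naive keyword match,
    #    standing in for the model's reasoning).
    chosen = next(
        (cid for name, cid in categories.items() if name in ticket["text"].lower()),
        categories["other"],
    )
    # 3. It calls "Set Category"; the tool result is folded into the node's
    #    final output, matching the "Execution Flow" described above.
    return set_category(ticket, chosen)

result = classify_ticket({"text": "I have a billing question"})
```

The point of the sketch is the ordering: tools run first, their results are gathered, and only then does the model finish its generation.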

## Model config

This is where you define the technical parameters of the AI.

* **Model Selection:** Choose from various available LLMs.
* **Max Tokens:** Caps the length of the generated response, measured in tokens (roughly word fragments) rather than whole words or characters.
* **Temperature:** Controls the "creativity" or "randomness" of the response. Lower values (0–0.3) are best for factual, consistent tasks; higher values (0.7+) are better for creative writing.
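
As a rough illustration, the parameters above could be captured in a configuration like the following. The field names and the model identifier here are illustrative only, not the exact keys used by the EbbotGPT API.

```json
{
  "model": "EbbotGPT-3",
  "max_tokens": 500,
  "temperature": 0.2
}
```

A low temperature such as 0.2 suits classification and extraction; raise it toward 0.7+ for creative generation.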

## RAG (Retrieval-Augmented Generation)

RAG allows the LLM to use your uploaded knowledge articles. [Here](/ebbot-docs/core-capabilities/ebbotgpt/ebbotgpt-knowledge/embedder-models.md) you can read more about the embedding models.

* **Automatic Injection:** You don't need to add special placeholders or variables to your prompt. The system automatically retrieves relevant information from your sources and provides it to the LLM behind the scenes.
* **Where to configure:** You configure your knowledge in the Ebbot Chat Platform under "EbbotGPT" -> "Knowledge". You can read more about the various source types [here](/ebbot-docs/core-capabilities/ebbotgpt/ebbotgpt-knowledge.md).

## Structured output

By default, LLMs return a block of text. **Structured Output** allows you to force the model to follow a specific [JSON schema](https://json-schema.org/learn/getting-started-step-by-step).

This is ideal for classification or extracting specific data points. For example, you can require the model to return a `category_id` and a `priority_score`.

> **Important:** When you enable Structured Output, the standard `text` output is disabled in the workflow builder to reflect your new data structure. Subsequent nodes will only be able to see and use your custom JSON fields.
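
The `category_id` / `priority_score` example above could be expressed as a JSON schema like this. This is a minimal sketch; your actual field names, types, and constraints will depend on your use case.

```json
{
  "type": "object",
  "properties": {
    "category_id": { "type": "integer" },
    "priority_score": { "type": "number", "minimum": 0, "maximum": 1 }
  },
  "required": ["category_id", "priority_score"]
}
```

With this schema enabled, downstream nodes see `category_id` and `priority_score` as individual fields instead of a single block of text.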

***

## Node outputs

The data available to "downstream" nodes changes depending on your configuration:

| Output Key         | Availability             | Description                                                                 |
| ------------------ | ------------------------ | --------------------------------------------------------------------------- |
| `text`             | **Standard Mode Only**   | The plain-text response from the AI.                                        |
| `structuredOutput` | **Structured Mode Only** | The JSON object containing the keys you defined in your schema.             |
| `sources`          | **Always Available**     | A list of documents or links the LLM referenced; empty unless RAG is enabled. |
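
A downstream consumer can branch on which of these keys is present. A minimal sketch, assuming the output shape described in the table above:

```python
# Sketch of handling an LLM node's output in a downstream step.
# The output dict shape is assumed from the node-outputs table above.

def handle_llm_output(output):
    if "structuredOutput" in output:
        # Structured mode: use the fields defined in your JSON schema.
        return output["structuredOutput"]
    # Standard mode: fall back to the plain-text response and any sources.
    return {"text": output.get("text", ""), "sources": output.get("sources", [])}

result = handle_llm_output({"text": "Hello", "sources": []})
```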

***

## Context window and limits

LLM execution is subject to the **context window** of the selected model. For example, EbbotGPT 3 can process up to 31,000 tokens per request.

For best results, keep the combined size of your prompt, injected variables, and retrieved RAG context within the selected model's token limit.
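
Exact token counts depend on the model's tokenizer, but a common rule of thumb for English text is roughly four characters per token. The sketch below uses that heuristic for a pre-flight size check; the 31,000-token limit is EbbotGPT 3's, per above, while the four-characters-per-token estimate is an approximation, not the real tokenizer.

```python
# Rough pre-flight check against a model's context window.
# Uses the ~4 characters-per-token heuristic; a real tokenizer will differ.

CONTEXT_WINDOW = 31_000  # EbbotGPT 3, per the documentation above

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def fits_context(prompt: str, reserved_for_output: int = 1_000) -> bool:
    # Leave headroom for the model's response when budgeting the prompt.
    return estimate_tokens(prompt) + reserved_for_output <= CONTEXT_WINDOW

print(fits_context("Summarize this ticket: ..."))  # prints True
```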


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.ebbot.ai/ebbot-docs/core-capabilities/automations/workflow-nodes/llm-nodes.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
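
Building the query URL with proper encoding can be done with the Python standard library alone. A minimal sketch, using the endpoint documented above:

```python
# Build an "ask" query URL for this documentation page.
from urllib.parse import urlencode

BASE = "https://docs.ebbot.ai/ebbot-docs/core-capabilities/automations/workflow-nodes/llm-nodes.md"

def ask_url(question: str) -> str:
    # urlencode handles spaces and special characters in the question.
    return f"{BASE}?{urlencode({'ask': question})}"

url = ask_url("What is the context window of EbbotGPT 3?")
```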
