# EbbotGPT Configurations

## What EbbotGPT Configurations are for

EbbotGPT Configurations are available when configuring chat agents. This is where you define the chat agent's persona, tools, and other settings. The configuration connects to a specific dataset of your choice, and together they form the main building blocks of your chat agent.

## Persona

The Persona is the set of instructions that governs an AI agent's behavior and response style. By configuring name, purpose, and interaction guidelines, you ensure the agent stays within organizational boundaries and follows commands for complex scenarios.

Read more about Persona by clicking the link below.

{% content-ref url="ebbotgpt-configurations/persona" %}
[persona](https://docs.ebbot.ai/ebbot-docs/core-capabilities/ebbotgpt/ebbotgpt-configurations/persona)
{% endcontent-ref %}

## Language

This setting prompts the bot to respond in a specific language.

* **Same as bot** - the language set when the bot was created
* **Same as user** - the bot mirrors the language of the end user
* **Custom language** - manually set a language

## Prompt version

The Prompt version determines the operational logic and technical capabilities of the AI agent.

Learn more about Prompt version by clicking the link below.

{% content-ref url="ebbotgpt-configurations/prompt-version" %}
[prompt-version](https://docs.ebbot.ai/ebbot-docs/core-capabilities/ebbotgpt/ebbotgpt-configurations/prompt-version)
{% endcontent-ref %}

## Knowledge - select dataset

Select which dataset the chat agent should use as knowledge. The knowledge of your AI agents (whether a chat agent or an email agent) is set up in EbbotGPT Knowledge. In the EbbotGPT Configurations section of the Chat platform, you select the dataset you would like to use for your specific chat agent.

Learn about EbbotGPT Knowledge through the link below.

{% content-ref url="ebbotgpt-knowledge" %}
[ebbotgpt-knowledge](https://docs.ebbot.ai/ebbot-docs/core-capabilities/ebbotgpt/ebbotgpt-knowledge)
{% endcontent-ref %}

With **Source Retrieval Settings** in Configurations you can customize how the embedder finds sources. For most use cases the default setting is sufficient. The default source retrieval setting is shown below.

<figure><img src="https://2117387010-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F3rWESGvwA3vHJ3zNiAG1%2Fuploads%2FSh6ArJfudfzchoLrXs0R%2Fimage.png?alt=media&#x26;token=3dfc611e-c86d-45ac-847e-d2ec1fbc8fdc" alt=""><figcaption></figcaption></figure>

Retrieving significantly more sources than the default setting can slow down answer generation. Additionally, EbbotGPT models have built-in limits on how many sources they can process, so even if you retrieve 30 sources, the model may not be able to read them all.
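As a hypothetical illustration of what a source-retrieval limit controls (the function names, similarity measure, and toy data below are invented for the sketch, not Ebbot's actual implementation), an embedding retriever typically ranks sources by similarity to the query and caps how many are passed to the model:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve_sources(query_vec, sources, max_sources=5, min_score=0.0):
    """Rank sources by similarity to the query and keep at most max_sources."""
    scored = [(cosine(query_vec, vec), name) for name, vec in sources]
    scored = [sn for sn in scored if sn[0] >= min_score]
    scored.sort(reverse=True)  # most similar first
    return [name for _, name in scored[:max_sources]]

# Toy 2-dimensional "embeddings" for three documents.
docs = [("returns-policy", [0.9, 0.1]),
        ("shipping-faq", [0.5, 0.5]),
        ("careers-page", [0.0, 1.0])]
print(retrieve_sources([1.0, 0.0], docs, max_sources=2))
```

Raising `max_sources` in a sketch like this is what retrieving "more sources" means in practice, which is why large values slow down answer generation.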

## Model

Choose an AI provider and a model.

Note: the available models can vary depending on your user profile. If you are interested in trying a model that you cannot select, reach out to us and we will assist.

Below are the supported AI providers in EbbotGPT Configurations.

* **Ebbot GPT** - Select if you want to use one of Ebbot's models, hosted on EU servers by EU companies and fine-tuned for customer service.
* **OpenAI GPT** **(Azure)** - Select OpenAI if you want to use one of their models, hosted through Microsoft Azure.
* **Google AI** - Access Google's models available through their API.

### Sampling mode: Simple

Sampling mode and Output style are the settings that alter the model's probability engine. Every time an LLM generates text, it calculates a list of potential next words and assigns each a probability score; these parameters determine how the model filters and selects from that list to balance predictable accuracy with creative variety.

**Focused**: Prioritizes factual precision and strict consistency. Essential for Legal, Financial, or Technical support where accuracy and adherence to documentation are non-negotiable.

**Balanced**: Blends professional logic with a natural, conversational flow. Ideal for General Retail or E-commerce inquiries that require helpful, clear, and safe communication.

**Creative**: Emphasizes imagination and expressive phrasing. Perfect for Tourism, Hospitality, or Marketing where unique suggestions and brand flair drive a better customer experience.

### Sampling mode: Advanced

These advanced LLM settings can slightly alter the LLM's output.

**Temperature**: Controls the balance between logic and randomness. Lowering it makes the model more confident and literal, while raising it "flattens" the odds to encourage more creative and varied phrasing.

**Top K**: Limits the model to a fixed number of the most likely words. A low setting forces the model to choose from only a tiny handful of candidates, ensuring predictable results, while a high setting opens the gate to a much larger pool for more variety.

**Top P**: Acts as a dynamic filter based on cumulative probability. A low setting keeps responses safe and literal by only considering the most "sure-fire" words, while a high setting allows the model to explore a broader range of plausible options that adapt to the context.
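A minimal Python sketch of what these three parameters do to the next-token distribution (illustrative only: the function name and toy scores are invented for the example, not EbbotGPT's implementation):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=0, top_p=1.0):
    """Pick the next token from raw model scores (logits).

    Temperature sharpens or flattens the distribution, Top K keeps only
    the k most likely candidates, and Top P keeps the smallest set of
    candidates whose cumulative probability reaches p.
    """
    tokens = list(logits.keys())
    # Temperature: divide scores before softmax; low T sharpens the
    # distribution, high T flattens it.
    scaled = [score / temperature for score in logits.values()]

    # Softmax: convert scores to probabilities, most likely first.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = sorted(zip(tokens, (e / total for e in exps)),
                   key=lambda tp: tp[1], reverse=True)

    # Top K: keep only the k most likely tokens.
    if top_k > 0:
        probs = probs[:top_k]

    # Top P (nucleus): keep the smallest prefix whose mass reaches p.
    if top_p < 1.0:
        kept, cum = [], 0.0
        for tok, p in probs:
            kept.append((tok, p))
            cum += p
            if cum >= top_p:
                break
        probs = kept

    # Renormalise the surviving candidates and sample one of them.
    total = sum(p for _, p in probs)
    r, cum = random.random() * total, 0.0
    for tok, p in probs:
        cum += p
        if r <= cum:
            return tok
    return probs[-1][0]

toy_scores = {"sunny": 4.0, "cloudy": 2.0, "stormy": 1.0}
print(sample_next_token(toy_scores, temperature=0.2, top_k=2))
```

With a Focused-style setting (low temperature, small Top K) the most likely word wins almost every time; a Creative-style setting (high temperature, high Top P) spreads the probability mass across many more candidates.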

#### **Advanced**

**Number of messages in GPT memory** - how many messages from the chat history the LLM should use as a basis for its answer.

Changing the following settings usually has a minimal impact on the chat agent's answers. While adjustments can positively affect specific questions, they might negatively impact others. Use with caution.
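As a toy illustration (not Ebbot's implementation), the memory setting conceptually trims the conversation to the most recent messages before they are handed to the LLM:

```python
def memory_window(chat_history, n_messages):
    """Keep only the most recent n_messages as LLM context (illustrative)."""
    return chat_history[-n_messages:] if n_messages > 0 else []

history = ["Hi!", "Hello, how can I help?", "What are your opening hours?",
           "We are open 9-17.", "And on weekends?"]
print(memory_window(history, 3))  # the three most recent messages
```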

{% content-ref url="ebbotgpt-llms" %}
[ebbotgpt-llms](https://docs.ebbot.ai/ebbot-docs/core-capabilities/ebbotgpt/ebbotgpt-llms)
{% endcontent-ref %}

## Safe Guards

Use our Safe Guards to block malicious attempts to generate irrelevant or harmful content. To learn more about Safe Guards, visit the linked page below.

{% content-ref url="ebbotgpt-configurations/security-and-guardrails/safe-guards" %}
[safe-guards](https://docs.ebbot.ai/ebbot-docs/core-capabilities/ebbotgpt/ebbotgpt-configurations/security-and-guardrails/safe-guards)
{% endcontent-ref %}

## Tools and MCP configuration

You can give your chat agent access to tools within EbbotGPT Configurations.

Note: AI tools are available for Prompt version v2 and later. Read more about the available tools and how to use them below.

{% content-ref url="ebbotgpt-configurations/ai-tools" %}
[ai-tools](https://docs.ebbot.ai/ebbot-docs/core-capabilities/ebbotgpt/ebbotgpt-configurations/ai-tools)
{% endcontent-ref %}

{% content-ref url="ebbotgpt-configurations/model-context-protocol-mcp" %}
[model-context-protocol-mcp](https://docs.ebbot.ai/ebbot-docs/core-capabilities/ebbotgpt/ebbotgpt-configurations/model-context-protocol-mcp)
{% endcontent-ref %}


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.ebbot.ai/ebbot-docs/core-capabilities/ebbotgpt/ebbotgpt-configurations.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
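The request above can be built in any language; here is a small Python sketch using only the standard library (the helper name is invented, and the endpoint and `ask` parameter are exactly as described above):

```python
import urllib.parse

def build_ask_url(question: str) -> str:
    """Build the documentation-query URL with the `ask` parameter."""
    base = ("https://docs.ebbot.ai/ebbot-docs/core-capabilities/"
            "ebbotgpt/ebbotgpt-configurations.md")
    # Percent-encode the question so spaces and punctuation survive the URL.
    return base + "?ask=" + urllib.parse.quote(question)

# The resulting URL can then be fetched with a plain HTTP GET request.
print(build_ask_url("Which models does EbbotGPT support?"))
```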
