Dashboard & Monitoring
Using the Dashboard
The Dashboard is the first page you see when logging in. Before any integrations are installed, it also serves as a starting point to help you set up your first automations.
Once a workflow has been set up, the Dashboard starts displaying analytics and monitoring:
Workflow Runs: Donut chart showing success vs. failure distribution.
Token Usage by Day: Area chart for LLM token consumption.
Average Execution Time: Performance metric.
Actions by Day: Volume of automation workload.
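The charts above are all aggregations over individual run records. As a rough sketch of what feeds them, here is how a success/failure distribution and an average execution time could be computed from a list of runs. The field names (`status`, `duration_s`) are illustrative assumptions, not the platform's actual schema.

```python
from collections import Counter

# Hypothetical run records; field names are illustrative only.
runs = [
    {"status": "Done", "duration_s": 2.1},
    {"status": "Error", "duration_s": 0.4},
    {"status": "Done", "duration_s": 3.0},
]

# Success vs. failure counts: this is what a donut chart visualizes.
status_counts = Counter(run["status"] for run in runs)

# Average execution time across all runs.
avg_duration = sum(run["duration_s"] for run in runs) / len(runs)

print(status_counts)           # Counter({'Done': 2, 'Error': 1})
print(round(avg_duration, 2))  # 1.83
```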
Monitoring Tokens and Rate Limits
The Dashboard provides real-time visibility into your AI resource consumption:
Token Usage Chart: Track your consumption over time to stay within your token allowance.
Execution Errors: If you hit rate limits on external systems (e.g., Zendesk, TOPdesk), failed runs will appear in the success/failure donut chart. You can then drill down into the Logs to see the specific error messages and retry if necessary.
Tip: If you see a spike in errors correlated with peak usage, it’s a strong indicator that you are hitting concurrency or rate limits.
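When retrying runs that failed on external rate limits, spacing the retries out avoids hitting the same limit again immediately. A common pattern is exponential backoff with jitter; the sketch below is a generic illustration (the `RateLimitError` and the callable are hypothetical stand-ins, not part of the platform's API).

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for an external service rejecting a call (e.g. HTTP 429)."""

def call_with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` with exponential backoff when it raises RateLimitError."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            # Wait 1s, 2s, 4s, ... plus a little random jitter so that
            # many retrying workers don't all fire at the same instant.
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
    raise RuntimeError("still rate-limited after retries")
```

With this in place, a burst of 429s during peak hours degrades into a short delay instead of a failed run.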
Logs
The Logs page is your primary tool for monitoring and debugging your automations. It provides a comprehensive, step-by-step record of every workflow execution, ensuring full transparency into how data is processed.
Page Layout
Ongoing runs: Active executions currently in progress.
Historic runs: A searchable list of past executions with their status (Done, Error, etc.), execution ID, and timestamp.
Using Logs for Debugging
When a workflow doesn't behave as expected, the Logs let you "See what happened" in a deep-dive view. This is crucial for identifying where a flow deviated from its intended path.
Key Debugging Features:
Entry Payload: Inspect the raw data that originally triggered the workflow. This helps verify if the trigger provided all necessary information.
Visual Execution Trace: See a visual sequence of every node executed. This makes it easy to spot skipped steps or unexpected paths taken by condition nodes.
Action Inspection: For every individual action (e.g., "Get Incident" or "Update Ticket"), you can see exactly:
Called with the arguments: The specific data sent to the external service.
Got the response: The exact data returned.
Error Identification: If a run fails, the logs will pinpoint the exact node that encountered an error, displaying the error message and the state of the workflow at that moment.
By tracing the input and output of every single step, you can rapidly diagnose data mapping issues, API errors, or logic flaws in your automation.
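Conceptually, the deep-dive view is a list of steps, each with the arguments it was called with and the response (or error) it got back. The sketch below walks such a trace to find the first failing node; the record shape and field names are assumptions for illustration, not the product's actual log format.

```python
# Illustrative execution trace, one record per executed node.
trace = [
    {"node": "Trigger", "called_with": {"ticket_id": 42}, "response": {"ok": True}},
    {"node": "Get Incident", "called_with": {"id": 42}, "response": {"ok": True}},
    {"node": "Update Ticket", "called_with": {"id": 42}, "error": "429 Too Many Requests"},
]

def first_error(trace):
    """Return the first step that recorded an error, or None if the run succeeded."""
    return next((step for step in trace if "error" in step), None)

failed = first_error(trace)
if failed:
    # Pinpoints the exact node and message, as the Logs page does.
    print(f"{failed['node']} failed: {failed['error']}")
```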
Filtering Logs
In the top right of the Logs page, you'll find a Filter icon (funnel). Use this to quickly find specific executions:
Flow: Filter by one or more specific workflows (e.g., "Full Incident Flow", "Search for user").
Status: Filter by the state of the execution:
Done: Successfully completed.
Error: Encountered an issue.
In Progress: Currently running.
Initializing: The execution is starting up.
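The Flow and Status filters combine: a run is shown only if it matches both. As a rough sketch of that behavior, here is a generic filter over hypothetical historic-run records (field names are illustrative, not the platform's schema):

```python
# Hypothetical historic-run records mirroring the Logs list columns.
runs = [
    {"flow": "Full Incident Flow", "status": "Done", "ts": "2024-05-01T10:00:00"},
    {"flow": "Search for user", "status": "Error", "ts": "2024-05-01T10:05:00"},
    {"flow": "Full Incident Flow", "status": "Error", "ts": "2024-05-01T11:00:00"},
]

def filter_runs(runs, flows=None, statuses=None):
    """Keep runs matching any of the given flows AND any of the given statuses.

    Passing None for a criterion means "no filter" on that field.
    """
    return [
        r for r in runs
        if (flows is None or r["flow"] in flows)
        and (statuses is None or r["status"] in statuses)
    ]

# e.g. all failed runs of one specific workflow:
errors = filter_runs(runs, flows={"Full Incident Flow"}, statuses={"Error"})
```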