Before using AI capabilities in Board to analyze information from your Screens and Capsules, you must first configure your Datasets and Agents. For more information, see Create and configure.
This article explains how to interact with AI Agents that have already been created and configured. It focuses on how to use Agents after the initial setup phase, once Datasets and Agents have been configured.
This article covers the following topics:
To better understand the role of AI Agents within the Board platform, see About Board AI Agents.
AI Agents on Screens
The Agent uses the Screen as the interaction entry point and inherits the user context (Screen Selections, filters such as the Selector and Pager Objects, and navigation state).
Before you can interact with the Agent you have configured, you must link it to the specific Screen you wish to analyze with the Agent's help. To do this:
Enter the Screen you wish to analyze in Design Mode by selecting “Design Mode” from the burger menu on the Top Menu.
On the Screen Properties panel, under “AI Agent”, choose which Agent you wish to link to this specific Screen from the dropdown menu. If your Agent doesn't appear in this menu, make sure that the underlying Dataset has the “AI Access” option enabled.
AI Agents are not available when "Multiple Data Model" is enabled.
When adding an Agent to an existing Screen, make sure the relevant Datasets are also included. This ensures the Agent has the right context and prevents confusing or incomplete results for users.
After this configuration, the AI icon will display on the Top Menu, through which the End User will be able to interact with the Agent in Play Mode.
Once you click on this icon, the chat window will open, where you can click one of the suggested prompts or type your questions in the “Type here” box.
Each answer provided by the Agent during the conversation includes:
The “Answer” tab. Read the AI answer to your prompt.
The “Chart” tab. Review the chart generated by the Agent based on your prompt. You can download the chart by clicking the download icon in the top-right corner.
The “Steps” tab. Describes the Agent’s reasoning/execution plan, including the tasks executed to address the user’s request and the data processing steps used to produce the result.
The “Data Source” tab. Read about the data configuration behind your AI Agent: the Entities by row and column and the Selections used by the Agent to provide your answer.
Other considerations
Always validate the answers provided by the AI Agent.
The same Agent can be linked to different Screens.
The Agent does not parse Screen Objects (Data View, Flex Grid, Charts, Labels, etc.). The analysis is executed on the linked Datasets.
Changes made directly on the Screen (Screen Selections, Pagers, Selectors) are not automatically re-read by the Agent. To update the context, users should interact via the Agent, or re-open/refresh the Agent context after the changes are saved in Design Mode. Dynamic Selections present on the Screen, on the other hand, will not be read by the Agent; read more about other known limitations below.
AI Agents essential prompting guide
With Board Agents, a prompt is not simply free text. It acts as an execution specification that determines selections, analysis steps, and outputs. Prompts that lack sufficient detail may lead the Agent to infer missing information, which can result in inconsistent or less predictable outcomes.
RTCROS is a simple checklist that makes prompts complete, constrained, and repeatable across Board Datasets (both built-in and custom customer models).
| Framework | Definition | Example |
|---|---|---|
| R - Role | Define who the Agent should act as | Act as a CFO / FP&A analyst preparing a cash flow variance brief. |
| T - Task | Clearly state what you want done | Perform a cash flow variance analysis: Actual vs Budget for FY2025 and identify the key drivers. |
| C - Context | Provide background or situation | Using the Financial Statements screen context and the linked Datasets (P&L, BS, CF). |
| R - Rules | Set boundaries or constraints | Period: FY2025 (and monthly trend); Scenario: Actual vs Budget; Currency: USD; use Dataset definitions and rules; don't invent metrics. |
| O - Output | Specify the format and style of the generated output | Return: |
| S - Style | Define tone, voice, and creativity level | Professional, data-driven, CFO-ready. Concise. No technical jargon. Max 200–300 words. |
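Assembling the example cells from the table above, a complete RTCROS prompt might look like the following. The OUTPUT line is an illustrative placeholder, since the desired outputs depend on your use case; adapt the Dataset names and word limit to your own model.

```
ROLE: Act as a CFO / FP&A analyst preparing a cash flow variance brief.
TASK: Perform a cash flow variance analysis: Actual vs Budget for FY2025 and identify the key drivers.
CONTEXT: Use the Financial Statements screen context and the linked Datasets (P&L, BS, CF).
RULES: Period: FY2025 (and monthly trend). Scenario: Actual vs Budget. Currency: USD.
Use Dataset definitions and rules; do not invent metrics.
OUTPUT: Return a short variance summary (adapt this line to the outputs you need).
STYLE: Professional, data-driven, CFO-ready. Concise. No technical jargon. Max 200-300 words.
```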
Start broad, then refine. If the first answer is too high-level, ask the Agent to drill down by adding a dimension, narrowing the time range, or requesting the top contributors.
Prompt best practices
These are some prompt best practices optimized for GPT-5:
1) Use a strict structure: GOAL + numbered TASKS
| Structure | Best Practice | Example |
|---|---|---|
| Goal | Write one clear sentence with the fixed Selections and time horizon | Perform a Cash Flow variance analysis for Year = 2025 comparing Actual vs Budget. |
| Tasks | Write sequential tasks (Task 1, Task 2, …) with explicit dependencies (e.g., “Run Task 3 only after Tasks 1–2”). Limit the number of tasks to 1–5 to avoid long execution plans. If there are too many tasks, split them into two different prompts; the Agent will use the conversation memory to answer the second prompt. | TASK 1: Yearly variance drivers (CFO/CFI/CFF). Quantify and explain variance drivers across CFO, CFI, CFF at yearly level. Rank categories by absolute variance and highlight the most material deviations impacting liquidity and net cash position. TASK 2: Legal Entity responsibility + root cause. Identify which Legal Entity primarily drives the variance (rank by absolute variance on Net Cashflow). Provide root cause analysis using only available P&L and BS metrics (e.g., Revenue, CAPEX, Working Capital components). For each root cause, reference 1–2 concrete metrics that support it. |
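Combined, the GOAL + TASKS structure produces a prompt like the one below, assembled from the examples in the table. The “run only after TASK 1” clause is added here purely to illustrate an explicit dependency.

```
GOAL: Perform a Cash Flow variance analysis for Year = 2025 comparing Actual vs Budget.

TASK 1: Yearly variance drivers (CFO/CFI/CFF)
Quantify and explain variance drivers across CFO, CFI, CFF at yearly level.
Rank categories by absolute variance and highlight the most material deviations
impacting liquidity and net cash position.

TASK 2: Legal Entity responsibility + root cause (run only after TASK 1)
Identify which Legal Entity primarily drives the variance (rank by absolute
variance on Net Cashflow). Provide root cause analysis using only available
P&L and BS metrics (e.g., Revenue, CAPEX, Working Capital components).
For each root cause, reference 1-2 concrete metrics that support it.
```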
2) Write selections as a dedicated “Selections” Block
Treat selections as an upfront contract, not scattered text.
Example:
Time: Year = 2025; Periodicity = Monthly
Scenario/Version: Actual = “2025 Actual”; Budget = “2025 Budget”
Currency = “US Dollar”
FP Version = “2025 Budget”
Chart of Account = “Net Cashflow”, “CFO”, “CFI”, “CFF” (exact names)
Selections on Entities can also be requested from the users, as follows:
“Step 0: Gather User Input
Ask the user:
Reporting Unit to analyze (options: All or specific unit)
Month to analyze (e.g., Dec 2025)”
If selections are not specified, the Agent will use the Screen Selection by default.
3) Define ranking and materiality rules
Avoid narrative outputs by enforcing rules like:
Rank drivers by absolute variance (Actual – Budget)
Show Top N drivers and group the rest as “Other”
Apply a materiality threshold (e.g., > X% of total variance or > $Y)
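For example, a ranking-and-materiality block added to a prompt could read as follows. The Top 5 cutoff and the 5% threshold are illustrative values, not recommendations; set them to match your reporting standards.

```
RULES:
- Rank drivers by absolute variance (Actual - Budget), descending.
- Show the Top 5 drivers and group the rest as "Other".
- Only report drivers whose variance exceeds 5% of the total variance.
```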
4) Specify formulas and sign conventions
Cash Flow sign conventions are notoriously error-prone. Be explicit:
Variance = Actual – Budget
Define what “positive variance” means for Net Cashflow
“Use the dataset sign convention”
5) Hard-code the desired output format
Tell the model exactly what to return per task:
Task 1: bullets + small summary table
Task 2: mandatory table with fixed columns
Task 3: chart spec (title, series, axes)
Example: “Return a table: Month | Actual Net Cashflow | Budget Net Cashflow | Variance | Variance %”
6) Use consistent terminology and zero typos
In technical prompts, consistency is a feature:
Use the same label everywhere (e.g., “Legal Entity”, not “Entity”)
Keep measure names identical (e.g., “Net Cashflow” vs “Net Cash Flow”)
Fix typos, because matching and selection depend on exact names (otherwise, the Agent will use the best match)
How to request data and visualization
You can ask the Agent to show the underlying data used in its analysis. To avoid ambiguity, specify the layout and the level of detail you want.
When requesting data, include:
Rows (which entity/dimension on rows)
Columns (which time period or dimension on columns)
Measures (which KPIs)
Filters/Selections (scenario, version, region, entity, etc.)
Sorting/Top N (optional, but very useful)
Examples
“Show the data as a table: rows = Product Group, columns = Months, measure = Revenue Actual and Budget, filter = Region EMEA, period = FY2026.”
“Display a pivot for Opex variance vs Budget: rows = Cost Center, columns = Quarter, top 10 by absolute variance.”
“List the exceptions found and show the related values and selections used.”
If the Agent output includes a summary, you can follow up with:
“Show me the exact breakdown behind this result.”
“Drill down one level deeper on the main driver.”
“Provide the same view but for the top 5 entities only.”
Known limitations
Flex Grid not supported. At this stage, only Data Views are compatible for creating AI-enabled Datasets. Flex Grid Layouts are not yet supported.
Single Data Model only. An Agent cannot be linked to a Screen that uses multiple Data Models. It must be connected to a Screen configured with one Data Model only.
No long-term memory (session-based context only). The Agent does not retain memory across sessions. During a single session, the Agent can remember the previous prompt context and build on it – but this context resets if you:
Refresh the page
Move to a screen linked to a different Agent or no Agent, or
Log out of the environment
This ensures each session starts cleanly without carrying over prior data or instructions.
Agents can generate only one chart per user prompt. Multiple charts in one answer are not supported at this point.
The download button of a chart sometimes downloads the previously loaded chart. The current workarounds are:
Right-click the chart and download it
Ensure the Agent's latest answer is displayed on the “Answer” tab, and that the “Chart” tab is not open on any previous answer that generated a chart
The Agent can only use Entities that are exposed in the Dataset Quick Layout. Any Entity used in the Screen context must also be included in the Dataset Quick Layout; otherwise, the Agent will ignore that selection in its query.
Dynamic Screen Selections are not available to the Agent as context. For example, Screen Selections on Unbalanced Hierarchies (UBH) are not supported.
Do not add Entities to the Quick Layout that are already enforced as Dataset Selection filters (they are fixed scope, not pivot axes).
Other considerations
The FP&A Agents operate within the Board semantic and layout logic and depend on how Entities and Datasets are configured in the Data Model.
Core Logic Overview
Entities must be defined in the Quick Layout (by design). The Agent can query the Dataset and understand which selections to make only if the specific Entity is mentioned in the Quick Layout of the Dataset. Entities not listed there will not be recognized or used for filtering purposes or to make any selections. If there are any Screen Selections, those Entities must also be in the Quick Layout for the Agent to take them into consideration.
The Agent inherits Board's native layout select behavior (with “keep” and “to”)
Selections must be configured in “keep” mode so that when a user prompts with a new selection (e.g., “Show only 2025 planned data for country Italy”), the Agent can merge it with the existing layout selections
Entity Filtration and Selection sent to Agent
The Agent receives only the list of entities that are explicitly configured in the Quick Layout of the linked Datasets
Quick Layout Configurations
Entities must be clearly set as “by row”, “by column”, “mandatory”, or “nested” for the Agent to interpret layout structures. You can leave an Entity unmarked to give the Agent the freedom to place it by row or by column.
These configurations define how the Agent can pivot or reconfigure the Layout when a user prompt requests a view change
Entity Description and Code
The Agent currently recognizes and responds to Entity code and descriptions. The Agent understands the Unbalanced Hierarchy nature of the Entities.
Grounding Model
The Agent interacts with data through the semantic layer of the Board Datasets.
It does not query Data Models directly: all data access happens through Quick Layout-level configurations
Data Visualization
The Agent now supports chart generation, and you can download the chart as an image. Currently supported chart types: bar, line, pie, waterfall, and combined charts.
This capability is at an initial stage. You can already try it out by asking the Agent in natural language to create a chart; the Agent will automatically prepare the required data format. Simple visualizations work smoothly, while complex ones may need some time and a few iterative refinements.
Responsible Behavior
The Agent combines layout query output with LLM-based natural language generation to:
Explain values or variances (for analysis use cases)
Draft narrative text (for reporting use cases)