# Run an AI Agent
An AI agent is an autonomous system that uses a Large Language Model (LLM). Each run combines a system message and a prompt. The system message defines the agent's role and behavior, while the prompt carries the actual user input for that execution. Together, they guide the agent's response. The agent can also use tools, content retrievers, and memory to provide richer context during execution.
type: "io.kestra.plugin.ai.agent.AIAgent"
## Examples
Summarize arbitrary text with controllable length and language.
```yaml
id: simple_summarizer_agent
namespace: company.ai

inputs:
  - id: summary_length
    displayName: Summary Length
    type: SELECT
    defaults: medium
    values:
      - short
      - medium
      - long

  - id: language
    displayName: Language ISO code
    type: SELECT
    defaults: en
    values:
      - en
      - fr
      - de
      - es
      - it
      - ru
      - ja

  - id: text
    type: STRING
    displayName: Text to summarize
    defaults: |
      Kestra is an open-source orchestration platform that:
      - Allows you to define workflows declaratively in YAML
      - Allows non-developers to automate tasks with a no-code interface
      - Keeps everything versioned and governed, so it stays secure and auditable
      - Extends easily for custom use cases through plugins and custom scripts.

      Kestra follows a "start simple and grow as needed" philosophy. You can schedule a basic workflow in a few minutes, then later add Python scripts, Docker containers, or complicated branching logic if the situation calls for it.

tasks:
  - id: multilingual_agent
    type: io.kestra.plugin.ai.agent.AIAgent
    systemMessage: |
      You are a precise technical assistant.
      Produce a {{ inputs.summary_length }} summary in {{ inputs.language }}.
      Keep it factual, remove fluff, and avoid marketing language.
      If the input is empty or non-text, return a one-sentence explanation.

      Output format:
      - 1-2 sentences for 'short'
      - 2-5 sentences for 'medium'
      - Up to 5 paragraphs for 'long'
    prompt: |
      Summarize the following content: {{ inputs.text }}

  - id: english_brevity
    type: io.kestra.plugin.ai.agent.AIAgent
    prompt: Generate a one-sentence English summary of "{{ outputs.multilingual_agent.textOutput }}"

pluginDefaults:
  - type: io.kestra.plugin.ai.agent.AIAgent
    values:
      provider:
        type: io.kestra.plugin.ai.provider.GoogleGemini
        modelName: gemini-2.5-flash
        apiKey: "{{ kv('GEMINI_API_KEY') }}"
```
Interact with an MCP server running as a subprocess in a Docker container.
```yaml
id: agent_with_docker_mcp_server_tool
namespace: company.ai

inputs:
  - id: prompt
    type: STRING
    defaults: What is the current UTC time?

tasks:
  - id: agent
    type: io.kestra.plugin.ai.agent.AIAgent
    prompt: "{{ inputs.prompt }}"
    provider:
      type: io.kestra.plugin.ai.provider.OpenAI
      apiKey: "{{ kv('OPENAI_API_KEY') }}"
      modelName: gpt-5-nano
    tools:
      - type: io.kestra.plugin.ai.tool.DockerMcpClient
        image: mcp/time
```
Run an AI agent with memory.
```yaml
id: agent_with_memory
namespace: company.ai

tasks:
  - id: first_agent
    type: io.kestra.plugin.ai.agent.AIAgent
    prompt: Hi, my name is John and I live in New York!

  - id: second_agent
    type: io.kestra.plugin.ai.agent.AIAgent
    prompt: What's my name and where do I live?

pluginDefaults:
  - type: io.kestra.plugin.ai.agent.AIAgent
    values:
      provider:
        type: io.kestra.plugin.ai.provider.OpenAI
        apiKey: "{{ kv('OPENAI_API_KEY') }}"
        modelName: gpt-5-mini
      memory:
        type: io.kestra.plugin.ai.memory.KestraKVStore
        memoryId: JOHN
        ttl: PT1M
        messages: 5
```
Run an AI agent that uses Tavily Web Search as a content retriever. Note that, in contrast to tools, content retrievers are always invoked to provide context for the prompt; it is then up to the LLM to decide whether to use the retrieved context.
```yaml
id: agent_with_content_retriever
namespace: company.ai

inputs:
  - id: prompt
    type: STRING
    defaults: What is the latest Kestra release and what new features does it include? Name at least 3 new features added exactly in this release.

tasks:
  - id: agent
    type: io.kestra.plugin.ai.agent.AIAgent
    prompt: "{{ inputs.prompt }}"
    provider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      modelName: gemini-2.5-flash
      apiKey: "{{ kv('GEMINI_API_KEY') }}"
    contentRetrievers:
      - type: io.kestra.plugin.ai.retriever.TavilyWebSearch
        apiKey: "{{ kv('TAVILY_API_KEY') }}"
```
Run an AI Agent returning a structured output specified in a JSON schema. Note that some providers and models don't support JSON Schema; in those cases, instruct the model to return strict JSON using an inline schema description in the prompt and validate the result downstream.
```yaml
id: agent_with_structured_output
namespace: company.ai

inputs:
  - id: customer_ticket
    type: STRING
    defaults: >-
      I can't log into my account. It says my password is wrong, and the reset link never arrives.

tasks:
  - id: support_agent
    type: io.kestra.plugin.ai.agent.AIAgent
    provider:
      type: io.kestra.plugin.ai.provider.MistralAI
      apiKey: "{{ kv('MISTRAL_API_KEY') }}"
      modelName: open-mistral-7b
    systemMessage: |
      You are a classifier that returns ONLY valid JSON matching the schema.
      Do not add explanations or extra keys.
    configuration:
      responseFormat:
        type: JSON
        jsonSchema:
          type: object
          required: ["category", "priority"]
          properties:
            category:
              type: string
              enum: ["ACCOUNT", "BILLING", "TECHNICAL", "GENERAL"]
            priority:
              type: string
              enum: ["LOW", "MEDIUM", "HIGH"]
    prompt: |
      Classify the following customer message:
      {{ inputs.customer_ticket }}
```
Perform market research with an AI Agent using a web search retriever and save the findings as a Markdown report. The retriever gathers up-to-date information, the agent summarizes it, and the filesystem tool writes the result to the task's working directory. Mount the working directory to a container path (e.g., `/tmp`) so the generated report file is accessible and can be collected with `outputFiles`.
```yaml
id: market_research_agent
namespace: company.ai

inputs:
  - id: prompt
    type: STRING
    defaults: |
      Research the latest trends in workflow and data orchestration.
      Use web search to gather current, reliable information from multiple sources.
      Then create a well-structured Markdown report that includes an introduction,
      key trends with short explanations, and a conclusion.
      Save the final report as `report.md` in the `/tmp` directory.

tasks:
  - id: agent
    type: io.kestra.plugin.ai.agent.AIAgent
    provider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      apiKey: "{{ kv('GEMINI_API_KEY') }}"
      modelName: gemini-2.5-flash
    prompt: "{{ inputs.prompt }}"
    systemMessage: |
      You are a research assistant that must always follow this process:
      1. Use the TavilyWebSearch content retriever to gather the most relevant and up-to-date information for the user prompt. Do not invent information.
      2. Summarize and structure the findings clearly in Markdown format. Use headings, bullet points, and links when appropriate.
      3. Save the final Markdown report as `report.md` in the `/tmp` directory by using the provided filesystem tool.

      Important rules:
      - Never output raw text in your response. The final result must always be written to `report.md`.
      - If no useful results are retrieved, write a short note in `report.md` explaining that no information was found.
      - Do not attempt to bypass or ignore the retriever or the filesystem tool.
    contentRetrievers:
      - type: io.kestra.plugin.ai.retriever.TavilyWebSearch
        apiKey: "{{ kv('TAVILY_API_KEY') }}"
        maxResults: 10
    tools:
      - type: io.kestra.plugin.ai.tool.DockerMcpClient
        image: mcp/filesystem
        command: ["/tmp"]
        binds: ["{{workingDir}}:/tmp"] # mount host_path:container_path to access the generated report
    outputFiles:
      - report.md
```
## Properties
- `prompt` (string, required): The input prompt for the language model.
- `provider` (required, non-dynamic): The language model provider. One of `AmazonBedrock`, `Anthropic`, `AzureOpenAI`, `DeepSeek`, `GoogleGemini`, `GoogleVertexAI`, `MistralAI`, `Ollama`, or `OpenAI`.
- `configuration` (ChatConfiguration, non-dynamic, default `{}`): Language model configuration.
- `contentRetrievers` (array): Content retrievers, one of `GoogleCustomWebSearch` or `TavilyWebSearch`. Some content retrievers, like web search, can also be used as tools. However, when configured as content retrievers they are always invoked, whereas tools are invoked only when the LLM decides to use them; see the sketch after this list.
- `maxSequentialToolsInvocations` (integer or string): Maximum number of sequential tool invocations.
- `memory` (non-dynamic): Agent memory, one of `KestraKVStore` or `Redis`. The memory stores messages and adds them as history to the LLM context.
- `outputFiles` (array): The files from the local filesystem to send to Kestra's internal storage. Must be a list of glob expressions relative to the current working directory, for example `my-dir/**`, `my-dir/*/**`, or `my-dir/my-file.txt`.
- `systemMessage` (string): The system message for the language model.
- `tools` (non-dynamic): Tools that the LLM may use to augment its response. One of `CodeExecution`, `DockerMcpClient`, `GoogleCustomWebSearch`, `KestraFlow`, `KestraTask`, `SseMcpClient`, `StdioMcpClient`, `StreamableHttpMcpClient`, or `TavilyWebSearch`.
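A minimal sketch contrasting the two configurations, wiring the same Tavily search once as an always-invoked content retriever and once as an on-demand tool. Task IDs and prompts are illustrative, the required `provider` is omitted for brevity (e.g., set via `pluginDefaults`), and the tool's fully qualified name is assumed to follow the plugin's `io.kestra.plugin.ai.tool` naming pattern:

```yaml
tasks:
  # Content retriever: invoked on every execution; results are injected into the prompt context.
  - id: always_search
    type: io.kestra.plugin.ai.agent.AIAgent
    prompt: Summarize this week's data orchestration news.
    contentRetrievers:
      - type: io.kestra.plugin.ai.retriever.TavilyWebSearch
        apiKey: "{{ kv('TAVILY_API_KEY') }}"

  # Tool: invoked only if the LLM decides the prompt requires a web search.
  - id: search_on_demand
    type: io.kestra.plugin.ai.agent.AIAgent
    prompt: Summarize this week's data orchestration news.
    tools:
      - type: io.kestra.plugin.ai.tool.TavilyWebSearch
        apiKey: "{{ kv('TAVILY_API_KEY') }}"
```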
## Outputs
- `finishReason` (string): Finish reason. One of `STOP`, `LENGTH`, `TOOL_EXECUTION`, `CONTENT_FILTER`, or `OTHER`.
- `jsonOutput` (object): LLM output for the `JSON` response format. The result of the LLM completion when the response format type is `JSON`; `null` otherwise.
- `outputFiles` (object): URIs of the generated files in Kestra's internal storage.
- `requestDuration` (integer): Request duration in milliseconds.
- `textOutput` (string): LLM output for the `TEXT` response format (the default); `null` otherwise.
- `tokenUsage` (TokenUsage): Token usage.
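A short sketch of how a downstream task might consume these outputs, reusing the `support_agent` task from the structured-output example above (the `log_result` task is illustrative):

```yaml
  - id: log_result
    type: io.kestra.plugin.core.log.Log
    message: |
      Category: {{ outputs.support_agent.jsonOutput.category }}
      Priority: {{ outputs.support_agent.jsonOutput.priority }}
      Total tokens: {{ outputs.support_agent.tokenUsage.totalTokenCount }}
```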
## Definitions
### Mistral AI Model Provider
- `apiKey` (string, required): API key.
- `modelName` (string, required): Model name.
- `type` (object, required)
- `baseUrl` (string): API base URL.
### Model Context Protocol (MCP) Stdio client tool
- `command` (array, required): MCP client command, as a list of command parts.
- `type` (object, required)
- `env` (object): Environment variables.
- `logEvents` (boolean or string, default `false`): Log events.
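A minimal sketch of a Stdio MCP client tool block; the `uvx mcp-server-time` command is illustrative (any MCP server launchable as a local subprocess works), and the fully qualified type name is assumed from the plugin's naming pattern:

```yaml
tools:
  - type: io.kestra.plugin.ai.tool.StdioMcpClient
    command: ["uvx", "mcp-server-time"]  # illustrative local MCP server command
    env:
      TZ: UTC                            # example environment variable for the subprocess
    logEvents: true
```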
### Call a Kestra flow as a tool
- `type` (object, required)
- `description` (string): Description of the flow, if not already provided inside the flow itself. Use it only if you define the flow in the tool definition. The LLM needs a tool description to decide whether to call it: if the flow has a description, the tool uses it; otherwise, the `description` property must be set explicitly.
- `flowId` (string): Flow ID of the flow to call.
- `inheritLabels` (boolean or string, default `false`): Whether the called flow should inherit labels from the execution that triggered it. By default, labels are not inherited. If set to `true`, the flow execution inherits all labels from the agent's execution. Any labels passed by the LLM override those defined here.
- `inputs` (object): Input values passed to the flow's execution. Any inputs passed by the LLM override those defined here.
- `labels` (array or object): Labels added to the flow's execution. Any labels passed by the LLM override those defined here.
- `namespace` (string): Namespace of the flow to call.
- `revision` (integer or string): Revision of the flow to call.
- `scheduleDate` (string, date-time): Schedule the flow execution at a later date. If the LLM sets a `scheduleDate`, it overrides the one defined here.
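A minimal sketch of a flow exposed as a tool; the `company.ai.send_report` flow, its `recipient` input, and the tool's fully qualified type name are all hypothetical:

```yaml
tools:
  - type: io.kestra.plugin.ai.tool.KestraFlow
    namespace: company.ai            # hypothetical namespace of the flow used as a tool
    flowId: send_report              # hypothetical flow ID
    description: Send the weekly report by email to a given recipient
    inputs:
      recipient: ops@example.com     # default value; the LLM may override it
```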
### Model Context Protocol (MCP) SSE client tool
- `url` (string, required): URL of the MCP server.
- `type` (object, required)
- `headers` (object): Custom headers; useful, for example, for adding authentication tokens via the `Authorization` header.
- `logRequests` (boolean or string, default `false`): Log requests.
- `logResponses` (boolean or string, default `false`): Log responses.
- `timeout` (string, duration): Connection timeout duration.
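A minimal sketch of this client's tool block; the server URL and bearer token are placeholders, and the `SseMcpClient` type name is an assumption based on the tool list above:

```yaml
tools:
  - type: io.kestra.plugin.ai.tool.SseMcpClient
    url: https://mcp.example.com/sse          # placeholder MCP server URL
    headers:
      Authorization: "Bearer {{ kv('MCP_TOKEN') }}"
    timeout: PT30S                            # fail the connection after 30 seconds
```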
### Call a Kestra runnable task as a tool
### io.kestra.plugin.ai.domain.AIOutput-ToolExecution
- `requestArguments` (object)
- `requestId` (string)
- `requestName` (string)
- `result` (string)
### DeepSeek Model Provider
- `apiKey` (string, required): API key.
- `modelName` (string, required): Model name.
- `type` (object, required)
- `baseUrl` (string, default `https://api.deepseek.com/v1`): API base URL.
### io.kestra.plugin.ai.domain.AIOutput-AIResponse
- `completion` (string): The generated text completion.
- `finishReason` (string): Finish reason. One of `STOP`, `LENGTH`, `TOOL_EXECUTION`, `CONTENT_FILTER`, or `OTHER`.
- `id` (string): Response identifier.
- `requestDuration` (integer): Request duration in milliseconds.
- `tokenUsage` (TokenUsage): Token usage.
### io.kestra.plugin.ai.domain.ChatConfiguration-ResponseFormat
- `jsonSchema` (object): JSON Schema, used when `type = JSON`. Provide a JSON Schema describing the expected structure of the response. In Kestra flows, define the schema in YAML (it is still a JSON Schema object). Example:

```yaml
responseFormat:
  type: JSON
  jsonSchema:
    type: object
    required: ["category", "priority"]
    properties:
      category:
        type: string
        enum: ["ACCOUNT", "BILLING", "TECHNICAL", "GENERAL"]
      priority:
        type: string
        enum: ["LOW", "MEDIUM", "HIGH"]
```

  Note: provider support for strict schema enforcement varies. If unsupported, guide the model toward the expected output structure via the prompt and validate the result downstream.
- `jsonSchemaDescription` (string, optional): Natural-language description of the schema to help the model produce the right fields. Example: "Classify a customer ticket into category and priority."
- `type` (string, default `TEXT`): Response format type. Specifies how the LLM should return output. Allowed values: `TEXT` (default) for free-form natural language, and `JSON` for structured output validated against a JSON Schema.
### Model Context Protocol (MCP) Docker client tool
- `image` (string, required): Container image.
- `type` (object, required)
- `apiVersion` (string): API version.
- `binds` (array): Volume binds.
- `command` (array): MCP client command, as a list of command parts.
- `dockerCertPath` (string): Docker certificate path.
- `dockerConfig` (string): Docker configuration.
- `dockerContext` (string): Docker context.
- `dockerHost` (string): Docker host.
- `dockerTlsVerify` (boolean or string): Whether Docker should verify TLS certificates.
- `env` (object): Environment variables.
- `logEvents` (boolean or string, default `false`): Whether to log events.
- `registryEmail` (string): Container registry email.
- `registryPassword` (string): Container registry password.
- `registryUrl` (string): Container registry URL.
- `registryUsername` (string): Container registry username.
### Google Custom Search web tool
- `apiKey` (string, required): API key.
- `csi` (string, required): Custom search engine ID (cx).
- `type` (object, required)
### Ollama Model Provider
- `endpoint` (string, required): Model endpoint.
- `modelName` (string, required): Model name.
- `type` (object, required)
### Code execution tool using Judge0
- `apiKey` (string, required): RapidAPI key for Judge0. You can obtain one from the RapidAPI website.
- `type` (object, required)
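A minimal sketch of an agent allowed to execute code via Judge0. The prompt and the `RAPIDAPI_KEY` KV entry are illustrative, the `provider` is omitted for brevity, and the fully qualified type name is assumed from the plugin's naming pattern:

```yaml
tasks:
  - id: coding_agent
    type: io.kestra.plugin.ai.agent.AIAgent
    prompt: Compute the 20th Fibonacci number by executing code, then reply with only the number.
    tools:
      - type: io.kestra.plugin.ai.tool.CodeExecution
        apiKey: "{{ kv('RAPIDAPI_KEY') }}"   # hypothetical KV entry holding the RapidAPI key
```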
### OpenAI Model Provider
- `apiKey` (string, required): API key.
- `modelName` (string, required): Model name.
- `type` (object, required)
- `baseUrl` (string): API base URL.
### Web search content retriever for Google Custom Search
- `apiKey` (string, required): API key.
- `csi` (string, required): Custom search engine ID (cx).
- `type` (object, required)
- `maxResults` (integer or string, default `3`): Maximum number of results.
### io.kestra.plugin.ai.domain.ChatConfiguration
- `logRequests` (boolean or string): If `true`, prompts and configuration sent to the LLM are logged at `INFO` level.
- `logResponses` (boolean or string): If `true`, raw responses from the LLM are logged at `INFO` level.
- `responseFormat` (ChatConfiguration-ResponseFormat): Defines the expected output format; defaults to plain text. Some providers allow requesting JSON or schema-constrained outputs, but support varies and may be incompatible with tool use. When using a JSON schema, the output is returned under the `jsonOutput` key.
- `seed` (integer or string): Optional random seed for reproducibility. Provide a positive integer (e.g., 42 or 1234). Using the same seed with identical settings produces repeatable outputs.
- `temperature` (number or string): Controls randomness in generation. Typical range is 0.0–1.0. Lower values (e.g., 0.2) make outputs more focused and deterministic, while higher values (e.g., 0.7–1.0) increase creativity and variability.
- `topK` (integer or string): Limits sampling to the K most likely tokens at each step. Typical values are between 20 and 100. Smaller values reduce randomness; larger values allow more diverse outputs.
- `topP` (number or string): Nucleus sampling; selects from the smallest set of tokens whose cumulative probability is at most `topP`. Typical values are 0.8–0.95. Lower values make the output more focused, higher values increase diversity.
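A short sketch of a deterministic-leaning configuration on an agent task; the specific values are illustrative, not recommendations, and the `provider` is omitted for brevity:

```yaml
tasks:
  - id: agent
    type: io.kestra.plugin.ai.agent.AIAgent
    prompt: Summarize today's deployment notes.
    configuration:
      temperature: 0.2   # favor focused, repeatable answers
      topP: 0.9
      seed: 42           # same seed + identical settings -> repeatable outputs
      logRequests: true  # log prompts and configuration at INFO level for debugging
```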
### io.kestra.plugin.ai.domain.TokenUsage
- `inputTokenCount` (integer)
- `outputTokenCount` (integer)
- `totalTokenCount` (integer)
### io.kestra.plugin.ai.domain.AIOutput-AIResponse-ToolExecutionRequest
- `arguments` (object): Tool request arguments.
- `id` (string): Tool execution request identifier.
- `name` (string): Tool name.
### Azure OpenAI Model Provider
- `endpoint` (string, required): The Azure OpenAI endpoint, in the format `https://{resource}.openai.azure.com/`.
- `modelName` (string, required): Model name.
- `type` (object, required)
- `apiKey` (string): API key.
- `clientId` (string): Client ID.
- `clientSecret` (string): Client secret.
- `serviceVersion` (string): API version.
- `tenantId` (string): Tenant ID.
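A minimal sketch of the provider block, assuming API-key authentication; the resource name and model name are placeholders, and the fully qualified type name is assumed from the plugin's provider naming pattern:

```yaml
provider:
  type: io.kestra.plugin.ai.provider.AzureOpenAI
  endpoint: https://my-resource.openai.azure.com/   # placeholder Azure OpenAI resource
  modelName: gpt-4o-mini                            # placeholder model/deployment name
  apiKey: "{{ kv('AZURE_OPENAI_API_KEY') }}"
```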
### Google VertexAI Model Provider
- `endpoint` (string, required): Endpoint URL.
- `location` (string, required): Project location.
- `modelName` (string, required): Model name.
- `project` (string, required): Project ID.
- `type` (object, required)
### Google Gemini Model Provider
- `apiKey` (string, required): API key.
- `modelName` (string, required): Model name.
- `type` (object, required)
### Model Context Protocol (MCP) SSE client tool
- `sseUrl` (string, required): SSE URL of the MCP server.
- `type` (object, required)
- `headers` (object): Custom headers; useful, for example, for adding authentication tokens via the `Authorization` header.
- `logRequests` (boolean or string, default `false`): Log requests.
- `logResponses` (boolean or string, default `false`): Log responses.
- `timeout` (string, duration): Connection timeout duration.
### WebSearch content retriever for Tavily Search
- `apiKey` (string, required): API key.
- `type` (object, required)
- `maxResults` (integer or string, default `3`): Maximum number of results to return.
### Chat Memory backed by Redis
- `host` (string, required): Redis host; the hostname of your Redis server (e.g., `localhost` or `redis-server`).
- `type` (object, required)
- `drop` (string, default `NEVER`): When to drop the memory: `NEVER`, `BEFORE_TASKRUN`, or `AFTER_TASKRUN`. By default, the memory ID is the value of the `system.correlationId` label, meaning the same memory is shared by all tasks of the flow and its subflows. To remove the memory eagerly (before it expires), set `drop: AFTER_TASKRUN` to erase it after the task run, or `drop: BEFORE_TASKRUN` to erase it before the task run.
- `memoryId` (string, default `{{ labels.system.correlationId }}`): Memory ID. Defaults to the value of the `system.correlationId` label, so a memory is valid for the entire flow execution, including its subflows.
- `messages` (integer or string, default `10`): Maximum number of messages to keep in memory. When the memory is full, the oldest messages are removed in FIFO order; the last system message is always kept.
- `port` (integer or string, default `6379`): Redis port; the port of your Redis server.
- `ttl` (string, duration, default `PT1H`): Memory duration; defaults to 1 hour.
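A minimal sketch of a Redis-backed memory block. The host is a placeholder, and the fully qualified type name follows the pattern of the `KestraKVStore` example above but is an assumption:

```yaml
memory:
  type: io.kestra.plugin.ai.memory.Redis
  host: redis.internal        # placeholder Redis hostname
  port: 6379
  messages: 20                # keep the last 20 messages
  ttl: PT2H                   # expire the memory after 2 hours
  drop: AFTER_TASKRUN         # erase eagerly once the task run completes
```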
### Anthropic AI Model Provider
- `apiKey` (string, required): API key.
- `modelName` (string, required): Model name.
- `type` (object, required)
### WebSearch tool for Tavily Search
- `apiKey` (string, required): Tavily API key; you can obtain one from the Tavily website.
- `type` (object, required)
### In-memory Chat Memory that stores its data as Kestra KV pairs
- `type` (object, required)
- `drop` (string, default `NEVER`): When to drop the memory: `NEVER`, `BEFORE_TASKRUN`, or `AFTER_TASKRUN`. By default, the memory ID is the value of the `system.correlationId` label, meaning the same memory is shared by all tasks of the flow and its subflows. To remove the memory eagerly (before it expires), set `drop: AFTER_TASKRUN` to erase it after the task run, or `drop: BEFORE_TASKRUN` to erase it before the task run.
- `memoryId` (string, default `{{ labels.system.correlationId }}`): Memory ID. Defaults to the value of the `system.correlationId` label, so a memory is valid for the entire flow execution, including its subflows.
- `messages` (integer or string, default `10`): Maximum number of messages to keep in memory. When the memory is full, the oldest messages are removed in FIFO order; the last system message is always kept.
- `ttl` (string, duration, default `PT1H`): Memory duration; defaults to 1 hour.
### Amazon Bedrock Model Provider
- `accessKeyId` (string, required): AWS access key ID.
- `modelName` (string, required): Model name.
- `secretAccessKey` (string, required): AWS secret access key.
- `type` (object, required)
- `modelType` (string, default `COHERE`): Amazon Bedrock embedding model type; one of `COHERE` or `TITAN`.
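A minimal sketch of the provider block; the model ID is a placeholder, credentials are read from the KV store, and the fully qualified type name is assumed from the plugin's provider naming pattern:

```yaml
provider:
  type: io.kestra.plugin.ai.provider.AmazonBedrock
  modelName: anthropic.claude-3-5-sonnet-20240620-v1:0   # placeholder Bedrock model ID
  accessKeyId: "{{ kv('AWS_ACCESS_KEY_ID') }}"
  secretAccessKey: "{{ kv('AWS_SECRET_ACCESS_KEY') }}"
```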