Changelog
New Features:
- Memory Generally Available: We have made improvements and adjustments to how Agentic user memory management works. This is now out of beta and generally available. See these examples and these docs for more info.
- OpenAI Tools: Added `OpenAITools` to enable text-to-speech and image generation through OpenAI’s APIs.
- Zep Tools: Added `ZepTools` and `AsyncZepTools` to manage memories for your Agent using `zep-cloud`.
Improvements:
- Azure AI Foundry Reasoning: Added support for reasoning models (e.g. DeepSeek-R1) via Azure AI Foundry.
- Include/Exclude Tools: Added `include_tools` and `exclude_tools` to all toolkits. This allows selective enabling/disabling of tools inside a toolkit, which is especially useful for larger toolkits.
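A minimal sketch of selective tool enabling, assuming the `DuckDuckGoTools` toolkit and its `duckduckgo_search`/`duckduckgo_news` function names; only `include_tools`/`exclude_tools` come from this release.

```python
from agno.agent import Agent
from agno.tools.duckduckgo import DuckDuckGoTools

# Only expose the news function to the model; everything else in the toolkit stays hidden.
agent = Agent(
    tools=[DuckDuckGoTools(include_tools=["duckduckgo_news"])],
    # Alternatively, keep everything except one function:
    # tools=[DuckDuckGoTools(exclude_tools=["duckduckgo_search"])],
)
agent.print_response("What is in the news today?")
```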
Bug Fixes:
- Gemini with Memory: Fixed issue with `deepcopy` when Gemini is used with `Memory`.
Breaking Changes:
- Memory: Agents will now by default use an improved `Memory` instead of the now-deprecated `AgentMemory`. Migration notes:
  - `agent.memory.messages` → `[run.messages for run in agent.memory.runs]` (or `agent.get_messages_for_session()`)
  - `create_user_memories` → `enable_user_memories`, which is now set on the Agent/Team directly.
  - `create_session_summary` → `enable_session_summaries`, which is now set on the Agent/Team directly.
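A hedged migration sketch based on the mappings above; the `Memory` import path is an assumption, while the flags and accessors (`enable_user_memories`, `enable_session_summaries`, `agent.memory.runs`, `agent.get_messages_for_session()`) are the ones named in this changelog.

```python
from agno.agent import Agent
from agno.memory.v2.memory import Memory  # assumed import path for the new Memory class

agent = Agent(
    memory=Memory(),
    enable_user_memories=True,       # replaces create_user_memories on AgentMemory
    enable_session_summaries=True,   # replaces create_session_summary on AgentMemory
)

# Old: agent.memory.messages
messages = [message for run in agent.memory.runs for message in run.messages]
# ...or use the convenience accessor:
messages = agent.get_messages_for_session()
```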
Improvements:
- Further Async Vector DB Support: Support added for:
- Reasoning on Agno Platform:
  - Added extensive support for reasoning on the Agno Platform. Go see your favourite reasoning agents in action!
  - Changes from the SDK:
    - Send proper events for the different types of reasoning and populate `reasoning_content` on `RunResponse`, for both `stream`/`non-stream` and `async`/`non-async` runs.
    - Unified JSON structure for all types of reasoning in `Reasoning` events.
- Google Caching Support: Added support for caching files and sending the cached content to Gemini.
Bug Fixes:
- Firecrawl Scrape: Fixed issues with non-serializable types during Firecrawl execution.
New Features:
- Web Browser Tool: Introduced a `webbrowser` tool for agents to interact with the web.
- Proxy Support: Added `proxy` parameter support to both URL and PDF tools for network customization.
Improvements:
- Session State: Added examples for managing session state in agents.
- AzureOpenAIEmbedder: Now respects parameters passed in the `client_params` argument for more flexible configuration.
- LiteLLM: Now uses built-in environment validation to simplify setup.
- Team Class: Added a `mode` attribute to team data serialization for enhanced team configuration.
- Insert/Upsert/Log Optimization: `insert`/`upsert`/`log_info` operations now trigger only when documents are present in the reader.
- Database Preference: Session state now prefers database-backed storage if available.
- Memory Management: Internal memory system updated for better session handling and resource efficiency.
- Module Exports: Init files that only import now explicitly export symbols using `__all__`.
Bug Fixes:
- DynamoDB Storage: Fixed an issue with storage handling in DynamoDB-based setups.
- DeepSeek: Fixed a bug with API key validation logic.
Improvements:
- Gemini File Upload: Enabled direct use of uploaded files with Gemini.
- Metrics Update: Added audio, reasoning and cached token counts to metrics where available on models.
- Reasoning Updates: We now natively support Ollama and AzureOpenAI reasoning models.
Bug Fixes:
- PPrint Util Async: Added `apprint_run_response` to support async runs.
- Mistral Reasoning: Fixed issues with using a Mistral model for chain-of-thought reasoning.
New Features:
- Redis Memory DB: Added Redis as a storage provider for `Memory`. See here.
Improvements:
- Memory Updates: Various performance improvements made and convenience functions added (see the sketch after this list):
  - `agent.get_session_summary()` → Use to get the previous session summary from the agent.
  - `agent.get_user_memories()` → Use to get the current user’s memories.
  - You can also add additional instructions to the `MemoryManager` or `SessionSummarizer`.
- Confluence Bypass SSL Verification: If required, you can now skip SSL verification for Confluence connections.
- More Flexibility On Team Prompts: Added `add_member_tools_to_system_message` to remove the member tool names from the system message given to the team leader, which allows the flexibility to make team transfer functions work in more cases.
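A short sketch of the memory convenience helpers above, assuming an agent already configured with the new `Memory`; the `memory` attribute on the returned items is an assumption.

```python
# Previous session summary and the current user's memories, via the new helpers.
summary = agent.get_session_summary()
for user_memory in agent.get_user_memories():
    print(user_memory.memory)  # assumed field holding the memory text
```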
Bug Fixes:
- LiteLLM Streaming Tool Calls: Fixed issues with tool call streaming in LiteLLM.
- E2B Casing Issue: Fixed issues with parsed Python code that would make some values lowercase.
- Team Member IDs: Fixed edge-cases with team member IDs causing teams to break.
New Features:
- Memory Revamp: Releasing a complete revamp of Agno Memory. This includes a new `Memory` class that supports adding, updating and deleting user memories, as well as doing semantic search with a model. This also adds additional abilities to the agent to manage memories on your behalf. See the docs here.
- User ID and Session ID on Run: You can now pass `user_id` and `session_id` on `agent.run()`. This ensures the agent is set up for the session belonging to the `session_id` and that only the memories of the current user are accessible to the agent. This allows you to build multi-user and multi-session applications with a single agent configuration (see the sketch after this list).
- Redis Storage: Support added for Redis as a session storage provider.
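A minimal sketch of multi-user, multi-session usage with a single agent configuration; only `user_id` and `session_id` on `agent.run()` come from this release, the rest of the setup is illustrative.

```python
from agno.agent import Agent

agent = Agent(enable_user_memories=True)

# Scope each run to a user and a session: only that user's memories are accessible.
agent.run("Remember that I prefer window seats.", user_id="ava@example.com", session_id="ava_session_1")
agent.run("What seat do I prefer?", user_id="ava@example.com", session_id="ava_session_1")

# A different user and session, served by the same agent configuration.
agent.run("What seat do I prefer?", user_id="ben@example.com", session_id="ben_session_1")
```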
Improvements:
- Teams Improvements: Multiple improvements to teams to make task forwarding to member agents more reliable and to make the team leader more conversational. Also added various examples of reasoning with teams.
- Knowledge on Teams: Added `knowledge` to `Team` to better align with the functionality on `Agent`. This comes with `retriever` to set a custom retriever and `search_knowledge` to enable Agentic RAG.
Bug Fixes:
- Gemini Grounding Chunks: Fixed error when Gemini Grounding was used in streaming.
- OpenAI Defaults in Structured Outputs: OpenAI does not allow defaults in structured outputs. To make our structured outputs as compatible as possible without adverse effects, we made updates to `OpenAIResponses` and `OpenAIChat`.
Improvements:
- Improved GitHub Tools: Added many more capabilities to `GithubTools`.
- Windows Scripts Support: Converted all the utility scripts to be Windows compatible.
- MongoDB VectorDB Async Support: MongoDB can now be used in async knowledge bases.
Bug Fixes:
- Gemini Tool Formatting: Fixed various cases where functions would not be parsed correctly when used with Gemini.
- ChromaDB Version Compatibility: Fix to ensure Agno remains compatible with newer versions of ChromaDB.
- Team-Member Interactions: Fixed an issue where the team would halt if members responded with empty content.
- Claude Empty Response: Fixed a case where a response containing tool calls but no content resulted in an error from the Anthropic API.
New Features:
- Timezone Identifier: Added a new `timezone_identifier` parameter in the Agent class to include the timezone alongside the current date in the instructions.
- Google Cloud JSON Storage: Added support for JSON-based session storage on Google Cloud.
- Reasoning Tools: Added `ReasoningTools`, an advanced reasoning scratchpad for agents.
Improvements:
New Features:
- Knowledge Tools: Added `KnowledgeTools` for thinking, searching and analysing documents in a knowledge base.
Improvements:
- Simpler MCP Interface: Added `MultiMCPTools` to support multiple server connections and simplified the interface to only allow `command` to be passed. See these examples of how to use it.
New Features:
- Toolkit Instructions: Extended `Toolkit` with `instructions` and `add_instructions` to enable you to specify additional instructions related to how a tool should be used. These instructions are then added to the model’s “system message” if `add_instructions=True`.
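A hedged sketch of a custom toolkit using the new parameters; the toolkit body and `register` usage follow the usual Agno `Toolkit` pattern, and the inventory example is purely illustrative.

```python
from agno.tools import Toolkit


class InventoryTools(Toolkit):
    def __init__(self):
        super().__init__(
            name="inventory_tools",
            instructions="Always check stock levels before promising a delivery date.",
            add_instructions=True,  # appends the instructions to the model's system message
        )
        self.register(self.check_stock)

    def check_stock(self, sku: str) -> str:
        """Return the stock level for a SKU (dummy implementation)."""
        return f"SKU {sku}: 42 units in stock"
```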
Bug Fixes:
- Teams transfer functions: Some tool definitions of teams failed for certain models. This has been fixed.
New Features:
- Gemini Image Generation: Added support for generating images straight from Gemini using the `gemini-2.0-flash-exp-image-generation` model.
Improvements:
- Vertex AI: Improved use of Vertex AI with the `Gemini` model class to closely follow the official Google specification.
- Function Result Caching Improvement: We now have result caching on all Agno Toolkits and any custom functions using the `@tool` decorator. See the docs here.
- Async Vector DB and Knowledge Base Improvements: Various knowledge bases, readers and vector DBs now have `async`/`await` support, so it will be used in `agent.arun` and `agent.aprint_response`. This also means that `knowledge_base.aload()` is possible, which should greatly increase loading speed in some cases (see the sketch after this list). The following have been converted:
  - Vector DBs:
  - Knowledge Bases:
    - `JSONKnowledgeBase` → Here is a cookbook to illustrate how to use it.
    - `PDFKnowledgeBase` → Here is a cookbook to illustrate how to use it.
    - `PDFUrlKnowledgeBase` → Here is a cookbook to illustrate how to use it.
    - `CSVKnowledgeBase` → Here is a cookbook to illustrate how to use it.
    - `CSVUrlKnowledgeBase` → Here is a cookbook to illustrate how to use it.
    - `ArxivKnowledgeBase` → Here is a cookbook to illustrate how to use it.
    - `WebsiteKnowledgeBase` → Here is a cookbook to illustrate how to use it.
    - `YoutubeKnowledgeBase` → Here is a cookbook to illustrate how to use it.
    - `TextKnowledgeBase` → Here is a cookbook to illustrate how to use it.
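A hedged sketch of async loading with one of the converted knowledge bases; the `PDFUrlKnowledgeBase`/`LanceDb` constructor arguments and import paths are illustrative, while `aload()` and `agent.aprint_response` are the async entry points described above.

```python
import asyncio

from agno.agent import Agent
from agno.knowledge.pdf_url import PDFUrlKnowledgeBase
from agno.vectordb.lancedb import LanceDb

knowledge_base = PDFUrlKnowledgeBase(
    urls=["https://example.com/some-document.pdf"],
    vector_db=LanceDb(table_name="docs", uri="tmp/lancedb"),
)
agent = Agent(knowledge=knowledge_base, search_knowledge=True)


async def main():
    await knowledge_base.aload(recreate=False)  # async load instead of knowledge_base.load()
    await agent.aprint_response("What does the document say?")


asyncio.run(main())
```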
Bug Fixes:
- Recursive Chunking Infinite Loop: Fixed an issue with `RecursiveChunking` getting stuck in an infinite loop for large documents.
Bug Fixes:
- Gemini Function call result fix: Fixed a bug where function call results failed formatting, and added proper role mapping.
- Reasoning fix: Fixed an issue with default reasoning and improved logging for reasoning models.
New Features:
- E2B Tools: Added E2B Tools to run code in E2B Sandbox
Improvements:
- Teams Tools: Added `tools` and `tool_call_limit` to `Team`. This means the team leader itself can also have tools provided by the user, so it can act as an agent.
- Teams Instructions: Improved instructions around attached images, audio, videos, and files. This should increase success when attaching artifacts to prompts meant for member agents.
- MCP Include/Exclude Tools: Expanded `MCPTools` to allow you to specify tools to specifically include or exclude from all the available tools on an MCP server. This is very useful for limiting which tools the model has access to.
- Tool Decorator Async Support: The `@tool()` decorator now supports async functions, including async pre- and post-hooks.
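A minimal sketch of an async function registered through the `@tool()` decorator; the tool itself is illustrative.

```python
import asyncio
from datetime import datetime, timezone

from agno.agent import Agent
from agno.tools import tool


@tool()
async def current_utc_time(delay_seconds: int = 0) -> str:
    """Optionally wait, then return the current UTC time as an ISO string."""
    await asyncio.sleep(delay_seconds)
    return datetime.now(timezone.utc).isoformat()


agent = Agent(tools=[current_utc_time])
asyncio.run(agent.aprint_response("What is the current UTC time?"))
```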
Bug Fixes:
- Default Chain-of-Thought Reasoning: Fixed issue where reasoning would not default to manual CoT if the provided reasoning model was not capable of reasoning.
- Teams non-markdown responses: Fixed issue with non-markdown responses in teams.
- Ollama tool choice: Removed `tool_choice` from Ollama usage as it is not supported.
- Workflow session retrieval from storage: Fixed `entity_id` mappings.
Improvements:
- Tool Choice on Teams: Made `tool_choice` configurable.
Bug Fixes:
- Sessions not created: Made the issue where sessions would not be created in existing tables without a migration more visible. Please read the docs on storage schema migrations.
- Todoist fixes: Fixed `update_task` on `TodoistTools`.
Improvements:
- Teams Error Handling: Improved the flow in cases where the model gets it wrong when forwarding tasks to members.
Bug Fixes:
- Teams Memory: Fixed issues related to memory not persisting correctly across multiple sessions.
New Features:
- Financial Datasets Tools: Added tools for https://www.financialdatasets.ai/.
- Docker Tools: Added tools to manage local Docker environments.
Improvements:
- Teams Improvements: Reasoning enabled for the team.
- MCP Simplification: Simplified creation of `MCPTools` for connections to external MCP servers. See the updated docs.
Bug Fixes:
- Azure AI Factory: Fix for a broken import in Azure AI Factory.
Improvements:
- Tool Result Caching: Added caching of selected searchers and scrapers. This is only intended for testing and should greatly improve iteration speed, prevent rate limits and reduce costs (where applicable) when testing agents. Applies to:
- DuckDuckGoTools
- ExaTools
- FirecrawlTools
- GoogleSearchTools
- HackernewsTools
- NewspaperTools
- Newspaper4kTools
- WebsiteTools
- YFinanceTools
- Show tool calls: Improved how tool calls are displayed when `print_response` and `aprint_response` are used. They are now displayed in a panel separate from the response panel. This also works in conjunction with `response_model`.
New Features:
- Teams Revamp: Announcing a new iteration of Agent teams with the following features:
  - Create a `Team` in one of 3 modes: “Collaborate”, “Coordinate” or “Route” (see the sketch after this list).
  - Various things that were broken with the previous teams implementation have been improved, including returning structured output from member agents (for “route” mode), passing images, audio and video to member agents, etc.
  - Added features like “agentic shared context” between team members and sharing of individual team member responses with other team members.
  - This also comes with a revamp of Agent and Team debug logs. Use `debug_mode=True` and `team.print_response(...)` to see it in action.
  - Find the docs here. Please look at the example implementations here.
  - This is the first release. Please give us feedback. Updates and improvements will follow.
  - Support for `Agent(team=[])` is still there, but deprecated (see below).
- LiteLLM: Added LiteLLM support, both as a native implementation and via the `OpenAILike` interface.
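A hedged sketch of the new `Team` in “coordinate” mode; the member agents and model IDs are illustrative, while `mode`, `members` and `debug_mode` are the features described above.

```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.team import Team

researcher = Agent(name="Researcher", role="Find relevant information", model=OpenAIChat(id="gpt-4o"))
writer = Agent(name="Writer", role="Write the final answer", model=OpenAIChat(id="gpt-4o"))

team = Team(
    mode="coordinate",              # or "collaborate" / "route"
    members=[researcher, writer],
    model=OpenAIChat(id="gpt-4o"),  # the team-leader model
    debug_mode=True,                # revamped Agent and Team debug logs
)

team.print_response("Write a short briefing on battery storage trends.")
```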
Improvements:
- Change structured_output to response_format: Added `use_json_mode: bool = False` as a parameter of `Agent` and `Team`, which, in conjunction with `response_model=YourModel`, indicates whether the agent/team model should be forced to respond in JSON instead of (now the default) structured output. Previous behaviour defaulted to “JSON mode”, but since most models now support native structured output, we now default to native structured output. It is also much simpler to work with response models, since only `response_model` needs to be set; it is no longer necessary to set `structured_output=True` to specifically get structured output from the model (see the sketch after this list).
- Website Tools + Combined Knowledgebase: Added functionality for `WebsiteTools` to also update combined knowledgebases.
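A minimal sketch of the new defaults, assuming a Pydantic response model; only `response_model` and `use_json_mode` come from this change.

```python
from pydantic import BaseModel

from agno.agent import Agent


class MovieScript(BaseModel):
    title: str
    genre: str
    logline: str


# Native structured output is now the default: setting response_model is enough.
agent = Agent(response_model=MovieScript)

# Force JSON mode instead (the previous default behaviour):
json_agent = Agent(response_model=MovieScript, use_json_mode=True)
```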
Bug Fixes:
- AgentMemory: Fixed `get_message_pairs()` fetching incorrect messages.
- UnionType in Functions: Fixed an issue with function parsing where pipe-style unions were used in function parameters.
- Gemini Array Function Parsing: Fixed an issue preventing Gemini function parsing from working in some MCP cases.
Deprecations:
- Structured Output: `Agent.structured_output` has been replaced by `Agent.use_json_mode`. This will be removed in a future major version release.
- Agent Team: `Agent.team` is deprecated with the release of our new Teams implementation. This will be removed in a future major version release.
Improvements:
- OpenAIResponses File Search: Added support for the built-in “File Search” function from OpenAI. This automatically uploads `File` objects attached to the agent prompt.
- OpenAIResponses web citations: Added support to extract URL citations after usage of the built-in “Web Search” tool from OpenAI.
- Anthropic document citations: Added support to extract document citations from Claude responses when `File` objects are attached to agent prompts.
- Cohere Command A: Support and examples added for Cohere’s new flagship model.
Bug Fixes:
- Ollama tools: Fixed issues with tools where parameters are not typed.
- Anthropic Structured Output: Fixed issue affecting Anthropic and Anthropic via Azure where structured output wouldn’t work in some cases. This should make the experience of using structured output for models that don’t natively support it better overall. Also now works with enums as types in the Pydantic model.
- Google Maps Places: Google’s Places API support has changed; this update brings us up to date so we can continue to support “search places”.
New Features:
- Citations: Improved support for capturing, displaying, and storing citations from models, with integration for Gemini and Perplexity.
Improvements:
- CalComTools: Improved tool initialization.
Bug Fixes:
- MemoryManager: Added a limit parameter, fixing a KeyError in MongoMemoryDb.
New Features:
- OpenAI Responses: Added a new model implementation that supports OpenAI’s Responses API. This includes support for their “websearch” built-in tool.
- Openweather API Tool: Added tool to get real-time weather information.
Improvements:
- Storage Refactor: Merged agent and workflow storage classes to align storage better for agents, teams and workflows. This change is backwards compatible and should not result in any disruptions.
New Features:
- File Prompts: Introduced a new `File` type that can be added to prompts and will be sent to the model providers. Only Gemini and Anthropic Claude are supported for now (see the sketch after this list).
- LMStudio: Added support for LMStudio as a model provider. See the docs.
- AgentQL Tools: Added tools to support AgentQL for connecting agents to websites for scraping, etc. See the docs.
- Browserbase Tool: Added Browserbase tool.
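A hedged sketch of attaching a `File` to a prompt; the `agno.media` import path, the `File(url=...)` field and the `files=` parameter are assumptions based on the feature described above.

```python
from agno.agent import Agent
from agno.media import File  # assumed location of the new File type
from agno.models.google import Gemini

agent = Agent(model=Gemini(id="gemini-2.0-flash"))
agent.print_response(
    "Summarize the attached report.",
    files=[File(url="https://example.com/report.pdf")],  # assumed constructor field
)
```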
Improvements:
- Cohere Vision: Added support for image understanding with Cohere models. See this cookbook to try it out.
- Embedder defaults logging: Improved logging when using the default OpenAI embedder.
Bug Fixes:
- Ollama Embedder: Fix for getting embeddings from Ollama across different versions.
New Features:
- IBM Watson X: Added support for IBM Watson X as a model provider. Find the docs here.
- DeepInfra: Added support for DeepInfra. Find the docs here.
- Support for MCP: Introducing MCPTools along with examples for using MCP with Agno agents.
Bug Fixes:
- Mistral with reasoning: Fixed cases where Mistral would fail when reasoning models from other providers generated reasoning content.
New Features:
- Video File Upload on Playground: You can now upload video files and have a model interpret the video. This feature is supported only by select `Gemini` models with video processing capabilities.
Bug Fixes:
- Huggingface: Fixed multiple issues with the `Huggingface` model integration. Tool calling is now fully supported in non-streaming cases.
- Gemini: Resolved an issue with manually setting the assistant role and tool call result metrics.
- OllamaEmbedder: Fixed issue where no embeddings were returned.
New Features:
- Audio File Upload on Playground: You can now upload audio files and have a model interpret the audio, do sentiment analysis, provide an audio transcription, etc.
Bug Fixes:
- Claude Thinking Streaming: Fix Claude thinking when streaming is active, as well as for async runs.
New Features:
- Claude 3.7 Support: Added support for the latest Claude 3.7 Sonnet model
Bug Fixes:
- Claude Tool Use: Fixed an issue where tools and content could not be used in the same block when interacting with Claude models.
New Features:
- Audio Responses: Agents can now deliver audio responses (both with streaming and non-streaming); see the sketch after this list.
  - The audio is in `agent.run_response.response_audio`.
  - This only works with `OpenAIChat` with the `gpt-4o-audio-preview` model. See their docs for more on how it works.
  - See the audio_conversation_agent cookbook to test it out on the Agent Playground.
- Image understanding support for Together.ai and xAI: You can now give images to agents using models from xAI and Together.ai.
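A hedged sketch of audio responses; `response_audio` is the attribute named above, while the `modalities` and `audio` parameters on `OpenAIChat` are assumptions about how the underlying OpenAI audio API is configured.

```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat

agent = Agent(
    model=OpenAIChat(
        id="gpt-4o-audio-preview",
        modalities=["text", "audio"],               # assumed parameter names
        audio={"voice": "alloy", "format": "wav"},  # assumed parameter names
    ),
)

agent.run("Tell me a short joke.")
audio_output = agent.run_response.response_audio  # audio returned by the model
```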
Improvements:
- Automated Tests: Added integration tests for all models. Most of these will be run on each pull request, with a suite of integration tests run before a new release is published.
- Grounding and Search with Gemini: Grounding and Search can be used to improve the accuracy and recency of responses from the Gemini models.
Bug Fixes:
- Structured output updates: Fixed various cases where native structured output was not used on models.
- Ollama tool parsing: Fixed cases for Ollama with tools with optional parameters.
- Gemini Memory Summariser: Fixed cases where Gemini models were used as the memory summariser.
- Gemini auto tool calling: Enabled automatic tool calling when tools are provided, aligning behavior with other models.
- FixedSizeChunking issue with overlap: Fixed issue where chunking would fail if overlap was set.
- Claude tools with multiple types: Fixed an issue where Claude tools would break when handling a union of types in parameters.
- JSON response parsing: Fixed cases where JSON model responses returned quoted strings within dictionary values.
Improvements:
- Gmail Tools: Added `get_emails_by_thread` and `send_email_reply` methods to `GmailTools`.
Bug Fixes:
- Gemini List Parameters: Fixed an issue with functions using list-type parameters in Gemini.
- Gemini Safety Parameters: Fixed an issue with passing safety parameters in Gemini.
- ChromaDB Multiple Docs: Fixed an issue with loading multiple documents into ChromaDB.
- Agentic Chunking: Fixed an issue where OpenAI was required for chunking even when a model was provided.
Bug Fixes:
- Gemini Tool-Call History: Fixed an issue where Gemini rejected tool-calls from historic messages.
Improvements:
- Reasoning with o3 Models: Reasoning support added for OpenAI’s o3 models.
- Gemini embedder update: Updated the `GeminiEmbedder` to use Google’s new genai SDK.
Bug Fixes:
- Singlestore Fix: Fixed an issue where querying SingleStore caused the embeddings column to return in binary format.
- MongoDB Vectorstore Fix: Fixed multiple issues in MongoDB, including duplicate creation and deletion of collections during initialization. All known issues have been resolved.
- LanceDB Fix: Fixed various errors in LanceDB and added `on_bad_vectors` as a parameter.
Improvements:
- File / Image Uploads on Agent UI: Agent UI now supports file and image uploads with prompts.
  - Supported file formats: `.pdf`, `.csv`, `.txt`, `.docx`, `.json`
  - Supported image formats: `.png`, `.jpeg`, `.jpg`, `.webp`
- Firecrawl Custom API URL: Allowed users to set a custom API URL for Firecrawl.
- Updated `ModelsLabTools` Toolkit Constructor: The constructor in `/libs/agno/tools/models_labs.py` has been updated to accommodate audio generation API calls. This is a breaking change, as the parameters for the `ModelsLabTools` class have changed. The `url` and `fetch_url` parameters have been removed, and API URLs are now decided based on the `file_type` provided by the user.
Bug Fixes:
- Gemini functions with no parameters: Addressed an issue where Gemini would reject function declarations with empty properties.
- Fix exponential memory growth: Fixed certain cases where the agent memory would grow exponentially.
- Chroma DB: Fixed various issues related to metadata on insertion and search.
- Gemini Structured Output: Fixed a bug where Gemini would not generate structured output correctly.
- MistralEmbedder: Fixed issue with instantiation of `MistralEmbedder`.
- Reasoning: Fixed an issue with setting reasoning models.
- Audio Response: Fixed an issue with streaming audio artefacts to the playground.
Model Improvements:
- Models Refactor: A complete overhaul of our models implementation to improve performance and achieve better feature parity across models.
  - This improves metrics and visibility on the Agent UI as well.
  - All models now support async/await, with the exception of `AwsBedrock`.
- Azure AI Foundry: We now support all models on Azure AI Foundry. Learn more here.
- AWS Bedrock Support: Our redone AWS Bedrock implementation now supports all Bedrock models. It is important to note which models support which features.
- Gemini via Google SDK: With the 1.0.0 release of Google’s genai SDK, we could improve our previous implementation of `Gemini`. This will allow for easier integration of Gemini features in future.
- Model Failure Retries: We added better error handling of third-party errors (e.g. rate-limit errors), and the agent will now optionally retry with exponential backoff if `exponential_backoff` is set to `True`. See the docs and the sketch below.
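A minimal sketch of the retry behaviour; `exponential_backoff` is the flag named above, while `retries` is an assumed companion parameter controlling the number of attempts.

```python
from agno.agent import Agent

agent = Agent(
    exponential_backoff=True,  # retry transient third-party errors (e.g. rate limits) with backoff
    retries=3,                 # assumed parameter: how many times to retry before giving up
)
agent.print_response("Hello")
```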
Other Improvements:
- Exa Answers Support: Added support for the Exa answers capability.
- GoogleSearchTools: Updated the name of `GoogleSearch` to `GoogleSearchTools` for consistency.
Deprecation:
- Our `Gemini` implementation directly on the Vertex API has been replaced by the Google SDK implementation of `Gemini`.
- Our `Gemini` implementation via the OpenAI client has been replaced by the Google SDK implementation of `Gemini`.
- Our `OllamaHermes` has been removed as the implementation of `Ollama` was improved.
Bug Fixes:
- Team Member Names: Fixed a bug where teams whose members have non-alphanumeric characters in their names would cause exceptions.
New Features:
- Perplexity Model: We now support Perplexity as a model provider.
- Todoist Toolkit: Added a toolkit for managing tasks on Todoist.
- JSON Reader: Added a JSON file reader for use in knowledge bases.
Improvements:
- LanceDb: Implemented the `name_exists` function for LanceDb.
Bug Fixes:
- Storage growth bug: Fixed a bug with duplication of `run_messages.messages` for every run in storage.
New Features:
- Google Sheets Toolkit: Added a basic toolkit for reading, creating and updating Google Sheets.
- Weaviate Vector Store: Added support for Weaviate as a vector store.
Improvements:
- Mistral Async: Mistral now supports async execution via `agent.arun()` and `agent.aprint_response()`.
- Cohere Async: Cohere now supports async execution via `agent.arun()` and `agent.aprint_response()`.
Bug Fixes:
- Retriever as knowledge source: Added a small fix and examples for using the custom `retriever` parameter with an agent.
New Features:
- Google Maps Toolkit: Added a rich toolkit for Google Maps that includes business discovery, directions, navigation, geocode locations, nearby places, etc.
- URL reader and knowledge base: Added reader and knowledge base that can process any URL and store the text contents in the document store.
Bug Fixes:
- Zoom tools fix: Zoom tools updated to include the auth step and other misc fixes.
- Github search_repositories pagination: Pagination did not work correctly and this was fixed.
New Features:
- Gmail Tools: Added tools for Gmail, including mail search, sending mail, etc.
Improvements:
- Exa Toolkit Upgrade: Added `find_similar` to `ExaTools`.
- Claude Async: Claude models can now be used with `await agent.aprint_response()` and `await agent.arun()`.
- Mistral Vision: Mistral vision models are now supported. Various examples were added to illustrate usage.
Bug Fixes:
- Claude Tool Invocation: Fixed issue where Claude was not working with tools that have no parameters.
Improvements:
- Model Client Caching: Made all models cache the client instantiation, improving Agno agent instantiation time.
- XTools: Renamed TwitterTools to XTools and updated capabilities to be compatible with Twitter API v2.
Bug Fixes:
- Agent Dataclass Compatibility: Removed slots=True from the agent dataclass decorator, which was not compatible with Python <3.10.
- AzureOpenAIEmbedder: Fixed issue where AzureOpenAIEmbedder was not correctly made a dataclass.
This is a major refactor to be coupled with the launch of Agno.
Interface Changes:
- Class Renaming: Renamed certain classes. For example, `phi.model.x` is now `agno.models.x`. See Changes.
- Multi-modal interface updates: We have improved the overall multimodal interface to be more intuitive. See Changes
Improvements:
- Dataclasses: Changed various instances of Pydantic models to dataclasses to improve the speed.
Removals:
- Removed all references to Assistant.
- Removed all references to llm.
- Removed the PhiTools tool.
- Removed the PythonAgent and DuckDbAgent (this will be brought back in future with more specific agents).
Bug Fixes:
- Semantic Chunking: Fixed semantic chunking by replacing similarity_threshold param with threshold param.
New Features:
- Evals for Agents: Introducing Evals to measure the performance, accuracy, and reliability of your Agents.