Learning isn’t Free

AI Agents are often characterized as smart workers who learn and improve with practice. This can be true, but it isn’t automatic, and the “learning” doesn’t happen the way many people assume it does.

Let’s start with what makes an AI Agent:

🌍An understanding of the world (Large Language Model)

🎯A Goal

🛠️Knowledge / Tools

AI learning commonly refers to training Large Language Models, so it is easy to assume that’s where AI Agent learning happens. But LLM training is a long process, and once a model is in use by an AI Agent, it cannot be easily changed. So learning in the traditional sense of AI training (updating the model) doesn’t work for Agents.

That leaves us with the options of improving the goal, knowledge, or tools. All of these attributes can be tuned to improve an agent’s results. But that tuning has to be done manually, or by building a system that does it; it doesn’t just happen for every agent. The reality is that this style of tuning is essentially like any other process automation improvement.
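To make that concrete, here is a minimal sketch of what "agent learning" actually looks like in practice. Everything here is a hypothetical illustration (the `AgentConfig` and `tune` names are invented, not a real framework): the goal, knowledge, and tools are explicit data that a person or program edits, while the underlying model stays frozen.

```python
from dataclasses import dataclass, field

# Hypothetical illustration: the three tunable parts of an agent.
# The LLM itself is not represented here because it cannot be changed.

@dataclass
class AgentConfig:
    goal: str                                            # prompt defining the task
    knowledge: list[str] = field(default_factory=list)   # reference docs, examples
    tools: list[str] = field(default_factory=list)       # names of available tools

def tune(config: AgentConfig, feedback: str) -> AgentConfig:
    """'Learning' is explicit here: a person (or another program) edits the
    configuration in response to observed failures. The model is untouched."""
    return AgentConfig(
        goal=config.goal + "\nAdditional guidance: " + feedback,
        knowledge=config.knowledge,
        tools=config.tools,
    )

base = AgentConfig(goal="Summarize support tickets.")
improved = tune(base, "Always include the ticket ID in the summary.")
print(improved.goal)
```

The point of the sketch is that every improvement is a deliberate edit someone had to make, the same as refining any other automated process.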

It might seem like I am splitting semantic hairs, but the terms we use to talk about AI carry tremendous implicit meanings that shape how most people think about the technology. AI Agents, when built well, can do smart work, but that’s very different from being able to drop in an artificial entity that figures out how to do a complex task without any help.
