Conversation to Action¶
One of the most appealing things about modern Generative AI is of course the promise of agentic AI: the potential to have fully autonomous AI that goes beyond words to enact actions in the real world that follow our direction. Watching an agentic AI analyze its environment (whether a local or cloud computing environment) and perform several tasks to reach its goal can feel thrilling and satisfying. I can’t match that experience in this humble Jupyter notebook, but I would like to discuss and demonstrate the fundamentals of implementing agentic AI. All computation can be boiled down to the execution of certain functions with certain parameters, as seen in something like the simply-typed lambda calculus, for example.
Similarly, we can reduce agentic AI to the basic model of having a machine recommend:
given a certain environment state,
and also a certain goal,
which requires a sequence of actions to be realized,
what particular action needs to happen next:
what function f out of a set of possible functions best represents this action,
and what sequence of arguments **kwargs to pass to that function, if it requires any.
We basically encode the above steps as the body of the agentic loop, which continues until the goal has been achieved and we can exit the loop.
The agentic loop is in its simplest form a while-break pattern as we will see below.
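In code, that while-break skeleton can be sketched as follows (propose_action and goal_reached are hypothetical placeholders standing in for the model call and the goal test):

```python
def run_agentic_loop(goal, state, propose_action, goal_reached, max_steps=10):
    """Minimal agentic loop: ask for the next action, apply it, and stop
    when the goal is met or the step budget runs out."""
    for _ in range(max_steps):
        if goal_reached(state, goal):
            break  # goal achieved: exit the loop
        # The "model" recommends a function f and its **kwargs
        fn, kwargs = propose_action(state, goal)
        state = fn(state, **kwargs)  # execute the chosen action
    return state
```

Everything that follows in this notebook is an elaboration of this skeleton, with an LLM playing the role of propose_action.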
Structured Function Calling¶
At its heart, tool calling is about giving language models a structured way to say “I need to use this specific function with these specific parameters.” Instead of just generating text that describes what should happen, the model generates structured data that your code can execute.
Let’s see this in action with Ollama and the Qwen3 model:
import ollama
import json
# Define a simple tool: get the current weather
tools = [{
'type': 'function',
'function': {
'name': 'get_weather',
'description': 'Get the current weather for a location',
'parameters': {
'type': 'object',
'properties': {
'location': {
'type': 'string',
'description': 'City name, e.g. San Francisco'
},
'unit': {
'type': 'string',
'enum': ['celsius', 'fahrenheit'],
'description': 'Temperature unit'
}
},
'required': ['location']
}
}
}]
response = ollama.chat(
model='qwen3',
messages=[{'role': 'user', 'content': 'What is the weather in Paris?'}],
tools=tools
)
print(response['message']['tool_calls'])
[ToolCall(function=Function(name='get_weather', arguments={'location': 'Paris', 'unit': 'celsius'}))]
What’s happening here? We’ve given the model a schema—a formal description of what the get_weather function expects.
When the model sees “What is the weather in Paris?”, it recognizes it needs to call a tool and outputs structured JSON rather than freeform text.
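Executing the returned call is then just a dictionary lookup plus keyword-argument unpacking. A minimal sketch, with a stubbed get_weather standing in for a real weather API:

```python
def get_weather(location, unit='celsius'):
    # Stub for illustration; a real tool would query a weather service
    return {'location': location, 'temperature': 22, 'unit': unit}

available_functions = {'get_weather': get_weather}

# The shape of one entry in response['message']['tool_calls']
call = {'function': {'name': 'get_weather',
                     'arguments': {'location': 'Paris', 'unit': 'celsius'}}}

fn = available_functions[call['function']['name']]
result = fn(**call['function']['arguments'])
# result == {'location': 'Paris', 'temperature': 22, 'unit': 'celsius'}
```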
Anatomy of a Tool Definition¶
Tool definitions follow a specific structure that tells the model everything it needs to know:
weather_tool = {
'type': 'function', # Currently, 'function' is the only type
'function': {
'name': 'get_weather', # Unique identifier
'description': 'Get weather for a location', # Helps model decide when to use it
'parameters': { # JSON Schema for the parameters
'type': 'object',
'properties': {
'location': {'type': 'string', 'description': 'City name'},
'unit': {'type': 'string', 'enum': ['celsius', 'fahrenheit']}
},
'required': ['location']
}
}
}
The description field is crucial—it’s how the model decides when to use the tool. Good descriptions are clear, specific, and include examples when helpful.
It’s also possible to specify required parameters, like location in this case.
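Since the parameters block is plain JSON Schema, we can run a lightweight sanity check on model-supplied arguments before executing anything. Here is a minimal sketch covering only required keys and enum values (a full validator such as the third-party jsonschema package covers much more):

```python
params = {
    'type': 'object',
    'properties': {
        'location': {'type': 'string', 'description': 'City name'},
        'unit': {'type': 'string', 'enum': ['celsius', 'fahrenheit']}
    },
    'required': ['location']
}

def check_args(schema, args):
    """Return an error message for bad arguments, or None if they pass."""
    for key in schema.get('required', []):
        if key not in args:
            return f'missing required argument: {key}'
    for key, value in args.items():
        spec = schema['properties'].get(key, {})
        if 'enum' in spec and value not in spec['enum']:
            return f'invalid value for {key}: {value!r}'
    return None

check_args(params, {'location': 'Paris'})                    # None: valid
check_args(params, {'location': 'Paris', 'unit': 'kelvin'})  # enum violation
```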
Building Your First Agentic Loop¶
An agent isn’t just one tool call—it’s a conversation between the model and your tools. Here’s the basic loop:
User sends a message
Model decides if it needs a tool
If yes, model generates tool call(s)
You execute the tool(s)
You send results back to model
Model responds to user (or calls more tools!)
Let’s implement this:
import ollama
from ollama import chat, ChatResponse, Client
SERVER_HOST = 'http://ollama.cs.wallawalla.edu:11434'
client = Client(host=SERVER_HOST)
def get_weather(location, unit='celsius'):
"""Simulated weather API"""
return {
'location': location,
'temperature': 22 if unit == 'celsius' else 72,
'conditions': 'sunny',
'unit': unit
}
def run_agent(user_message):
messages = [{'role': 'user', 'content': user_message}]
response = client.chat(
model='qwen3:4b',
messages=messages,
tools=[weather_tool]
)
# Check if model wants to call a tool
if response['message'].get('tool_calls'):
# Add model's response to messages
messages.append(response['message'])
# Execute each tool call
for tool in response['message']['tool_calls']:
if tool['function']['name'] == 'get_weather':
args = tool['function']['arguments']
result = get_weather(**args)
# Add tool result to messages
messages.append({
'role': 'tool',
'content': json.dumps(result),
})
# Get final response with tool results
final_response = client.chat(
model='qwen3:4b',
messages=messages
)
return final_response['message']['content']
return response['message']['content']
print(run_agent("What's the weather like in Tokyo?"))
Okay, let me process this step by step. The user asked about the weather in Tokyo. I called the get_weather function with Tokyo as the location. The response came back with temperature 22°C, sunny conditions, and unit is celsius.
First, I need to present this information clearly. The user probably wants a straightforward answer without technical jargon. Since the unit is already specified as celsius, I don't need to convert or mention it unless the user asked for Fahrenheit.
The temperature is 22 degrees, which is a mild day. The conditions are sunny, so I should mention that. I'll structure the response to start with the current weather, then the temperature, and the conditions. Keep it concise.
Wait, the user's original question was "What's the weather like in Tokyo?" So the answer should be something like: "The weather in Tokyo is currently sunny with a temperature of 22°C."
Check if there's any need for additional info. The response from the tool has all the necessary data. No need to add extra details unless specified. Make sure the units are clear, but since it's in celsius, and the user didn't specify, it's okay.
Double-check the numbers: 22°C is correct. Conditions are sunny. So the answer is straightforward. No errors here.
I should write the response in natural language, no markdown, just plain text. Keep it friendly and helpful.
</think>
The weather in Tokyo is currently sunny with a temperature of 22°C.
Note that the raw output above includes the model’s chain-of-thought reasoning, terminated by the </think> tag, before the final answer. Notice the message flow: user message → model response with tool call → tool result → final model response. This is the fundamental pattern of agentic AI.
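Concretely, after one full round trip the messages list has this shape (contents abridged and hand-written here for illustration):

```python
messages = [
    {'role': 'user', 'content': "What's the weather like in Tokyo?"},
    # The model's turn: no prose, just a structured tool call
    {'role': 'assistant', 'tool_calls': [
        {'function': {'name': 'get_weather',
                      'arguments': {'location': 'Tokyo'}}}]},
    # Our turn: the executed tool's result, serialized as a string
    {'role': 'tool', 'content': '{"location": "Tokyo", "temperature": 22}'},
    # The model's final, human-readable answer
    {'role': 'assistant', 'content': 'The weather in Tokyo is 22°C and sunny.'},
]

print([m['role'] for m in messages])
# -> ['user', 'assistant', 'tool', 'assistant']
```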
Multiple Tools: Expanding Capabilities¶
Real agents have access to multiple tools. Let’s create a more interesting agent:
tools = [
{
'type': 'function',
'function': {
'name': 'calculate',
'description': 'Perform mathematical calculations',
'parameters': {
'type': 'object',
'properties': {
'expression': {
'type': 'string',
'description': 'Math expression like "2 + 2" or "sqrt(16)"'
}
},
'required': ['expression']
}
}
},
{
'type': 'function',
'function': {
'name': 'search_database',
'description': 'Search a product database',
'parameters': {
'type': 'object',
'properties': {
'query': {'type': 'string', 'description': 'Search query'},
'max_results': {'type': 'integer', 'description': 'Max results'}
},
'required': ['query']
}
}
}
]
def calculate(expression):
    """Calculator using eval with restricted globals"""
    try:
        return {'result': eval(expression, {'__builtins__': {}},
                               {'sqrt': __import__('math').sqrt})}
    except Exception:
        return {'error': 'Invalid expression'}
def search_database(query, max_results=5):
"""Simulated database"""
products = {
'laptop': {'name': 'UltraBook Pro', 'price': 1299},
'phone': {'name': 'SmartPhone X', 'price': 899}
}
    return [v for k, v in products.items() if query.lower() in k]
The model will now automatically choose which tool to use based on the user’s request.
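A word of caution on the calculator above: eval with an emptied __builtins__ is not actually safe, since attribute chains like ().__class__ can still reach dangerous objects. A stricter sketch walks the expression’s AST and whitelists the operators and names it will accept:

```python
import ast
import math

# Whitelists: anything outside these is rejected outright
_OPS = {ast.Add: lambda a, b: a + b, ast.Sub: lambda a, b: a - b,
        ast.Mult: lambda a, b: a * b, ast.Div: lambda a, b: a / b,
        ast.Pow: lambda a, b: a ** b}
_NAMES = {'pi': math.pi, 'e': math.e}
_FUNCS = {'sqrt': math.sqrt}

def safe_calculate(expression):
    """Evaluate a small arithmetic expression by walking its AST."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -walk(node.operand)
        if isinstance(node, ast.Name) and node.id in _NAMES:
            return _NAMES[node.id]
        if (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
                and node.func.id in _FUNCS):
            return _FUNCS[node.func.id](*(walk(a) for a in node.args))
        raise ValueError('disallowed expression')
    try:
        return {'result': walk(ast.parse(expression, mode='eval'))}
    except (ValueError, SyntaxError, ZeroDivisionError) as exc:
        return {'error': str(exc)}

safe_calculate('2 + 2')                # {'result': 4}
safe_calculate('sqrt(16)')             # {'result': 4.0}
safe_calculate('__import__("os")')     # {'error': 'disallowed expression'}
```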
You might have noticed that with the Python Ollama API we can also pass the function itself in the tools field, without any additional information about the function such as its type, how many arguments it has, or their types.
This is thanks to object introspection, which the Python Ollama client performs at runtime. For other programming language APIs, however, it may be necessary to create a JSON blob with the relevant aforementioned information, including required arguments.
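As a rough illustration of what such introspection looks like, here is a sketch that derives a tool schema from a function’s signature and type annotations using the standard inspect module (the real Ollama client is more thorough, so treat this as illustrative only):

```python
import inspect

# Map Python annotations to JSON Schema type names (a small subset)
_JSON_TYPES = {int: 'integer', float: 'number', str: 'string', bool: 'boolean'}

def function_to_schema(fn):
    """Derive a rough tool schema from a function's signature."""
    props, required = {}, []
    for name, param in inspect.signature(fn).parameters.items():
        props[name] = {'type': _JSON_TYPES.get(param.annotation, 'string')}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default value => required argument
    return {
        'type': 'function',
        'function': {
            'name': fn.__name__,
            'description': (fn.__doc__ or '').strip(),
            'parameters': {'type': 'object', 'properties': props,
                           'required': required}
        }
    }

def get_weather(location: str, unit: str = 'celsius'):
    """Get the current weather for a location"""

schema = function_to_schema(get_weather)
# schema['function']['parameters']['required'] == ['location']
```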
Parallel Tool Calls: Efficiency Matters¶
Sometimes an agent needs to call multiple tools at once. Modern models support parallel tool calls:
# User asks: "What's the weather in London and Paris?"
response = client.chat(
model='qwen3:4b',
messages=[{
'role': 'user',
'content': 'What is the weather in London and Paris?'
}],
tools=[weather_tool]
)
# Model might return multiple tool calls at once!
for tool_call in response['message'].get('tool_calls', []):
print(f"Calling {tool_call['function']['name']} with args:")
    print(tool_call['function']['arguments'])
Calling get_weather with args:
{'location': 'London'}
Calling get_weather with args:
{'location': 'Paris'}
This is more efficient than sequential calls and shows how agents can be surprisingly sophisticated in their planning.
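Since the model hands back all the tool calls at once, we are free to execute independent calls concurrently on our side as well. Here is a sketch using a thread pool, with a stubbed get_weather standing in for a slow network call:

```python
from concurrent.futures import ThreadPoolExecutor

def get_weather(location, unit='celsius'):
    # Stub standing in for a slow network call
    return {'location': location, 'temperature': 22, 'unit': unit}

available_functions = {'get_weather': get_weather}

# Tool calls shaped like the entries of response['message']['tool_calls']
tool_calls = [
    {'function': {'name': 'get_weather', 'arguments': {'location': 'London'}}},
    {'function': {'name': 'get_weather', 'arguments': {'location': 'Paris'}}},
]

def execute(call):
    fn = available_functions[call['function']['name']]
    return fn(**call['function']['arguments'])

# map() preserves input order, so results line up with tool_calls
with ThreadPoolExecutor() as pool:
    results = list(pool.map(execute, tool_calls))

print([r['location'] for r in results])  # -> ['London', 'Paris']
```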
Chain of Thought with Tools¶
Sometimes agents need to “think” before acting. Reasoning models such as DeepSeek-R1 and Qwen3 make their thought process explicit through a special thinking field. This gives us unprecedented insight into why an agent chooses to use certain tools:
import ollama
import json
# Define tools for our thoughtful agent
math_tools = [
{
'type': 'function',
'function': {
'name': 'calculate',
'description': 'Evaluate a mathematical expression',
'parameters': {
'type': 'object',
'properties': {
'expression': {
'type': 'string',
'description': 'Math expression like "2 + 2" or "sqrt(16)"'
}
},
'required': ['expression']
}
}
},
{
'type': 'function',
'function': {
'name': 'get_constant',
'description': 'Get the value of mathematical constants',
'parameters': {
'type': 'object',
'properties': {
'constant': {
'type': 'string',
'enum': ['pi', 'e', 'golden_ratio'],
'description': 'The mathematical constant to retrieve'
}
},
'required': ['constant']
}
}
}
]
def calculate(expression):
"""Safe mathematical calculator"""
import math
safe_dict = {
'sqrt': math.sqrt,
'sin': math.sin,
'cos': math.cos,
'pi': math.pi,
'e': math.e
}
try:
result = eval(expression, {"__builtins__": {}}, safe_dict)
return {'result': result, 'expression': expression}
except Exception as e:
return {'error': str(e)}
def get_constant(constant):
"""Retrieve mathematical constants"""
import math
constants = {
'pi': math.pi,
'e': math.e,
'golden_ratio': (1 + math.sqrt(5)) / 2
}
return {'constant': constant, 'value': constants[constant]}
def thoughtful_agent(user_message):
"""Agent that shows its reasoning process"""
messages = [{'role': 'user', 'content': user_message}]
print(f"\n{'='*60}")
print(f"USER: {user_message}")
print(f"{'='*60}\n")
max_iterations = 5 # Prevent infinite loops
iteration = 0
while iteration < max_iterations:
iteration += 1
        # Get the next response from the reasoning model
response = ollama.chat(
model='qwen3:latest',
messages=messages,
tools=math_tools
)
        # Reasoning models expose their internal reasoning in the 'thinking' field
if response['message'].get('thinking'):
print(f"AGENT'S INTERNAL REASONING (Step {iteration}):")
print("-" * 60)
# Truncate long thinking for readability
thinking = response['message']['thinking']
if len(thinking) > 500:
print(thinking[:500] + "...")
else:
print(thinking)
print("-" * 60)
print()
# Check if model wants to use tools
if not response['message'].get('tool_calls'):
print("AGENT RESPONSE:")
print(response['message']['content'])
return response['message']['content']
# Add assistant's message to conversation
messages.append(response['message'])
# Execute each tool call
        print("🔧 TOOL CALLS:")
for tool_call in response['message']['tool_calls']:
func_name = tool_call['function']['name']
args = tool_call['function']['arguments']
print(f" → {func_name}({', '.join(f'{k}={repr(v)}' for k, v in args.items())})")
# Execute the function
if func_name == 'calculate':
result = calculate(args['expression'])
elif func_name == 'get_constant':
result = get_constant(args['constant'])
else:
result = {'error': f'Unknown function: {func_name}'}
print(f" ✓ Result: {result}")
# Add tool result to messages
messages.append({
'role': 'tool',
'content': json.dumps(result)
})
print()
return "Max iterations reached"
# Try it with a complex query!
result = thoughtful_agent(
"If I have a circle with radius 5, what's its area? "
"Also, what's that area divided by the golden ratio?"
)
============================================================
USER: If I have a circle with radius 5, what's its area? Also, what's that area divided by the golden ratio?
============================================================
AGENT'S INTERNAL REASONING (Step 1):
------------------------------------------------------------
Okay, the user is asking about the area of a circle with radius 5 and then wants that area divided by the golden ratio. Let me break this down.
First, the area of a circle is π multiplied by the radius squared. The radius here is 5, so the formula would be π * 5². That simplifies to π * 25. So the area is 25π. I can calculate that numerically using the get_constant function for π. Alternatively, maybe the user wants the exact value in terms of π, but since they mentioned dividing by the golden ...
------------------------------------------------------------
🔧 TOOL CALLS:
→ get_constant(constant='pi')
✓ Result: {'constant': 'pi', 'value': 3.141592653589793}
→ get_constant(constant='golden_ratio')
✓ Result: {'constant': 'golden_ratio', 'value': 1.618033988749895}
AGENT'S INTERNAL REASONING (Step 2):
------------------------------------------------------------
Okay, let's see. The user asked for two things: the area of a circle with radius 5 and then that area divided by the golden ratio.
First, I need to calculate the area of the circle. The formula is π multiplied by radius squared. The radius is 5, so 5 squared is 25. Then multiply by π, which we got from the get_constant function as approximately 3.141592653589793. So 25 * π equals about 78.53981633974483.
Next, the user wants that area divided by the golden ratio. The golden ratio value from th...
------------------------------------------------------------
🔧 TOOL CALLS:
→ calculate(expression='25 * 3.141592653589793')
✓ Result: {'result': 78.53981633974483, 'expression': '25 * 3.141592653589793'}
→ calculate(expression='78.53981633974483 / 1.618033988749895')
✓ Result: {'result': 48.54027596813666, 'expression': '78.53981633974483 / 1.618033988749895'}
AGENT'S INTERNAL REASONING (Step 3):
------------------------------------------------------------
Okay, let me wrap this up. The user asked for two things: the area of a circle with radius 5 and then that area divided by the golden ratio.
First, I calculated the area using the formula πr². With r = 5, that's 25π. Using the value of π from the get_constant function (3.141592653589793), the area came out to approximately 78.5398.
Next, I divided that area by the golden ratio, which was retrieved as 1.618033988749895. The result of that division is roughly 48.5403.
So, the final answers are:...
------------------------------------------------------------
AGENT RESPONSE:
The area of a circle with radius 5 is calculated as:
$$
\text{Area} = \pi \times r^2 = 3.141592653589793 \times 25 \approx 78.54
$$
Dividing this area by the golden ratio ($\phi \approx 1.618033988749895$):
$$
\frac{78.5398}{1.618034} \approx 48.54
$$
**Final Answers:**
- **Area of the circle:** $ \boxed{78.54} $
- **Area divided by the golden ratio:** $ \boxed{48.54} $
The thinking field is invaluable for debugging and alignment.
Qwen3 and similar reasoning models make this explicit, but the principle applies broadly: transparent agent reasoning leads to more reliable systems.
8-Puzzle¶
We can structure a classical AI problem, such as searching through a state space for a challenge, problem or game such as 8-puzzle - as an Agentic loop with the available actions presented as tools to the agent.
from ollama import chat, ChatResponse, Client
SERVER_HOST = 'http://ollama.cs.wallawalla.edu:11434'
client = Client(host=SERVER_HOST)
class EightPuzzle:
"""Represents an 8-puzzle problem."""
def __init__(self, initial_state, goal_state):
self.current = list(initial_state)
self.goal = goal_state
def _get_blank_pos(self):
"""Returns (row, col) of blank tile."""
blank_idx = self.current.index(0)
return blank_idx // 3, blank_idx % 3
def _swap(self, pos1, pos2):
"""Swap two positions."""
self.current[pos1], self.current[pos2] = self.current[pos2], self.current[pos1]
def up(self) -> str:
"""Move blank tile up."""
row, col = self._get_blank_pos()
if row == 0:
return "Invalid move: Can't move up from top row"
blank_idx = row * 3 + col
new_idx = (row - 1) * 3 + col
self._swap(blank_idx, new_idx)
return f"Moved blank up. Current state: {tuple(self.current)}"
def down(self) -> str:
"""Move blank tile down."""
row, col = self._get_blank_pos()
if row == 2:
return "Invalid move: Can't move down from bottom row"
blank_idx = row * 3 + col
new_idx = (row + 1) * 3 + col
self._swap(blank_idx, new_idx)
return f"Moved blank down. Current state: {tuple(self.current)}"
def left(self) -> str:
"""Move blank tile left."""
row, col = self._get_blank_pos()
if col == 0:
return "Invalid move: Can't move left from leftmost column"
blank_idx = row * 3 + col
new_idx = row * 3 + (col - 1)
self._swap(blank_idx, new_idx)
return f"Moved blank left. Current state: {tuple(self.current)}"
def right(self) -> str:
"""Move blank tile right."""
row, col = self._get_blank_pos()
if col == 2:
return "Invalid move: Can't move right from rightmost column"
blank_idx = row * 3 + col
new_idx = row * 3 + (col + 1)
self._swap(blank_idx, new_idx)
return f"Moved blank right. Current state: {tuple(self.current)}"
def get_state(self) -> str:
"""Get current puzzle state."""
state_str = f"Current: {tuple(self.current)}\nGoal: {self.goal}\n"
state_str += "Current board:\n"
for i in range(0, 9, 3):
state_str += f" {self.current[i]} {self.current[i+1]} {self.current[i+2]}\n"
return state_str
def is_goal(self) -> bool:
"""Check if current state is goal."""
return tuple(self.current) == self.goal
def solve_with_llm(puzzle, max_steps=30):
"""Solve puzzle using LLM tool calling."""
# Available tools for the LLM
available_functions = {
'up': puzzle.up,
'down': puzzle.down,
'left': puzzle.left,
'right': puzzle.right,
'get_state': puzzle.get_state,
'is_goal': puzzle.is_goal
}
# Initial prompt
system_prompt = """You are solving an 8-puzzle. The puzzle is a 3x3 grid with tiles numbered 0-8, where 0 is the blank.
Your goal is to rearrange tiles to match the goal state by moving the blank tile (0).
Available actions:
- up(): Move blank up
- down(): Move blank down
- left(): Move blank left
- right(): Move blank right
- get_state(): See current state
- is_goal(): Check if we've reached our goal
Work step-by-step. YOU ARE IN A LOOP. RETURN ONLY THE NEXT TOOL CALL."""
initial_state = puzzle.get_state()
messages = [
{'role': 'system', 'content': system_prompt},
{'role': 'user', 'content': f'Return the first action to solve this puzzle:\n{initial_state}'}
]
print("=" * 50)
print("8-Puzzle LLM Solver")
print("=" * 50)
print(initial_state)
print("=" * 50)
step = 0
while step < max_steps:
# Get LLM response
response: ChatResponse = client.chat(
model='qwen3',
messages=messages,
tools=[puzzle.up, puzzle.down, puzzle.left, puzzle.right, puzzle.get_state],
)
messages.append(response.message)
# Check for tool calls
if response.message.tool_calls:
for tc in response.message.tool_calls:
if tc.function.name in available_functions:
step += 1
print(f"\nStep {step}: Calling {tc.function.name}()")
# Execute the function
result = available_functions[tc.function.name]()
print(f"Result: {result}")
# Add result to messages
messages.append({
'role': 'tool',
'tool_name': tc.function.name,
'content': str(result)
})
# Check if solved
if puzzle.is_goal():
print("\n" + "=" * 50)
print(f"✓ Puzzle solved in {step} steps!")
print("=" * 50)
puzzle.get_state()
return True
else:
# No more tool calls
if response.message.content:
print(f"\nLLM: {response.message.content}")
break
print("\n" + "=" * 50)
print("=" * 50)
return False
if __name__ == "__main__":
# Simple puzzle (2 moves from goal)
initial = (1, 2, 3, 4, 5, 6, 0, 7, 8)
goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
puzzle = EightPuzzle(initial, goal)
    solve_with_llm(puzzle)
==================================================
8-Puzzle LLM Solver
==================================================
Current: (1, 2, 3, 4, 5, 6, 0, 7, 8)
Goal: (1, 2, 3, 4, 5, 6, 7, 8, 0)
Current board:
1 2 3
4 5 6
0 7 8
==================================================
Step 1: Calling right()
Result: Moved blank right. Current state: (1, 2, 3, 4, 5, 6, 7, 0, 8)
Step 2: Calling right()
Result: Moved blank right. Current state: (1, 2, 3, 4, 5, 6, 7, 8, 0)
==================================================
✓ Puzzle solved in 2 steps!
==================================================
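The index arithmetic inside EightPuzzle is the crux of the encoding: flat index i maps to row i // 3 and column i % 3, and every legal move is a swap of the blank with one neighbor. A standalone sketch of just that logic:

```python
def legal_moves(state):
    """Map each legal move of the blank (0) to its flat-index offset."""
    row, col = state.index(0) // 3, state.index(0) % 3
    moves = {}
    if row > 0:
        moves['up'] = -3     # swap with the tile one row above
    if row < 2:
        moves['down'] = 3
    if col > 0:
        moves['left'] = -1
    if col < 2:
        moves['right'] = 1
    return moves

def apply_move(state, move):
    """Apply a named move to a flat 9-tuple state, returning the new state."""
    s = list(state)
    i = s.index(0)
    j = i + legal_moves(state)[move]
    s[i], s[j] = s[j], s[i]
    return tuple(s)

start = (1, 2, 3, 4, 5, 6, 0, 7, 8)
print(legal_moves(start))   # -> {'up': -3, 'right': 1}
print(apply_move(apply_move(start, 'right'), 'right'))
# -> (1, 2, 3, 4, 5, 6, 7, 8, 0): the two-move solution the agent found
```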