Building Agents with Ollama and the Hugging Face Smolagents Library
We have seen a few useful examples of tool use (function calling), and now we will build on that foundation to create both single agents and multi-agent systems. There are commercial and open source frameworks for building agents, CrewAI and LangGraph being popular choices. We will follow a different learning path here, preferring to use the smolagents library. Please bookmark https://github.com/huggingface/smolagents for reference while working through this chapter.
Each example program and utility for this chapter uses the prefix smolagents_ in the Python file name.
Note: We are using the 2 GB model Llama3.2:latest here. Different models support tools and agents differently.
Choosing Specific LLMs for Writing Agents
As agents perform tasks like interpreting user input, carrying out Chain of Thought (CoT) reasoning, observing the output of tool calls, and following plan steps one by one, LLM errors, hallucinations, and inconsistencies accumulate. When using Ollama we prefer to use the most powerful models that we can run on our hardware.
Here we use Llama3.2:latest, a model recognized for its function calling capabilities, which makes it straightforward to integrate with various tools.
As you work through the examples here using different local models running on Ollama, you might encounter problems caused by these compounding errors. When I am experimenting with ideas for implementing agents, I sometimes keep two versions of my code: one for a local model and one using either of the commercial models GPT-4o or Claude Sonnet 3.5. Comparing the same agent setup across different models can help you determine whether a runtime agent problem lies in your code or in the model you are using.
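When I want to run the same agent against both a local model and a commercial model, I keep the model construction in one place. Here is a minimal sketch of that idea; the model_id strings are just examples, the USE_LOCAL_MODEL environment variable is an invention for this sketch, and the hosted branch assumes you have the appropriate LiteLLM credentials (for example OPENAI_API_KEY) set in your environment:

import os
from smolagents import LiteLLMModel

def make_model(use_local: bool = True) -> LiteLLMModel:
    """Build either a local Ollama-backed model or a hosted commercial model."""
    if use_local:
        # Local model served by Ollama (the setup used throughout this chapter):
        return LiteLLMModel(
            model_id="ollama_chat/llama3.2:latest",
            api_base="http://localhost:11434",
            api_key="not-used",
        )
    # Hosted model routed through LiteLLM; reads OPENAI_API_KEY from the environment:
    return LiteLLMModel(model_id="gpt-4o")

model = make_model(use_local=os.getenv("USE_LOCAL_MODEL", "1") == "1")

With this in place the rest of the agent code stays identical, which makes it easier to tell whether a failure comes from your agent logic or from the model.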
Installation Notes
As I update this chapter on October 15, 2025, smolagents needs to be run with Python version 3.12. For example:
1 $ uv run python3 -V
2 Python 3.12.0
3 $ uv run smolagents_test.py
4 $ uv run smolagents_agent_test1.py
Overview of the Hugging Face Smolagents Library
The smolagents library https://github.com/huggingface/smolagents is built around a minimalist and modular architecture that emphasizes simplicity and composability. The core components are cleanly separated into the file agents.py for agent definitions, tools.py for tool implementations, and related support files. This design philosophy allows developers to easily understand, extend, and customize the components while maintaining a small codebase footprint - true to the “smol” name.
This library implements a tools-first approach where capabilities are encapsulated as discrete tools that agents can use. The tools.py file in the smolagents implementation defines a clean interface for tools with input/output specifications, making it straightforward to add new tools. This tools-based architecture enables agents to have clear, well-defined capabilities while maintaining separation of concerns between the agent logic and the actual implementation of capabilities.
Agents are designed to be lightweight and focused on specific tasks rather than trying to be general-purpose. A shared multi-step agent base class provides core functionality, while specialized agents such as ToolCallingAgent (which emits structured tool calls) and CodeAgent (which writes short Python snippets that call tools) extend it for particular use cases. This specialization allows the agents to be more efficient and reliable at their designated tasks rather than attempting to be jack-of-all-trades.
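Here is a minimal sketch showing that both agent styles are constructed the same way from the same tools. It assumes the CodeAgent class exported by current smolagents releases, and the word_count tool is invented for illustration:

from smolagents.agents import ToolCallingAgent
from smolagents import tool, LiteLLMModel, CodeAgent

model = LiteLLMModel(
    model_id="ollama_chat/llama3.2:latest",
    api_base="http://localhost:11434",
    api_key="not-used"
)

@tool
def word_count(text: str) -> int:
    """
    Counts the number of words in a text string.

    Args:
        text: the text whose words should be counted
    """
    return len(text.split())

# A ToolCallingAgent emits structured (JSON-style) tool calls:
tool_agent = ToolCallingAgent(tools=[word_count], model=model)

# A CodeAgent writes short Python snippets that call the same tools:
code_agent = CodeAgent(tools=[word_count], model=model)

print(tool_agent.run("How many words are in the sentence 'the quick brown fox jumps'?"))
print(code_agent.run("How many words are in the sentence 'the quick brown fox jumps'?"))

The rest of this chapter uses ToolCallingAgent.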
Overview for LLM Agents (optional section)
You might want to skip this section if you want to quickly work through the examples in this chapter and review this material later.
In general, we use the following steps to build agent-based systems:
- Define agents (e.g., Researcher, Writer, Editor, Judge outputs of other models and agents).
- Assign tasks (e.g., research, summarize, write, double check the work of other agents).
- Use an orchestration framework to manage task sequencing and collaboration.
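Frameworks like CrewAI and LangGraph supply the orchestration layer for you. With smolagents you can also wire up a simple sequential pipeline with ordinary Python. The sketch below is not a smolagents feature, just plain function composition using the same local model as the rest of this chapter; a "researcher" agent's answer is passed into a "writer" agent's prompt:

from smolagents.agents import ToolCallingAgent
from smolagents import LiteLLMModel

model = LiteLLMModel(
    model_id="ollama_chat/llama3.2:latest",
    api_base="http://localhost:11434",
    api_key="not-used"
)

# Two single-purpose agents; a real system would give each agent its own tools.
researcher = ToolCallingAgent(tools=[], model=model)
writer = ToolCallingAgent(tools=[], model=model)

# Step 1: the researcher gathers raw notes on a topic.
notes = researcher.run("List three key facts about the Austrian School of economics.")

# Step 2: the writer turns those notes into polished prose.
article = writer.run(
    f"Write one short paragraph for a general audience based on these notes:\n{notes}"
)
print(article)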
Features of Agents:
- Retrieval-Augmented Generation (RAG): Enhance agents’ knowledge by integrating external documents or databases.
  - Example: An agent that retrieves and summarizes medical research papers (a minimal sketch of such a retrieval tool follows this list).
- Memory Management: Enable agents to retain context across interactions.
  - Example: A chatbot that remembers user preferences over time.
- Tool Integration: Equip agents with tools like web search, data scraping, or API calls.
  - Example: An agent that fetches real-time weather data and provides recommendations. We will use tools previously developed in this book.
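Here is a minimal sketch of what the RAG item above might look like as a smolagents tool. The retrieve_passages helper is hypothetical (in a real system it would query a vector store or a keyword index); the only smolagents requirements are the @tool decorator, the type hints, and the Args section of the docstring:

from smolagents import tool

def retrieve_passages(query: str) -> list:
    # Hypothetical retrieval helper: swap in a real vector store or keyword search.
    return [f"(first stored passage relevant to: {query})",
            f"(second stored passage relevant to: {query})"]

@tool
def search_documents(query: str) -> str:
    """
    Searches a local document collection and returns the most relevant passages.

    Args:
        query: a short description of the information the agent needs
    """
    passages = retrieve_passages(query)
    return "\n\n".join(passages)

An agent constructed with tools=[search_documents] can then ground its answers in whatever the retrieval step returns.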
Examples of Real-World Applications
- Healthcare: Agents that analyze medical records and provide diagnostic suggestions.
- Education: Virtual tutors that explain complex topics using Ollama’s local models.
- Customer Support: Chatbots that handle inquiries without relying on cloud services.
- Content Creation: Agents that generate articles, summaries, or marketing content.
Let’s Write Some Code
I am still experimenting with LLM-based agents. Please accept the following examples as my personal works in progress.
“Hello World” Smolagents Example
We start with a simple example taken from the smolagents documentation and converted to run against local models with Ollama. Here is a listing of the file smolagents_test.py:
1 """
2 smolagents example program (slightly modified)
3 """
4
5 from smolagents.agents import ToolCallingAgent
6 from smolagents import tool, LiteLLMModel
7 from typing import Optional
8
9 model = LiteLLMModel(
10 model_id="ollama_chat/llama3.2:latest",
11 api_base="http://localhost:11434",
12 api_key="your-api-key" # not used
13 )
14
15 @tool
16 def get_weather(location: str, celsius: Optional[bool] = False) -> str:
17 """
18 Get weather in the next days at given location.
19 Secretly this tool does not care about the location, it hates the weather everywhere.
20
21 Args:
22 location: the location
23 celsius: the temperature
24 """
25 return "The weather is UNGODLY with torrential rains and temperatures below -10°C"
26
27 agent = ToolCallingAgent(tools=[get_weather], model=model)
28
29 print(agent.run("What's the weather like in Paris?"))
Understanding the smolagents and Ollama Example
This code demonstrates a simple integration between smolagents (a tool-calling framework) and Ollama (a local LLM server). Here’s what the code accomplishes:
Core Components
- Utilizes smolagents for creating AI agents with tool capabilities
- Integrates with a local Ollama server running llama3.2
- Implements a basic weather-checking tool (though humorously hardcoded)
Model Configuration
The code sets up a LiteLLMModel instance that connects to a local Ollama server on port 11434. It is configured to use the llama3.2 model; the api_key argument is only a placeholder, since Ollama does not require authentication.
Weather Tool Implementation
The code defines a weather-checking tool using the @tool decorator. While it accepts a location parameter and an optional celsius flag, this example version playfully returns the same dramatic weather report regardless of the input location.
Agent Setup and Execution
The implementation creates a ToolCallingAgent with the weather tool and the configured model. Users can query the agent about weather conditions in any location, though in this example it always returns the same humorous response about terrible weather conditions.
Key Features
- Demonstrates tool-calling capabilities through smolagents
- Shows local LLM integration using Ollama
- Includes proper type hinting for better code clarity
- Provides an extensible structure for adding more tools
Python Tools Compatible with Smolagents
The tools I developed in previous chapters are not quite compatible with the smolagents library, so I wrap a few of them in the utility smolagents_tools.py:
1 """
2 Wrapper for book example tools for smloagents compatibility
3 """
4 from pathlib import Path
5
6 from smolagents import tool, LiteLLMModel
7 from typing import Optional
8 from pprint import pprint
9
10 from tool_file_dir import list_directory
11
12 @tool
13 def sa_list_directory(list_dots: Optional[bool]=None) -> str:
14 """
15 Lists files and directories in the current working directory
16
17 Args:
18 list_dots: optional boolean (if true, include dot files)
19
20 Returns:
21 string with directory name, followed by list of files in the directory
22 """
23 lst = list_directory()
24 pprint(lst)
25 return lst
26
27 @tool
28 def read_file_contents(file_path: str) -> str:
29 """
30 Reads contents from a file and returns the text
31
32 Args:
33 file_path: Path to the file to read
34
35 Returns:
36 Contents of the file as a string
37 """
38 try:
39 path = Path(file_path)
40 if not path.exists():
41 return f"File not found: {file_path}"
42
43 with path.open("r", encoding="utf-8") as f:
44 content = f.read()
45 return f"Contents of file '{file_path}' is:\n{content}\n"
46
47 except Exception as e:
48 return f"Error reading file '{file_path}' is: {str(e)}"
49
50 @tool
51 def summarize_directory() -> str:
52 """
53 Summarizes the files and directories in the current working directory
54
55 Returns:
56 string with directory name, followed by summary of files in the directory
57 """
58 lst = list_directory()
59 num_files = len(lst)
60 num_dirs = len([x for x in lst if x[1] == 'directory'])
61 num_files = num_files - num_dirs
62 return f"Current directory contains {num_files} files and {num_dirs} directories."
This code defines a wrapper module containing three tool functions designed for compatibility with the smolagents framework. The module includes sa_list_directory(), which lists files and directories in the current working directory with an optional parameter to include dot files; read_file_contents(), which takes a file path as input and returns the contents of that file as a string while handling potential errors and file encoding; and summarize_directory(), which provides a concise summary of the current directory by counting the total number of files and directories.

All functions are decorated with @tool for integration with smolagents, and the code imports the necessary modules, including pathlib for file operations, typing for type hints, and pprint for formatted output. The functions rely on an external list_directory() function imported from tool_file_dir.py, and they provide clear documentation through docstrings explaining their parameters, functionality, and return values. Error handling is implemented particularly in the file reading function to gracefully handle cases where files don’t exist or cannot be read properly.
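Before handing these wrappers to an agent, it can save time to exercise them directly. Tool objects created by the @tool decorator are callable like ordinary functions, so a quick sanity check (a sketch, run from this chapter's source directory so that tool_file_dir.py is importable) looks like this:

from smolagents_tools import sa_list_directory, summarize_directory, read_file_contents

# Call the wrapped tools directly, outside of any agent run:
print(sa_list_directory())
print(summarize_directory())
print(read_file_contents("README.md"))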
A Complete Smolagents Example using Three Tools
This listing shows the script smolagents_agent_test1.py:
from smolagents.agents import ToolCallingAgent
from smolagents import tool, LiteLLMModel
from typing import Optional

from smolagents_tools import sa_list_directory
from smolagents_tools import summarize_directory
from smolagents_tools import read_file_contents

model = LiteLLMModel(
    model_id="ollama_chat/llama3.2:latest",
    api_base="http://localhost:11434",
    api_key="your-api-key"  # not used
)

agent = ToolCallingAgent(tools=[sa_list_directory,
                                summarize_directory,
                                read_file_contents],
                         model=model)

print(agent.run("What are the files in the current directory? Describe the current directory"))

print(agent.run("Which Python scripts evaluate the performance of LLMs?"))
This code demonstrates the creation of an AI agent using the smolagents library, specifically configured to work with file system operations. It imports three specialized tools from smolagents_tools: sa_list_directory for listing directory contents, summarize_directory for providing directory summaries, and read_file_contents for accessing file contents. The code sets up a LiteLLMModel instance that connects to a local Ollama server running the llama3.2 model on port 11434, again with a placeholder API key that Ollama ignores. A ToolCallingAgent is then created with these three file-system-related tools, enabling it to interact with and analyze the local file system. The agent is first asked, through a natural language query, for a listing and description of the files in the current directory. A second query then asks the agent to identify which Python scripts evaluate the performance of LLMs, showing the agent’s potential for more complex file analysis tasks. This setup effectively creates an AI-powered file system navigator that can understand and respond to natural language queries about directory contents and file analysis.
Output from the First Example: Directory Analysis with smolagents_agent_test1.py
In the following output, notice that tool use sometimes fails and wrong assumptions are occasionally made, but after a long chain of thought (CoT) process the final result is good.
The output for the query “What are the files in the current directory? Describe the current directory” is:
1 $ uv run smolagents_agent_test1.py
2 ╭───────────────────────────────────────────── New run ──────────────────────────────────────────────╮
3 │ │
4 │ What are the files in the current directory? Describe the current directory │
5 │ │
6 ╰─ LiteLLMModel - ollama_chat/llama3.2:latest ───────────────────────────────────────────────────────╯
7 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 0 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
8 15:54:33 - LiteLLM:INFO: utils.py:3386 -
9 LiteLLM completion() model= llama3.2:latest; provider = ollama_chat
10 INFO:LiteLLM:
11 LiteLLM completion() model= llama3.2:latest; provider = ollama_chat
12 15:54:40 - LiteLLM:INFO: utils.py:1295 - Wrapper: Completed Call, calling success_handler
13 INFO:LiteLLM:Wrapper: Completed Call, calling success_handler
14 ╭────────────────────────────────────────────────────────────────────────────────────────────────────╮
15 │ Calling tool: 'sa_list_directory' with arguments: {} │
16 ╰────────────────────────────────────────────────────────────────────────────────────────────────────╯
17 ('Contents of current directory '
18 '/Users/markw/GITHUB/Ollama_in_Action_Book/source-code/smolagents is: '
19 '[README.md, __pycache__, pyproject.toml, smolagents_agent_test1.py, '
20 'smolagents_compat.py, smolagents_test.py, smolagents_tools.py]')
21 Observations: Contents of current directory
22 /Users/markw/GITHUB/Ollama_in_Action_Book/source-code/smolagents is: [README.md, __pycache__,
23 pyproject.toml, smolagents_agent_test1.py, smolagents_compat.py, smolagents_test.py,
24 smolagents_tools.py]
25 [Step 0: Duration 6.97 seconds| Input tokens: 1,282 | Output tokens: 14]
26 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 1 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
27 15:54:40 - LiteLLM:INFO: utils.py:3386 -
28 LiteLLM completion() model= llama3.2:latest; provider = ollama_chat
29 INFO:LiteLLM:
30 LiteLLM completion() model= llama3.2:latest; provider = ollama_chat
31 15:54:41 - LiteLLM:INFO: utils.py:1295 - Wrapper: Completed Call, calling success_handler
32 INFO:LiteLLM:Wrapper: Completed Call, calling success_handler
33 ╭────────────────────────────────────────────────────────────────────────────────────────────────────╮
34 │ Calling tool: 'sa_summarize_directory' with arguments: {} │
35 ╰────────────────────────────────────────────────────────────────────────────────────────────────────╯
36 lst='Contents of current directory /Users/markw/GITHUB/Ollama_in_Action_Book/source-code/smolagents is: [README.md, __pycache__, pyproject.toml, smolagents_agent_test1.py, smolagents_compat.py, smolagents_test.py, smolagents_tools.py]'
37 Observations: Summary of directory:Based on the contents of the current directory, here's a summary
38 and educated guess for the major purposes of each file:
39
40 1. **README.md**: This is likely a Markdown-formatted README file that provides an introduction to the
41 project, its purpose, and how to get started with it.
42 2. **__pycache__/**: This is likely a cache directory created by Python's `pycompimport` feature. It
43 contains compiled versions of imported modules (e.g., `.cpycache`) and may be used for efficient
44 import operations in the codebase.
45 3. **pyproject.toml**: This file is probably a project configuration file, similar to `package.json`
46 or `pom.xml`, which contains metadata about the project, such as its dependencies, build tools, and
47 other settings.
48 4. **smolagents_agent_test1.py**: This appears to be a test script for an agent in a reinforcement
49 learning environment. The "_test1" suffix suggests that it's a specific test case or scenario being
50 tested.
51 5. **smolagents_compat.py**: This file might contain compatibility code or utilities that ensure the
52 project works correctly across different Python versions, environments, or platforms.
53 6. **smolagents_test.py**: Similar to `smolagents_agent_test1.py`, this is likely a test script for
54 the agent in general, rather than just one specific scenario. It may cover various aspects of the
55 agent's behavior and functionality.
56 7. **smolagents_tools.py**: This file might contain additional tools or utility functions that support
57 the development, testing, or deployment of the agent. The name suggests that it provides useful
58 functionality for working with agents.
59
60 Please note that these are educated guesses based on common naming conventions and practices in
61 software development. Without more context or information about the project, it's difficult to provide
62 a definitive explanation for each file's purpose.
63 [Step 1: Duration 7.43 seconds| Input tokens: 2,721 | Output tokens: 30]
64 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 2 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
65 15:54:47 - LiteLLM:INFO: utils.py:3386 -
66 LiteLLM completion() model= llama3.2:latest; provider = ollama_chat
67 INFO:LiteLLM:
68 LiteLLM completion() model= llama3.2:latest; provider = ollama_chat
69 15:54:51 - LiteLLM:INFO: utils.py:1295 - Wrapper: Completed Call, calling success_handler
70 INFO:LiteLLM:Wrapper: Completed Call, calling success_handler
71 ╭────────────────────────────────────────────────────────────────────────────────────────────────────╮
72 │ Calling tool: 'sa_read_file_contents' with arguments: {'file_path': './README.md'} │
73 ╰────────────────────────────────────────────────────────────────────────────────────────────────────╯
74 Observations: Contents of file './README.md' is:
75 Running Examples using uv
76
77 uv run smolagents_test.py
78 uv run smolagents_agent_test1.py
79
80 [Step 2: Duration 3.64 seconds| Input tokens: 4,634 | Output tokens: 51]
81 Reached max iterations.
82 15:54:51 - LiteLLM:INFO: utils.py:3386 -
83 LiteLLM completion() model= llama3.2:latest; provider = ollama_chat
84 INFO:LiteLLM:
85 LiteLLM completion() model= llama3.2:latest; provider = ollama_chat
86 15:54:56 - LiteLLM:INFO: utils.py:1295 - Wrapper: Completed Call, calling success_handler
87 INFO:LiteLLM:Wrapper: Completed Call, calling success_handler
88 Final answer: The files in the current directory are:
89
90 - `README.md`
91 - `__pycache__`
92 - `pyproject.toml`
93 - `smolagents_agent_test1.py`
94 - `smolagents_compat.py`
95 - `smolagents_test.py`
96 - `smolagents_tools.py`
97
98 The current directory is a Python project named "smolagents" that appears to be focused on
99 reinforcement learning and agent development. The directory contains various files, including:
100
101 * A README file that provides an introduction to the project
102 * A cache directory (`__pycache__`) generated by Python's `pycompimport` feature
103 * A project configuration file (`pyproject.toml`)
104 * Test scripts for agents in reinforcement learning environments (e.g., `smolagents_agent_test1.py`
105 and `smolagents_test.py`)
106 * Compatibility code or utilities (`smolagents_compat.py`)
107 * Additional tools or utility functions (`smolagents_tools.py`)
108 [Step 3: Duration 0.00 seconds| Input tokens: 5,509 | Output tokens: 245]
109 The files in the current directory are:
110
111 - `README.md`
112 - `__pycache__`
113 - `pyproject.toml`
114 - `smolagents_agent_test1.py`
115 - `smolagents_compat.py`
116 - `smolagents_test.py`
117 - `smolagents_tools.py`
118
119 The current directory is a Python project named "smolagents" that appears to be focused on reinforcement learning and agent development. The directory contains various files, including:
120
121 * A README file that provides an introduction to the project
122 * A cache directory (`__pycache__`) generated by Python's `pycompimport` feature
123 * A project configuration file (`pyproject.toml`)
124 * Test scripts for agents in reinforcement learning environments (e.g., `smolagents_agent_test1.py` and `smolagents_test.py`)
125 * Compatibility code or utilities (`smolagents_compat.py`)
126 * Additional tools or utility functions (`smolagents_tools.py`)
This is a lot of debug output to list in a book, but I want you, dear reader, to get a feeling for how the output generated by tools becomes the data an agent observes before determining the next step in its plan.
This output shows the execution of the smolagents-based agent for the first query. The agent follows a systematic approach: it first lists all files using the sa_list_directory tool, then uses sa_summarize_directory to analyze the contents, and finally reads README.md before composing its final answer. Notice that its educated guess that the project involves reinforcement learning is wrong; small models routinely make this kind of plausible-sounding mistake when asked to interpret sparse evidence.
For the second query, asking which Python scripts evaluate the performance of LLMs (output not reproduced here), the agent identified the relevant Python programs and specifically highlighted three files: tool_anti_hallucination.py (which checks for false information generation), tool_llm_eval.py (for general LLM evaluation), and tool_summarize_text.py (which likely tests LLM summarization capabilities). The execution log includes detailed step-by-step information, showing input/output tokens and duration for each step, demonstrating the agent’s methodical approach to file analysis and classification.
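In the run above the agent reported “Reached max iterations” and had to produce its final answer from whatever it had gathered up to that point. If your runs get cut off like this, one knob to try is the max_steps argument accepted by smolagents agents (older releases called it max_iterations). A minimal sketch based on the script above:

from smolagents.agents import ToolCallingAgent
from smolagents import LiteLLMModel
from smolagents_tools import sa_list_directory, summarize_directory, read_file_contents

model = LiteLLMModel(
    model_id="ollama_chat/llama3.2:latest",
    api_base="http://localhost:11434",
    api_key="not-used"
)

# Allow more reasoning and tool-calling steps before the run is cut off.
agent = ToolCallingAgent(tools=[sa_list_directory,
                                summarize_directory,
                                read_file_contents],
                         model=model,
                         max_steps=10)

print(agent.run("Which Python scripts evaluate the performance of LLMs?"))

Keep in mind that more steps also means more chances for a small model to wander off track, so raise this value cautiously.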
Output from the Second Example: Simple Tool Use in smolagents_test.py
In this section we look at another agent processing cycle. Again, pay attention to the output of the tools, and to whether the agent can observe the tool output and make sense of it (often it can’t!).
It is fairly normal for tools to fail with errors and it is important that agents can observe a failure and move on to try something else.
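The read_file_contents wrapper shown earlier already follows the pattern that makes this possible: catch exceptions inside the tool and return the error text as an ordinary string, so the failure becomes an observation the agent can react to instead of a crash. Here is the same pattern applied to a hypothetical web fetching helper (fetch_url is a stand-in; use requests or httpx in real code):

from smolagents import tool

def fetch_url(url: str) -> str:
    # Hypothetical helper that stands in for a real HTTP fetch.
    raise RuntimeError("network unreachable")

@tool
def get_web_page(url: str) -> str:
    """
    Fetches the text of a web page.

    Args:
        url: the address of the page to fetch
    """
    try:
        return fetch_url(url)
    except Exception as e:
        # Returning the error as text lets the agent observe the failure
        # and decide to try a different tool or a different URL.
        return f"Error fetching '{url}': {e}"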
1 $ uv run smolagents_test.py
2 ╭───────────────────────────────────────────── New run ──────────────────────────────────────────────╮
3 │ │
4 │ What's the weather like in Paris? │
5 │ │
6 ╰─ LiteLLMModel - ollama_chat/llama3.2:latest ───────────────────────────────────────────────────────╯
7 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 0 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
8 ╭────────────────────────────────────────────────────────────────────────────────────────────────────╮
9 │ Calling tool: 'get_weather' with arguments: {'celsius': False, 'location': 'Paris'} │
10 ╰────────────────────────────────────────────────────────────────────────────────────────────────────╯
11 Observations: The weather is UNGODLY with torrential rains and temperatures below -10°C
12 [Step 0: Duration 3.01 seconds| Input tokens: 1,170 | Output tokens: 23]
13 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 1 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
14 ╭────────────────────────────────────────────────────────────────────────────────────────────────────╮
15 │ Calling tool: 'final_answer' with arguments: {'answer': 'The weather in Paris is unpleasant, with │
16 │ torrential rains and very cold temperatures.'} │
17 ╰────────────────────────────────────────────────────────────────────────────────────────────────────╯
18 Final answer: The weather in Paris is unpleasant, with torrential rains and very cold temperatures.
19 [Step 1: Duration 1.35 seconds| Input tokens: 2,463 | Output tokens: 55]
20 The weather in Paris is unpleasant, with torrential rains and very cold temperatures.
This output shows the agent handling the weather query in two short steps. In Step 0 it calls the get_weather tool with the arguments {'celsius': False, 'location': 'Paris'} and observes the hardcoded report of torrential rain and temperatures below -10°C. In Step 1 it calls the built-in final_answer tool and the run ends.
Note that the agent did not simply echo the tool’s deliberately over-the-top wording; it paraphrased the observation into a reasonable reply about unpleasant weather with torrential rains and very cold temperatures. Even this trivial example shows the observe-then-act loop that drives every smolagents run: at each step the model chooses a tool, the tool’s output is recorded as an observation, and the loop terminates when the model chooses final_answer.
Output from the Third Example: “Read the text in the file ‘data/economics.txt’ file and then summarize this text.”
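This query does not appear in the listing of smolagents_agent_test1.py shown earlier; to reproduce the run, change (or add) the agent.run call in that script along these lines:

print(agent.run("Read the text in the file 'data/economics.txt' file and then summarize this text."))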
1 $ uv run smolagents_agent_test1.py
2 ╭────────────────────────────────── New run ───────────────────────────────────╮
3 │ │
4 │ Read the text in the file 'data/economics.txt' file and then summarize this │
5 │ text. │
6 │ │
7 ╰─ LiteLLMModel - ollama_chat/llama3.2:latest ─────────────────────────────────╯
8 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 0 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
9 ╭──────────────────────────────────────────────────────────────────────────────╮
10 │ Calling tool: 'sa_read_file_contents' with arguments: {'file_path': │
11 │ 'data/economics.txt'} │
12 ╰──────────────────────────────────────────────────────────────────────────────╯
13 Observations: Contents of file 'data/economics.txt' is:
14 The Austrian School (also known as the Vienna School or the Psychological School
15 ) is a Schools of economic thought|school of economic thought that emphasizes
16 the spontaneous organizing power of the price mechanism. Austrians hold that the
17 complexity of subjective human choices makes mathematical modelling of the
18 evolving market extremely difficult (or Undecidable and advocate a "laissez
19 faire" approach to the economy. Austrian School economists advocate the strict
20 enforcement of voluntary contractual agreements between economic agents, and
21 hold that commercial transactions should be subject to the smallest possible
22 imposition of forces they consider to be (in particular the smallest possible
23 amount of government intervention). The Austrian School derives its name from
24 its predominantly Austrian founders and early supporters, including Carl Menger,
25 Eugen von Böhm-Bawerk and Ludwig von Mises.
26
27 Economics is the social science that analyzes the production, distribution, and
28 consumption of goods and services. Political economy was the earlier name for
29 the subject, but economists in the late 19th century suggested "economics" as a
30 shorter term for "economic science" that also avoided a narrow
31 political-interest connotation and as similar in form to "mathematics",
32 "ethics", and so forth.[2]
33
34 A focus of the subject is how economic agents behave or interact and how
35 economies work. Consistent with this, a primary textbook distinction is between
36 microeconomics and macroeconomics. Microeconomics examines the behavior of basic
37 elements in the economy, including individual agents (such as households and
38 firms or as buyers and sellers) and markets, and their interactions.
39 Macroeconomics analyzes the entire economy and issues affecting it, including
40 unemployment, inflation, economic growth, and monetary and fiscal policy.
41
42 The professionalization of economics, reflected in the growth of
43 graduate programs on the subject, has been described as "the main change in
44 economics since around 1900".[93] Most major universities and many colleges have
45 a major, school, or department in which academic degrees are awarded in the
46 subject, whether in the liberal arts, business, or for professional study; see
47 Master of Economics.
48
49
50 Economics is the social science that studies the behavior of
51 individuals, households, and organizations (called economic actors, players, or
52 agents), when they manage or use scarce resources, which have alternative uses,
53 to achieve desired ends. Agents are assumed to act rationally, have multiple
54 desirable ends in sight, limited resources to obtain these ends, a set of stable
55 preferences, a definite overall guiding objective, and the capability of making
56 a choice. There exists an economic problem, subject to study by economic
57 science, when a decision (choice) is made by one or more resource-controlling
58 players to attain the best possible outcome under bounded rational conditions.
59 In other words, resource-controlling agents maximize value subject to the
60 constraints imposed by the information the agents have, their cognitive
61 limitations, and the finite amount of time they have to make and execute a
62 decision. Economic science centers on the activities of the economic agents that
63 comprise society.[1] They are the focus of economic analysis.[2]
64
65 The traditional concern of economic analysis is to gain an
66 understanding of the processes that govern the production, distribution and
67 consumption of goods and services in an exchange economy.[3] An approach to
68 understanding these processes, through the study of agent behavior under
69 scarcity, may go as s:
70
71 An interesting Economist is Pauli Blendergast who teaches at the University of
72 Krampton Ohio and is famouse for saying economics is bullshit.
73 [Step 0: Duration 3.51 seconds| Input tokens: 1,338 | Output tokens: 39]
74 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 1 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
75 ╭──────────────────────────────────────────────────────────────────────────────╮
76 │ Calling tool: 'sa_read_file_contents' with arguments: {'file_path': │
77 │ 'data/economics.txt'} │
78 ╰──────────────────────────────────────────────────────────────────────────────╯
79 Observations: Contents of file 'data/economics.txt' is:
80 The Austrian School (also known as the Vienna School or the Psychological School
81 ) is a Schools of economic thought|school of economic thought that emphasizes
82 the spontaneous organizing power of the price mechanism. Austrians hold that the
83 complexity of subjective human choices makes mathematical modelling of the
84 evolving market extremely difficult (or Undecidable and advocate a "laissez
85 faire" approach to the economy. Austrian School economists advocate the strict
86 enforcement of voluntary contractual agreements between economic agents, and
87 hold that commercial transactions should be subject to the smallest possible
88 imposition of forces they consider to be (in particular the smallest possible
89 amount of government intervention). The Austrian School derives its name from
90 its predominantly Austrian founders and early supporters, including Carl Menger,
91 Eugen von Böhm-Bawerk and Ludwig von Mises.
92
93 Economics is the social science that analyzes the production, distribution, and
94 consumption of goods and services. Political economy was the earlier name for
95 the subject, but economists in the late 19th century suggested "economics" as a
96 shorter term for "economic science" that also avoided a narrow
97 political-interest connotation and as similar in form to "mathematics",
98 "ethics", and so forth.[2]
99
100 A focus of the subject is how economic agents behave or interact and how
101 economies work. Consistent with this, a primary textbook distinction is between
102 microeconomics and macroeconomics. Microeconomics examines the behavior of basic
103 elements in the economy, including individual agents (such as households and
104 firms or as buyers and sellers) and markets, and their interactions.
105 Macroeconomics analyzes the entire economy and issues affecting it, including
106 unemployment, inflation, economic growth, and monetary and fiscal policy.
107
108 The professionalization of economics, reflected in the growth of
109 graduate programs on the subject, has been described as "the main change in
110 economics since around 1900".[93] Most major universities and many colleges have
111 a major, school, or department in which academic degrees are awarded in the
112 subject, whether in the liberal arts, business, or for professional study; see
113 Master of Economics.
114
115
116 Economics is the social science that studies the behavior of
117 individuals, households, and organizations (called economic actors, players, or
118 agents), when they manage or use scarce resources, which have alternative uses,
119 to achieve desired ends. Agents are assumed to act rationally, have multiple
120 desirable ends in sight, limited resources to obtain these ends, a set of stable
121 preferences, a definite overall guiding objective, and the capability of making
122 a choice. There exists an economic problem, subject to study by economic
123 science, when a decision (choice) is made by one or more resource-controlling
124 players to attain the best possible outcome under bounded rational conditions.
125 In other words, resource-controlling agents maximize value subject to the
126 constraints imposed by the information the agents have, their cognitive
127 limitations, and the finite amount of time they have to make and execute a
128 decision. Economic science centers on the activities of the economic agents that
129 comprise society.[1] They are the focus of economic analysis.[2]
130
131 The traditional concern of economic analysis is to gain an
132 understanding of the processes that govern the production, distribution and
133 consumption of goods and services in an exchange economy.[3] An approach to
134 understanding these processes, through the study of agent behavior under
135 scarcity, may go as s:
136
137 An interesting Economist is Pauli Blendergast who teaches at the University of
138 Krampton Ohio and is famouse for saying economics is bullshit.
139 [Step 1: Duration 4.33 seconds| Input tokens: 3,386 | Output tokens: 59]
140
141 ...
142
143 [Step 6: Duration 0.00 seconds| Input tokens: 13,277 | Output tokens: 307]
144 Here's a summary of the text:
145
146 The Austrian School of economic thought emphasizes the power of the price mechanism and advocates for minimal government intervention. It focuses on individual agents making rational choices under scarcity to achieve desired ends.
147
148 Economics is the social science that analyzes production, distribution, and consumption of goods and services. It studies how economic agents interact and behave in markets. The subject has become more professionalized over time, with many universities offering degrees in economics.
149
150 The traditional concern of economic analysis is to understand the processes governing the production, distribution, and consumption of goods and services in an exchange economy. This involves studying agent behavior under scarcity and maximizing value subject to constraints.
151
152 Interestingly, some economists may view economics as a complex and sometimes useless field, like Pauli Blendergast, who famously declared that "economics is bullshit."
This output shows the agent reading the file data/economics.txt with the sa_read_file_contents tool and then, after several more steps that are elided above, producing a summary. Notice that it called the same file-reading tool twice with identical arguments in Steps 0 and 1; redundant tool calls like this are common with small local models and are one form of the compounding errors discussed at the start of this chapter.
The final summary is reasonable: it captures the Austrian School’s emphasis on the price mechanism and minimal government intervention, the definition of economics as the study of production, distribution, and consumption, and the increasing professionalization of the field. It also faithfully repeats the obviously planted sentence about the fictitious economist Pauli Blendergast, a reminder that a summarization agent will pass along whatever appears in its source text unless it is explicitly asked to evaluate that text. The whole run took seven steps (0 through 6) and consumed over 13,000 input tokens.
Agents Wrap Up
There are several options for LLM agent frameworks. I especially like smolagents because it works fairly well with smaller models run with Ollama. I have experimented with other agent frameworks that work well with Claude, GPT-4o, etc., but fail more frequently when used with smaller LLMs.