A minimal Agentic AI framework built for learning and experimentation.
This project demonstrates how a Large Language Model (LLM) can decide:
• ➜✅ Whether to answer a query directly
• ➜🔧 Or call a tool when needed
• ➜🔁 And return to the LLM after tool execution
It is designed to teach the fundamentals of Agentic Systems.
+-----------+
| __start__ |
+-----------+
      |
      v
+------------------+
| tool_calling_llm | <----+
+------------------+      |
      |        |          |
      |        v          |
      |    +-------+      |
      |    | tools | -----+
      |    +-------+
      v
+---------+
| __end__ |
+---------+
```mermaid
flowchart TD
    A[start] --> B[tool_calling_llm]
    B --> C[tools]
    C --> B
    B --> D[end]
```
start → tool_calling_llm → (tools if needed) → tool_calling_llm → end
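The loop above can be sketched without any framework. In the dependency-free Python sketch below, `fake_llm` is a toy stand-in for the Groq-hosted model, and the digit-based heuristic, state keys, and `add` tool are illustrative assumptions rather than the real API:

```python
# Minimal, dependency-free sketch of the state graph above.
# Node names mirror the diagram; the "LLM" is a toy stand-in.

def fake_llm(state):
    """Pretend LLM: requests the 'add' tool for math-looking queries."""
    query = state["messages"][-1]
    if "tool_result" in state:                 # second pass: finalize answer
        return {"answer": f"The result is {state['tool_result']}"}
    if any(ch.isdigit() for ch in query):      # crude "tool needed" heuristic
        return {"tool_call": ("add", query)}
    return {"answer": f"Direct answer to: {query}"}

def tools_node(state):
    """Execute the requested tool and write its output into the state."""
    _name, query = state["tool_call"]
    nums = [int(tok) for tok in query.split() if tok.isdigit()]
    return {"tool_result": sum(nums)}

def run_graph(query):
    state = {"messages": [query]}
    while True:                                # __start__ -> loop -> __end__
        state.update(fake_llm(state))          # tool_calling_llm node
        if "answer" in state:                  # no (further) tool needed
            return state["answer"]             # __end__
        state.update(tools_node(state))        # tools node, then back to LLM
```

A real implementation would replace `fake_llm` with a chat-completion call whose response either contains tool calls or a final message, but the control flow is the same.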
- User sends a query
- LLM (Llama-3.1-8b-instant via Groq) decides:
  - If a tool is required → move to the tool node
  - If not → respond directly
- If a tool is called:
  - The tool executes
  - Its output is sent back to the LLM
- LLM generates the final response
- End state reached
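The routing decision in step 2 is commonly expressed as a conditional-edge function that inspects the model's last message. Here is a minimal sketch, assuming messages are plain dicts with an optional `tool_calls` field (an illustrative shape, not any specific library's API):

```python
def route_after_llm(state):
    """Conditional edge: go to 'tools' if the last LLM message
    requested a tool call, otherwise end the graph."""
    last_message = state["messages"][-1]
    return "tools" if last_message.get("tool_calls") else "__end__"
```

The graph re-invokes this function after every LLM turn, which is what produces the tools → LLM → end loop shown in the diagram.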
This project is intentionally simple to help you understand:
• What is an Agent?
• What is Tool Calling?
• How do LLMs decide actions?
• Basic Agentic Architecture
• State Graph Workflow