This project consists of two phases:
- Path Planning Algorithms Visualization and Comparison
- Reinforcement Learning for Autonomous Navigation
An interactive system for visualizing and comparing pathfinding algorithms through a GUI. Users can select specific algorithms and observe their behavior in real time on a grid-based environment.
- Install Required Dependencies:

  ```
  pip install -r requirements.txt
  ```

- Run Files in the Following Order (if issues arise):

  ```
  python libraries.py
  python node_grid.py
  python algorithms.py
  python gui.py
  python main.py
  ```

Run `algorithms.py` to:
- Generate performance comparison of all algorithms.
- Create `algorithms_results.zip` containing summaries and visualizations.
- Display a comparison table in the terminal.
Run `main.py` to:
- Launch the interactive GUI.
- Select specific algorithms to visualize.
- View real-time execution results.
- Compare selected algorithms' performance.
`algorithm_outputs/`: Main output directory containing:

- Algorithm execution videos
- Path visualization images
- Exploration process visualizations
- Performance statistics
- Multiple algorithm implementations (BFS, DFS, UCS, IDS, Greedy, A*, Hill Climbing, Simulated Annealing).
- Real-time visualization of algorithm execution.
- Comparison of performance metrics (e.g., execution time, explored nodes).
- User-friendly interactive GUI.
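As a rough illustration of what these classical searches do (a minimal sketch, not the project's actual implementation in `algorithms.py`), BFS on an obstacle grid can return both the path and the explored-node count — the same metrics the comparison reports:

```python
from collections import deque

def bfs(grid, start, goal):
    """Breadth-first search on a 0/1 grid (1 = obstacle).
    Returns (path, explored_count); path is [] if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    parent = {start: None}          # also serves as the visited set
    explored = 0
    while frontier:
        node = frontier.popleft()
        explored += 1
        if node == goal:
            # Reconstruct the path by walking parent links back to start.
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1], explored
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parent):
                parent[(nr, nc)] = node
                frontier.append((nr, nc))
    return [], explored

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path, explored = bfs(grid, (0, 0), (2, 0))
print(len(path), explored)  # 7 7
```

The other searches (DFS, UCS, A*, etc.) differ mainly in how the frontier is ordered, which is what makes the explored-node comparison interesting.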
An implementation of Q-Learning-based Reinforcement Learning (RL) for autonomous navigation in a simulated grid environment. The agent learns to navigate efficiently while avoiding obstacles and reaching a goal state.
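The update rule behind tabular Q-Learning fits in a few lines. The sketch below is illustrative only — the state/action sizes are assumptions, not taken from this project's `agent.py`:

```python
# Minimal tabular Q-Learning update (assumed sizes: a 5x5 grid with 4 moves).
n_states, n_actions = 25, 4
alpha, gamma = 0.1, 0.9            # learning rate, discount factor
Q = [[0.0] * n_actions for _ in range(n_states)]

def update(state, action, reward, next_state):
    """Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = reward + gamma * max(Q[next_state])
    Q[state][action] += alpha * (td_target - Q[state][action])

update(state=0, action=1, reward=1.0, next_state=5)
print(Q[0][1])  # 0.1 * (1.0 + 0.9 * 0.0 - 0.0) = 0.1
```

Repeating this update over many episodes propagates the goal reward backward through the table, which is the convergence behavior the visualization shows.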
- Install Required Dependencies:

  ```
  pip install -r requirements.txt
  ```

- Run the Simulation:

  ```
  python main.py
  ```

- Observe Real-Time Visualization:
  - The agent's learning behavior will be displayed on the screen.
  - Metrics will update live during each episode.
- `learning_report.csv`: A detailed report containing:
  - Episode Number
  - Steps Taken per Episode
  - Total Rewards Earned
  - Success/Failure Status
- Statistics Screen: After training, a final screen displays:
  - Total Episodes
  - Average Steps per Episode
  - Average Reward per Episode
  - Success Rate
  - Total Training Time
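These summary statistics can also be recomputed directly from the report file. A minimal standard-library sketch — the column names here are guesses based on the fields listed above, not the file's guaranteed header:

```python
import csv
import io

# Stand-in rows for learning_report.csv; the real header names may differ.
sample = """episode,steps,total_reward,success
1,120,-5.0,False
2,80,2.5,True
3,60,8.0,True
"""

rows = list(csv.DictReader(io.StringIO(sample)))
total_episodes = len(rows)
avg_steps = sum(int(r["steps"]) for r in rows) / total_episodes
avg_reward = sum(float(r["total_reward"]) for r in rows) / total_episodes
success_rate = sum(r["success"] == "True" for r in rows) / total_episodes
print(total_episodes, round(avg_steps, 1), round(avg_reward, 2),
      round(success_rate, 2))  # 3 86.7 1.83 0.67
```

For the real file, replace `io.StringIO(sample)` with `open("learning_report.csv")`.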
- Real-time visualization of the learning process using Pygame.
- Adaptive learning parameters (`alpha`, `gamma`, `epsilon`).
- Post-training analytics through `learning_report.csv`.
- Visualization of agent behavior and policy convergence.
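An adaptive `epsilon` typically means epsilon-greedy action selection with per-episode decay, so the agent shifts from exploring to exploiting. A hedged sketch — the schedule values here are illustrative assumptions, not this project's settings:

```python
import random

# Assumed schedule: start fully exploratory, decay toward a small floor.
epsilon, eps_min, eps_decay = 1.0, 0.05, 0.995

def choose_action(q_row, eps):
    """Explore (random action) with probability eps, else exploit (argmax)."""
    if random.random() < eps:
        return random.randrange(len(q_row))
    return max(range(len(q_row)), key=lambda a: q_row[a])

# Decay once per episode; max() keeps epsilon from shrinking below the floor.
for episode in range(100):
    epsilon = max(eps_min, epsilon * eps_decay)
print(round(epsilon, 3))  # 0.995 ** 100 ≈ 0.606
```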
```
├── phase1/
│   ├── algorithms.py         # Path planning algorithms
│   ├── gui.py                # GUI implementation
│   ├── node_grid.py          # Grid system setup
│   ├── main.py               # Main GUI application
│   ├── libraries.py          # Required dependencies
│   └── algorithm_outputs/
│       ├── videos/           # Algorithm execution videos
│       ├── images/           # Path visualization images
│       └── stats/            # Performance statistics
├── phase2/
│   ├── main.py               # Main RL application
│   ├── agent.py              # Q-Learning agent logic
│   ├── environment.py        # Environment setup
│   ├── visualization.py      # Real-time visualization
│   └── learning_report.csv   # Training metrics output
├── requirements.txt          # Required dependencies
└── README.md                 # Project documentation
```
- Python 3.12
- NumPy
- Matplotlib
- OpenCV
- Tkinter
- Imageio
- Pygame
- Phase 1: Focused on comparing and visualizing pathfinding algorithms with static search strategies.
- Phase 2: Explored dynamic learning behavior through Q-Learning in a grid environment.
- Performance analytics and visualizations are provided for both phases.