AI-powered workflow blueprint generator using Gemini AI and Model Context Protocol (MCP).
This system generates structured workflow blueprints from natural language descriptions. It provides a modern web interface for creating, visualizing, and exporting workflows in multiple formats (JSON, YAML, BPMN).
- 🧠 AI-Powered Generation: Converts text descriptions into structured workflows
- 🎨 Modern Web Interface: Clean, intuitive UI for workflow creation
- 📤 Multiple Export Formats: JSON, YAML, and BPMN support
- 🔄 Real-time Processing: Instant workflow generation and preview
- 🛠️ MCP Integration: Built on Model Context Protocol for extensibility
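As a rough illustration of what a generated blueprint might look like once exported to JSON, here is a minimal sketch. All field names are hypothetical and do not reflect the tool's actual schema:

```python
import json

# Hypothetical shape of a generated workflow blueprint.
# Field names here are illustrative, not the project's real schema.
blueprint = {
    "name": "employee_onboarding",
    "domain": "HR",
    "steps": [
        {"id": 1, "task": "HR approval", "next": 2},
        {"id": 2, "task": "IT account setup", "next": None},
    ],
}

print(json.dumps(blueprint, indent=2))
```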
For Docker:
- Docker and Docker Compose installed
- Gemini API key (get one at Google AI Studio)
For Local Python:
- Python 3.11 or higher
- Gemini API key (get one at Google AI Studio)
1. Install dependencies:

   ```
   pip install -r requirements.txt
   ```

2. Configure the API key. Create a `.env` file in the project root:

   ```
   GEMINI_API_KEY=your_api_key_here
   ```

3. Verify configuration: ensure `config/data.json` exists with the required settings.
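To check that the `.env` file is picked up before starting the server, a standard-library-only loader can be sketched like this (the project may already ship a dotenv helper in `requirements.txt`; this is just an assumption-free fallback):

```python
import os
from pathlib import Path

def load_env(path=".env"):
    """Minimal .env loader using only the standard library."""
    env = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

# Example: fail fast if the key is missing
# os.environ.update(load_env())
# assert "GEMINI_API_KEY" in os.environ, "GEMINI_API_KEY not found"
```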
Build the image first:

```
docker build -t workflow-generator .
```

Using Docker Compose (modern Docker/cloud environments):

```
docker compose up -d
```

Or legacy docker-compose (if the above doesn't work):

```
docker-compose up -d
```

Alternative: using docker run (for cloud environments without Compose):

```
docker run -d \
  --name workflow-generator \
  -p 5000:5000 \
  -e GEMINI_API_KEY=your_api_key_here \
  -e GEMINI_MODEL=gemini-2.5-flash \
  -v $(pwd)/data:/app/data \
  -v $(pwd)/client:/app/client \
  -v $(pwd)/server:/app/server \
  --restart unless-stopped \
  workflow-generator
```

Then open your browser to: http://localhost:5000
Stop the application:

```
# If using compose:
docker compose down

# If using docker run:
docker stop workflow-generator
docker rm workflow-generator
```

View logs:

```
# If using compose:
docker compose logs -f

# If using docker run:
docker logs -f workflow-generator
```

Rebuild after changes:

```
# If using compose:
docker compose up --build -d

# If using docker run:
docker stop workflow-generator
docker rm workflow-generator
docker build -t workflow-generator .
# Then run the docker run command again
```

Start the application (local setup):

```
start.bat
```

Or manually:

```
python server\api_server.py
```

Then open your browser to: http://localhost:5000
```
neu/
├── server/              # Server components
│   ├── server.py        # MCP server backend
│   ├── api_server.py    # REST API server
│   └── server_web.py    # Static file server
├── client/              # Frontend files
│   ├── index.html       # Web interface
│   ├── script.js        # Frontend logic
│   └── styles.css       # Styles
├── config/
│   └── data.json        # Configuration
├── data/
│   └── workflows/       # Generated workflows
├── start.bat            # Startup script
└── requirements.txt     # Python dependencies
```
- Launch the application using `start.bat`
- Open the web interface at http://localhost:5000
- Enter a workflow description in natural language
- Select the domain (HR, IT, Customer Service, etc.)
- Generate and view your workflow
- Export in your preferred format (JSON, YAML, BPMN)
"Create an employee onboarding workflow with HR approval and IT setup steps"
"Design a customer support ticket escalation process with SLA checks"
"Build a purchase approval workflow with budget validation and multi-level approval"
- MCP Server (`server.py`): handles workflow generation logic using Gemini AI
- REST API (`api_server.py`): bridges the web interface and the MCP server
- Web Frontend: modern HTML/JS interface for workflow creation
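The contract between the frontend and the REST API can be sketched roughly as follows. The payload field names and the generator function are assumptions for illustration, not the project's documented API:

```python
import json

def generate_workflow(description: str, domain: str) -> dict:
    """Stand-in for the MCP-backed generator: returns a blueprint dict.
    The real server would produce the steps via Gemini."""
    return {
        "name": description.lower().replace(" ", "_")[:40],
        "domain": domain,
        "steps": [],  # filled in by the AI backend in the real system
    }

# What the frontend might send, and what it would get back:
payload = {"description": "Employee onboarding", "domain": "HR"}
response = generate_workflow(**payload)
print(json.dumps(response))
```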
The system requires a Google Gemini API key:
- Visit Google AI Studio
- Create an API key
- Add it to your `.env` file
Generated workflows are saved to `data/workflows/` in your selected format:

- `workflow_name.json`: JSON format
- `workflow_name.yaml`: YAML format
- `workflow_name.bpmn`: BPMN XML format
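An exporter mapping a chosen format to its output path under `data/workflows/` might look like the sketch below. The extensions match the list above; the function itself is illustrative, not the project's actual code:

```python
from pathlib import Path

# Format -> file extension, matching the export formats listed above.
EXTENSIONS = {"json": ".json", "yaml": ".yaml", "bpmn": ".bpmn"}

def export_path(name: str, fmt: str, base: str = "data/workflows") -> Path:
    """Return the output path for a workflow in the given format."""
    if fmt not in EXTENSIONS:
        raise ValueError(f"unsupported format: {fmt}")
    return Path(base) / f"{name}{EXTENSIONS[fmt]}"

print(export_path("workflow_name", "yaml"))
```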
"GEMINI_API_KEY not found"
- Ensure the `.env` file exists in the project root
- Verify the API key is correctly set
"Port 5000 already in use"
- Stop other applications using port 5000
- Or edit `api_server.py` to use a different port
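To find out whether something is already listening on port 5000 before starting the server, a small standard-library check can help (this helper is a sketch, not part of the project):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex((host, port)) == 0

# Before starting the server:
# if port_in_use(5000):
#     print("Port 5000 already in use; pick another port in api_server.py")
```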
"Module not found"
- Run `pip install -r requirements.txt`
MIT License