A swarm-based investigation wall plus a persistent personal AI assistant.
Local-first storage, encrypted provider keys, model-provider switching, and scheduled automation.
- Node.js v18 or higher — nodejs.org
- Ollama for local-only use, or API keys for OpenAI / Anthropic / OpenRouter.
Cloud providers are optional. Local Ollama still works without external APIs.
```
# 1. Clone or unzip Mipler
git clone https://github.com/Chintanpatel24/mipler.git
cd mipler-main

# 2. Install once
npm install

# 3. Optional: make the launcher global
npm install -g .

# 4. Start Mipler
mipler
```

If you do not want the global command, run `npm start` inside the project folder instead.

On Windows, `npm install -g .` creates the `mipler.cmd` launcher automatically. For a local repo checkout you can also run `start.cmd`.

Then open your browser at http://localhost:3000.
Mipler starts with two workspaces: Workspace A and Workspace B. They are completely independent:
- Each workspace has its own nodes, edges, and viewport.
- Importing a JSON file into Workspace A does not merge into Workspace B.
- "Clear Workspace" only clears the currently active one.
- You can add more workspaces with + New Investigation in the workspace picker.
Switching workspaces: Click the investigation name in the top-left toolbar to open the workspace picker.
| Button | Card Type | Description |
|---|---|---|
| N | Note | Free-text note card |
| Img | Image | Paste or embed an image |
| PDF | PDF | Embed a PDF document |
| WH | WHOIS | WHOIS domain lookup |
| DNS | DNS | DNS record lookup |
| RI | Reverse Image | Reverse image search |
| OS | OSINT Framework | OSINT methodology viewer |
| URL | Custom URL | Embed any web tool in an iframe |
Click AI in the toolbar to open the File Analysis Panel.
- Upload files — drag-and-drop or click to browse. All file types are accepted:
- JSON, CSV, TXT, MD, LOG, XML, YAML — read as text
- PDF, images — metadata used; content summarized
- Any other file — best-effort text read
- Ask a question — type what you want to know from the files.
- Analyze — Mipler sends the files and question to your local Ollama model.
- Get results:
- Answer — the AI's written response to your question.
- Mindmap — Interactive collapsible mindmap tree based on the analysis.
- ↓ JSON — Download the full mindmap + answer as a structured JSON file.
Click API in the toolbar to open the Ollama chat workspace. This is a direct, conversation-style interface to your local Ollama model — useful for freeform OSINT queries, brainstorming, or research.
- Chat tab — send messages and get responses from Ollama.
- Settings tab — configure the Ollama URL and select from available models (auto-detected).
Click the ⋮ menu → Assistant Settings to configure:
- Provider — `ollama`, `openai`, `anthropic`, or `openrouter`
- Base URL — optional override per provider
- Model — any model ID supported by the provider
- API key — encrypted in backend local storage
- Export — saves the current workspace state as a `.json` file (all investigations, nodes, edges, AI history).
- Import — loads a workspace JSON. It loads into the currently active workspace only; it does not overwrite other workspaces.
- AI Mindmap export — separate JSON download from the File Analysis Panel.
"Ollama not detected"
- Run `OLLAMA_ORIGINS=* ollama serve` in a terminal.
- Make sure the URL in Assistant Settings matches where Ollama is running.
"Ollama error 404"
- The model you specified may not be pulled yet. Run `ollama pull llama3`.
"AI analysis returns no mindmap"
- Try a smaller, faster model like `phi3` or `mistral`.
- Make sure your question is specific enough for the model to structure a response.
Workspace data lost after refresh
- This is by design — all data is in-memory. Use Export before closing the tab.
Port already in use
- Run `PORT=8080 bash start.sh` to start Mipler on a different port.