Read any webpage with AI. A Chrome extension for instant page summaries, deep-dive analysis, and smart content chunking — with 31 AI providers, local LLM support, and EN/JA/ZH language support.
Chrome Web Store | Privacy Policy | 中文文档
VibeReader is a Chrome extension that turns your browser into an AI-powered reading assistant. It extracts the visible content from any webpage and sends it to a large language model for analysis — so you can understand long pages in seconds instead of minutes.
Unlike generic AI chatbots that require copy-pasting text, VibeReader works in-place: open the side panel on any page, ask a question, and get AI-powered insights with full page context — no tab-switching, no pasting.
- AI Page Summarizer — Auto-generate bullet-point digests on every page load (EN/JA/ZH)
- Ask Anything About Any Page — Query the current page like a document: "What are the breaking changes?", "Summarize this bug report"
- Multi-Tab AI Analysis — Load content from multiple tabs at once via a floating tab picker, merged with page separators
- Smart Content Chunking — Pages too long for the model? VibeReader automatically splits, analyzes, and merges — no manual intervention
- 15 Professional Prompt Templates — DevOps RCA, Code Review, Security Audit, Legal Analysis, Financial Analysis, News Fact-Check, and more — all in EN/JA/ZH
- 31 AI Providers — Ollama, LM Studio, DeepSeek, SiliconFlow, Moonshot, Zhipu, Tongyi, Doubao, OpenAI, Claude, Gemini, Groq, Mistral, xAI, OpenRouter, Together AI, and more — plus any custom endpoint
- Local LLM First — Run Ollama or LM Studio locally; zero data leaves your device
- Zero Build Step — Pure vanilla JS, no npm, no bundler. Load unpacked and go
| Feature | How It Works |
|---|---|
| One-Click Page Analysis | Extracts visible text, meta tags, title, and URL; strips ads/scripts/styles. Respects user text selection. |
| Multi-Tab Context | [+ Add Page] opens a floating tab picker — select multiple tabs, content is extracted in parallel with inline fallback, then merged with ══════ Page N/M ══════ separators. |
| Auto Summary Sidebar | A collapsible sidebar injected on every new page load with a localized digest — expand to read, collapse to hide. Retry button on failure. |
| RAW_TXT Editor | Editable page context panel with syntax highlighting, position markers (10k/20k/...), text search with prev/next navigation, and live stats (line count + token estimate). |
| Binary Split Strategy | Automatically chunks oversized content (2→4→8→16 segments) at natural paragraph/sentence boundaries, then synthesizes a unified answer. Visual progress bar with per-chunk timing and ETA. |
| 15 Prompt Templates | Professional presets for tech, legal, finance, news, research, writing, product, data — users can create custom templates in Options. |
| Multi-Turn Context | Each follow-up carries the previous AI response, enabling deeper conversational analysis. |
| 4 API Formats | OpenAI Chat Completions, Anthropic Messages, OpenAI Responses API, and Azure OpenAI — auto-routed by provider. |
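The multi-turn context behavior above can be sketched as a small message builder. This is an illustrative sketch, not VibeReader's actual code: `buildMessages` and its parameter names are hypothetical, showing how each follow-up can carry the prior exchange alongside the extracted page text.

```javascript
// Hypothetical helper: assemble a chat-completion message list where every
// follow-up question carries the previous Q&A turns plus the page context.
function buildMessages(systemPrompt, history, userQuestion, pageContext) {
  const messages = [{ role: "system", content: systemPrompt }];
  // Replay earlier turns so the model keeps the conversational thread.
  for (const turn of history) {
    messages.push({ role: "user", content: turn.question });
    messages.push({ role: "assistant", content: turn.answer });
  }
  // Ground the current question in the extracted page text.
  messages.push({ role: "user", content: `${pageContext}\n\n${userQuestion}` });
  return messages;
}
```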
- Google Chrome (or any Chromium-based browser) with Manifest V3 support
- An AI backend (local or cloud):
  - Ollama (Local) — no API key needed. Install from ollama.com and run `ollama serve`
  - LM Studio (Local) — no API key needed. Download models inside LM Studio, then start the server
  - Cloud — API key from any supported provider (DeepSeek, SiliconFlow, OpenAI, Anthropic, Google AI Studio, etc.)
- Download / Clone the repository to your local machine.
- Load as Unpacked Extension
  - Open `chrome://extensions/`
  - Enable Developer mode (toggle in top-right)
  - Click Load unpacked → select the project folder
- Configure AI Settings
  - Click the extension icon → right-click → Options
  - Select your Provider (local or cloud)
  - Base URL and models are auto-filled per provider; enter API Key if required
  - Choose API Format if using a custom endpoint (OpenAI / Anthropic / Responses)
  - Click Test Connection to verify
  - (Optional) Enable Auto Summary
  - (Optional) Add custom Prompt Templates
  - (Optional) Edit the Default System Prompt
- Pin the Extension to the toolbar for quick access.
- No `npm install` or build step required — pure vanilla JS.
- All settings sync across Chrome profiles via `chrome.storage.sync`.
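The split between synced settings and device-local API keys could look like the sketch below. The key names and the `partitionSettings` helper are hypothetical, not VibeReader's actual schema; the point is that sensitive values are routed to `chrome.storage.local` while everything else goes to `chrome.storage.sync`.

```javascript
// Hypothetical key list — illustrative only, not VibeReader's real settings.
const SENSITIVE_KEYS = ["apiKey"];

// Split a settings object into a sync-able part and a device-local part.
function partitionSettings(settings) {
  const synced = {};
  const local = {};
  for (const [key, value] of Object.entries(settings)) {
    (SENSITIVE_KEYS.includes(key) ? local : synced)[key] = value;
  }
  return { synced, local };
}

// In the extension this would feed:
//   chrome.storage.sync.set(synced);  // roams across Chrome profiles
//   chrome.storage.local.set(local);  // never leaves this device
```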
| Category | Providers |
|---|---|
| Local LLM | Ollama, LM Studio |
| China Cloud | DeepSeek, SiliconFlow, Moonshot / Kimi, Zhipu AI, Alibaba Tongyi (DashScope), Doubao / Volcengine |
| International Cloud | OpenAI, Anthropic Claude, Google Gemini, Groq, Mistral AI, xAI Grok |
| Routers / Aggregators | OpenRouter (200+ models), Together AI |
| Custom | Any OpenAI / Anthropic / Responses API compatible endpoint |
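Supporting both OpenAI-style and Anthropic-style endpoints mostly comes down to building the request body differently. A minimal sketch of that routing follows; the field names match the public OpenAI Chat Completions and Anthropic Messages APIs, but `buildRequestBody` itself is an illustration, not VibeReader's internal code.

```javascript
// Build a request body for the selected API format.
function buildRequestBody(format, model, messages) {
  if (format === "anthropic") {
    // The Anthropic Messages API takes the system prompt as a top-level
    // `system` field, not as a message, and requires `max_tokens`.
    const system = messages.find((m) => m.role === "system")?.content ?? "";
    return {
      model,
      system,
      max_tokens: 4096,
      messages: messages.filter((m) => m.role !== "system"),
    };
  }
  // OpenAI Chat Completions — also spoken by most compatible providers.
  return { model, messages };
}
```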
All templates are fully localized in EN/JA/ZH:
| Template | Use Case |
|---|---|
| DevOps Root Cause | SRE incident triage with <root_cause> highlighting |
| Code Review | Bug, security, performance, readability analysis |
| Tech Explainer | Explain technical content for a broad audience |
| Security Audit | OWASP Top 10 / CWE review |
| Legal Document Analysis | Contract, ToS, policy review |
| Legal Clause Comparison | Compare terms against market standards |
| Financial Analysis | Key metrics, trends, red flags, investment thesis |
| Business Strategy Brief | Management consulting framework |
| News Fact-Check & Bias | Source quality, bias indicators, reliability rating |
| Research Paper Critique | Peer review: methodology, findings, reproducibility |
| Copywriting Audit | Message clarity, persuasion, rewrite suggestions |
| Translation Review | EN↔CN / EN↔JA translation + terminology |
| Product Analysis | Value proposition, UX, competitive positioning |
| Meeting Notes → Action Items | Extract decisions, owners, deadlines |
| Data Insight Extraction | Patterns, statistical significance, follow-up analysis |
Open a CI/CD failure page or Kubernetes event log → select the DevOps Root Cause template → click SEND. The AI identifies the root cause, ranks potential causes, and suggests validation steps.
Reading a 20-page API reference or RFC? Enable Auto Summary for a hands-free localized overview on every page load. Or open the side panel and ask: "What are the breaking changes in this document?"
Open a bug report on Jira, GitHub Issues, or any tracker → the extension extracts all visible text (description, comments, metadata) → ask: "Summarize the bug and suggest which component to investigate."
Need to compare information across multiple pages? Click [+ Add Page] to open the tab picker → select the tabs you want to analyze → their content is merged into a single context. Ask: "Compare the approaches described in these pages."
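The multi-tab merge with `══════ Page N/M ══════` separators can be sketched in a few lines. `mergePages` is an illustrative name, assuming each tab's extracted text arrives as a string:

```javascript
// Join each tab's extracted text under a Page N/M separator banner.
function mergePages(pages) {
  const total = pages.length;
  return pages
    .map((text, i) => `══════ Page ${i + 1}/${total} ══════\n${text}`)
    .join("\n\n");
}
```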
Pages with massive logs or thread discussions that exceed model context limits are handled automatically. The Binary Split Strategy splits content at natural boundaries (2→4→8→16 chunks), analyzes each, then merges into a unified response.
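One way to implement such a split is to double the chunk count (2, 4, 8, 16) until every chunk fits the model's limit, cutting at paragraph breaks where possible. The sketch below assumes that shape; it is not VibeReader's actual implementation.

```javascript
// Split oversized text into 2/4/8/16 chunks, preferring paragraph boundaries.
// Intended for text already known to exceed maxLen.
function binarySplit(text, maxLen, maxChunks = 16) {
  for (let n = 2; n <= maxChunks; n *= 2) {
    const target = Math.ceil(text.length / n);
    const chunks = [];
    let rest = text;
    while (rest.length > 0) {
      if (rest.length <= target) { chunks.push(rest); break; }
      // Cut at the last paragraph break before the target size, if any.
      const cut = rest.lastIndexOf("\n\n", target);
      const at = cut > 0 ? cut : target;
      chunks.push(rest.slice(0, at));
      rest = rest.slice(at).replace(/^\n+/, "");
    }
    // Accept the first chunk count where every piece fits the limit.
    if (chunks.every((c) => c.length <= maxLen)) return chunks;
  }
  throw new Error("content too large even at max chunk count");
}
```

Each chunk would then be analyzed independently and the per-chunk answers merged into a final synthesis pass.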
```
vibe_reader/
├── manifest.json      # Chrome extension manifest (MV3)
├── background.js      # Service worker: auto summary, tab URL tracking
├── content.js         # Content script: extract visible page text
├── popup.html         # Side panel UI (main interface)
├── popup.js           # Side panel logic: chat, split strategy, raw editor
├── popup.css          # Side panel styles
├── options.html       # Settings page UI
├── options.js         # Settings page logic: provider config, templates
├── api-utils.js       # Shared API layer: 31 providers, 4 formats
├── i18n.js            # i18n: UI strings, templates, prompts (EN/JA/ZH)
├── autosum.js         # Auto summary: sidebar injection + display
├── autosum.css        # Auto summary sidebar styles
├── tab-picker.html    # Multi-tab picker floating window
├── tab-picker.js      # Tab picker logic: list, select, load
├── privacy.html       # Privacy policy (EN/JA/ZH)
├── build.sh           # Build & lint script (validate, check, package)
├── marked.min.js      # Markdown renderer
└── icons/
    ├── icon16.png
    ├── icon48.png
    └── icon128.png
```
A single build.sh handles validation, linting, and packaging. No external tools required beyond bash, node, and optionally jq.
```bash
# full build: validate + lint + create zip
./build.sh

# dry run: validate + lint only, no zip
./build.sh --check

# lint only
./build.sh --lint
```

The build pipeline runs 9 checks in sequence:
| Step | What It Does |
|---|---|
| 1. File check | Verify all required source files exist and are non-empty |
| 2. Manifest schema | Validate manifest_version, name, description length, service_worker |
| 3. Cross-reference | Ensure every file referenced in manifest.json actually exists on disk |
| 4. JS syntax | Run node --check on all .js files |
| 5. JSON validation | Parse manifest.json with jq or python3 |
| 6. CSS brace balance | Count { vs } in all .css files |
| 7. HTML structure | Check DOCTYPE, charset meta, script tag pairing |
| 8. Code hygiene | Scan for console.log, debugger, eval(), hardcoded API keys, TODO/FIXME |
| 9. File size sanity | Warn if any single file > 500 KB or total > 2 MB |
On success, outputs VibeReader-v{version}.zip (version read from manifest.json).
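The CSS brace-balance check (step 6) is simple enough to sketch as a standalone function. This is an illustration of the idea in JavaScript, not the actual `build.sh` implementation:

```javascript
// Count opening vs closing braces in a stylesheet and report imbalance.
function braceBalance(css) {
  let open = 0, close = 0;
  for (const ch of css) {
    if (ch === "{") open++;
    else if (ch === "}") close++;
  }
  return { open, close, balanced: open === close };
}
```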
- Local-first: Ollama and LM Studio process everything on your machine
- API keys stored locally — `chrome.storage.local`, never synced to cloud
- No analytics, no tracking, no telemetry
- You choose when and where your data is sent
- Full transparency: review extracted content in the RAW_TXT editor before any API call
- Open source: inspect every line of code
See PRIVACY.md for the full privacy policy (EN/ZH).
| Component | Detail |
|---|---|
| Platform | Chrome Extension, Manifest V3, Side Panel API |
| Language | Vanilla JavaScript (no framework, no bundler) |
| UI | Editorial minimal CSS (shadcn/ui-inspired), Inter + Noto Sans JP/SC fonts |
| i18n | English, Japanese, Chinese — UI, templates, system prompts, auto-summary prompts |
| API Formats | OpenAI Chat Completions, Anthropic Messages, OpenAI Responses, Azure OpenAI |
| Providers | 16 built-in + 15 custom-configured (local-first: Ollama, LM Studio) |
| Storage | chrome.storage.sync (settings) + chrome.storage.local (API keys) |
MIT License. See LICENSE for details.
Open source. No account required. No subscriptions.
v1.5 · Manifest V3 · Side Panel Interface