AI-powered job matching for the European market. Upload your CV, and the app uses Google Gemini to analyze your profile, searches for relevant jobs via Google Jobs, and scores each listing against your skills and experience.
- CV Parsing — Supports PDF, DOCX, Markdown, and plain text
- AI Profile Extraction — Gemini analyzes your CV to extract skills, experience, languages, and more
- Smart Search — Generates optimized search queries in English and local languages
- Job Scoring — Each job is scored 0–100 against your profile with detailed reasoning
- European Market Focus — Accounts for local language requirements, location keywords, and market norms
- Daily Digest — Subscribe for daily email digests with new AI-matched jobs
- Privacy First — GDPR-compliant with auto-expiry, double opt-in, and full data deletion
- Caching — Results are cached across sessions to minimize repeat API calls
- Python 3.10+
- A Google AI Studio API key (for Gemini)
- A SerpApi API key (for Google Jobs search)
```
git clone https://github.com/TheTrueAI/immermatch.git
cd immermatch
python -m venv .venv
source .venv/bin/activate
pip install -e .
```

Copy the example environment file and add your API keys:

```
cp .env.example .env
# Edit .env with your keys
```

Then launch the app:

```
streamlit run immermatch/app.py
```

The app uses four AI agent personas powered by Gemini:
- The Profiler — Extracts a structured candidate profile from raw CV text
- The Headhunter — Generates optimized job search queries based on the profile
- The Screener — Evaluates each job listing against the candidate profile (0–100 score)
- The Advisor — Generates a career summary with market insights and skill gap analysis
Jobs are fetched from Google Jobs via SerpApi, deduplicated, and scored in parallel.
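The fetch → dedupe → score flow can be sketched as follows. This is a minimal illustration, not the project's actual API: `run_pipeline`, `score_job`, and the `title`/`company` dict keys are all assumptions standing in for the real SerpApi and Gemini calls.

```python
from concurrent.futures import ThreadPoolExecutor

def run_pipeline(listings, score_job, max_workers=8):
    """Deduplicate job listings by (title, company), then score them in parallel.

    `listings` is a list of dicts; `score_job` is a callable returning 0-100.
    Both are hypothetical stand-ins for the real SerpApi/Gemini integration.
    """
    seen, unique = set(), []
    for job in listings:
        key = (job["title"].lower(), job["company"].lower())
        if key not in seen:  # drop duplicates fetched by multiple search queries
            seen.add(key)
            unique.append(job)

    # Score independent listings concurrently; pool.map preserves input order.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        scores = list(pool.map(score_job, unique))

    scored = [dict(job, score=s) for job, s in zip(unique, scores)]
    return sorted(scored, key=lambda j: j["score"], reverse=True)
```

The thread pool suits this workload because scoring is I/O-bound (waiting on the Gemini API), so threads overlap network latency without needing multiprocessing.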
Copy .env.example to .env and fill in your keys. The app also supports .streamlit/secrets.toml for Streamlit Cloud deployments.
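For Streamlit Cloud, the same keys can go into `.streamlit/secrets.toml`. A sketch, assuming the app reads flat top-level keys matching the variable names in the table below:

```toml
GOOGLE_API_KEY = "your-google-ai-studio-key"
SERPAPI_KEY = "your-serpapi-key"
```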
| Variable | Required | Description |
|---|---|---|
| `GOOGLE_API_KEY` | Yes | Google AI Studio API key (get one) |
| `SERPAPI_KEY` | Yes | SerpApi key for Google Jobs search (get one) |
| `SUPABASE_URL` | For newsletter | Supabase project URL (dashboard) |
| `SUPABASE_KEY` | For newsletter | Supabase anon/publishable key |
| `SUPABASE_SERVICE_KEY` | For newsletter | Supabase service-role key (bypasses RLS) |
| `RESEND_API_KEY` | For newsletter | Resend API key (get one) |
| `RESEND_FROM` | For newsletter | Sender address, e.g. `Immermatch <digest@yourdomain.com>` |
| `APP_URL` | For newsletter | Public app URL for email verification links |
| `IMPRESSUM_NAME` | For newsletter | Legal notice: your full name (§ 5 DDG) |
| `IMPRESSUM_ADDRESS` | For newsletter | Legal notice: your postal address |
| `IMPRESSUM_EMAIL` | For newsletter | Legal notice: your contact email |
Note: The one-time job search only requires `GOOGLE_API_KEY` and `SERPAPI_KEY`. The Supabase, Resend, and Impressum variables are only needed for the daily digest newsletter feature.
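A startup check along these lines makes the two-tier requirement explicit. This is a sketch: `require_env` is a hypothetical helper, not part of the codebase.

```python
import os

def require_env(names, feature):
    """Raise early with a readable message if any required variable is unset."""
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise RuntimeError(f"{feature} requires: {', '.join(missing)}")

# One-time job search needs only the two core keys:
#   require_env(["GOOGLE_API_KEY", "SERPAPI_KEY"], "Job search")
# The newsletter additionally needs the Supabase, Resend, and Impressum variables.
```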
The daily digest feature requires a Supabase Postgres database. To set up the schema:
- Create a Supabase project and add `SUPABASE_URL`, `SUPABASE_KEY`, and `SUPABASE_SERVICE_KEY` to your `.env`.
- Run the schema checker:

  ```
  python setup_db.py
  ```

- Copy the printed SQL into the Supabase SQL Editor and execute it.
The script creates three tables (subscribers, jobs, job_sent_logs) with appropriate indexes. RLS policies are applied to restrict anonymous access — see AGENTS.md §11 for details.
A GitHub Actions workflow runs daily_task.py every day at 07:00 UTC, sending personalized job digest emails to active subscribers.
How it works per subscriber:
- Loads the stored candidate profile and search queries
- Searches Google Jobs for new listings (shared across subscribers by location)
- Evaluates unseen jobs against the profile via Gemini
- Sends a digest email with matches above the subscriber's score threshold
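The per-subscriber filtering step could look roughly like this. A sketch with hypothetical names and dict shapes; the real logic lives in `daily_task.py`.

```python
def pick_digest_jobs(scored_jobs, sent_ids, threshold):
    """Keep jobs the subscriber hasn't seen yet whose match score clears
    their personal threshold, best matches first.

    `scored_jobs`: list of dicts with "id" and "score" keys (assumed shape).
    `sent_ids`: set of job ids already emailed (cf. the job_sent_logs table).
    """
    fresh = [j for j in scored_jobs if j["id"] not in sent_ids]
    matches = [j for j in fresh if j["score"] >= threshold]
    return sorted(matches, key=lambda j: j["score"], reverse=True)
```

Filtering out already-sent ids before thresholding is what keeps digests limited to new listings even when the shared search results overlap day to day.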
Setup for self-hosting:
- Complete the Database Setup above.
- Add all environment variables from the table above as GitHub Actions secrets.
- The workflow at `.github/workflows/daily-digest.yml` handles the rest.
See AGENTS.md §10 for the full email lifecycle (double opt-in, auto-expiry, unsubscribe).
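The schedule trigger in that workflow presumably looks something like the fragment below; the exact job steps will differ.

```yaml
on:
  schedule:
    - cron: "0 7 * * *"   # every day at 07:00 UTC
  workflow_dispatch: {}    # also allow manual runs from the Actions tab
```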
```
pip install -e ".[test]"
pytest tests/ -v --cov=immermatch --cov-report=term-missing
```

All external services (Gemini, SerpApi, Supabase, Resend) are mocked, so no API keys are needed to run the test suite.
Linting and type checking:
```
ruff check . && ruff format --check .
mypy immermatch/ daily_task.py
```

Pre-commit hooks are available for automatic quality gates:

```
pip install -e ".[dev]"
pre-commit install --hook-type pre-commit --hook-type pre-push
```

Project layout:

```
immermatch/
    app.py               # Streamlit web UI
    llm.py               # Gemini client and retry logic
    cv_parser.py         # CV text extraction (PDF/DOCX/MD/TXT)
    search_agent.py      # Profile extraction and job search
    evaluator_agent.py   # Job scoring and career summary
    models.py            # Pydantic data models
    cache.py             # JSON-based result caching
    db.py                # Supabase database layer
    emailer.py           # Email templates and sending (Resend)
    pages/
        verify.py        # Email verification endpoint
        unsubscribe.py   # One-click unsubscribe endpoint
        impressum.py     # Legal notice (§ 5 DDG)
        privacy.py       # Privacy policy
daily_task.py            # Daily digest cron job (GitHub Actions)
setup_db.py              # Database schema checker / migration helper
tests/                   # Test suite (all mocked)
```
Immermatch is designed with GDPR compliance in mind:
- Session-scoped caching — CV data is cached locally per session and auto-cleaned after 24 hours
- Double opt-in — Newsletter subscriptions require email verification
- 30-day auto-expiry — Subscriber data is automatically deleted after 30 days
- Immediate data deletion — Unsubscribing immediately wipes stored profile data
- No tracking cookies — Only Streamlit's technically necessary session cookies are used
- Open source — Users can audit exactly what happens to their data
See the privacy policy at /privacy in the running app for full details.
See CONTRIBUTING.md for development setup, coding conventions, and how to submit pull requests.
AGPL-3.0 — You're free to use, modify, and self-host Immermatch. If you host a modified version, you must release your changes under the same license.
