
Add support for podman as container runtime#26

Open
nezhar wants to merge 7 commits into main from podman

Conversation

@nezhar
Collaborator

@nezhar nezhar commented Mar 4, 2026

Adds first-class support for Podman as an alternative container runtime to Docker throughout VibePod. It updates documentation, configuration, and CLI commands to allow users to select between Docker and Podman, including auto-detection and explicit selection methods. Additionally, it introduces fixes for rootless Podman compatibility, such as permission handling for bind mounts.

Documentation updates:

  • Added a dedicated Podman setup and usage guide (docs/podman.md), linked from navigation and other docs. Updated Quickstart and main documentation to reflect Podman support and new configuration options.

Config and environment variable documentation:

  • Documented the new container_runtime config key and VP_CONTAINER_RUNTIME environment variable in docs/configuration.md, including example usage.
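
The documented selection methods can be sketched as follows. The `vp run --runtime` and `vp config runtime` invocations are shown as comments because they come from this PR's description and may differ slightly from the final docs; only the environment-variable line is executed here.

```shell
# Per-invocation override (flag added in this PR):
#   vp run --runtime podman
# Persist a global preference:
#   vp config runtime podman
# Or select the runtime via environment variable:
export VP_CONTAINER_RUNTIME=podman
echo "$VP_CONTAINER_RUNTIME"
```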

Summary by CodeRabbit

  • New Features

    • Podman support (rootless) with --runtime and --userns options, global runtime preference (vp config runtime) and VP_CONTAINER_RUNTIME / VP_CONTAINER_USERNS_MODE env vars; runtime auto-detection with configurable probe timeout; proxy URL may use proxy container IP when available.
  • Documentation

    • New "Using Podman" guide; quickstart, config, index and proxy docs updated to include Podman guidance and runtime examples.
  • Tests

    • Added/updated unit tests covering runtime detection, runtime selection, and userns forwarding.

@coderabbitai
Contributor

coderabbitai bot commented Mar 4, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds Podman support and runtime-agnostic container management: runtime detection/selection and persistence, a get_manager factory, runtime-aware DockerManager changes (sockets, volume/userns handling), new CLI --runtime/--userns options and config/env keys, docs, and tests updated to exercise runtime and userns behavior.

Changes

Cohort / Files / Summary

Documentation (docs/configuration.md, docs/index.md, docs/quickstart.md, docs/podman.md, mkdocs.yml, README.md)
  Add Podman docs and navigation; document container_runtime and container_userns_mode, the env vars VP_CONTAINER_RUNTIME / VP_CONTAINER_USERNS_MODE, and runtime CLI usage and examples.

Constants & Config (src/vibepod/constants.py, src/vibepod/core/config.py)
  Add runtime constants (auto/docker/podman), socket constants, default config keys, env mappings, and a get_container_userns_mode() helper.

Runtime Detection & Persistence (src/vibepod/core/runtime.py)
  New module: probes candidate sockets with a configurable timeout, detects available runtimes, supports interactive selection, and exposes resolve/save/get runtime preference APIs.

Docker / Manager Implementation (src/vibepod/core/docker.py)
  Add a get_manager() factory; DockerManager now accepts base_url and runtime; add _prepare_volume_dir and _run_container; apply Podman-aware volume/user/userns handling; propagate userns_mode through run/ensure methods.

CLI Commands (src/vibepod/commands/list_cmd.py, logs.py, proxy.py, run.py, stop.py, update.py, config.py)
  Thread --runtime (and --userns where applicable) through commands; replace direct DockerManager usage with get_manager(...); add a vp config runtime command; forward container_userns_mode into proxy/run flows; update version output to show the runtime.

CLI Alias (src/vibepod/cli.py)
  Extend the ui alias to accept and forward port, no_open, runtime, and userns to logs_start.

Tests (tests/test_list.py, tests/test_proxy_permissions.py, tests/test_run.py, tests/test_config.py, tests/test_runtime.py)
  Update tests to patch get_manager instead of DockerManager; add a runtime attribute on test managers; add runtime probe/selection tests; add userns forwarding assertions and proxy-IP-based proxy URL tests; update stubs to return a proxy with an IP.

Sequence Diagram(s)

sequenceDiagram
    participant User as User/CLI
    participant Cmd as Command Handler
    participant Runtime as runtime.resolve_runtime
    participant Detector as detect_available_runtimes
    participant Sock as Runtime Socket(s)
    participant Factory as get_manager
    participant Manager as DockerManager

    User->>Cmd: invoke command (optional --runtime / --userns)
    Cmd->>Runtime: resolve_runtime(override, config)
    alt override or saved preference
        Runtime-->>Factory: (runtime, socket_url)
    else auto-detect
        Runtime->>Detector: detect_available_runtimes()
        Detector->>Sock: probe candidate sockets (docker/podman)
        Sock-->>Detector: success/fail
        Detector-->>Runtime: available runtimes
        Runtime->>User: prompt (if multiple & TTY)
        User-->>Runtime: selection
        Runtime-->>Factory: (chosen_runtime, socket_url)
    end
    Factory->>Manager: create DockerManager(base_url=socket_url, runtime)
    Manager-->>Cmd: manager instance
    Cmd->>Manager: run/stop/proxy/logs (userns_mode forwarded where applicable)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes


Poem

🐰 I hopped through sockets, sniffed each trail,
Podman and Docker both set sail.
I picked a runtime, saved it neat,
Mended volumes and userns complete.
Two runtimes now — what a treat!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 36.27%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check (✅ Passed): Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): The title 'Add support for podman as container runtime' directly and clearly describes the main change: introducing Podman as a supported container runtime alongside Docker.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

Tip

CodeRabbit can use TruffleHog to scan for secrets in your code with verification capabilities.

Add a TruffleHog config file (e.g. trufflehog-config.yml, trufflehog.yml) to your project to customize detectors and scanning behavior. The tool runs only when a config file is present.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (1)
src/vibepod/commands/run.py (1)

269-292: Podman rootless permission fixup logic is sound.

The approach of injecting chmod -R a+rwX before init commands correctly addresses the subordinate UID permission problem in rootless Podman. The use of dict.fromkeys() for deduplication while preserving order is idiomatic.

One minor edge case: paths containing single quotes would break the shell command. This is unlikely for typical config paths but worth noting.

🛡️ Optional: More robust path quoting

If you want to handle paths with special characters more defensively:

-        quoted = " ".join(f"'{p}'" for p in dict.fromkeys(paths_to_fix))
+        # Use printf %q style quoting for safety with special chars
+        quoted = " ".join(shlex.quote(p) for p in dict.fromkeys(paths_to_fix))

This would require adding import shlex at the top.
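
To illustrate the difference in a standalone sketch (the paths here are made up), shlex.quote safely escapes an embedded single quote and leaves already-safe paths untouched, while dict.fromkeys still deduplicates in order:

```python
import shlex

paths = ["/home/user/it's-here", "/srv/data", "/srv/data"]  # note the quote and the duplicate

# Naive single-quote wrapping breaks on the embedded quote:
naive = " ".join(f"'{p}'" for p in dict.fromkeys(paths))

# shlex.quote escapes safely; plain paths pass through unchanged:
safe = " ".join(shlex.quote(p) for p in dict.fromkeys(paths))

print(safe)  # '/home/user/it'"'"'s-here' /srv/data
```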

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vibepod/commands/run.py` around lines 269 - 292, The current podman_fixup
builds a shell command by joining paths into quoted using simple single-quote
wrapping, which breaks if any path contains a single quote; update the logic in
the block that builds podman_fixup (references: podman_fixup, paths_to_fix,
extra_volumes_pre, quoted) to use robust shell-quoting (e.g., shlex.quote) for
each path and import shlex at the top of the file so paths with special
characters or embedded quotes are safely quoted before constructing the chmod
command.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/podman.md`:
- Around line 86-90: Replace the overly permissive workspace permission
suggestion "chmod -R a+rwX ~/my-project" with owner/group-scoped guidance:
recommend "chmod -R u+rwX ~/my-project" and mention an optional "chmod -R g+rwX
~/my-project" line for collaborative setups; update the docs/podman.md snippet
accordingly and remove or discourage the world-writable (a+rwX) example.

In `@src/vibepod/commands/update.py`:
- Around line 34-39: The change to the info dict in update.py replaced the
single "docker" key with "runtime" and "runtime_version", which breaks consumers
of the --json output; either add a compatibility field "docker" (e.g., derive
from runtime or runtime_version and set info["docker"] =
"<runtime>/<runtime_version>" or original docker string) so both schemas are
present, or update release notes/changelog to document the breaking change;
locate the info dict in update.py (keys: "vibepod", "python", "runtime",
"runtime_version") and implement the chosen fix so existing tooling continues to
work or users are explicitly informed.

In `@src/vibepod/core/config.py`:
- Line 139: The VP_CONTAINER_RUNTIME value should be normalized (trimmed and
lowercased) before storing into container_runtime to avoid mismatches like "
Podman " or "PODMAN"; update the tuple for "VP_CONTAINER_RUNTIME" in the mapping
in src/vibepod/core/config.py to use a parser that performs v.strip().lower()
(or replace str with a named helper parser function) and optionally validate
against allowed runtimes (e.g., "docker","podman") so the parsed value assigned
to container_runtime is normalized and predictable.

In `@src/vibepod/core/runtime.py`:
- Around line 57-62: The _probe_socket function currently creates a Docker
client (client = _docker.DockerClient(...)) and may return False from the except
branch without closing it; update the function to ensure client.close() is
always called by moving the cleanup into a finally block (or by tracking the
client and closing it in both success and failure paths). Specifically, wrap the
client creation, client.ping(), and return logic in try/except/finally,
reference the existing client variable and ensure client.close() is invoked in
the finally so no DockerClient connection leaks occur.

---

Nitpick comments:
In `@src/vibepod/commands/run.py`:
- Around line 269-292: The current podman_fixup builds a shell command by
joining paths into quoted using simple single-quote wrapping, which breaks if
any path contains a single quote; update the logic in the block that builds
podman_fixup (references: podman_fixup, paths_to_fix, extra_volumes_pre, quoted)
to use robust shell-quoting (e.g., shlex.quote) for each path and import shlex
at the top of the file so paths with special characters or embedded quotes are
safely quoted before constructing the chmod command.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 661f0184-e4fd-49f5-965b-613a0a9d7d83

📥 Commits

Reviewing files that changed from the base of the PR and between 944b5dd and e3638fe.

📒 Files selected for processing (18)
  • docs/configuration.md
  • docs/index.md
  • docs/podman.md
  • docs/quickstart.md
  • mkdocs.yml
  • src/vibepod/commands/list_cmd.py
  • src/vibepod/commands/logs.py
  • src/vibepod/commands/proxy.py
  • src/vibepod/commands/run.py
  • src/vibepod/commands/stop.py
  • src/vibepod/commands/update.py
  • src/vibepod/constants.py
  • src/vibepod/core/config.py
  • src/vibepod/core/docker.py
  • src/vibepod/core/runtime.py
  • tests/test_list.py
  • tests/test_proxy_permissions.py
  • tests/test_run.py

Comment on lines +57 to +62
        client = _docker.DockerClient(base_url=base_url, timeout=3)
        client.ping()
        client.close()
        return True
    except Exception:
        return False
Contributor


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

cat -n src/vibepod/core/runtime.py | sed -n '50,70p'

Repository: VibePod/vibepod-cli



Ensure _probe_socket closes the Docker client on failure paths.

If client.ping() fails on line 58, the except block returns False without closing the client created at line 57, leaking the connection. Move cleanup to a finally block to ensure the client is properly closed regardless of success or failure.

Suggested patch
 def _probe_socket(base_url: str) -> bool:
     """Return True if a container engine responds on *base_url*."""
+    client = None
     try:
         import docker as _docker

         client = _docker.DockerClient(base_url=base_url, timeout=3)
         client.ping()
-        client.close()
         return True
     except Exception:
         return False
+    finally:
+        if client is not None:
+            try:
+                client.close()
+            except Exception:
+                pass
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vibepod/core/runtime.py` around lines 57 - 62, The _probe_socket function
currently creates a Docker client (client = _docker.DockerClient(...)) and may
return False from the except branch without closing it; update the function to
ensure client.close() is always called by moving the cleanup into a finally
block (or by tracking the client and closing it in both success and failure
paths). Specifically, wrap the client creation, client.ping(), and return logic
in try/except/finally, reference the existing client variable and ensure
client.close() is invoked in the finally so no DockerClient connection leaks
occur.

@bobek

bobek commented Mar 6, 2026

This is awesome. It doesn't work for me on Debian sid, though. strace revealed that requests are timing out, and indeed they take more than the current 3s timeout (src/vibepod/core/runtime.py:30). Using curl I can confirm that even a simple /version takes almost 5s. Stracing podman shows that it does a lot of work on an incoming request (e.g. running dpkg-query).

Anyway, setting timeout=10 "solved" the issue...
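
A minimal stdlib sketch of the probe-with-timeout idea being discussed. This is not the docker SDK code VibePod actually uses: it only checks that something accepts connections on a Unix socket, not that an engine answers a ping, but it shows where a configurable timeout slots in.

```python
import os
import socket
import tempfile

def probe_socket(path: str, timeout: float = 10.0) -> bool:
    """Return True if something is listening on the Unix socket at *path*."""
    try:
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)  # a generous timeout avoids false negatives on slow engines
            s.connect(path)
        return True
    except OSError:
        return False

# Demonstrate against a throwaway socket: listening -> True, missing -> False.
with tempfile.TemporaryDirectory() as tmp:
    sock_path = os.path.join(tmp, "engine.sock")
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(sock_path)
    server.listen(1)
    print(probe_socket(sock_path))                            # True
    server.close()
    print(probe_socket(os.path.join(tmp, "missing.sock")))    # False
```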

~/Sources/moistureguard/global_senzomatic main ❯ curl -w "\nDNS:%{time_namelookup}s Connect:%{time_connect}s TLS:%{time_appconnect}s StartTransfer:%{time_starttransfer}s Total:%{time_total}s Speed:%{speed_download}B/s Code:%{http_code}\n" --unix-socket /run/user/1000/podman/podman.sock -v -X GET http://d/v5.7.0/libpod/version
Note: Unnecessary use of -X or --request, GET is already inferred.
*   Trying /run/user/1000/podman/podman.sock:0...
* Established connection to d (/run/user/1000/podman/podman.sock port 0) from  port 0 
* using HTTP/1.x
> GET /v5.7.0/libpod/version HTTP/1.1
> Host: d
> User-Agent: curl/8.19.0-rc3
> Accept: */*
> 
* Request completely sent off
< HTTP/1.1 200 OK
< Api-Version: 1.41
< Content-Type: application/json
< Libpod-Api-Version: 5.7.0
< Server: Libpod/5.7.0 (linux)
< X-Reference-Id: 0xc000518008
< Date: Fri, 06 Mar 2026 03:46:02 GMT
< Content-Length: 912
< 
{"Platform":{"Name":"linux/amd64/debian-unknown"},"Components":[{"Name":"Podman Engine","Version":"5.7.0","Details":{"APIVersion":"5.7.0","Arch":"amd64","BuildTime":"2025-12-03T02:36:32+01:00","Experimental":"false","GitCommit":"","GoVersion":"go1.24.9","KernelVersion":"6.18.15+deb14-amd64","MinAPIVersion":"4.0.0","Os":"linux"}},{"Name":"Conmon","Version":"conmon version 2.1.13, commit: unknown","Details":{"Package":"conmon_2.1.13+ds1-2_amd64"}},{"Name":"OCI Runtime (crun)","Version":"crun version 1.26\ncommit: 3241e671f92c33b0c003cd7de319e4f32add6231\nrundir: /run/user/1000/crun\nspec: 1.0.0\n+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL","Details":{"Package":"crun_1.26-1_amd64"}}],"Version":"5.7.0","ApiVersion":"1.41","MinAPIVersion":"1.24","GitCommit":"","GoVersion":"go1.24.9","Os":"linux","Arch":"amd64","KernelVersion":"6.18.15+deb14-amd64","BuildTime":"2025-12-03T02:36:32+01:00"}
* Connection #0 to host d:80 left intact

DNS:0.000022s Connect:0.000058s TLS:0.000000s StartTransfer:4.819876s Total:4.819928s Speed:189B/s Code:200
~/Sources/moistureguard/global_senzomatic main ❯ curl -w "\nDNS:%{time_namelookup}s Connect:%{time_connect}s TLS:%{time_appconnect}s StartTransfer:%{time_starttransfer}s Total:%{time_total}s Speed:%{speed_download}B/s Code:%{http_code}\n" --unix-socket /run/user/1000/podman/podman.sock -v -X GET http://d/version              
Note: Unnecessary use of -X or --request, GET is already inferred.
*   Trying /run/user/1000/podman/podman.sock:0...
* Established connection to d (/run/user/1000/podman/podman.sock port 0) from  port 0 
* using HTTP/1.x
> GET /version HTTP/1.1
> Host: d
> User-Agent: curl/8.19.0-rc3
> Accept: */*
> 
* Request completely sent off
< HTTP/1.1 200 OK
< Api-Version: 1.41
< Content-Type: application/json
< Libpod-Api-Version: 5.7.0
< Server: Libpod/5.7.0 (linux)
< X-Reference-Id: 0xc000682000
< Date: Fri, 06 Mar 2026 03:46:26 GMT
< Content-Length: 912
< 
{"Platform":{"Name":"linux/amd64/debian-unknown"},"Components":[{"Name":"Podman Engine","Version":"5.7.0","Details":{"APIVersion":"5.7.0","Arch":"amd64","BuildTime":"2025-12-03T02:36:32+01:00","Experimental":"false","GitCommit":"","GoVersion":"go1.24.9","KernelVersion":"6.18.15+deb14-amd64","MinAPIVersion":"4.0.0","Os":"linux"}},{"Name":"Conmon","Version":"conmon version 2.1.13, commit: unknown","Details":{"Package":"conmon_2.1.13+ds1-2_amd64"}},{"Name":"OCI Runtime (crun)","Version":"crun version 1.26\ncommit: 3241e671f92c33b0c003cd7de319e4f32add6231\nrundir: /run/user/1000/crun\nspec: 1.0.0\n+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL","Details":{"Package":"crun_1.26-1_amd64"}}],"Version":"5.7.0","ApiVersion":"1.41","MinAPIVersion":"1.24","GitCommit":"","GoVersion":"go1.24.9","Os":"linux","Arch":"amd64","KernelVersion":"6.18.15+deb14-amd64","BuildTime":"2025-12-03T02:36:32+01:00"}
* Connection #0 to host d:80 left intact

DNS:0.000025s Connect:0.000063s TLS:0.000000s StartTransfer:4.676405s Total:4.676445s Speed:195B/s Code:200

@bobek

bobek commented Mar 6, 2026

One more thing -- mapping UIDs/GIDs. It is quite typical with Podman to have ID mapping configured. E.g.

$ cat /etc/subuid
bobek:100000:65536
$ id
uid=1000(bobek) gid=1000(bobek) groups=1000(bobek)

Now, let me run a container in /tmp/xx as my user 1000(bobek):

/tmp/xx ❯ ll
total 0
drwxrwxrwx  2 bobek bobek  40 Mar  6 05:10 .
drwxrwxrwt 24 root  root  700 Mar  6 05:03 ..
/tmp/xx ❯ podman run -it --rm -v .:/workspace debian:bookworm-slim /bin/bash
root@422b730a12eb:/# cd /workspace/
root@422b730a12eb:/workspace# id
uid=0(root) gid=0(root) groups=0(root)
root@422b730a12eb:/workspace# touch first
root@422b730a12eb:/workspace# useradd testuser
root@422b730a12eb:/workspace# su - testuser
su: warning: cannot change directory to /home/testuser: No such file or directory
$ bash
testuser@422b730a12eb:/workspace$ id
uid=1000(testuser) gid=1000(testuser) groups=1000(testuser)
testuser@422b730a12eb:/workspace$ touch second-testuser
testuser@422b730a12eb:/workspace$ 
exit
$ 
root@422b730a12eb:/workspace# 
exit
/tmp/xx ❯ ll
total 0
drwxrwxrwx  2 bobek  bobek   80 Mar  6 05:11 .
drwxrwxrwt 24 root   root   700 Mar  6 05:03 ..
-rw-r--r--  1 bobek  bobek    0 Mar  6 05:11 first
-rw-r--r--  1 100999 100999   0 Mar  6 05:11 second-testuser
/tmp/xx ❯ 

As you see, the container's root user was mapped to my normal UID (1000) while the container user 1000 was bumped accordingly. I can trivially solve this ad hoc with --userns keep-id:

/tmp/xx ❯ ll
total 0
drwxrwxrwx  2 bobek bobek  40 Mar  6 05:15 .
drwxrwxrwt 24 root  root  700 Mar  6 05:03 ..
/tmp/xx ❯ podman run -it --userns keep-id --rm -v .:/workspace debian:bookworm-slim /bin/bash
bobek@3295edc546bf:/$ id 
uid=1000(bobek) gid=1000(bobek) groups=1000(bobek)
bobek@3295edc546bf:/$ cd /workspace/
bobek@3295edc546bf:/workspace$ touch keep-id
bobek@3295edc546bf:/workspace$ 
exit
/tmp/xx ❯ ll
total 0
drwxrwxrwx  2 bobek bobek  60 Mar  6 05:15 .
drwxrwxrwt 24 root  root  700 Mar  6 05:03 ..
-rw-r--r--  1 bobek bobek   0 Mar  6 05:15 keep-id
/tmp/xx ❯

So it would be really great if we could get control over setting userns for containers.

@nezhar nezhar force-pushed the podman branch 2 times, most recently from f811416 to 019cb23 on March 12, 2026 at 19:58
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
tests/test_run.py (1)

271-272: ⚠️ Potential issue | 🟠 Major

The auto-pull tests are still stubbing the old construction path.

run() now reads get_manager(...).runtime, but these cases still patch run_cmd.DockerManager, so the stub is bypassed entirely. That matches the CI failures at Lines 328, 352, 380, and 394. After switching the monkeypatch to run_cmd.get_manager, _StubDockerManager also needs a runtime attribute or these tests will fail on manager.runtime.

🧪 Minimal fix
 class _StubDockerManager:
     """Minimal DockerManager stub that records pull_image calls."""

     def __init__(self) -> None:
+        self.runtime = "docker"
         self.pulled: list[str] = []
 ...
-    monkeypatch.setattr(run_cmd, "DockerManager", lambda: stub)
+    monkeypatch.setattr(run_cmd, "get_manager", lambda **kwargs: stub)

Apply the same monkeypatch replacement in each auto-pull test.

Also applies to: 321-394

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_run.py` around lines 271 - 272, Tests are patching
run_cmd.DockerManager but run() now uses get_manager(...).runtime, so update the
auto-pull tests to monkeypatch run_cmd.get_manager instead of
run_cmd.DockerManager and have the replacement return the existing
_StubDockerManager instance; additionally add a runtime attribute to
_StubDockerManager (e.g., set manager.runtime = <expected runtime stub> or
implement a runtime property on _StubDockerManager) so manager.runtime access in
run() does not fail; update all affected test cases around the auto-pull blocks
(those failing at lines ~328, 352, 380, 394) to use get_manager monkeypatch and
ensure the stub exposes runtime.
src/vibepod/commands/list_cmd.py (1)

59-66: ⚠️ Potential issue | 🟠 Major

Don't suppress explicit runtime-resolution failures here.

This fallback now also swallows bad or unavailable explicit runtime selections and returns running: [] / “No running agents” instead of surfacing the error. That is misleading for --runtime ..., VP_CONTAINER_RUNTIME, or a saved container_runtime preference. Please keep the silent fallback only for the pure auto-detect case without --running.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vibepod/commands/list_cmd.py` around lines 59 - 66, The try/except around
get_manager/list_managed is currently swallowing DockerClientError even when an
explicit runtime was requested; change it so DockerClientError is only silently
handled for the pure auto-detect case (no explicit runtime selection) and only
when not in the `running` path. Specifically, in the except DockerClientError as
exc block, check the `runtime` (the runtime_override passed to get_manager) and
`running` flags: if runtime is None/empty (auto-detect) and not running, set
containers = [] as the fallback; otherwise log the error (using error(str(exc))
or re-raise) and exit with typer.Exit(EXIT_DOCKER_NOT_RUNNING) so explicit
runtime failures surface. Ensure you reference the existing symbols get_manager,
runtime_override/runtime, DockerClientError, running, list_managed, typer.Exit,
and EXIT_DOCKER_NOT_RUNNING when making the change.
♻️ Duplicate comments (1)
src/vibepod/commands/update.py (1)

33-39: ⚠️ Potential issue | 🟡 Minor

version --json still has a breaking schema change.

Replacing the historical docker field with runtime / runtime_version will break existing parsers. Please either keep a compatibility key for one release cycle or call this out explicitly in release notes/changelog.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vibepod/commands/update.py` around lines 33 - 39, The JSON schema change
removed the historical "docker" field causing breakage; in the info dict in
update.py (where _runtime_version(runtime_override=runtime) is used to set
"runtime" and "runtime_version"), add back the legacy keys "docker" and
"docker_version" populated from the same rt_name and rt_version values (or map
appropriately) so existing parsers keep working for one release cycle, and mark
them as deprecated in a comment near the info construction.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/vibepod/commands/run.py`:
- Around line 274-296: The Podman fixup list currently only adds
spec.config_mount_path and extra agent volumes but misses the user workspace
bind (mounted as /workspace), so add the workspace mount path to paths_to_fix
when manager.runtime == RUNTIME_PODMAN; update the block that builds
podman_fixup (references: podman_fixup, paths_to_fix, spec.config_mount_path,
_agent_extra_volumes, agent_config_dir) to also include the workspace mount
(e.g. '/workspace' or spec.workspace_mount if available) before quoting and
creating podman_fixup, so that the chmod command covers the project workspace
the DockerManager.run_agent binds to /workspace and prevents rootless Podman
permission issues; keep the rest of the flow (manager.resolve_launch_command,
_init_entrypoint) unchanged.

In `@src/vibepod/core/docker.py`:
- Around line 93-116: The current _prepare_volume_dir makes VibePod-managed dirs
world-readable/writable (0o777/0o666) which exposes sensitive state; instead
when runtime == RUNTIME_PODMAN change ownership and restrictive perms rather
than broad chmod: determine the container-side UID/GID mapping for rootless
Podman (query Podman or derive subordinate uid range), chown the path and its
children to that mapped UID:GID and set restrictive perms (e.g. 0o700 for dirs,
0o600 for files); if you cannot resolve a mapped UID:GID, fall back to applying
POSIX ACLs (setfacl) granting the mapped container UID access while keeping
owner-only perms for others; update _prepare_volume_dir to perform chown +
0o700/0o600 (or setfacl) and remove any 0o777/0o666 usage so agent config dirs,
log/proxy state, and proxy CA remain private.

In `@src/vibepod/core/runtime.py`:
- Around line 52-62: The _probe_socket function currently uses a 3-second
DockerClient timeout which can produce false negatives for slower Podman
responses; update the timeout passed to _docker.DockerClient in _probe_socket
(and any related client creation there) to a higher value such as 6 seconds (or
8s) so Podman's ~4.7–4.8s startup ping/version latency is accommodated, keeping
the rest of the try/except behavior the same.

---

Outside diff comments:
In `@src/vibepod/commands/list_cmd.py`:
- Around line 59-66: The try/except around get_manager/list_managed is currently
swallowing DockerClientError even when an explicit runtime was requested; change
it so DockerClientError is only silently handled for the pure auto-detect case
(no explicit runtime selection) and only when not in the `running` path.
Specifically, in the except DockerClientError as exc block, check the `runtime`
(the runtime_override passed to get_manager) and `running` flags: if runtime is
None/empty (auto-detect) and not running, set containers = [] as the fallback;
otherwise log the error (using error(str(exc)) or re-raise) and exit with
typer.Exit(EXIT_DOCKER_NOT_RUNNING) so explicit runtime failures surface. Ensure
you reference the existing symbols get_manager, runtime_override/runtime,
DockerClientError, running, list_managed, typer.Exit, and
EXIT_DOCKER_NOT_RUNNING when making the change.

In `@tests/test_run.py`:
- Around line 271-272: Tests are patching run_cmd.DockerManager but run() now
uses get_manager(...).runtime, so update the auto-pull tests to monkeypatch
run_cmd.get_manager instead of run_cmd.DockerManager and have the replacement
return the existing _StubDockerManager instance; additionally add a runtime
attribute to _StubDockerManager (e.g., set manager.runtime = <expected runtime
stub> or implement a runtime property on _StubDockerManager) so manager.runtime
access in run() does not fail; update all affected test cases around the
auto-pull blocks (those failing at lines ~328, 352, 380, 394) to use get_manager
monkeypatch and ensure the stub exposes runtime.

---

Duplicate comments:
In `@src/vibepod/commands/update.py`:
- Around line 33-39: The JSON schema change removed the historical "docker"
field causing breakage; in the info dict in update.py (where
_runtime_version(runtime_override=runtime) is used to set "runtime" and
"runtime_version"), add back the legacy keys "docker" and "docker_version"
populated from the same rt_name and rt_version values (or map appropriately) so
existing parsers keep working for one release cycle, and mark them as deprecated
in a comment near the info construction.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 6c6e202f-a858-456e-8570-c4fe9db2d44d

📥 Commits

Reviewing files that changed from the base of the PR and between e3638fe and 019cb23.

📒 Files selected for processing (18)
  • docs/configuration.md
  • docs/index.md
  • docs/podman.md
  • docs/quickstart.md
  • mkdocs.yml
  • src/vibepod/commands/list_cmd.py
  • src/vibepod/commands/logs.py
  • src/vibepod/commands/proxy.py
  • src/vibepod/commands/run.py
  • src/vibepod/commands/stop.py
  • src/vibepod/commands/update.py
  • src/vibepod/constants.py
  • src/vibepod/core/config.py
  • src/vibepod/core/docker.py
  • src/vibepod/core/runtime.py
  • tests/test_list.py
  • tests/test_proxy_permissions.py
  • tests/test_run.py
🚧 Files skipped from review as they are similar to previous changes (6)
  • docs/configuration.md
  • docs/quickstart.md
  • tests/test_proxy_permissions.py
  • src/vibepod/constants.py
  • src/vibepod/core/config.py
  • docs/podman.md

Comment on lines +93 to +116
def _prepare_volume_dir(self, path: Path) -> None:
    """Ensure *path* is accessible inside a rootless Podman container.

    Rootless Podman maps the host UID to root inside the container, but
    a non-root process started via ``su`` / ``gosu`` in the entrypoint
    gets a subordinate UID that cannot read host files with standard
    permissions. Making VibePod-managed directories world-accessible
    (``0o777``) avoids "Permission denied" errors without needing
    ``userns_mode=keep-id`` (which breaks entrypoints that call ``su``).

    This is only applied to directories VibePod itself creates — never
    to the user's workspace.
    """
    if self.runtime != RUNTIME_PODMAN:
        return
    try:
        path.chmod(0o777)
        for child in path.rglob("*"):
            try:
                child.chmod(0o777 if child.is_dir() else 0o666)
            except OSError:
                pass
    except OSError:
        pass

⚠️ Potential issue | 🟠 Major

Avoid making VibePod state world-readable and world-writable.

_prepare_volume_dir() recursively flips managed trees to 0777/0666 for Podman. In this PR that helper is used for agent config directories, log/proxy state, and the proxy CA directory, so on a multi-user machine other local users can read or tamper with credentials, session data, and CA material. Please use a runtime-specific ownership/ACL strategy instead of relaxing host permissions this broadly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vibepod/core/docker.py` around lines 93 - 116, The current
_prepare_volume_dir makes VibePod-managed dirs world-readable/writable
(0o777/0o666) which exposes sensitive state; instead when runtime ==
RUNTIME_PODMAN change ownership and restrictive perms rather than broad chmod:
determine the container-side UID/GID mapping for rootless Podman (query Podman
or derive subordinate uid range), chown the path and its children to that mapped
UID:GID and set restrictive perms (e.g. 0o700 for dirs, 0o600 for files); if you
cannot resolve a mapped UID:GID, fall back to applying POSIX ACLs (setfacl)
granting the mapped container UID access while keeping owner-only perms for
others; update _prepare_volume_dir to perform chown + 0o700/0o600 (or setfacl)
and remove any 0o777/0o666 usage so agent config dirs, log/proxy state, and
proxy CA remain private.
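A minimal sketch of the restrictive alternative described above (hypothetical helper; it assumes a mapped container UID/GID has already been resolved, and omits the setfacl fallback):

```python
import os
from pathlib import Path


def prepare_volume_dir_restrictive(path: Path, uid: int, gid: int) -> None:
    """Chown a VibePod-managed directory to the mapped container UID/GID
    and apply owner-only permissions instead of world-writable ones."""
    if os.path.ismount(path):
        # Never relax or reassign a bind mount the user provided.
        return
    os.chown(path, uid, gid)
    path.chmod(0o700)
    for child in path.rglob("*"):
        try:
            os.chown(child, uid, gid)
            child.chmod(0o700 if child.is_dir() else 0o600)
        except OSError:
            pass  # best effort, as in the original helper
```

Chowning to a subordinate UID generally needs root or a Podman idmapped mount; where neither is available, the setfacl fallback from the prompt is the remaining option.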


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
src/vibepod/commands/run.py (1)

285-291: ⚠️ Potential issue | 🟠 Major

Podman permission fix still misses the /workspace bind mount.

Line 286 builds paths_to_fix without the main workspace mount, so rootless Podman can still fail on project files written/read under /workspace.

🔧 Suggested patch
     if manager.runtime == RUNTIME_PODMAN:
-        paths_to_fix = [spec.config_mount_path]
+        paths_to_fix = [spec.config_mount_path, "/workspace"]
         extra_volumes_pre = _agent_extra_volumes(selected_agent, agent_config_dir(selected_agent))
         if extra_volumes_pre:
             paths_to_fix.extend(cp for _, cp, _ in extra_volumes_pre)
         quoted = " ".join(f"'{p}'" for p in dict.fromkeys(paths_to_fix))
         podman_fixup = [f"chmod -R a+rwX {quoted} 2>/dev/null || true"]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vibepod/commands/run.py` around lines 285 - 291, The podman permission
fix builds paths_to_fix only from spec.config_mount_path and
_agent_extra_volumes, missing the main workspace bind mount (/workspace); update
the block where manager.runtime == RUNTIME_PODMAN to also include the primary
workspace mount in paths_to_fix (e.g. append spec.workspace_mount_path if that
attribute exists, or fall back to adding the literal '/workspace' before
deduping), so that quoted and podman_fixup cover the workspace as well as the
config and extra volumes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/vibepod/core/runtime.py`:
- Around line 172-179: The prompt currently uses SUPPORTED_RUNTIMES as choices
which can cause a KeyError when accessing available[choice]; change Prompt.ask
to present only actual detected runtimes (use available.keys() or
list(available)) so users can only pick an entry present in the available
mapping, and after selection use available.get(choice) (or validate existence)
before calling save_runtime_preference and returning to avoid KeyError; update
the block around sys.stdin.isatty(), Prompt.ask, save_runtime_preference, and
the return to use these safeguards.
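The safeguarded selection logic can be sketched as follows (hypothetical shape: the prompt call is abstracted into an `ask` callable so the validation is visible without rich's `Prompt.ask`):

```python
from typing import Callable, Optional, Tuple


def choose_runtime(
    available: dict[str, str],
    ask: Callable[[list[str]], str],
) -> Optional[Tuple[str, str]]:
    """Present only detected runtimes and validate the answer, so an
    unexpected choice can never raise KeyError."""
    if not available:
        return None
    choices = list(available)  # only runtimes that actually probed OK
    choice = ask(choices)
    socket_url = available.get(choice)
    if socket_url is None:
        # Defensive: the answer was not one of the detected runtimes.
        return None
    return choice, socket_url
```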

---

Duplicate comments:
In `@src/vibepod/commands/run.py`:
- Around line 285-291: The podman permission fix builds paths_to_fix only from
spec.config_mount_path and _agent_extra_volumes, missing the main workspace bind
mount (/workspace); update the block where manager.runtime == RUNTIME_PODMAN to
also include the primary workspace mount in paths_to_fix (e.g. append
spec.workspace_mount_path if that attribute exists, or fall back to adding the
literal '/workspace' before deduping), so that quoted and podman_fixup cover the
workspace as well as the config and extra volumes.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: b8af64c2-7ead-4138-b984-ad82e6eabf78

📥 Commits

Reviewing files that changed from the base of the PR and between 019cb23 and 7f040ac.

📒 Files selected for processing (13)
  • docs/configuration.md
  • docs/podman.md
  • src/vibepod/cli.py
  • src/vibepod/commands/logs.py
  • src/vibepod/commands/proxy.py
  • src/vibepod/commands/run.py
  • src/vibepod/core/config.py
  • src/vibepod/core/docker.py
  • src/vibepod/core/runtime.py
  • tests/test_config.py
  • tests/test_proxy_permissions.py
  • tests/test_run.py
  • tests/test_runtime.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • docs/configuration.md
  • docs/podman.md


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/vibepod/core/docker.py (1)

287-294: ⚠️ Potential issue | 🟠 Major

Recreate reused service containers when launch options change.

Both methods return any running service container before comparing it to the requested userns_mode. After a user updates config or reruns vp proxy start --userns ... / vp ui --userns ..., the old container keeps running with the previous namespace setting until it is manually stopped or removed.

Also applies to: 350-358

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vibepod/core/docker.py` around lines 287 - 294, The current logic in
find_datasette() handling existing containers returns a running container
without checking if its launch options (notably userns_mode) match the requested
setting; update the check after existing.reload() to read the container's
current user namespace from existing.attrs (e.g., HostConfig.UsernsMode) and
compare it to the requested userns_mode parameter (and keep the existing
PROXY_DB_PATH Env check), and if they differ remove the container
(existing.remove(force=True)) so it will be recreated with the correct
userns_mode; apply the same comparison/fix in the other similar block around the
code referenced at 350-358.
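The comparison this prompt describes can be sketched as a small predicate over the container's attrs (hypothetical helper; `HostConfig.UsernsMode` is the field named in the comment above):

```python
from typing import Optional


def needs_recreate(existing_attrs: dict, requested_userns: Optional[str]) -> bool:
    """Compare a running container's user-namespace setting (as exposed
    in docker-py's Container.attrs) against the requested mode."""
    current = existing_attrs.get("HostConfig", {}).get("UsernsMode", "") or None
    return current != requested_userns
```

When this returns True, the caller would remove the container with `existing.remove(force=True)` so it is recreated with the requested mode.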
♻️ Duplicate comments (3)
src/vibepod/core/runtime.py (2)

66-81: ⚠️ Potential issue | 🟡 Minor

Always close probe clients on failure paths.

If client.ping() throws, the function returns False without closing the client first. Put cleanup in a finally block so repeated runtime detection doesn't leak open connections or file descriptors.

🔧 Proposed fix
 def _probe_socket(base_url: str) -> bool:
     """Return True if a container engine responds on *base_url*."""
+    client = None
     try:
         import docker as _docker
 
         # Podman's Docker-compatible API can take several seconds to answer
         # even simple probes on some hosts, so use a wider timeout here.
-        client = _docker.DockerClient(
+        client = _docker.DockerClient(
             base_url=base_url,
             timeout=_runtime_probe_timeout(),
         )
         client.ping()
-        client.close()
         return True
     except Exception:
         return False
+    finally:
+        if client is not None:
+            try:
+                client.close()
+            except Exception:
+                pass
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vibepod/core/runtime.py` around lines 66 - 81, The _probe_socket function
may leak the Docker client if client.ping() raises; ensure the Docker client
created in _probe_socket is always closed by moving client.close() into a
finally block (or use a context manager) and only call client.ping() between
creation and cleanup; update the code around the DockerClient creation and the
client.ping()/client.close() calls in _probe_socket so any exception path still
closes the client before returning False.
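Written as a standalone function, the cleanup pattern reads (a sketch; docker-py's `DockerClient`, `ping()`, and `close()` are the real API, but the default timeout here is illustrative):

```python
def probe_socket(base_url: str, timeout: float = 10.0) -> bool:
    """Return True if a container engine answers a ping on *base_url*,
    closing the client on every exit path."""
    client = None
    try:
        import docker

        client = docker.DockerClient(base_url=base_url, timeout=timeout)
        client.ping()
        return True
    except Exception:
        return False
    finally:
        if client is not None:
            try:
                client.close()
            except Exception:
                pass
```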

137-149: ⚠️ Potential issue | 🟡 Minor

Trim explicit runtime values before validating them.

lower() alone makes values like " Podman " become " podman ", which still falls into the unknown-runtime branch for CLI, env, or saved-config inputs.

🔧 Proposed fix
     if explicit is not None:
-        explicit = explicit.lower()
+        explicit = explicit.strip().lower()
         if explicit not in SUPPORTED_RUNTIMES:
             raise RuntimeError(
                 f"Unknown container runtime '{explicit}'. "
                 f"Supported: {', '.join(SUPPORTED_RUNTIMES)}"
             )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vibepod/core/runtime.py` around lines 137 - 149, The explicit runtime
string (variable explicit set from override,
os.environ.get("VP_CONTAINER_RUNTIME"), or cfg.get("container_runtime")) must be
trimmed of surrounding whitespace before validation; modify the runtime
resolution code so that after assigning explicit from any source you call
explicit = explicit.strip() (or explicit = explicit.strip() if not None) and
then explicit = explicit.lower() before checking membership in
SUPPORTED_RUNTIMES or comparing to RUNTIME_AUTO; update the block around
explicit, detect_available_runtimes(), and the explicit not in
SUPPORTED_RUNTIMES check to use the stripped+lowercased value.
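The stripped-and-lowercased validation can be sketched in isolation (`SUPPORTED_RUNTIMES` is stubbed here; the real constant lives elsewhere in the package):

```python
SUPPORTED_RUNTIMES = ("docker", "podman")  # stand-in for the real constant


def normalize_runtime(explicit: str) -> str:
    """Strip whitespace before lowercasing so values like ' Podman '
    validate cleanly instead of hitting the unknown-runtime branch."""
    explicit = explicit.strip().lower()
    if explicit not in SUPPORTED_RUNTIMES:
        raise RuntimeError(
            f"Unknown container runtime '{explicit}'. "
            f"Supported: {', '.join(SUPPORTED_RUNTIMES)}"
        )
    return explicit
```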
src/vibepod/core/docker.py (1)

93-117: ⚠️ Potential issue | 🟠 Major

Don't recursively make mounted paths world-writable.

For Podman this turns VibePod-managed config, logs, proxy state, and CA material into 0777/0666, and run_agent() now applies the same helper to every extra_volumes host path. On a multi-user machine that exposes sensitive local state to read/tamper by other users. Use a runtime-specific ownership/ACL strategy instead of broad chmod.

Also applies to: 214-216, 300-301, 364-365

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vibepod/core/docker.py` around lines 93 - 117, The current
_prepare_volume_dir implementation recursively chmods paths (and similar helper
uses in run_agent/extra_volumes) which makes mounted host paths world-writable;
change it to avoid touching bind-mounted or user-provided volumes and to set
safe ownership/permissions only on directories VibePod created: detect mounts
(use os.path.ismount or compare st_dev/st_ino with parent) and skip them, only
operate on paths that we created (or are under our managed data directories),
and instead of 0o777/0666 use restrictive perms (e.g., 0o750 for dirs, 0o640 for
files) and prefer chown/chacl to set appropriate owner/ACL for the runtime UID
where available; update all call sites (e.g., run_agent handling of
extra_volumes) to not call the helper for external mounts.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/vibepod/core/docker.py`:
- Around line 244-247: The code unconditionally forwards userns_mode into
run_kwargs causing Docker client failures; update the places where run_kwargs is
built (the blocks that set run_kwargs["userns_mode"] alongside
run_kwargs["user"]) to validate/translate userns_mode against self.runtime: if
self.runtime indicates Docker only allow or translate to 'host' (or omit it) and
for Podman allow keep-id/nomap/auto, and apply the same validation at all
occurrences where userns_mode is assigned (the three run_kwargs construction
sites in this module) so that Docker never receives unsupported Podman modes.
Ensure checks reference self.runtime and run_kwargs (and the local userns_mode
variable) and skip or map the value before adding run_kwargs["userns_mode"].

---

Outside diff comments:
In `@src/vibepod/core/docker.py`:
- Around line 287-294: The current logic in find_datasette() handling existing
containers returns a running container without checking if its launch options
(notably userns_mode) match the requested setting; update the check after
existing.reload() to read the container's current user namespace from
existing.attrs (e.g., HostConfig.UsernsMode) and compare it to the requested
userns_mode parameter (and keep the existing PROXY_DB_PATH Env check), and if
they differ remove the container (existing.remove(force=True)) so it will be
recreated with the correct userns_mode; apply the same comparison/fix in the
other similar block around the code referenced at 350-358.

---

Duplicate comments:
In `@src/vibepod/core/docker.py`:
- Around line 93-117: The current _prepare_volume_dir implementation recursively
chmods paths (and similar helper uses in run_agent/extra_volumes) which makes
mounted host paths world-writable; change it to avoid touching bind-mounted or
user-provided volumes and to set safe ownership/permissions only on directories
VibePod created: detect mounts (use os.path.ismount or compare st_dev/st_ino
with parent) and skip them, only operate on paths that we created (or are under
our managed data directories), and instead of 0o777/0666 use restrictive perms
(e.g., 0o750 for dirs, 0o640 for files) and prefer chown/chacl to set
appropriate owner/ACL for the runtime UID where available; update all call sites
(e.g., run_agent handling of extra_volumes) to not call the helper for external
mounts.

In `@src/vibepod/core/runtime.py`:
- Around line 66-81: The _probe_socket function may leak the Docker client if
client.ping() raises; ensure the Docker client created in _probe_socket is
always closed by moving client.close() into a finally block (or use a context
manager) and only call client.ping() between creation and cleanup; update the
code around the DockerClient creation and the client.ping()/client.close() calls
in _probe_socket so any exception path still closes the client before returning
False.
- Around line 137-149: The explicit runtime string (variable explicit set from
override, os.environ.get("VP_CONTAINER_RUNTIME"), or
cfg.get("container_runtime")) must be trimmed of surrounding whitespace before
validation; modify the runtime resolution code so that after assigning explicit
from any source you call explicit = explicit.strip() (or explicit =
explicit.strip() if not None) and then explicit = explicit.lower() before
checking membership in SUPPORTED_RUNTIMES or comparing to RUNTIME_AUTO; update
the block around explicit, detect_available_runtimes(), and the explicit not in
SUPPORTED_RUNTIMES check to use the stripped+lowercased value.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: de41294f-e499-4085-80f7-2cef77783a79

📥 Commits

Reviewing files that changed from the base of the PR and between 7f040ac and ace32f6.

📒 Files selected for processing (13)
  • docs/configuration.md
  • docs/podman.md
  • src/vibepod/cli.py
  • src/vibepod/commands/logs.py
  • src/vibepod/commands/proxy.py
  • src/vibepod/commands/run.py
  • src/vibepod/core/config.py
  • src/vibepod/core/docker.py
  • src/vibepod/core/runtime.py
  • tests/test_config.py
  • tests/test_proxy_permissions.py
  • tests/test_run.py
  • tests/test_runtime.py
🚧 Files skipped from review as they are similar to previous changes (5)
  • docs/configuration.md
  • tests/test_runtime.py
  • src/vibepod/commands/run.py
  • src/vibepod/commands/logs.py
  • docs/podman.md


@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
tests/test_runtime.py (1)

110-122: Consider adding a test for _probe_socket failure cleanup.

The current tests verify successful probing, but there's no test verifying that the Docker client is properly closed when ping() fails. This would help catch the resource leak identified in _probe_socket.

Example test
def test_probe_socket_closes_client_on_failure(monkeypatch) -> None:
    recorded: dict[str, bool] = {}

    class _FakeClient:
        def __init__(self, *, base_url: str, timeout: float) -> None:
            pass

        def ping(self) -> None:
            raise ConnectionError("Daemon not responding")

        def close(self) -> None:
            recorded["closed"] = True

    monkeypatch.delenv("VP_RUNTIME_PROBE_TIMEOUT", raising=False)
    monkeypatch.setitem(sys.modules, "docker", SimpleNamespace(DockerClient=_FakeClient))

    assert runtime._probe_socket("unix:///tmp/unavailable.sock") is False
    assert recorded.get("closed") is True, "Client should be closed even on failure"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_runtime.py` around lines 110 - 122, Add a test that asserts Docker
client is closed when _probe_socket fails: mock a fake DockerClient class with
ping() raising (e.g., ConnectionError) and close() recording a flag, monkeypatch
docker.DockerClient to that fake, call
runtime._probe_socket("unix:///tmp/unavailable.sock") and assert it returns
False and that the recorded flag shows close() was called; ensure
VP_RUNTIME_PROBE_TIMEOUT env var is unset in the test (monkeypatch.delenv(...,
raising=False)) so probe uses the default timeout and the test references
runtime._probe_socket, docker.DockerClient, ping, and close to locate relevant
code.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@tests/test_runtime.py`:
- Around line 110-122: Add a test that asserts Docker client is closed when
_probe_socket fails: mock a fake DockerClient class with ping() raising (e.g.,
ConnectionError) and close() recording a flag, monkeypatch docker.DockerClient
to that fake, call runtime._probe_socket("unix:///tmp/unavailable.sock") and
assert it returns False and that the recorded flag shows close() was called;
ensure VP_RUNTIME_PROBE_TIMEOUT env var is unset in the test
(monkeypatch.delenv(..., raising=False)) so probe uses the default timeout and
the test references runtime._probe_socket, docker.DockerClient, ping, and close
to locate relevant code.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a4c23591-6a3e-4d48-a14f-db09f906be91

📥 Commits

Reviewing files that changed from the base of the PR and between ace32f6 and 7613ef1.

📒 Files selected for processing (8)
  • README.md
  • docs/configuration.md
  • docs/podman.md
  • src/vibepod/commands/config.py
  • src/vibepod/core/docker.py
  • src/vibepod/core/runtime.py
  • tests/test_config.py
  • tests/test_runtime.py
✅ Files skipped from review due to trivial changes (2)
  • README.md
  • docs/podman.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/test_config.py

@nezhar
Collaborator Author

nezhar commented Mar 13, 2026

@coderabbitai review

@coderabbitai

coderabbitai bot commented Mar 13, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@greptile-apps

greptile-apps bot commented Mar 14, 2026

Greptile Summary

This PR introduces first-class Podman support as an alternative to Docker across the entire VibePod CLI: auto-detection of available runtimes, a new vp config runtime command, --runtime/--userns flags on every subcommand, rootless Podman permission fixes, and updated documentation. The architecture is sound and the integration test matrix is comprehensive.

Key issues found:

  • _resolved_userns_mode silently drops valid parametric Podman modes (src/vibepod/core/docker.py:138–139). The strict set check normalized in {"auto", "keep-id", "nomap"} discards any extended form such as keep-id:uid=1000 or auto:uidmapping=0:1000:1 without raising an error or warning, so users who configure these modes get silent no-ops.
  • OSError from save_runtime_preference is uncaught in get_manager (src/vibepod/core/docker.py:60–63). When stdin is a TTY and both runtimes are available, resolve_runtime calls save_runtime_preference, which can raise OSError (e.g., read-only config directory). get_manager only catches RuntimeError and ValueError, so the OSError propagates to the command handler as an unhandled exception instead of a friendly DockerClientError.
  • In-container chmod -R a+rwX Podman fixup (src/vibepod/commands/run.py:330–331). The injected entrypoint chmod runs inside the container on bind-mounted directories, propagating world-writable permissions back to the host. Combined with _prepare_volume_dir's 0o777 host-side chmod, API keys and credentials in managed directories become world-readable on the host. See also the related _prepare_volume_dir thread.
  • Runtime detection runs eagerly before checking explicit overrides (src/vibepod/core/runtime.py:185, noted in a previous thread). detect_available_runtimes() probes all sockets before the explicit-override branch is evaluated, adding unnecessary latency on every command when a runtime is already pinned.

Confidence Score: 2/5

  • Not safe to merge as-is: credential directories can be made world-readable on the host, a valid Podman userns mode config is silently ignored, and an uncaught OSError produces a raw traceback in an interactive flow.
  • Three actionable issues of varying severity: the chmod a+rwX host permission bleed-through on credential dirs is a real security concern, _resolved_userns_mode silently drops valid parametric Podman modes (user misconfiguration is invisible), and the uncaught OSError from save_runtime_preference causes a raw Python traceback instead of a graceful error. The runtime detection latency issue (noted in a previous thread) is a performance concern. The overall architecture and test coverage are otherwise solid.
  • src/vibepod/core/docker.py (userns mode filtering and OSError handling) and src/vibepod/commands/run.py (in-container chmod entrypoint fixup) need attention before merging.

Important Files Changed

| Filename | Overview |
| --- | --- |
| src/vibepod/core/runtime.py | New module implementing runtime detection, selection, and preference persistence; resolve_runtime always calls detect_available_runtimes() before checking explicit overrides (noted in previous threads), and the non-TTY both-runtimes path silently assumes Docker is always present in available. |
| src/vibepod/core/docker.py | Core Docker/Podman management; _resolved_userns_mode silently drops valid parametric Podman modes (e.g. keep-id:uid=1000); get_manager does not catch OSError from save_runtime_preference; _prepare_volume_dir sets 0o777 world-writable on credential dirs (noted in previous threads). |
| src/vibepod/commands/run.py | Run command updated to support --runtime and --userns flags; Podman fixup injects chmod -R a+rwX into the container entrypoint which propagates world-writable permissions back to host bind mounts containing API keys; proxy URL is now resolved from container IP with hostname fallback. |
| src/vibepod/commands/config.py | New vp config runtime subcommand added to view/set the global container runtime preference; input validation is thorough and errors are handled cleanly. |
| src/vibepod/core/config.py | New container_runtime and container_userns_mode config keys added with env-var overrides; get_container_userns_mode helper added; normalisation is correct. |
| tests/test_runtime.py | New unit tests for runtime detection and selection; covers probe timeout, interactive prompting, env-var normalisation, and error wrapping; good coverage. |
| tests/test_proxy_permissions.py | Extended proxy permissions tests; covers userns mode forwarding for Podman, omission for Docker, and network reconnection of existing proxy containers. |
| .github/workflows/runtime-integration.yml | New CI workflow running integration tests against 7 runtime scenarios; matrix is well-structured with fail-fast disabled; probe timeout set to 20s for CI. |
| docs/podman.md | New comprehensive Podman setup guide; covers prerequisites, runtime selection methods, known limitations (interactive attach, userns mapping, volume permissions, rootful mode). |

Sequence Diagram

sequenceDiagram
    participant CLI as CLI Command
    participant GM as get_manager()
    participant RR as resolve_runtime()
    participant DAR as detect_available_runtimes()
    participant PS as _probe_socket()
    participant SRP as save_runtime_preference()
    participant DM as DockerManager

    CLI->>GM: get_manager(runtime_override, config)
    GM->>RR: resolve_runtime(override, config)
    Note over RR: Check override → env → config<br/>for explicit runtime
    RR->>DAR: detect_available_runtimes()
    DAR->>PS: probe docker socket
    PS-->>DAR: True/False
    DAR->>PS: probe podman socket(s)
    PS-->>DAR: True/False
    DAR-->>RR: {docker: url, podman: url}

    alt explicit runtime set
        RR-->>GM: (runtime_name, socket_url)
    else single runtime found
        RR-->>GM: (runtime_name, socket_url)
    else both runtimes, TTY
        RR->>RR: Prompt.ask()
        RR->>SRP: save_runtime_preference(choice)
        Note over SRP: OSError not caught by get_manager!
        RR-->>GM: (choice, socket_url)
    else both runtimes, non-TTY
        RR-->>GM: (RUNTIME_DOCKER, docker_url)
    end

    GM->>DM: DockerManager(base_url, runtime)
    DM-->>GM: manager
    GM-->>CLI: manager

Last reviewed commit: "Add integration test..."

Comment on lines +138 to +139
if self.runtime == RUNTIME_PODMAN:
    return normalized if normalized in {"auto", "keep-id", "nomap"} else None

P1 _resolved_userns_mode silently drops valid Podman modes with parameters

The strict set membership check normalized in {"auto", "keep-id", "nomap"} only allows three exact strings. Podman's --userns flag supports many additional modes beyond these three, including parametric forms:

  • keep-id:uid=1000,gid=1000
  • auto:uidmapping=0:1000:1
  • ns:/proc/pid/ns/user
  • private

If a user configures VP_CONTAINER_USERNS_MODE=keep-id:uid=1000 (a valid and documented Podman mode), _resolved_userns_mode returns None with no error or warning. The userns mode is silently ignored and the user has no way to know their setting wasn't applied.

For Podman, the intent is to allow Podman-compatible modes and block Docker-incompatible ones. The safer approach is to pass any mode through to Podman and only filter out known Podman-specific prefixes on the Docker path:

if self.runtime == RUNTIME_PODMAN:
    return normalized  # pass all modes through to Podman

# For Docker, drop known Podman-only modes
if normalized in {"auto", "keep-id", "nomap"} or normalized.startswith(("keep-id:", "auto:")):
    return None

return normalized
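Assembled into a complete function, the pass-through approach might look like this (a sketch; the constant and function names mirror the PR but are stand-ins):

```python
from typing import Optional

RUNTIME_PODMAN = "podman"  # stand-in for the package constant
PODMAN_ONLY_MODES = {"auto", "keep-id", "nomap"}


def resolved_userns_mode(runtime: str, userns_mode: Optional[str]) -> Optional[str]:
    """Forward any mode to Podman, which validates its own modes; drop
    Podman-only modes (including parametric forms like 'keep-id:uid=1000')
    on the Docker path."""
    if not userns_mode:
        return None
    normalized = userns_mode.strip().lower()
    if runtime == RUNTIME_PODMAN:
        return normalized
    if normalized in PODMAN_ONLY_MODES or normalized.startswith(("keep-id:", "auto:")):
        return None
    return normalized
```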

Comment on lines +60 to +63
try:
    runtime_name, socket_url = resolve_runtime(override=runtime_override, config=config)
except (RuntimeError, ValueError) as exc:
    raise DockerClientError(str(exc)) from exc

P1 OSError from save_runtime_preference propagates uncaught

get_manager wraps RuntimeError and ValueError into DockerClientError, but OSError is not included. resolve_runtime calls save_runtime_preference (line 226 in runtime.py) when a user interactively selects a runtime. save_runtime_preference can raise OSError if the config directory is not writable (e.g., permissions issue, read-only filesystem).

Since OSError is not caught here and CLI command handlers only catch DockerClientError, this results in an unhandled Python traceback instead of a friendly error message:

try:
    runtime_name, socket_url = resolve_runtime(override=runtime_override, config=config)
except (RuntimeError, ValueError) as exc:
    raise DockerClientError(str(exc)) from exc

The fix is to also catch OSError:

Suggested change

     try:
         runtime_name, socket_url = resolve_runtime(override=runtime_override, config=config)
-    except (RuntimeError, ValueError) as exc:
+    except (RuntimeError, ValueError, OSError) as exc:
         raise DockerClientError(str(exc)) from exc

Comment on lines +330 to +331
quoted = " ".join(f"'{p}'" for p in dict.fromkeys(paths_to_fix))
podman_fixup = [f"chmod -R a+rwX {quoted} 2>/dev/null || true"]

P1 chmod a+rwX Podman fixup is also applied inside the container

The chmod -R a+rwX shell command runs inside the container at startup on bind-mounted directories. Because these are bind mounts, the permission changes propagate back to the host filesystem. Combined with _prepare_volume_dir (which already sets 0o777 on the host before container start), agent config directories — which may contain API keys and authentication tokens — end up world-readable and world-writable on the host after every Podman run.

This is a different path from the _prepare_volume_dir host-side chmod: this one runs inside the container via the injected entrypoint script. Under rootless Podman, a+rwX inside the container maps to subordinate UIDs on the host side; for users who later run --runtime=docker on the same shared machine, neither guard fires, yet the host permissions were already loosened by a prior Podman run.

Consider scoping the in-container fixup to g+rwX rather than a+rwX, which is consistent with the workspace guidance in docs/podman.md that advises against world-writable directories.
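The scoped fixup command can be built the same way as in the PR (the helper name here is hypothetical; `dict.fromkeys` dedupes the paths while preserving order):

```python
def build_podman_fixup(paths: list) -> str:
    """Build the in-container permission fixup, scoped to group access
    (g+rwX) rather than world-writable (a+rwX)."""
    quoted = " ".join(f"'{p}'" for p in dict.fromkeys(paths))
    return f"chmod -R g+rwX {quoted} 2>/dev/null || true"
```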
