Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the review settings.
📝 Walkthrough

Adds Podman support and runtime-agnostic container management: runtime detection/selection and persistence, a `get_manager` factory, runtime-aware `DockerManager` changes (sockets, volume/userns handling), new CLI `--runtime`/`--userns` options and config/env keys, docs, and tests updated to exercise runtime and userns behavior.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User/CLI
    participant Cmd as Command Handler
    participant Runtime as runtime.resolve_runtime
    participant Detector as detect_available_runtimes
    participant Sock as Runtime Socket(s)
    participant Factory as get_manager
    participant Manager as DockerManager
    User->>Cmd: invoke command (optional --runtime / --userns)
    Cmd->>Runtime: resolve_runtime(override, config)
    alt override or saved preference
        Runtime-->>Factory: (runtime, socket_url)
    else auto-detect
        Runtime->>Detector: detect_available_runtimes()
        Detector->>Sock: probe candidate sockets (docker/podman)
        Sock-->>Detector: success/fail
        Detector-->>Runtime: available runtimes
        Runtime->>User: prompt (if multiple & TTY)
        User-->>Runtime: selection
        Runtime-->>Factory: (chosen_runtime, socket_url)
    end
    Factory->>Manager: create DockerManager(base_url=socket_url, runtime)
    Manager-->>Cmd: manager instance
    Cmd->>Manager: run/stop/proxy/logs (userns_mode forwarded where applicable)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 4
🧹 Nitpick comments (1)
src/vibepod/commands/run.py (1)
269-292: Podman rootless permission fixup logic is sound.

The approach of injecting `chmod -R a+rwX` before init commands correctly addresses the subordinate UID permission problem in rootless Podman. The use of `dict.fromkeys()` for deduplication while preserving order is idiomatic.

One minor edge case: paths containing single quotes would break the shell command. This is unlikely for typical config paths but worth noting.

🛡️ Optional: More robust path quoting

If you want to handle paths with special characters more defensively:

```diff
-    quoted = " ".join(f"'{p}'" for p in dict.fromkeys(paths_to_fix))
+    # Use printf %q style quoting for safety with special chars
+    quoted = " ".join(shlex.quote(p) for p in dict.fromkeys(paths_to_fix))
```

This would require adding `import shlex` at the top.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vibepod/commands/run.py` around lines 269 - 292, The current podman_fixup builds a shell command by joining paths into quoted using simple single-quote wrapping, which breaks if any path contains a single quote; update the logic in the block that builds podman_fixup (references: podman_fixup, paths_to_fix, extra_volumes_pre, quoted) to use robust shell-quoting (e.g., shlex.quote) for each path and import shlex at the top of the file so paths with special characters or embedded quotes are safely quoted before constructing the chmod command.
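To make the quoting concern concrete, here is a small runnable sketch (the paths are made up) contrasting naive single-quote wrapping with `shlex.quote`:

```python
import shlex

# Hypothetical paths; the one with an apostrophe is the problem case.
paths = ["/home/user/.config", "/tmp/it's-here", "/data/with space"]

# Naive single-quote wrapping: the embedded apostrophe ends the quoted
# region early, so the shell would mis-parse the resulting command.
naive = " ".join(f"'{p}'" for p in dict.fromkeys(paths))

# shlex.quote escapes each path safely regardless of its contents.
safe = " ".join(shlex.quote(p) for p in dict.fromkeys(paths))

print(f"chmod -R a+rwX {safe} 2>/dev/null || true")
```

`dict.fromkeys()` is kept so deduplication still preserves insertion order, matching the original snippet.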
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/podman.md`:
- Around line 86-90: Replace the overly permissive workspace permission
suggestion "chmod -R a+rwX ~/my-project" with owner/group-scoped guidance:
recommend "chmod -R u+rwX ~/my-project" and mention an optional "chmod -R g+rwX
~/my-project" line for collaborative setups; update the docs/podman.md snippet
accordingly and remove or discourage the world-writable (a+rwX) example.
In `@src/vibepod/commands/update.py`:
- Around line 34-39: The change to the info dict in update.py replaced the
single "docker" key with "runtime" and "runtime_version", which breaks consumers
of the --json output; either add a compatibility field "docker" (e.g., derive
from runtime or runtime_version and set info["docker"] =
"<runtime>/<runtime_version>" or original docker string) so both schemas are
present, or update release notes/changelog to document the breaking change;
locate the info dict in update.py (keys: "vibepod", "python", "runtime",
"runtime_version") and implement the chosen fix so existing tooling continues to
work or users are explicitly informed.
In `@src/vibepod/core/config.py`:
- Line 139: The VP_CONTAINER_RUNTIME value should be normalized (trimmed and
lowercased) before storing into container_runtime to avoid mismatches like "
Podman " or "PODMAN"; update the tuple for "VP_CONTAINER_RUNTIME" in the mapping
in src/vibepod/core/config.py to use a parser that performs v.strip().lower()
(or replace str with a named helper parser function) and optionally validate
against allowed runtimes (e.g., "docker","podman") so the parsed value assigned
to container_runtime is normalized and predictable.
In `@src/vibepod/core/runtime.py`:
- Around line 57-62: The _probe_socket function currently creates a Docker
client (client = _docker.DockerClient(...)) and may return False from the except
branch without closing it; update the function to ensure client.close() is
always called by moving the cleanup into a finally block (or by tracking the
client and closing it in both success and failure paths). Specifically, wrap the
client creation, client.ping(), and return logic in try/except/finally,
reference the existing client variable and ensure client.close() is invoked in
the finally so no DockerClient connection leaks occur.
---
Nitpick comments:
In `@src/vibepod/commands/run.py`:
- Around line 269-292: The current podman_fixup builds a shell command by
joining paths into quoted using simple single-quote wrapping, which breaks if
any path contains a single quote; update the logic in the block that builds
podman_fixup (references: podman_fixup, paths_to_fix, extra_volumes_pre, quoted)
to use robust shell-quoting (e.g., shlex.quote) for each path and import shlex
at the top of the file so paths with special characters or embedded quotes are
safely quoted before constructing the chmod command.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 661f0184-e4fd-49f5-965b-613a0a9d7d83
📒 Files selected for processing (18)
- docs/configuration.md
- docs/index.md
- docs/podman.md
- docs/quickstart.md
- mkdocs.yml
- src/vibepod/commands/list_cmd.py
- src/vibepod/commands/logs.py
- src/vibepod/commands/proxy.py
- src/vibepod/commands/run.py
- src/vibepod/commands/stop.py
- src/vibepod/commands/update.py
- src/vibepod/constants.py
- src/vibepod/core/config.py
- src/vibepod/core/docker.py
- src/vibepod/core/runtime.py
- tests/test_list.py
- tests/test_proxy_permissions.py
- tests/test_run.py
src/vibepod/core/runtime.py
Outdated
```python
        client = _docker.DockerClient(base_url=base_url, timeout=3)
        client.ping()
        client.close()
        return True
    except Exception:
        return False
```
🧩 Analysis chain
🏁 Script executed:
```shell
cat -n src/vibepod/core/runtime.py | sed -n '50,70p'
```

Repository: VibePod/vibepod-cli
Length of output: 785
Ensure _probe_socket closes the Docker client on failure paths.
If client.ping() fails on line 58, the except block returns False without closing the client created at line 57, leaking the connection. Move cleanup to a finally block to ensure the client is properly closed regardless of success or failure.
Suggested patch

```diff
 def _probe_socket(base_url: str) -> bool:
     """Return True if a container engine responds on *base_url*."""
+    client = None
     try:
         import docker as _docker
         client = _docker.DockerClient(base_url=base_url, timeout=3)
         client.ping()
-        client.close()
         return True
     except Exception:
         return False
+    finally:
+        if client is not None:
+            try:
+                client.close()
+            except Exception:
+                pass
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
def _probe_socket(base_url: str) -> bool:
    """Return True if a container engine responds on *base_url*."""
    client = None
    try:
        import docker as _docker
        client = _docker.DockerClient(base_url=base_url, timeout=3)
        client.ping()
        return True
    except Exception:
        return False
    finally:
        if client is not None:
            try:
                client.close()
            except Exception:
                pass
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/vibepod/core/runtime.py` around lines 57 - 62, The _probe_socket function
currently creates a Docker client (client = _docker.DockerClient(...)) and may
return False from the except branch without closing it; update the function to
ensure client.close() is always called by moving the cleanup into a finally
block (or by tracking the client and closing it in both success and failure
paths). Specifically, wrap the client creation, client.ping(), and return logic
in try/except/finally, reference the existing client variable and ensure
client.close() is invoked in the finally so no DockerClient connection leaks
occur.
This is awesome. It doesn't work for me on Debian sid. Anyway, setting

```shell
~/Sources/moistureguard/global_senzomatic main ❯ curl -w "\nDNS:%{time_namelookup}s Connect:%{time_connect}s TLS:%{time_appconnect}s StartTransfer:%{time_starttransfer}s Total:%{time_total}s Speed:%{speed_download}B/s Code:%{http_code}\n" --unix-socket /run/user/1000/podman/podman.sock -v -X GET http://d/v5.7.0/libpod/version
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying /run/user/1000/podman/podman.sock:0...
* Established connection to d (/run/user/1000/podman/podman.sock port 0) from port 0
* using HTTP/1.x
> GET /v5.7.0/libpod/version HTTP/1.1
> Host: d
> User-Agent: curl/8.19.0-rc3
> Accept: */*
>
* Request completely sent off
< HTTP/1.1 200 OK
< Api-Version: 1.41
< Content-Type: application/json
< Libpod-Api-Version: 5.7.0
< Server: Libpod/5.7.0 (linux)
< X-Reference-Id: 0xc000518008
< Date: Fri, 06 Mar 2026 03:46:02 GMT
< Content-Length: 912
<
{"Platform":{"Name":"linux/amd64/debian-unknown"},"Components":[{"Name":"Podman Engine","Version":"5.7.0","Details":{"APIVersion":"5.7.0","Arch":"amd64","BuildTime":"2025-12-03T02:36:32+01:00","Experimental":"false","GitCommit":"","GoVersion":"go1.24.9","KernelVersion":"6.18.15+deb14-amd64","MinAPIVersion":"4.0.0","Os":"linux"}},{"Name":"Conmon","Version":"conmon version 2.1.13, commit: unknown","Details":{"Package":"conmon_2.1.13+ds1-2_amd64"}},{"Name":"OCI Runtime (crun)","Version":"crun version 1.26\ncommit: 3241e671f92c33b0c003cd7de319e4f32add6231\nrundir: /run/user/1000/crun\nspec: 1.0.0\n+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL","Details":{"Package":"crun_1.26-1_amd64"}}],"Version":"5.7.0","ApiVersion":"1.41","MinAPIVersion":"1.24","GitCommit":"","GoVersion":"go1.24.9","Os":"linux","Arch":"amd64","KernelVersion":"6.18.15+deb14-amd64","BuildTime":"2025-12-03T02:36:32+01:00"}
* Connection #0 to host d:80 left intact
DNS:0.000022s Connect:0.000058s TLS:0.000000s StartTransfer:4.819876s Total:4.819928s Speed:189B/s Code:200
```

```shell
~/Sources/moistureguard/global_senzomatic main ❯ curl -w "\nDNS:%{time_namelookup}s Connect:%{time_connect}s TLS:%{time_appconnect}s StartTransfer:%{time_starttransfer}s Total:%{time_total}s Speed:%{speed_download}B/s Code:%{http_code}\n" --unix-socket /run/user/1000/podman/podman.sock -v -X GET http://d/version
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying /run/user/1000/podman/podman.sock:0...
* Established connection to d (/run/user/1000/podman/podman.sock port 0) from port 0
* using HTTP/1.x
> GET /version HTTP/1.1
> Host: d
> User-Agent: curl/8.19.0-rc3
> Accept: */*
>
* Request completely sent off
< HTTP/1.1 200 OK
< Api-Version: 1.41
< Content-Type: application/json
< Libpod-Api-Version: 5.7.0
< Server: Libpod/5.7.0 (linux)
< X-Reference-Id: 0xc000682000
< Date: Fri, 06 Mar 2026 03:46:26 GMT
< Content-Length: 912
<
{"Platform":{"Name":"linux/amd64/debian-unknown"},"Components":[{"Name":"Podman Engine","Version":"5.7.0","Details":{"APIVersion":"5.7.0","Arch":"amd64","BuildTime":"2025-12-03T02:36:32+01:00","Experimental":"false","GitCommit":"","GoVersion":"go1.24.9","KernelVersion":"6.18.15+deb14-amd64","MinAPIVersion":"4.0.0","Os":"linux"}},{"Name":"Conmon","Version":"conmon version 2.1.13, commit: unknown","Details":{"Package":"conmon_2.1.13+ds1-2_amd64"}},{"Name":"OCI Runtime (crun)","Version":"crun version 1.26\ncommit: 3241e671f92c33b0c003cd7de319e4f32add6231\nrundir: /run/user/1000/crun\nspec: 1.0.0\n+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL","Details":{"Package":"crun_1.26-1_amd64"}}],"Version":"5.7.0","ApiVersion":"1.41","MinAPIVersion":"1.24","GitCommit":"","GoVersion":"go1.24.9","Os":"linux","Arch":"amd64","KernelVersion":"6.18.15+deb14-amd64","BuildTime":"2025-12-03T02:36:32+01:00"}
* Connection #0 to host d:80 left intact
DNS:0.000025s Connect:0.000063s TLS:0.000000s StartTransfer:4.676405s Total:4.676445s Speed:195B/s Code:200
```
One more thing -- mapping UIDs/GIDs. It is quite typical with

```shell
$ cat /etc/subuid
bobek:100000:65536
$ id
uid=1000(bobek) gid=1000(bobek) groups=1000(bobek)
```

Now, let me run container in

```shell
/tmp/xx ❯ ll
total 0
drwxrwxrwx  2 bobek bobek  40 Mar  6 05:10 .
drwxrwxrwt 24 root  root  700 Mar  6 05:03 ..
/tmp/xx ❯ podman run -it --rm -v .:/workspace debian:bookworm-slim /bin/bash
root@422b730a12eb:/# cd /workspace/
root@422b730a12eb:/workspace# id
uid=0(root) gid=0(root) groups=0(root)
root@422b730a12eb:/workspace# touch first
root@422b730a12eb:/workspace# useradd testuser
root@422b730a12eb:/workspace# su - testuser
su: warning: cannot change directory to /home/testuser: No such file or directory
$ bash
testuser@422b730a12eb:/workspace$ id
uid=1000(testuser) gid=1000(testuser) groups=1000(testuser)
testuser@422b730a12eb:/workspace$ touch second-testuser
testuser@422b730a12eb:/workspace$ exit
$ exit
root@422b730a12eb:/workspace# exit
/tmp/xx ❯ ll
total 0
drwxrwxrwx  2 bobek  bobek   80 Mar  6 05:11 .
drwxrwxrwt 24 root   root   700 Mar  6 05:03 ..
-rw-r--r--  1 bobek  bobek    0 Mar  6 05:11 first
-rw-r--r--  1 100999 100999   0 Mar  6 05:11 second-testuser
```

As you see, the container's

```shell
/tmp/xx ❯ ll
total 0
drwxrwxrwx  2 bobek bobek  40 Mar  6 05:15 .
drwxrwxrwt 24 root  root  700 Mar  6 05:03 ..
/tmp/xx ❯ podman run -it --userns keep-id --rm -v .:/workspace debian:bookworm-slim /bin/bash
bobek@3295edc546bf:/$ id
uid=1000(bobek) gid=1000(bobek) groups=1000(bobek)
bobek@3295edc546bf:/$ cd /workspace/
bobek@3295edc546bf:/workspace$ touch keep-id
bobek@3295edc546bf:/workspace$ exit
/tmp/xx ❯ ll
total 0
drwxrwxrwx  2 bobek bobek 60 Mar  6 05:15 .
drwxrwxrwt 24 root  root 700 Mar  6 05:03 ..
-rw-r--r--  1 bobek bobek  0 Mar  6 05:15 keep-id
```

So it would be really great if we can get control over setting
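The ownership seen above follows rootless Podman's default mapping arithmetic: container UID 0 becomes the host user, and container UID N >= 1 becomes `subuid_start + N - 1` (100000 + 1000 - 1 = 100999 here). A minimal sketch, assuming a single `/etc/subuid` range:

```python
def host_uid(container_uid: int, host_user_uid: int = 1000,
             subuid_start: int = 100000) -> int:
    """Host UID for a container UID under rootless Podman's default
    mapping, assuming a single /etc/subuid range starting at
    subuid_start (bobek:100000:65536 above)."""
    if container_uid == 0:
        return host_user_uid              # container root == host user
    return subuid_start + container_uid - 1

print(host_uid(0))     # -> 1000 (files created as root appear as bobek)
print(host_uid(1000))  # -> 100999 (matches second-testuser above)
```

With `--userns keep-id` that default map is replaced so the host user's own UID appears unchanged inside the container, which is why `keep-id` ends up owned by `bobek`.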
Force-pushed f811416 to 019cb23.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
tests/test_run.py (1)

271-272: ⚠️ Potential issue | 🟠 Major

The auto-pull tests are still stubbing the old construction path.

`run()` now reads `get_manager(...).runtime`, but these cases still patch `run_cmd.DockerManager`, so the stub is bypassed entirely. That matches the CI failures at lines 328, 352, 380, and 394. After switching the monkeypatch to `run_cmd.get_manager`, `_StubDockerManager` also needs a `runtime` attribute or these tests will fail on `manager.runtime`.

🧪 Minimal fix

```diff
 class _StubDockerManager:
     """Minimal DockerManager stub that records pull_image calls."""

     def __init__(self) -> None:
+        self.runtime = "docker"
         self.pulled: list[str] = []
     ...
-    monkeypatch.setattr(run_cmd, "DockerManager", lambda: stub)
+    monkeypatch.setattr(run_cmd, "get_manager", lambda **kwargs: stub)
```

Apply the same monkeypatch replacement in each auto-pull test.

Also applies to: 321-394
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_run.py` around lines 271 - 272, Tests are patching run_cmd.DockerManager but run() now uses get_manager(...).runtime, so update the auto-pull tests to monkeypatch run_cmd.get_manager instead of run_cmd.DockerManager and have the replacement return the existing _StubDockerManager instance; additionally add a runtime attribute to _StubDockerManager (e.g., set manager.runtime = <expected runtime stub> or implement a runtime property on _StubDockerManager) so manager.runtime access in run() does not fail; update all affected test cases around the auto-pull blocks (those failing at lines ~328, 352, 380, 394) to use get_manager monkeypatch and ensure the stub exposes runtime.

src/vibepod/commands/list_cmd.py (1)
59-66: ⚠️ Potential issue | 🟠 Major

Don't suppress explicit runtime-resolution failures here.

This fallback now also swallows bad or unavailable explicit runtime selections and returns `running: []` / "No running agents" instead of surfacing the error. That is misleading for `--runtime ...`, `VP_CONTAINER_RUNTIME`, or a saved `container_runtime` preference. Please only keep the silent fallback for the pure auto-detect, non-`--running` case.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vibepod/commands/list_cmd.py` around lines 59 - 66, The try/except around get_manager/list_managed is currently swallowing DockerClientError even when an explicit runtime was requested; change it so DockerClientError is only silently handled for the pure auto-detect case (no explicit runtime selection) and only when not in the `running` path. Specifically, in the except DockerClientError as exc block, check the `runtime` (the runtime_override passed to get_manager) and `running` flags: if runtime is None/empty (auto-detect) and not running, set containers = [] as the fallback; otherwise log the error (using error(str(exc)) or re-raise) and exit with typer.Exit(EXIT_DOCKER_NOT_RUNNING) so explicit runtime failures surface. Ensure you reference the existing symbols get_manager, runtime_override/runtime, DockerClientError, running, list_managed, typer.Exit, and EXIT_DOCKER_NOT_RUNNING when making the change.
♻️ Duplicate comments (1)
src/vibepod/commands/update.py (1)
33-39: ⚠️ Potential issue | 🟡 Minor

`version --json` still has a breaking schema change.

Replacing the historical `docker` field with `runtime`/`runtime_version` will break existing parsers. Please either keep a compatibility key for one release cycle or call this out explicitly in release notes/changelog.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vibepod/commands/update.py` around lines 33 - 39, The JSON schema change removed the historical "docker" field causing breakage; in the info dict in update.py (where _runtime_version(runtime_override=runtime) is used to set "runtime" and "runtime_version"), add back the legacy keys "docker" and "docker_version" populated from the same rt_name and rt_version values (or map appropriately) so existing parsers keep working for one release cycle, and mark them as deprecated in a comment near the info construction.
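A sketch of the one-cycle compatibility shim described above (field values are stand-ins for `_runtime_version()`; the exact legacy `docker` string format should match whatever the command previously emitted):

```python
# Stand-ins for the values _runtime_version() would return.
rt_name, rt_version = "podman", "5.7.0"

info = {
    "vibepod": "1.2.3",        # placeholder version
    "python": "3.12.0",        # placeholder version
    "runtime": rt_name,
    "runtime_version": rt_version,
    # Deprecated compatibility key: keeps existing `version --json`
    # parsers working for one release cycle.
    "docker": f"{rt_name}/{rt_version}",
}
print(info["docker"])  # -> podman/5.7.0
```

Both schemas coexist in the output, so old tooling reading `docker` and new tooling reading `runtime`/`runtime_version` continue to work.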
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/vibepod/commands/run.py`:
- Around line 274-296: The Podman fixup list currently only adds
spec.config_mount_path and extra agent volumes but misses the user workspace
bind (mounted as /workspace), so add the workspace mount path to paths_to_fix
when manager.runtime == RUNTIME_PODMAN; update the block that builds
podman_fixup (references: podman_fixup, paths_to_fix, spec.config_mount_path,
_agent_extra_volumes, agent_config_dir) to also include the workspace mount
(e.g. '/workspace' or spec.workspace_mount if available) before quoting and
creating podman_fixup, so that the chmod command covers the project workspace
the DockerManager.run_agent binds to /workspace and prevents rootless Podman
permission issues; keep the rest of the flow (manager.resolve_launch_command,
_init_entrypoint) unchanged.
In `@src/vibepod/core/docker.py`:
- Around line 93-116: The current _prepare_volume_dir makes VibePod-managed dirs
world-readable/writable (0o777/0o666) which exposes sensitive state; instead
when runtime == RUNTIME_PODMAN change ownership and restrictive perms rather
than broad chmod: determine the container-side UID/GID mapping for rootless
Podman (query Podman or derive subordinate uid range), chown the path and its
children to that mapped UID:GID and set restrictive perms (e.g. 0o700 for dirs,
0o600 for files); if you cannot resolve a mapped UID:GID, fall back to applying
POSIX ACLs (setfacl) granting the mapped container UID access while keeping
owner-only perms for others; update _prepare_volume_dir to perform chown +
0o700/0o600 (or setfacl) and remove any 0o777/0o666 usage so agent config dirs,
log/proxy state, and proxy CA remain private.
In `@src/vibepod/core/runtime.py`:
- Around line 52-62: The _probe_socket function currently uses a 3-second
DockerClient timeout which can produce false negatives for slower Podman
responses; update the timeout passed to _docker.DockerClient in _probe_socket
(and any related client creation there) to a higher value such as 6 seconds (or
8s) so Podman's ~4.7–4.8s startup ping/version latency is accommodated, keeping
the rest of the try/except behavior the same.
---
Outside diff comments:
In `@src/vibepod/commands/list_cmd.py`:
- Around line 59-66: The try/except around get_manager/list_managed is currently
swallowing DockerClientError even when an explicit runtime was requested; change
it so DockerClientError is only silently handled for the pure auto-detect case
(no explicit runtime selection) and only when not in the `running` path.
Specifically, in the except DockerClientError as exc block, check the `runtime`
(the runtime_override passed to get_manager) and `running` flags: if runtime is
None/empty (auto-detect) and not running, set containers = [] as the fallback;
otherwise log the error (using error(str(exc)) or re-raise) and exit with
typer.Exit(EXIT_DOCKER_NOT_RUNNING) so explicit runtime failures surface. Ensure
you reference the existing symbols get_manager, runtime_override/runtime,
DockerClientError, running, list_managed, typer.Exit, and
EXIT_DOCKER_NOT_RUNNING when making the change.
In `@tests/test_run.py`:
- Around line 271-272: Tests are patching run_cmd.DockerManager but run() now
uses get_manager(...).runtime, so update the auto-pull tests to monkeypatch
run_cmd.get_manager instead of run_cmd.DockerManager and have the replacement
return the existing _StubDockerManager instance; additionally add a runtime
attribute to _StubDockerManager (e.g., set manager.runtime = <expected runtime
stub> or implement a runtime property on _StubDockerManager) so manager.runtime
access in run() does not fail; update all affected test cases around the
auto-pull blocks (those failing at lines ~328, 352, 380, 394) to use get_manager
monkeypatch and ensure the stub exposes runtime.
---
Duplicate comments:
In `@src/vibepod/commands/update.py`:
- Around line 33-39: The JSON schema change removed the historical "docker"
field causing breakage; in the info dict in update.py (where
_runtime_version(runtime_override=runtime) is used to set "runtime" and
"runtime_version"), add back the legacy keys "docker" and "docker_version"
populated from the same rt_name and rt_version values (or map appropriately) so
existing parsers keep working for one release cycle, and mark them as deprecated
in a comment near the info construction.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 6c6e202f-a858-456e-8570-c4fe9db2d44d
📒 Files selected for processing (18)
- docs/configuration.md
- docs/index.md
- docs/podman.md
- docs/quickstart.md
- mkdocs.yml
- src/vibepod/commands/list_cmd.py
- src/vibepod/commands/logs.py
- src/vibepod/commands/proxy.py
- src/vibepod/commands/run.py
- src/vibepod/commands/stop.py
- src/vibepod/commands/update.py
- src/vibepod/constants.py
- src/vibepod/core/config.py
- src/vibepod/core/docker.py
- src/vibepod/core/runtime.py
- tests/test_list.py
- tests/test_proxy_permissions.py
- tests/test_run.py
🚧 Files skipped from review as they are similar to previous changes (6)
- docs/configuration.md
- docs/quickstart.md
- tests/test_proxy_permissions.py
- src/vibepod/constants.py
- src/vibepod/core/config.py
- docs/podman.md
```python
    def _prepare_volume_dir(self, path: Path) -> None:
        """Ensure *path* is accessible inside a rootless Podman container.

        Rootless Podman maps the host UID to root inside the container, but
        a non-root process started via ``su`` / ``gosu`` in the entrypoint
        gets a subordinate UID that cannot read host files with standard
        permissions. Making VibePod-managed directories world-accessible
        (``0o777``) avoids "Permission denied" errors without needing
        ``userns_mode=keep-id`` (which breaks entrypoints that call ``su``).

        This is only applied to directories VibePod itself creates — never
        to the user's workspace.
        """
        if self.runtime != RUNTIME_PODMAN:
            return
        try:
            path.chmod(0o777)
            for child in path.rglob("*"):
                try:
                    child.chmod(0o777 if child.is_dir() else 0o666)
                except OSError:
                    pass
        except OSError:
            pass
```
Avoid making VibePod state world-readable and world-writable.
_prepare_volume_dir() recursively flips managed trees to 0777/0666 for Podman. In this PR that helper is used for agent config directories, log/proxy state, and the proxy CA directory, so on a multi-user machine other local users can read or tamper with credentials, session data, and CA material. Please use a runtime-specific ownership/ACL strategy instead of relaxing host permissions this broadly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/vibepod/core/docker.py` around lines 93 - 116, The current
_prepare_volume_dir makes VibePod-managed dirs world-readable/writable
(0o777/0o666) which exposes sensitive state; instead when runtime ==
RUNTIME_PODMAN change ownership and restrictive perms rather than broad chmod:
determine the container-side UID/GID mapping for rootless Podman (query Podman
or derive subordinate uid range), chown the path and its children to that mapped
UID:GID and set restrictive perms (e.g. 0o700 for dirs, 0o600 for files); if you
cannot resolve a mapped UID:GID, fall back to applying POSIX ACLs (setfacl)
granting the mapped container UID access while keeping owner-only perms for
others; update _prepare_volume_dir to perform chown + 0o700/0o600 (or setfacl)
and remove any 0o777/0o666 usage so agent config dirs, log/proxy state, and
proxy CA remain private.
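A sketch of the owner-scoped alternative the prompt describes, assuming the mapped subordinate UID/GID has already been resolved (resolving it from `/etc/subuid` is out of scope here, and chowning into the subordinate range requires appropriate privileges):

```python
import os
from pathlib import Path

def prepare_private(path: Path, mapped_uid: int, mapped_gid: int) -> None:
    """Grant the mapped container user access without world-writable
    permissions: chown into the subordinate range and keep owner-only
    modes (0o700 for dirs, 0o600 for files).

    mapped_uid/mapped_gid are caller-supplied; deriving them from
    /etc/subuid and chowning into that range need privileges.
    """
    os.chown(path, mapped_uid, mapped_gid)
    path.chmod(0o700)
    for child in path.rglob("*"):
        os.chown(child, mapped_uid, mapped_gid, follow_symlinks=False)
        if child.is_dir():
            child.chmod(0o700)
        elif child.is_file():
            child.chmod(0o600)
```

Where chown is not possible, the prompt's `setfacl` fallback grants the mapped container UID access via a POSIX ACL while leaving the base mode owner-only.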
Actionable comments posted: 1
♻️ Duplicate comments (1)
src/vibepod/commands/run.py (1)
285-291: ⚠️ Potential issue | 🟠 Major

Podman permission fix still misses the `/workspace` bind mount.

Line 286 builds `paths_to_fix` without the main workspace mount, so rootless Podman can still fail on project files written/read under `/workspace`.

🔧 Suggested patch

```diff
     if manager.runtime == RUNTIME_PODMAN:
-        paths_to_fix = [spec.config_mount_path]
+        paths_to_fix = [spec.config_mount_path, "/workspace"]
         extra_volumes_pre = _agent_extra_volumes(selected_agent, agent_config_dir(selected_agent))
         if extra_volumes_pre:
             paths_to_fix.extend(cp for _, cp, _ in extra_volumes_pre)
         quoted = " ".join(f"'{p}'" for p in dict.fromkeys(paths_to_fix))
         podman_fixup = [f"chmod -R a+rwX {quoted} 2>/dev/null || true"]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vibepod/commands/run.py` around lines 285 - 291, The podman permission fix builds paths_to_fix only from spec.config_mount_path and _agent_extra_volumes, missing the main workspace bind mount (/workspace); update the block where manager.runtime == RUNTIME_PODMAN to also include the primary workspace mount in paths_to_fix (e.g. append spec.workspace_mount_path if that attribute exists, or fall back to adding the literal '/workspace' before deduping), so that quoted and podman_fixup cover the workspace as well as the config and extra volumes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/vibepod/core/runtime.py`:
- Around line 172-179: The prompt currently uses SUPPORTED_RUNTIMES as choices
which can cause a KeyError when accessing available[choice]; change Prompt.ask
to present only actual detected runtimes (use available.keys() or
list(available)) so users can only pick an entry present in the available
mapping, and after selection use available.get(choice) (or validate existence)
before calling save_runtime_preference and returning to avoid KeyError; update
the block around sys.stdin.isatty(), Prompt.ask, save_runtime_preference, and
the return to use these safeguards.
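A minimal sketch of the safeguarded selection flow this prompt asks for (the shape of `available` — runtime name mapped to socket URL — and the injectable picker are assumptions; the real CLI would call rich's `Prompt.ask` with `choices=list(available)` when a TTY is attached):

```python
def choose_runtime(available: dict[str, str],
                   picker=lambda choices: choices[0]) -> tuple[str, str]:
    """Pick a runtime from *available* only; since the choices offered
    are exactly the detected runtimes, the lookup cannot KeyError."""
    choices = list(available)
    choice = choices[0] if len(choices) == 1 else picker(choices)
    socket_url = available.get(choice)
    if socket_url is None:
        raise ValueError(f"runtime {choice!r} is not available")
    return choice, socket_url
```

The key point is offering `list(available)` rather than the full `SUPPORTED_RUNTIMES`, so a runtime that failed detection can never be selected.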
---
Duplicate comments:
In `@src/vibepod/commands/run.py`:
- Around line 285-291: The podman permission fix builds paths_to_fix only from
spec.config_mount_path and _agent_extra_volumes, missing the main workspace bind
mount (/workspace); update the block where manager.runtime == RUNTIME_PODMAN to
also include the primary workspace mount in paths_to_fix (e.g. append
spec.workspace_mount_path if that attribute exists, or fall back to adding the
literal '/workspace' before deduping), so that quoted and podman_fixup cover the
workspace as well as the config and extra volumes.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: b8af64c2-7ead-4138-b984-ad82e6eabf78
📒 Files selected for processing (13)
- docs/configuration.md
- docs/podman.md
- src/vibepod/cli.py
- src/vibepod/commands/logs.py
- src/vibepod/commands/proxy.py
- src/vibepod/commands/run.py
- src/vibepod/core/config.py
- src/vibepod/core/docker.py
- src/vibepod/core/runtime.py
- tests/test_config.py
- tests/test_proxy_permissions.py
- tests/test_run.py
- tests/test_runtime.py
🚧 Files skipped from review as they are similar to previous changes (2)
- docs/configuration.md
- docs/podman.md
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/vibepod/core/docker.py (1)
287-294: ⚠️ Potential issue | 🟠 Major

Recreate reused service containers when launch options change.

Both methods return any running service container before comparing it to the requested `userns_mode`. After a user updates config or reruns `vp proxy start --userns ...` / `vp ui --userns ...`, the old container keeps running with the previous namespace setting until it is manually stopped or removed.

Also applies to: 350-358
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vibepod/core/docker.py` around lines 287 - 294, The current logic in find_datasette() handling existing containers returns a running container without checking if its launch options (notably userns_mode) match the requested setting; update the check after existing.reload() to read the container's current user namespace from existing.attrs (e.g., HostConfig.UsernsMode) and compare it to the requested userns_mode parameter (and keep the existing PROXY_DB_PATH Env check), and if they differ remove the container (existing.remove(force=True)) so it will be recreated with the correct userns_mode; apply the same comparison/fix in the other similar block around the code referenced at 350-358.
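A minimal sketch of the comparison the prompt describes, assuming the Docker-compatible API records the namespace under `HostConfig.UsernsMode` (an empty string or absent key is treated the same as "no userns mode"):

```python
def needs_recreate(container_attrs, requested_userns):
    """Compare a running container's recorded userns mode with the mode
    requested now; a mismatch means the container should be removed
    (existing.remove(force=True)) and recreated rather than reused."""
    current = container_attrs.get("HostConfig", {}).get("UsernsMode") or None
    return current != (requested_userns or None)
```

The caller would run this after `existing.reload()`, alongside the existing `PROXY_DB_PATH` env check.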
♻️ Duplicate comments (3)
src/vibepod/core/runtime.py (2)
66-81: ⚠️ Potential issue | 🟡 Minor

Always close probe clients on failure paths.

If `client.ping()` throws, the function returns `False` without closing the client first. Put cleanup in a `finally` block so repeated runtime detection doesn't leak open connections or file descriptors.

🔧 Proposed fix

```diff
 def _probe_socket(base_url: str) -> bool:
     """Return True if a container engine responds on *base_url*."""
+    client = None
     try:
         import docker as _docker

         # Podman's Docker-compatible API can take several seconds to answer
         # even simple probes on some hosts, so use a wider timeout here.
-        client = _docker.DockerClient(
+        client = _docker.DockerClient(
             base_url=base_url,
             timeout=_runtime_probe_timeout(),
         )
         client.ping()
-        client.close()
         return True
     except Exception:
         return False
+    finally:
+        if client is not None:
+            try:
+                client.close()
+            except Exception:
+                pass
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vibepod/core/runtime.py` around lines 66 - 81, The _probe_socket function may leak the Docker client if client.ping() raises; ensure the Docker client created in _probe_socket is always closed by moving client.close() into a finally block (or use a context manager) and only call client.ping() between creation and cleanup; update the code around the DockerClient creation and the client.ping()/client.close() calls in _probe_socket so any exception path still closes the client before returning False.
137-149: ⚠️ Potential issue | 🟡 Minor

Trim explicit runtime values before validating them.

`lower()` alone makes values like `" Podman "` become `" podman "`, which still falls into the unknown-runtime branch for CLI, env, or saved-config inputs.

🔧 Proposed fix

```diff
     if explicit is not None:
-        explicit = explicit.lower()
+        explicit = explicit.strip().lower()
         if explicit not in SUPPORTED_RUNTIMES:
             raise RuntimeError(
                 f"Unknown container runtime '{explicit}'. "
                 f"Supported: {', '.join(SUPPORTED_RUNTIMES)}"
             )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vibepod/core/runtime.py` around lines 137 - 149, The explicit runtime string (variable explicit set from override, os.environ.get("VP_CONTAINER_RUNTIME"), or cfg.get("container_runtime")) must be trimmed of surrounding whitespace before validation; modify the runtime resolution code so that after assigning explicit from any source you call explicit = explicit.strip() (or explicit = explicit.strip() if not None) and then explicit = explicit.lower() before checking membership in SUPPORTED_RUNTIMES or comparing to RUNTIME_AUTO; update the block around explicit, detect_available_runtimes(), and the explicit not in SUPPORTED_RUNTIMES check to use the stripped+lowercased value.

src/vibepod/core/docker.py (1)
93-117: ⚠️ Potential issue | 🟠 Major

Don't recursively make mounted paths world-writable.

For Podman this turns VibePod-managed config, logs, proxy state, and CA material into 0777/0666, and `run_agent()` now applies the same helper to every `extra_volumes` host path. On a multi-user machine this exposes sensitive local state to reading or tampering by other users. Use a runtime-specific ownership/ACL strategy instead of a broad `chmod`.

Also applies to: 214-216, 300-301, 364-365
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vibepod/core/docker.py` around lines 93 - 117, The current _prepare_volume_dir implementation recursively chmods paths (and similar helper uses in run_agent/extra_volumes) which makes mounted host paths world-writable; change it to avoid touching bind-mounted or user-provided volumes and to set safe ownership/permissions only on directories VibePod created: detect mounts (use os.path.ismount or compare st_dev/st_ino with parent) and skip them, only operate on paths that we created (or are under our managed data directories), and instead of 0o777/0666 use restrictive perms (e.g., 0o750 for dirs, 0o640 for files) and prefer chown/chacl to set appropriate owner/ACL for the runtime UID where available; update all call sites (e.g., run_agent handling of extra_volumes) to not call the helper for external mounts.
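One way to implement the suggested strategy is sketched below. This is an illustrative helper, not the module's real `_prepare_volume_dir`, and it assumes VibePod can distinguish its own managed root from user-supplied paths:

```python
import os
import stat


def prepare_volume_dir(path: str, managed_root: str) -> None:
    """Create *path* if needed and tighten perms, but only for directories
    under a managed root VibePod owns; never touch mount points or
    user-supplied paths. 0o750/0o640 replace the broad 0o777/0o666."""
    real = os.path.realpath(path)
    root = os.path.realpath(managed_root)
    if os.path.commonpath([real, root]) != root:
        return  # user-provided path: leave it alone
    os.makedirs(real, exist_ok=True)
    if os.path.ismount(real):
        return  # bind mounts propagate changes to the host; skip them
    for dirpath, _dirnames, filenames in os.walk(real):
        os.chmod(dirpath, 0o750)
        for name in filenames:
            os.chmod(os.path.join(dirpath, name), 0o640)
```

A `chown` to the runtime's mapped UID (or a POSIX ACL) would go where the `chmod` calls are; that part is host-specific and omitted here.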
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/vibepod/core/docker.py`:
- Around line 244-247: The code unconditionally forwards userns_mode into
run_kwargs causing Docker client failures; update the places where run_kwargs is
built (the blocks that set run_kwargs["userns_mode"] alongside
run_kwargs["user"]) to validate/translate userns_mode against self.runtime: if
self.runtime indicates Docker only allow or translate to 'host' (or omit it) and
for Podman allow keep-id/nomap/auto, and apply the same validation at all
occurrences where userns_mode is assigned (the three run_kwargs construction
sites in this module) so that Docker never receives unsupported Podman modes.
Ensure checks reference self.runtime and run_kwargs (and the local userns_mode
variable) and skip or map the value before adding run_kwargs["userns_mode"].
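A hedged sketch of the validation this prompt asks for. `resolved_userns_mode` and the constants are illustrative names, and the sketch folds in the trim/lowercase normalization suggested elsewhere in this review:

```python
RUNTIME_DOCKER = "docker"   # assumed constant names, mirroring the prompt
RUNTIME_PODMAN = "podman"


def resolved_userns_mode(runtime, userns_mode):
    """Translate a requested userns mode for the active runtime: Podman
    accepts its full mode syntax unchanged, while Docker never receives
    Podman-only modes (keep-id/nomap/auto, including parametric forms)."""
    if not userns_mode:
        return None
    normalized = userns_mode.strip().lower()
    if runtime == RUNTIME_PODMAN:
        return normalized
    # Docker path: drop anything Docker's API would reject.
    if normalized in {"auto", "keep-id", "nomap"} or normalized.startswith(
        ("keep-id:", "auto:")
    ):
        return None
    return normalized
```

Each of the three `run_kwargs` construction sites would call this once and only set `run_kwargs["userns_mode"]` when the result is not `None`.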
---
Outside diff comments:
In `@src/vibepod/core/docker.py`:
- Around line 287-294: The current logic in find_datasette() handling existing
containers returns a running container without checking if its launch options
(notably userns_mode) match the requested setting; update the check after
existing.reload() to read the container's current user namespace from
existing.attrs (e.g., HostConfig.UsernsMode) and compare it to the requested
userns_mode parameter (and keep the existing PROXY_DB_PATH Env check), and if
they differ remove the container (existing.remove(force=True)) so it will be
recreated with the correct userns_mode; apply the same comparison/fix in the
other similar block around the code referenced at 350-358.
---
Duplicate comments:
In `@src/vibepod/core/docker.py`:
- Around line 93-117: The current _prepare_volume_dir implementation recursively
chmods paths (and similar helper uses in run_agent/extra_volumes) which makes
mounted host paths world-writable; change it to avoid touching bind-mounted or
user-provided volumes and to set safe ownership/permissions only on directories
VibePod created: detect mounts (use os.path.ismount or compare st_dev/st_ino
with parent) and skip them, only operate on paths that we created (or are under
our managed data directories), and instead of 0o777/0666 use restrictive perms
(e.g., 0o750 for dirs, 0o640 for files) and prefer chown/chacl to set
appropriate owner/ACL for the runtime UID where available; update all call sites
(e.g., run_agent handling of extra_volumes) to not call the helper for external
mounts.
In `@src/vibepod/core/runtime.py`:
- Around line 66-81: The _probe_socket function may leak the Docker client if
client.ping() raises; ensure the Docker client created in _probe_socket is
always closed by moving client.close() into a finally block (or use a context
manager) and only call client.ping() between creation and cleanup; update the
code around the DockerClient creation and the client.ping()/client.close() calls
in _probe_socket so any exception path still closes the client before returning
False.
- Around line 137-149: The explicit runtime string (variable explicit set from
override, os.environ.get("VP_CONTAINER_RUNTIME"), or
cfg.get("container_runtime")) must be trimmed of surrounding whitespace before
validation; modify the runtime resolution code so that after assigning explicit
from any source you call explicit = explicit.strip() (or explicit =
explicit.strip() if not None) and then explicit = explicit.lower() before
checking membership in SUPPORTED_RUNTIMES or comparing to RUNTIME_AUTO; update
the block around explicit, detect_available_runtimes(), and the explicit not in
SUPPORTED_RUNTIMES check to use the stripped+lowercased value.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: de41294f-e499-4085-80f7-2cef77783a79
📒 Files selected for processing (13)
- docs/configuration.md
- docs/podman.md
- src/vibepod/cli.py
- src/vibepod/commands/logs.py
- src/vibepod/commands/proxy.py
- src/vibepod/commands/run.py
- src/vibepod/core/config.py
- src/vibepod/core/docker.py
- src/vibepod/core/runtime.py
- tests/test_config.py
- tests/test_proxy_permissions.py
- tests/test_run.py
- tests/test_runtime.py
🚧 Files skipped from review as they are similar to previous changes (5)
- docs/configuration.md
- tests/test_runtime.py
- src/vibepod/commands/run.py
- src/vibepod/commands/logs.py
- docs/podman.md
🧹 Nitpick comments (1)
tests/test_runtime.py (1)
110-122: Consider adding a test for `_probe_socket` failure cleanup.

The current tests verify successful probing, but there's no test verifying that the Docker client is properly closed when `ping()` fails. This would help catch the resource leak identified in `_probe_socket`.

Example test

```python
def test_probe_socket_closes_client_on_failure(monkeypatch) -> None:
    recorded: dict[str, bool] = {}

    class _FakeClient:
        def __init__(self, *, base_url: str, timeout: float) -> None:
            pass

        def ping(self) -> None:
            raise ConnectionError("Daemon not responding")

        def close(self) -> None:
            recorded["closed"] = True

    monkeypatch.delenv("VP_RUNTIME_PROBE_TIMEOUT", raising=False)
    monkeypatch.setitem(sys.modules, "docker", SimpleNamespace(DockerClient=_FakeClient))

    assert runtime._probe_socket("unix:///tmp/unavailable.sock") is False
    assert recorded.get("closed") is True, "Client should be closed even on failure"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_runtime.py` around lines 110 - 122, Add a test that asserts Docker client is closed when _probe_socket fails: mock a fake DockerClient class with ping() raising (e.g., ConnectionError) and close() recording a flag, monkeypatch docker.DockerClient to that fake, call runtime._probe_socket("unix:///tmp/unavailable.sock") and assert it returns False and that the recorded flag shows close() was called; ensure VP_RUNTIME_PROBE_TIMEOUT env var is unset in the test (monkeypatch.delenv(..., raising=False)) so probe uses the default timeout and the test references runtime._probe_socket, docker.DockerClient, ping, and close to locate relevant code.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@tests/test_runtime.py`:
- Around line 110-122: Add a test that asserts Docker client is closed when
_probe_socket fails: mock a fake DockerClient class with ping() raising (e.g.,
ConnectionError) and close() recording a flag, monkeypatch docker.DockerClient
to that fake, call runtime._probe_socket("unix:///tmp/unavailable.sock") and
assert it returns False and that the recorded flag shows close() was called;
ensure VP_RUNTIME_PROBE_TIMEOUT env var is unset in the test
(monkeypatch.delenv(..., raising=False)) so probe uses the default timeout and
the test references runtime._probe_socket, docker.DockerClient, ping, and close
to locate relevant code.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a4c23591-6a3e-4d48-a14f-db09f906be91
📒 Files selected for processing (8)
- README.md
- docs/configuration.md
- docs/podman.md
- src/vibepod/commands/config.py
- src/vibepod/core/docker.py
- src/vibepod/core/runtime.py
- tests/test_config.py
- tests/test_runtime.py
✅ Files skipped from review due to trivial changes (2)
- README.md
- docs/podman.md
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/test_config.py
@coderabbitai review

✅ Actions performed: Review triggered.
Greptile Summary

This PR introduces first-class Podman support as an alternative to Docker across the entire VibePod CLI: auto-detection of available runtimes, a new

Key issues found:
Confidence Score: 2/5
Important Files Changed
Sequence Diagram

```mermaid
sequenceDiagram
    participant CLI as CLI Command
    participant GM as get_manager()
    participant RR as resolve_runtime()
    participant DAR as detect_available_runtimes()
    participant PS as _probe_socket()
    participant SRP as save_runtime_preference()
    participant DM as DockerManager
    CLI->>GM: get_manager(runtime_override, config)
    GM->>RR: resolve_runtime(override, config)
    Note over RR: Check override → env → config<br/>for explicit runtime
    RR->>DAR: detect_available_runtimes()
    DAR->>PS: probe docker socket
    PS-->>DAR: True/False
    DAR->>PS: probe podman socket(s)
    PS-->>DAR: True/False
    DAR-->>RR: {docker: url, podman: url}
    alt explicit runtime set
        RR-->>GM: (runtime_name, socket_url)
    else single runtime found
        RR-->>GM: (runtime_name, socket_url)
    else both runtimes, TTY
        RR->>RR: Prompt.ask()
        RR->>SRP: save_runtime_preference(choice)
        Note over SRP: OSError not caught by get_manager!
        RR-->>GM: (choice, socket_url)
    else both runtimes, non-TTY
        RR-->>GM: (RUNTIME_DOCKER, docker_url)
    end
    GM->>DM: DockerManager(base_url, runtime)
    DM-->>GM: manager
    GM-->>CLI: manager
```

Last reviewed commit: "Add integration test..."
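The factory flow in the diagram, including its note about the uncaught `OSError`, can be sketched as a small function. `resolve_runtime` and `make_manager` are injected here purely for illustration, and `DockerClientError` is redefined as a stand-in for VibePod's error type:

```python
class DockerClientError(Exception):
    """Stand-in for VibePod's user-facing client error."""


def get_manager_sketch(resolve_runtime, make_manager, override=None, config=None):
    """Resolve the runtime (override -> env -> config -> detection), wrap
    resolution failures, including OSError from persisting an interactive
    runtime choice, in the one error type CLI handlers already catch,
    then hand the resolved socket URL to the manager factory."""
    try:
        runtime_name, socket_url = resolve_runtime(override, config)
    except (RuntimeError, ValueError, OSError) as exc:
        raise DockerClientError(str(exc)) from exc
    return make_manager(base_url=socket_url, runtime=runtime_name)
```

Catching `OSError` here addresses the diagram's "OSError not caught by get_manager!" note at a single choke point.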
```python
if self.runtime == RUNTIME_PODMAN:
    return normalized if normalized in {"auto", "keep-id", "nomap"} else None
```
`_resolved_userns_mode` silently drops valid Podman modes with parameters

The strict set membership check `normalized in {"auto", "keep-id", "nomap"}` only allows three exact strings. Podman's `--userns` flag supports many additional modes beyond these three, including parametric forms:

- `keep-id:uid=1000,gid=1000`
- `auto:uidmapping=0:1000:1`
- `ns:/proc/pid/ns/user`
- `private`

If a user configures `VP_CONTAINER_USERNS_MODE=keep-id:uid=1000` (a valid and documented Podman mode), `_resolved_userns_mode` returns `None` with no error or warning. The userns mode is silently ignored and the user has no way to know their setting wasn't applied.
For Podman, the intent is to allow Podman-compatible modes and block Docker-incompatible ones. The safer approach is to pass any mode through to Podman and only filter out known Podman-specific prefixes on the Docker path:
```python
if self.runtime == RUNTIME_PODMAN:
    return normalized  # pass all modes through to Podman
# For Docker, drop known Podman-only modes
if normalized in {"auto", "keep-id", "nomap"} or normalized.startswith(("keep-id:", "auto:")):
    return None
return normalized
```

```python
try:
    runtime_name, socket_url = resolve_runtime(override=runtime_override, config=config)
except (RuntimeError, ValueError) as exc:
    raise DockerClientError(str(exc)) from exc
```
OSError from save_runtime_preference propagates uncaught
`get_manager` wraps `RuntimeError` and `ValueError` into `DockerClientError`, but `OSError` is not included. `resolve_runtime` calls `save_runtime_preference` (line 226 in runtime.py) when a user interactively selects a runtime. `save_runtime_preference` can raise `OSError` if the config directory is not writable (e.g., permissions issue, read-only filesystem).

Since `OSError` is not caught here and CLI command handlers only catch `DockerClientError`, this results in an unhandled Python traceback instead of a friendly error message:

```python
try:
    runtime_name, socket_url = resolve_runtime(override=runtime_override, config=config)
except (RuntimeError, ValueError) as exc:
    raise DockerClientError(str(exc)) from exc
```

The fix is to also catch `OSError`:
```diff
 try:
     runtime_name, socket_url = resolve_runtime(override=runtime_override, config=config)
-except (RuntimeError, ValueError) as exc:
+except (RuntimeError, ValueError, OSError) as exc:
     raise DockerClientError(str(exc)) from exc
```
```python
quoted = " ".join(f"'{p}'" for p in dict.fromkeys(paths_to_fix))
podman_fixup = [f"chmod -R a+rwX {quoted} 2>/dev/null || true"]
```
The `chmod a+rwX` Podman fixup is also applied inside the container
The chmod -R a+rwX shell command runs inside the container at startup on bind-mounted directories. Because these are bind mounts, the permission changes propagate back to the host filesystem. Combined with _prepare_volume_dir (which already sets 0o777 on the host before container start), agent config directories — which may contain API keys and authentication tokens — end up world-readable and world-writable on the host after every Podman run.
This is a different path from the `_prepare_volume_dir` host-side chmod: this one runs inside the container via the injected entrypoint script. For rootless Podman, `a+rwX` inside the container maps to subordinate UIDs on the host side, but on a shared machine a user who later runs with `--runtime=docker` gets no guard at all, and the host permissions have already been loosened by a prior Podman run.
Consider scoping the in-container fixup to g+rwX rather than a+rwX, which is consistent with the workspace guidance in docs/podman.md that advises against world-writable directories.
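A sketch of the scoped fixup suggested above. `podman_fixup_command` is an illustrative name, and swapping the hand-rolled single quoting for `shlex.quote` is an extra hardening tweak, not part of the PR's code:

```python
import shlex


def podman_fixup_command(paths, world_writable=False):
    """Build the in-container ownership fixup line; g+rwX by default so
    bind-mounted host directories do not become world-writable, with the
    broad a+rwX only as an explicit opt-in."""
    mode = "a+rwX" if world_writable else "g+rwX"
    quoted = " ".join(shlex.quote(p) for p in dict.fromkeys(paths))
    return f"chmod -R {mode} {quoted} 2>/dev/null || true"
```

`shlex.quote` also keeps paths containing spaces or quotes from breaking the generated shell line.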
Adds first-class support for Podman as an alternative container runtime to Docker throughout VibePod. It updates documentation, configuration, and CLI commands to allow users to select between Docker and Podman, including auto-detection and explicit selection methods. Additionally, it introduces fixes for rootless Podman compatibility, such as permission handling for bind mounts.
Documentation updates:
New Podman guide (`docs/podman.md`), linked from navigation and other docs. Updated Quickstart and main documentation to reflect Podman support and new configuration options.

Config and environment variable documentation:

Documented the `container_runtime` config key and `VP_CONTAINER_RUNTIME` environment variable in `docs/configuration.md`, including example usage.
New Features
Documentation
Tests