add AGENTS.md to repo; fix .gitignore for NextJS files #40
3clyp50 wants to merge 2 commits into TerminallyLazy:new-dev-1-31-26
Conversation
Review Summary by Qodo: Add comprehensive AGENTS.md documentation and fix Next.js .gitignore
Walkthrough

Description
• Add comprehensive AGENTS.md documentation for Novion platform
• Document full tech stack, project structure, and development patterns
• Include API documentation, testing strategy, and troubleshooting guide
• Fix .gitignore to properly exclude Next.js build artifacts

Diagram
```mermaid
flowchart LR
    A["Repository"] -->|Add| B["AGENTS.md<br/>1873 lines"]
    B -->|Contains| C["Quick Reference<br/>& Commands"]
    B -->|Contains| D["Project Structure<br/>& Patterns"]
    B -->|Contains| E["API Documentation<br/>& Examples"]
    B -->|Contains| F["Testing Strategy<br/>& Troubleshooting"]
    A -->|Update| G[".gitignore<br/>Next.js files"]
```
File Changes
1. AGENTS.md
Summary of Changes

Hello @3clyp50, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances the project's documentation by introducing a detailed guide for developers and contributors. This new document covers everything from the project's technical stack and core commands to development conventions, API specifications, and troubleshooting, aiming to streamline onboarding and maintain consistency. Concurrently, the .gitignore has been updated to correctly exclude Next.js build artifacts.
Important: Review skipped. Auto reviews are disabled on base/target branches other than the default branch. Please check the settings in the CodeRabbit UI.
📝 Walkthrough: The pull request adds the AGENTS.md developer guide and updates the .gitignore for Next.js build artifacts.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~15 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Code Review
This pull request introduces a comprehensive AGENTS.md guide for developers and corrects an incorrect path in the .gitignore file for Next.js build artifacts. The new documentation is very detailed and will be a great resource. I've identified a few inconsistencies and areas for clarification within AGENTS.md, such as inconsistent project and Docker image naming, an ambiguous component naming convention, and a confusing code example. Addressing these points will improve the clarity and consistency of the documentation for new contributors. The .gitignore change is correct and necessary for the monorepo structure.
```
@@ -0,0 +1,1873 @@
# Novion - Medical Research and Analysis Platform
```
There's an inconsistency in the project name used across the documentation. This file (AGENTS.md) refers to the project as "Novion", while other documentation files like DEPLOY_GPU.md and DEPLOY_LOCAL.md refer to it as "RadSysX". The frontend package name is "radx" and some backend classes also use RadSysX. To avoid confusion for new developers, it would be beneficial to standardize on a single project name throughout all documentation.
No worries Gem, project name's actually RadSysX.
```bash
docker build -t novion-backend:gpu -f backend/Dockerfile .

# Run with GPU support
docker run --gpus all -p 8000:8000 \
  -e BP3D_CKPT=/weights/biomedparse_3D_AllData_MultiView_edge.ckpt \
  -e BP_TMP_TTL=7200 -e BP_TMP_SWEEP=1800 -e BP_VALIDATE_HEATMAP=1 \
  -v /opt/weights:/weights \
  novion-backend:gpu
```
The Docker image name novion-backend:gpu used here is inconsistent with the name radsysx-backend:gpu used in DEPLOY_GPU.md. This can lead to confusion and errors during deployment. Please ensure the Docker image name is consistent across all documentation.
Suggested change:
```bash
docker build -t radsysx-backend:gpu -f backend/Dockerfile .
# Run with GPU support
docker run --gpus all -p 8000:8000 \
  -e BP3D_CKPT=/weights/biomedparse_3D_AllData_MultiView_edge.ckpt \
  -e BP_TMP_TTL=7200 -e BP_TMP_SWEEP=1800 -e BP_VALIDATE_HEATMAP=1 \
  -v /opt/weights:/weights \
  radsysx-backend:gpu
```
```markdown
- **Trailing commas**: Always in multiline

#### Naming Conventions
- **Files**: kebab-case for components (`dicom-viewer.tsx`), PascalCase for component files when standard (`DicomViewer.tsx`)
```
The file naming convention for components is ambiguous. It states to use kebab-case but also PascalCase 'when standard'. The examples in the project structure (DicomViewer.tsx, AdvancedViewer.tsx) suggest PascalCase is the primary convention for component files. To improve clarity, consider simplifying this rule.
Suggested change:
```markdown
- **Files**: PascalCase for component files (e.g., `DicomViewer.tsx`).
```
❌ **BAD - Mixing type and value imports**:
```tsx
// Don't mix when importing types
import { DicomImage, CoreViewer } from '@/lib/types'; // BAD
// Instead:
import type { DicomImage } from '@/lib/types';
import { CoreViewer } from '@/components/core/CoreViewer';
```
The 'BAD' example for mixing type and value imports is confusing because it includes CoreViewer, which isn't from lib/types. This distracts from the main point about using import type for types. Simplifying the example would make the rule clearer.
Suggested change:

❌ **BAD - Mixing type and value imports**:
```tsx
// Don't use a value import for a type-only file.
import { DicomImage } from '@/lib/types'; // BAD: DicomImage is a type.
// Instead:
import type { DicomImage } from '@/lib/types';
import { CoreViewer } from '@/components/core/CoreViewer';
```
Code Review by Qodo
1. Wrong tool endpoint
##### POST `/tools/execute` - Execute MCP Tool
Request:
```json
{
  "tool_name": "query_fhir",
  "params": {
    "resource_type": "Patient",
    "search_params": {"name": "John"}
  }
}
```
1. Wrong tool endpoint 🐞 Bug ✓ Correctness
AGENTS.md instructs clients to call POST /tools/execute, but the backend implements POST /execute_tool. Following the doc (and the existing test client) will cause 404s and prevent MCP tool execution.
Agent Prompt
## Issue description
`AGENTS.md` (and `tests/test_client.py`) call `POST /tools/execute`, but the FastAPI backend only exposes `POST /execute_tool`, causing 404s and breaking tool execution.
## Issue Context
This repo already has a test client wired to `/tools/execute`, so the likely intent is that the backend should serve that route (or at least provide an alias).
## Fix Focus Areas
- AGENTS.md[874-885]
- backend/server.py[273-305]
- tests/test_client.py[35-45]
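One way to act on the alias option mentioned in the prompt is sketched below. This is a hypothetical illustration, not the actual backend/server.py code: the request model, handler body, and response shape are placeholders, and only the two route paths come from the finding above.

```python
# Hypothetical sketch: register one handler under both paths so the documented
# route (POST /tools/execute) and the currently implemented route
# (POST /execute_tool) both resolve.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ToolRequest(BaseModel):
    """Placeholder model mirroring the payload shape shown in AGENTS.md."""
    tool_name: str
    params: dict = {}

@app.post("/execute_tool")   # route the backend exposes today, per the finding
@app.post("/tools/execute")  # alias matching AGENTS.md and tests/test_client.py
async def execute_tool(req: ToolRequest) -> dict:
    # Stand-in for the real MCP tool dispatch logic in backend/server.py.
    return {"tool_name": req.tool_name, "status": "dispatched"}
```

Updating AGENTS.md and tests/test_client.py to call `/execute_tool` instead would resolve the 404 just as well; either way, the doc and the code should name the same path.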
##### POST `/chat` - Direct LLM Chat
Request:
```json
{
  "message": "Explain the mechanism of aspirin",
  "model": "gpt-4", // or "gemini-pro"
  "agent_type": null // null for general chat
}
```

Response:
```json
{
  "response": "Aspirin works by...",
  "model": "gpt-4"
}
```

##### POST `/chat/stream` - Streaming Chat
Request: Same as `/chat`

Response: Server-Sent Events (SSE) stream
```
data: {"chunk": "Aspirin"}
data: {"chunk": " works"}
data: {"chunk": " by..."}
```

##### POST `/chat/ask` - Specialized Agent Consultation
Request:
```json
{
  "query": "What are the side effects of ibuprofen?",
  "agent_type": "pharmacist", // or "researcher", "medical_analyst"
  "model": "gpt-4"
}
```

Response:
```json
{
  "response": "<think>Analyzing ibuprofen side effects...</think>\n\nCommon side effects include...",
  "agent_type": "pharmacist"
}
```
2. Nonexistent /chat/ask 🐞 Bug ✓ Correctness
AGENTS.md documents POST /chat/ask and a /chat payload with model and agent_type, but the backend only implements /chat and /chat/stream with a different request schema. Clients built from the doc will fail or send ignored fields.
Agent Prompt
## Issue description
`AGENTS.md` documents a `/chat/ask` endpoint and request fields (`model`, `agent_type`) that do not exist in the backend. This will mislead developers and break clients.
## Issue Context
The backend currently exposes `/chat` and `/chat/stream` only, using `ChatRequest(message, model_provider, model_name)`.
## Fix Focus Areas
- AGENTS.md[800-852]
- backend/server.py[46-50]
- backend/server.py[225-270]
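For orientation, here is a small client sketch that sticks to the routes the finding says the backend actually implements (`/chat` and `/chat/stream`) and to the `ChatRequest(message, model_provider, model_name)` field names quoted above. The base URL and the concrete provider/model values are assumptions for illustration only.

```python
# Assumes the backend is running locally on port 8000; values are illustrative.
import requests

BASE = "http://localhost:8000"

payload = {
    "message": "Explain the mechanism of aspirin",
    "model_provider": "openai",  # assumed provider identifier
    "model_name": "gpt-4",       # assumed model identifier
}

# Non-streaming chat
resp = requests.post(f"{BASE}/chat", json=payload, timeout=60)
print(resp.json())

# Streaming chat: read the SSE response line by line as it arrives
with requests.post(f"{BASE}/chat/stream", json=payload, stream=True, timeout=60) as r:
    for line in r.iter_lines(decode_unicode=True):
        if line and line.startswith("data:"):
            print(line[len("data:"):].strip())
```

If `/chat/ask`, `model`, and `agent_type` are meant to exist, the fix belongs in the backend; otherwise AGENTS.md should be trimmed to describe only the two routes above.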
#### BiomedParse Endpoints (GPU Backend)

**Base URL**: `http://localhost:8000/api/biomedparse/v1`

##### GET `/health` - Health Check
Response:
```json
{
  "status": "healthy",
  "gpu_available": true,
  "checkpoint_loaded": true
}
```

##### POST `/predict-2d` - 2D Image Segmentation
Request: `multipart/form-data`
- `file`: Image file (PNG, JPG)
- `prompts`: Comma-separated prompts (e.g., "liver, tumor")
- `threshold`: Float (default: 0.5)
- `return_heatmap`: Boolean (default: false)

Response:
```json
{
  "seg_url": "/files/seg_abc123.npz",
  "prob_url": "/files/prob_abc123.npz",
  "prompts": ["liver", "tumor"],
  "threshold": 0.5
}
```

##### POST `/predict-3d-nifti` - 3D Volume Segmentation
Request: `multipart/form-data`
- `file`: NIfTI file (.nii, .nii.gz)
- `prompts`: Comma-separated prompts
- `threshold`: Float (default: 0.5)
- `return_heatmap`: Boolean (default: false)
- `slice_batch_size`: Integer (optional, auto-tuned by GPU VRAM)

Response:
```json
{
  "mask_url": "/files/mask_xyz789.nii.gz",
  "heatmap_url": "/files/heatmap_xyz789.nii.gz",
  "prompts": ["liver"],
  "threshold": 0.5
}
```
3. Biomedparse docs incorrect 🐞 Bug ✓ Correctness
AGENTS.md claims BiomedParse endpoints are available under the main FastAPI server and shows response shapes that don’t match the implemented router models. The backend defines a BiomedParse router, but it is not mounted in backend/server.py, so the documented endpoints will be unreachable via python backend/server.py.
Agent Prompt
## Issue description
`AGENTS.md` documents BiomedParse endpoints under the main backend, but the router is not included in the FastAPI app started by `backend/server.py`. Response examples also don’t match the implemented schema.
## Issue Context
The BiomedParse implementation exists as a FastAPI `APIRouter` with prefix `/api/biomedparse/v1`, but the app doesn’t `include_router` it.
## Fix Focus Areas
- AGENTS.md[937-984]
- backend/biomedparse_api.py[45-46]
- backend/biomedparse_api.py[397-432]
- backend/server.py[1-37]
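If the intent is to make the documented routes reachable, the missing wiring is a single `include_router` call. The sketch below assumes the `APIRouter` in backend/biomedparse_api.py is exposed as `router` (the real symbol name may differ) and already carries the `/api/biomedparse/v1` prefix noted in the finding.

```python
# Sketch of mounting the BiomedParse router on the main FastAPI app.
from fastapi import FastAPI

# Assumed import path and symbol name; adjust to the real one in biomedparse_api.py.
from backend.biomedparse_api import router as biomedparse_router

app = FastAPI()
app.include_router(biomedparse_router)  # exposes /api/biomedparse/v1/* on this app
```

The response examples in AGENTS.md should then be checked against the router's actual response models, since the finding notes they currently disagree.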