Objective
Reinstate Ollama as an option so that a user can choose to run the advisor app against their own locally hosted LLM. Currently, we have Gemini available to power the app, and our own Neon AI models will be available after implementing some commits from this PR: NeonClary#2. The original build had Ollama working at least partially, but that functionality was overwritten when access to the Neon models was added.
Initial Implementation Requirements
- Add Ollama functionality back in fully, so it sits alongside Gemini and the Neon models as a selectable provider (see the rough sketch below).
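
For reference, here is a minimal sketch of what the Ollama path could look like, assuming the app can reach a locally running Ollama server on its default port. The `ollama_chat` helper and the `llama3` model name are placeholders for illustration, not the app's actual API.

```python
"""Minimal sketch of an Ollama-backed chat call (assumed setup: Ollama
running locally on its default port 11434; helper names are hypothetical)."""

import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local chat endpoint


def ollama_chat(prompt: str, model: str = "llama3") -> str:
    """Send a single-turn chat request to a local Ollama server and return the reply."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one complete response instead of a token stream
    }
    resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()["message"]["content"]


if __name__ == "__main__":
    print(ollama_chat("Suggest one feature a personal advisor app should have."))
```

In the app itself this would presumably sit behind the same provider-selection logic that currently chooses Gemini, so a user could pick Ollama and point it at whatever model they have pulled locally.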
Other Considerations
No response