Local ML enthusiast, LLM and diffusion finetuner, hobbyist developer
- United States
- 07:20 (UTC -08:00)
- https://huggingface.co/Doctor-Shotgun
- https://huggingface.co/DS-Archive
- https://ko-fi.com/doctorshotgun
Pinned
-
Guide to optimizing inference performance of large MoE models across CPU+GPU using llama.cpp and its derivatives

# CPUmaxxing with GPU acceleration in llama.cpp

## Introduction

So you want to try one of those fancy huge mixture-of-experts (MoE) models locally? Well, whether you've got a gaming PC or a large multi-GPU workstation, we've got you covered. As long as you've downloaded enough RAM beforehand.
-
ds-llm-webui (Public)
A simple tool-use assistant for local LLMs powered by TabbyAPI
TypeScript · 10
-
ds-med-helper (Public)
A Streamlit-based web UI for physician medical documentation assistance using OpenAI API compatible LLM and ASR endpoints.
Python