Description
Hi there! I’ve been using the Cactus CLI to run a Qwen 3 0.6B model finetuned with Unsloth and converted via cactus convert. It works flawlessly in the terminal.
However, I’ve hit a wall when trying to move this into a mobile app using the cactus_flutter SDK. I'm facing two main issues: a hardcoded network error and a lack of documentation for custom models.
1. Hardcoded Supabase SocketException
When I run the downloadModel() function from the basic_completion.dart example, the app crashes with a SocketException. It seems the SDK is trying to reach a specific Supabase instance that is either private or unavailable.
The Error:
I/flutter (12407): Error fetching models: SocketException: Failed host lookup: 'vlqqczxwyaodtcdmdmlw.supabase.co' (OS Error: No address associated with hostname, errno = 7)
Even with a stable internet connection, the app cannot resolve this host, which prevents the model initialization flow from completing.
2. No path to load custom/finetuned models
The current Flutter implementation seems built around downloading a pre-set list of models from your backend. Since I have a custom finetuned model (ryuma007/qwen3-0.6B-hishab_json), I cannot find any documentation or API methods to:
- Point the Flutter SDK to my own Hugging Face repo.
- Load the converted model files directly from the local device storage (after transferring them manually or via a custom download).
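For the manual-transfer path, what I'd like to be able to do is something like the following (paths and the local directory name are illustrative; huggingface-cli and adb are the standard Hugging Face and Android tools):

```shell
# Pull my finetuned repo from Hugging Face
# (huggingface-cli ships with the huggingface_hub package)
huggingface-cli download ryuma007/qwen3-0.6B-hishab_json --local-dir ./model_out

# Push the converted model files to device storage the app can read
# (destination path is illustrative)
adb push ./model_out /sdcard/Download/qwen3-0.6B-hishab_json
```

The missing piece is then an SDK call that points at that on-device path instead of the Supabase gallery.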
Request for Help/Fix
- Fix the Supabase Dependency: Can we have a way to bypass the model-fetching logic if we already have the model files or a direct link?
- Custom Model Documentation: Could you provide an example or a method (e.g., lm.loadFromPath) that allows us to use models converted via the CLI rather than just the ones in the default gallery?
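To make the ask concrete, this is roughly the API shape I'm imagining — purely a sketch; none of these names or signatures exist in cactus_flutter today:

```dart
// Hypothetical API sketch, not existing cactus_flutter symbols.
final lm = CactusLM();

// Load a CLI-converted model directly from device storage,
// bypassing the Supabase model-gallery fetch entirely.
await lm.loadFromPath('/sdcard/Download/qwen3-0.6B-hishab_json');

// Or alternatively: let the existing downloader target an
// arbitrary Hugging Face repo instead of the built-in list.
await lm.downloadModel(repo: 'ryuma007/qwen3-0.6B-hishab_json');

final reply = await lm.complete('Hello!');
```

Either variant would unblock custom/finetuned models without touching the default gallery flow.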
My Setup:
- Model: Qwen 3 0.6B
- Tools: Unsloth for finetuning -> Cactus CLI for conversion.
- Platform: Flutter (Android)
Thanks for the great work on the inference engine! I hope we can get the Flutter side working just as smoothly as the CLI.