Fix RWTHDBIS inference device mismatch and add accelerate #302
Summary
- Add the `accelerate` dependency required by `transformers.Trainer`.
- Fix the inference device mismatch so the RWTHDBIS examples run on `cuda` by default.
- Ignore `examples/results/` and `results/`.

Background
Running RWTHDBIS examples on GPU triggered a runtime error during inference:
`Expected all tensors to be on the same device`. The inputs were sent to `self.device` while the model had already been moved to the GPU by the trainer.
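The device-alignment pattern used by the fix can be sketched as follows. This is a minimal illustration with a stand-in `torch.nn.Linear` model rather than the actual RWTHDBIS learner; the variable names (`model_device`, `inputs`) are illustrative:

```python
import torch

# Instead of moving inputs to a cached self.device, query the device
# the model's parameters actually live on and move the inputs there.
model = torch.nn.Linear(4, 2)                  # stand-in for the fine-tuned model
model_device = next(model.parameters()).device

inputs = torch.randn(1, 4)                     # stand-in for tokenized inputs
inputs = inputs.to(model_device)               # align inputs with the model

with torch.no_grad():
    logits = model(inputs)                     # no device-mismatch error
```

With a Hugging Face model the same idea applies: after `Trainer` has moved the model to the GPU, inputs must be sent to that same device before calling the model, rather than to a device recorded earlier during initialization.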
Changes
- `ontolearner/learner/term_typing/rwthdbis.py`: move inference inputs to `model_device`.
- `ontolearner/learner/taxonomy_discovery/rwthdbis.py`: same device-alignment fix.
- Add `accelerate>=0.26.0` to `requirements.txt`, `pyproject.toml`, and `setup.py`.
- `examples/llm_learner_rwthdbis_term_typing.py`: set `device="cuda"`.
- `examples/llm_learner_rwthdbis_taxonomy_discovery.py`: set `device="cuda"`.
- `.gitignore`: ignore `examples/results/` and `results/`.

Impact
Test plan
- `python examples/llm_learner_rwthdbis_term_typing.py`
- `python examples/llm_learner_rwthdbis_taxonomy_discovery.py`