Conversation

@Kleinpenny
Contributor

Summary

  • Fix RWTHDBIS inference device mismatch by moving inputs to the model’s actual device.
  • Apply the same fix to both term typing and taxonomy discovery inference paths.
  • Add missing accelerate dependency required by transformers.Trainer.
  • Switch RWTHDBIS term typing and taxonomy discovery examples to use cuda by default.
  • Ignore runtime outputs under examples/results/ and results/.

Background

Running the RWTHDBIS examples on GPU triggered a runtime error during inference
("Expected all tensors to be on the same device"): inference inputs were sent to
self.device while the model had already been moved to the GPU by the trainer.
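The fix can be illustrated with a minimal sketch (illustrative only, not the actual RWTHDBIS code): instead of relying on a separately tracked `self.device` attribute, read the device from the model's own parameters after the trainer has moved it, and send inputs there.

```python
import torch

# Minimal sketch of the device-alignment fix (illustrative; the model and
# input here are placeholders, not the real RWTHDBIS classifier).
model = torch.nn.Linear(4, 2)
if torch.cuda.is_available():
    # A Trainer-style workflow may move the model to GPU behind the scenes.
    model = model.to("cuda")

# Query where the model actually lives, rather than a stale self.device.
model_device = next(model.parameters()).device

inputs = torch.randn(1, 4)          # created on CPU by default
inputs = inputs.to(model_device)    # align inputs with the model's device

with torch.no_grad():
    out = model(inputs)             # no device-mismatch error
```

The same pattern applies to both inference paths touched by this PR: tokenized inputs are moved to `model_device` before the forward pass.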

Changes

  • ontolearner/learner/term_typing/rwthdbis.py: move inference inputs to model_device.
  • ontolearner/learner/taxonomy_discovery/rwthdbis.py: same device alignment fix.
  • Add accelerate>=0.26.0 to requirements.txt, pyproject.toml, and setup.py.
  • examples/llm_learner_rwthdbis_term_typing.py: set device="cuda".
  • examples/llm_learner_rwthdbis_taxonomy_discovery.py: set device="cuda".
  • .gitignore: ignore examples/results/ and results/.

Impact

  • Prevents CPU/GPU mismatch during inference in RWTHDBIS examples.
  • Ensures required dependency is available for Trainer-based training.
  • Examples default to GPU for faster training on CUDA machines.
  • Avoids accidentally committing training artifacts.

Test plan

  • python examples/llm_learner_rwthdbis_term_typing.py
  • python examples/llm_learner_rwthdbis_taxonomy_discovery.py

@HamedBabaei HamedBabaei self-requested a review February 4, 2026 13:55
@HamedBabaei HamedBabaei self-assigned this Feb 4, 2026
@HamedBabaei
Member

Dear @Kleinpenny, thanks for your contribution. To proceed, can you update your branch with the latest version of the ontolearner/dev branch? I see unwanted changes to the metadata file, which is generated automatically by our CI/CD pipeline, and it seems there are a few such changes in this PR.

Moreover, I can certainly understand why you modified the .gitignore, but this change is only for your local development and shouldn't be committed in a PR. Additionally, I really appreciate your observation regarding the accelerate library; however, we intentionally left this up to the user, since some operating systems (like macOS) might not support such a library, which could cause an installation error. So I must also request a revert of that change.

Thank you.

I am looking forward to your next commits.

@HamedBabaei HamedBabaei assigned Kleinpenny and unassigned HamedBabaei Feb 4, 2026
@Kleinpenny
Contributor Author

Dear Hamed,

Thanks for your suggestions and review! I will edit this PR and follow your instructions to avoid conflicts. I'm looking into the trainer inside our approach, and I will open another PR soon. Hopefully I can get back to you within this week.

@Kleinpenny
Contributor Author

This can be closed, because the changes have been applied in PR #306.

@Kleinpenny Kleinpenny closed this Feb 6, 2026
