Hi,
We are trying to replicate the results of your paper, "Demonstration-Free: Towards More Practical Log Parsing with Large Language Models". The paper states that the default LLM is GPT-3.5-Turbo-0125, but the default model in the code is GPT-4o-mini. Could you please indicate which model we should use to reproduce the reported results?
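For context, this is roughly how we are selecting the model on our side while attempting the replication (a minimal sketch using the official openai Python client; the prompt, function name, and decoding settings are our own placeholders, not taken from your repository):

```python
# Sketch of how we pin the model for our replication runs.
# The model name and prompt here are assumptions on our side,
# not the repository's actual configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def parse_log_line(log_line: str, model: str = "gpt-3.5-turbo-0125") -> str:
    """Send a single log line to the chosen chat model and return its reply."""
    response = client.chat.completions.create(
        model=model,  # would be "gpt-4o-mini" if we follow the code's default instead
        messages=[{"role": "user",
                   "content": f"Extract the log template from: {log_line}"}],
        temperature=0.0,  # deterministic decoding to keep runs comparable
    )
    return response.choices[0].message.content
```

Knowing which of the two models your reported numbers correspond to would let us match this setting exactly.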
Thanks, and have a nice day :-)