I have read issue 11, but still have some questions.
During testing, you use the `forward` function:

```python
result = model.forward(**data, return_dict=True)
```

(`VisionLLM/VisionLLMv2/visionllmv2/eval/eval_det.py`, line 119 at `028f8b3`)

But as far as I know, the `forward` function cannot perform next-token generation, so it should struggle to output the answer and the `[DET]` token. Why don't you use the `generate` function? Is it possible that the `input_ids` in your test already contain the answer?
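To illustrate my concern, here is a toy sketch (hypothetical code, not from this repo) of the difference: a single `forward` pass only scores the positions of the sequence it is given, so evaluating with it seems to require the answer to already be in `input_ids`, whereas `generate` produces new tokens autoregressively.

```python
# Toy "model" for illustration only; real forward returns logits, not tokens.
def toy_forward(input_ids):
    # Predicts a next token for every position of the given sequence.
    # Toy rule: the prediction at each position is that token's value + 1.
    return [t + 1 for t in input_ids]

def toy_generate(input_ids, max_new_tokens):
    # Autoregressive decoding: call forward repeatedly, append the
    # prediction at the *last* position, and feed the sequence back in.
    ids = list(input_ids)
    for _ in range(max_new_tokens):
        preds = toy_forward(ids)
        ids.append(preds[-1])  # next token = prediction at final position
    return ids

prompt = [1, 2, 3]
print(toy_forward(prompt))      # scores the given positions: [2, 3, 4]
print(toy_generate(prompt, 3))  # emits new tokens: [1, 2, 3, 4, 5, 6]
```

So a lone `forward` call never extends the sequence, which is why I wonder how the evaluation obtains the answer tokens.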
Also, when will you make the training and testing datasets and dataloaders public?

```python
dataset = eval_dataloader.dataset
```

(`VisionLLM/VisionLLMv2/visionllmv2/eval/eval_det.py`, line 111 at `028f8b3`)