Describe the bug
The Whisper audio transcription sample fails to load its model: InferenceSession initialization throws an OnnxRuntimeException from the OpenVINO execution provider, because a SkipLayerNormalization node in the model declares 4 outputs while the OpenVINO implementation provides only 1.
To Reproduce
Steps to reproduce the behavior:
1. Open the Whisper audio transcription sample in AI Dev Gallery.
2. Attempt to transcribe audio or video.
Expected behavior
The model should load and I should be able to transcribe audio/video.
Please complete the following information:
Version: 0.4.5.0
Additional context
Type: OnnxRuntimeException
Message: [ErrorCode:RuntimeException] Exception during initialization: D:\a_work\1\s\WCRforIntel\win-onnxruntime\onnxruntime\core\providers\openvino\ov_interface.cc:81 class std::shared_ptr __cdecl onnxruntime::openvino_ep::OVCore::ReadModel(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > &&,const class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > &) [OpenVINO-EP] [OpenVINO-EP] Exception while Reading network: Check 'onnx_node.get_outputs_size() <= outputs_size' failed at src\frontends\onnx\frontend\src\core\graph.cpp:399:
FrontEnd API failed with GeneralFailure:
Expected output number of SkipLayerNormalization node is 4 while the implementation provides 1 outputs
StackTrace: at Microsoft.ML.OnnxRuntime.InferenceSession.Init(String modelPath, SessionOptions options, PrePackedWeightsContainer prepackedWeightsContainer)
at AIDevGallery.Samples.SharedCode.WhisperWrapper.CreateAsync(String modelPath, Nullable`1 policy, String device, Boolean compileModel)
at AIDevGallery.Samples.OpenSourceModels.Whisper.WhisperAudioTranscription.LoadModelAsync(SampleNavigationParameters sampleParams)
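For context, a minimal standalone sketch that exercises the same code path (creating an InferenceSession over a Whisper ONNX model with the OpenVINO execution provider) is shown below. The model path and the "NPU" device id are assumptions, and the gallery's WhisperWrapper may configure the session differently; this is only meant to illustrate where the exception is thrown.

```csharp
// Hedged repro sketch, not the gallery's actual code. Assumes a local Whisper ONNX
// model and that the OpenVINO EP is added via AppendExecutionProvider_OpenVINO.
using System;
using Microsoft.ML.OnnxRuntime;

class WhisperLoadRepro
{
    static void Main()
    {
        string modelPath = @"C:\models\whisper\model.onnx"; // hypothetical path

        using var options = new SessionOptions();
        // Route inference through the OpenVINO execution provider.
        // "NPU" is an assumed device id; "GPU"/"CPU" are other common values.
        options.AppendExecutionProvider_OpenVINO("NPU");

        // InferenceSession construction (InferenceSession.Init) is where the
        // OnnxRuntimeException above is thrown: OpenVINO's ONNX frontend rejects
        // the model because a SkipLayerNormalization node declares 4 outputs
        // while OpenVINO's implementation of that op provides only 1.
        using var session = new InferenceSession(modelPath, options);
        Console.WriteLine("Model loaded.");
    }
}
```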
