From c6a6a974f343a726a3e7e00e8998281f72d6dd6e Mon Sep 17 00:00:00 2001 From: Jiyuan Liu Date: Wed, 4 Mar 2026 12:51:05 +0800 Subject: [PATCH] Change img link to avoid iframe. --- docs/toolchain/appendix/app_flow_manual.md | 2 +- docs/toolchain/appendix/converters.md | 2 +- docs/toolchain/appendix/fx_report.md | 16 ++++++++-------- docs/toolchain/appendix/toolchain_webgui.md | 2 +- docs/toolchain/appendix/yolo_example.md | 8 ++++---- .../yolo_example_InModelPreproc_trick.md | 2 +- docs/toolchain/manual_1_overview.md | 4 ++-- ..._Introdution_to_Post-training_Quantization.md | 2 +- .../1.3_Optimizing_Quantization_Modes.md | 4 ++-- 9 files changed, 21 insertions(+), 21 deletions(-) diff --git a/docs/toolchain/appendix/app_flow_manual.md b/docs/toolchain/appendix/app_flow_manual.md index 7b4b426..8735f14 100644 --- a/docs/toolchain/appendix/app_flow_manual.md +++ b/docs/toolchain/appendix/app_flow_manual.md @@ -91,7 +91,7 @@ The memory layout for the output node data after CSIM inference is different bet Let's look at an example where c = 4, h = 12, and w = 12. Indexing starts at 0 for this example.
- +

Memory layouts

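The c = 4, h = 12, w = 12 example above can be made concrete with a short sketch of how a flat memory offset depends on the chosen layout. This is a generic illustration of why channel-first and channel-last layouts place the same element at different offsets, not the exact 520/720 CSIM formats:

```python
def chw_index(c, h, w, ci, hi, wi):
    # Flat offset in a channel-first (C, H, W) layout; c is unused but kept
    # for a symmetric signature.
    return ci * (h * w) + hi * w + wi

def hwc_index(c, h, w, ci, hi, wi):
    # Flat offset in a channel-last (H, W, C) layout.
    return hi * (w * c) + wi * c + ci

# The example from the text: c = 4, h = 12, w = 12, zero-based indices.
C, H, W = 4, 12, 12
# The same element (channel 1, row 2, column 3) lands at different offsets:
print(chw_index(C, H, W, 1, 2, 3))  # 1*144 + 2*12 + 3 = 171
print(hwc_index(C, H, W, 1, 2, 3))  # 2*48 + 3*4 + 1 = 109
```

Converting between the two therefore requires re-striding the buffer, not just reinterpreting it.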
diff --git a/docs/toolchain/appendix/converters.md b/docs/toolchain/appendix/converters.md index 8bcf85d..ed1108e 100644 --- a/docs/toolchain/appendix/converters.md +++ b/docs/toolchain/appendix/converters.md @@ -436,7 +436,7 @@ the output, then the model is transposed into channel first. We can use the mode [section 6](#6-onnx-to-onnx-onnx-optimization).
- +

Figure 4. Pre-edited model

diff --git a/docs/toolchain/appendix/fx_report.md b/docs/toolchain/appendix/fx_report.md index dc4c02b..741a74f 100644 --- a/docs/toolchain/appendix/fx_report.md +++ b/docs/toolchain/appendix/fx_report.md @@ -9,22 +9,22 @@ information table. The summary will show the ip evaluator information. Below are some examples of the report:
- +

Figure 1. Summary for platform 520, mode 0 (ip evaluator only).

- +

Figure 2. Summary for platform 530, mode 0 (ip evaluator only).

- +

Figure 3. Summary for platform 520, mode 1 (with fix model generated).

- +

Figure 4. Summary for platform 730, mode 2 (with fix model generated and snr check).

@@ -56,22 +56,22 @@ The summary will show the ip evaluator information. Below are some examples of r ## Node information table
- +

Figure 5. Node details for platform 520, mode 0 (ip evaluator only).

- +

Figure 6. Node details for platform 530, mode 0 (ip evaluator only).

- +

Figure 7. Node details for platform 520, mode 1 (with fix model generated).

- +

Figure 8. Node details for platform 730, mode 2 (with fix model generated and snr check).

diff --git a/docs/toolchain/appendix/toolchain_webgui.md b/docs/toolchain/appendix/toolchain_webgui.md index 7894092..dba71ad 100644 --- a/docs/toolchain/appendix/toolchain_webgui.md +++ b/docs/toolchain/appendix/toolchain_webgui.md @@ -74,7 +74,7 @@ After running the command above successfully, you can access the web GUI at - + Notes: diff --git a/docs/toolchain/appendix/yolo_example.md b/docs/toolchain/appendix/yolo_example.md index cedbf97..9f93f3e 100644 --- a/docs/toolchain/appendix/yolo_example.md +++ b/docs/toolchain/appendix/yolo_example.md @@ -95,7 +95,7 @@ Now, we go through the whole toolchain flow by KTC (Kneron Toolchain) using the Python * Run "python" or "ipython" to open the Python shell:
- +

Figure 1. python shell

@@ -381,14 +381,14 @@ We leverage the provided example code in Kneron PLUS to run our YOLO NEF. 2. Modify `kneron_plus/python/example/KL720DemoGenericInferencePostYolo.py` line 20. Change the input image from "bike_cars_street_224x224.bmp" to "bike_cars_street_416x416.bmp"
- +

Figure 2. modify input image in example

3. Modify line 105. Change the normalization method in the preprocess config from "Kneron" mode to "Yolo" mode
- +

Figure 3. modify normalization method in example

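The normalization change above matters because the preprocessing at inference time must match what the model was trained with. The formulas below are assumptions for illustration only; confirm the exact definitions of the "Kneron" and "Yolo" normalization modes in the Kneron PLUS documentation:

```python
import numpy as np

def normalize_kneron(img_u8):
    # Assumed formula: maps [0, 255] to roughly [-0.5, 0.5).
    return (img_u8.astype(np.float32) - 128.0) / 256.0

def normalize_yolo(img_u8):
    # Assumed formula: maps [0, 255] to [0, 1], the usual YOLO training
    # preprocessing.
    return img_u8.astype(np.float32) / 255.0

px = np.array([0, 128, 255], dtype=np.uint8)
print(normalize_kneron(px))  # roughly [-0.5, 0.0, 0.496]
print(normalize_yolo(px))    # roughly [0.0, 0.502, 1.0]
```

Feeding a model inputs in the wrong range is a common cause of garbage detections even when the NEF itself is correct.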
@@ -402,7 +402,7 @@ We leverage the provided example code in Kneron PLUS to run our YOLO NEF. Then, you should see that the YOLO NEF detection result is saved to "./output_bike_cars_street_416x416.bmp":
- +

Figure 4. detection result

diff --git a/docs/toolchain/appendix/yolo_example_InModelPreproc_trick.md b/docs/toolchain/appendix/yolo_example_InModelPreproc_trick.md index 57af7bd..719dcf7 100644 --- a/docs/toolchain/appendix/yolo_example_InModelPreproc_trick.md +++ b/docs/toolchain/appendix/yolo_example_InModelPreproc_trick.md @@ -72,7 +72,7 @@ Now, we go through the whole toolchain flow by KTC (Kneron Toolchain) using the Python * Run "python" to open the Python shell:
- +

Figure 1. python shell

diff --git a/docs/toolchain/manual_1_overview.md b/docs/toolchain/manual_1_overview.md index 06898d2..f22e058 100644 --- a/docs/toolchain/manual_1_overview.md +++ b/docs/toolchain/manual_1_overview.md @@ -1,5 +1,5 @@
- +
# 1. Toolchain Overview @@ -39,7 +39,7 @@ In the following parts of this page, you can go through the basic toolchain work Below is a brief diagram showing the workflow of how to generate the binary from a floating-point model using the toolchain.
- +

Figure 1. Diagram of working flow

diff --git a/docs/toolchain/quantization/1.1_Introdution_to_Post-training_Quantization.md b/docs/toolchain/quantization/1.1_Introdution_to_Post-training_Quantization.md index 84013eb..8c47fce 100644 --- a/docs/toolchain/quantization/1.1_Introdution_to_Post-training_Quantization.md +++ b/docs/toolchain/quantization/1.1_Introdution_to_Post-training_Quantization.md @@ -3,6 +3,6 @@ Post-training quantization (PTQ) uses a batch of calibration data to calibrate the trained model, and directly converts the trained FP32 model into a fixed-point computing model without any training on the original model. The quantization process requires adjusting only a few hyperparameters, so it is simple and fast. Therefore, this method has been widely used in a large number of device-side and cloud-side deployment scenarios. We recommend that you try the PTQ method to see if it meets your requirements.
- +

Figure 1. PTQ Chart

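The FP32-to-fixed-point conversion described above can be sketched with a minimal symmetric int8 quantizer. This is a generic illustration of the idea, not the toolchain's exact algorithm:

```python
import numpy as np

def quantize_int8(x):
    # Symmetric per-tensor quantization: a single scale, chosen here from the
    # tensor's max absolute value (in PTQ, calibration data plays this role).
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximate float tensor from the int8 values.
    return q.astype(np.float32) * scale

x = np.array([-1.0, -0.25, 0.0, 0.5, 1.0], dtype=np.float32)
q, scale = quantize_int8(x)
# Round-trip error is bounded by about half the quantization step.
print(np.abs(x - dequantize(q, scale)).max())
```

Choosing the scale well (e.g. from a representative calibration batch rather than a single tensor) is exactly what the calibration step is for.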
diff --git a/docs/toolchain/quantization/1.3_Optimizing_Quantization_Modes.md b/docs/toolchain/quantization/1.3_Optimizing_Quantization_Modes.md index b1e4818..6c17de7 100644 --- a/docs/toolchain/quantization/1.3_Optimizing_Quantization_Modes.md +++ b/docs/toolchain/quantization/1.3_Optimizing_Quantization_Modes.md @@ -112,7 +112,7 @@ bie_path = km.analysis(
- +

Figure 1. SNR-FPS Chart

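The SNR on the chart's axis can be computed, in a generic sketch (the toolchain's exact definition may differ), by comparing the float model's output with the fixed-point model's output:

```python
import numpy as np

def snr_db(float_out, fixed_out):
    # Signal-to-noise ratio in dB: the float output is the "signal" and the
    # float/fixed difference is the quantization "noise".
    f = np.asarray(float_out, dtype=np.float64)
    n = f - np.asarray(fixed_out, dtype=np.float64)
    return 10.0 * np.log10(np.sum(f ** 2) / np.sum(n ** 2))

# A small perturbation gives a high SNR; larger error lowers it.
out = np.array([1.0, 2.0, 3.0])
print(snr_db(out, out * 1.001))  # ~60 dB
```

Higher SNR means the fixed-point model tracks the float model more closely, which is the quantity traded off against FPS in the chart.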
@@ -134,7 +134,7 @@ export MIXBW_DEBUG=True
- +

Figure 2. Sensitivity Analysis