From 07742b05926c6f21273fa9875bf2c28824e8f9d3 Mon Sep 17 00:00:00 2001 From: Jiyuan Liu Date: Wed, 4 Mar 2026 13:26:17 +0800 Subject: [PATCH] Try avoiding the link rewrite logic. --- docs/toolchain/appendix/app_flow_manual.md | 2 +- docs/toolchain/appendix/converters.md | 2 +- docs/toolchain/appendix/fx_report.md | 16 ++++++++-------- docs/toolchain/appendix/toolchain_webgui.md | 2 +- docs/toolchain/appendix/yolo_example.md | 8 ++++---- .../yolo_example_InModelPreproc_trick.md | 2 +- docs/toolchain/manual_1_overview.md | 4 ++-- ..._Introdution_to_Post-training_Quantization.md | 2 +- .../1.3_Optimizing_Quantization_Modes.md | 4 ++-- 9 files changed, 21 insertions(+), 21 deletions(-) diff --git a/docs/toolchain/appendix/app_flow_manual.md b/docs/toolchain/appendix/app_flow_manual.md index 8735f14..cd4716a 100644 --- a/docs/toolchain/appendix/app_flow_manual.md +++ b/docs/toolchain/appendix/app_flow_manual.md @@ -91,7 +91,7 @@ The memory layout for the output node data after CSIM inference is different bet Let's look at an example where c = 4, h = 12, and w = 12. Indexing starts at 0 for this example.
- +

Memory layouts

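As a hedged illustration of the example above (assuming the standard row-major channel-first layout on the ONNX side; the hardware-aligned CSIM layout differs, as the figure shows), the flat index of element (c, h, w) in the reference layout can be computed as:

```python
# Sketch only: flat index of element (c, h, w) in a row-major
# channel-first (NCHW, N=1) buffer -- the ONNX-side reference layout.
# The hardware-aligned CSIM layout differs; this illustrates the
# reference indexing for the example dimensions c=4, h=12, w=12.
C, H, W = 4, 12, 12

def nchw_index(c, h, w):
    # Row-major: channel stride is H*W, row stride is W.
    return c * H * W + h * W + w

# Element (channel 1, row 0, col 3) for the example dimensions:
print(nchw_index(1, 0, 3))  # 147
```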
diff --git a/docs/toolchain/appendix/converters.md b/docs/toolchain/appendix/converters.md index ed1108e..aa2e199 100644 --- a/docs/toolchain/appendix/converters.md +++ b/docs/toolchain/appendix/converters.md @@ -436,7 +436,7 @@ the output, then the model is transposed into channel first. We can use the mode [section 6](#6-onnx-to-onnx-onnx-optimization).
- +

Figure 4. Pre-edited model

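A hedged sketch of what "transposed into channel first" means for tensor data (numpy is used purely for illustration; the converter itself performs this edit at the ONNX graph level, not on numpy arrays):

```python
import numpy as np

# Illustration only: converting a channel-last (NHWC) tensor to
# channel-first (NCHW), which is the layout change the converter's
# transpose performs on the graph. Shapes are arbitrary examples.
nhwc = np.zeros((1, 224, 224, 3))        # N, H, W, C
nchw = np.transpose(nhwc, (0, 3, 1, 2))  # N, C, H, W
print(nchw.shape)  # (1, 3, 224, 224)
```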
diff --git a/docs/toolchain/appendix/fx_report.md b/docs/toolchain/appendix/fx_report.md index b64d1e6..970dc65 100644 --- a/docs/toolchain/appendix/fx_report.md +++ b/docs/toolchain/appendix/fx_report.md @@ -9,22 +9,22 @@ information table. The summary will show the IP evaluator information. Below are some examples of the report:
- +

Figure 1. Summary for platform 520, mode 0 (IP evaluator only)

- +

Figure 2. Summary for platform 530, mode 0 (IP evaluator only)

- +

Figure 3. Summary for platform 520, mode 1 (with fixed-point model generated)

- +

Figure 4. Summary for platform 730, mode 2 (with fixed-point model generated and SNR check).

@@ -56,22 +56,22 @@ The summary will show the IP evaluator information. Below are some examples of r ## Node information table
- +

Figure 5. Node details for platform 520, mode 0 (IP evaluator only).

- +

Figure 6. Node details for platform 530, mode 0 (IP evaluator only).

- +

Figure 7. Node details for platform 520, mode 1 (with fixed-point model generated).

- +

Figure 8. Node details for platform 730, mode 2 (with fixed-point model generated and SNR check).

diff --git a/docs/toolchain/appendix/toolchain_webgui.md b/docs/toolchain/appendix/toolchain_webgui.md index dba71ad..eec983a 100644 --- a/docs/toolchain/appendix/toolchain_webgui.md +++ b/docs/toolchain/appendix/toolchain_webgui.md @@ -74,7 +74,7 @@ After running the command above successfully, you can access the web GUI at - + Notes: diff --git a/docs/toolchain/appendix/yolo_example.md b/docs/toolchain/appendix/yolo_example.md index 9f93f3e..94ad2c4 100644 --- a/docs/toolchain/appendix/yolo_example.md +++ b/docs/toolchain/appendix/yolo_example.md @@ -95,7 +95,7 @@ Now, we go through all toolchain flow by KTC (Kneron Toolchain) using the Python * Run "python" or "ipython" to open the Python shell:
- +

Figure 1. Python shell

@@ -381,14 +381,14 @@ We leverage the provided example code in Kneron PLUS to run our YOLO NEF. 2. Modify `kneron_plus/python/example/KL720DemoGenericInferencePostYolo.py` line 20. Change the input image from "bike_cars_street_224x224.bmp" to "bike_cars_street_416x416.bmp"
- +

Figure 2. Modify the input image in the example

3. Modify line 105. Change the normalization method in the preprocess config from "Kneron" mode to "Yolo" mode
- +

Figure 3. Modify the normalization method in the example

@@ -402,7 +402,7 @@ We leverage the provided example code in Kneron PLUS to run our YOLO NEF. Then, you should see that the YOLO NEF detection result is saved to "./output_bike_cars_street_416x416.bmp":
- +

Figure 4. Detection result

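The exact formulas behind the two preprocess modes are not spelled out here; as a rough sketch (assuming "Kneron" mode maps pixels to roughly [-0.5, 0.5) and "Yolo" mode scales to [0, 1] -- a common convention, but check the Kneron PLUS documentation for the actual definitions):

```python
import numpy as np

# Hypothetical sketch of the two normalization modes referenced above.
# The exact Kneron PLUS formulas are an assumption here:
# "Kneron" mode is taken as (x - 128) / 256, "Yolo" mode as x / 255.
def normalize_kneron(pixels):
    return (pixels.astype(np.float32) - 128.0) / 256.0

def normalize_yolo(pixels):
    return pixels.astype(np.float32) / 255.0

img = np.array([0, 128, 255], dtype=np.uint8)
print(normalize_kneron(img))  # [-0.5, 0.0, ~0.496]
print(normalize_yolo(img))    # [0.0, ~0.502, 1.0]
```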
diff --git a/docs/toolchain/appendix/yolo_example_InModelPreproc_trick.md b/docs/toolchain/appendix/yolo_example_InModelPreproc_trick.md index 719dcf7..57d3c54 100644 --- a/docs/toolchain/appendix/yolo_example_InModelPreproc_trick.md +++ b/docs/toolchain/appendix/yolo_example_InModelPreproc_trick.md @@ -72,7 +72,7 @@ Now, we go through all toolchain flow by KTC (Kneron Toolchain) using the Python * Run "python" to open the Python shell:
- +

Figure 1. Python shell

diff --git a/docs/toolchain/manual_1_overview.md b/docs/toolchain/manual_1_overview.md index f22e058..bf66af7 100644 --- a/docs/toolchain/manual_1_overview.md +++ b/docs/toolchain/manual_1_overview.md @@ -1,5 +1,5 @@
- +
# 1. Toolchain Overview @@ -39,7 +39,7 @@ In the following parts of this page, you can go through the basic toolchain work Below is a brief diagram showing the workflow of how to generate the binary from a floating-point model using the toolchain.
- +

Figure 1. Diagram of the workflow

diff --git a/docs/toolchain/quantization/1.1_Introdution_to_Post-training_Quantization.md b/docs/toolchain/quantization/1.1_Introdution_to_Post-training_Quantization.md index 8c47fce..3365d26 100644 --- a/docs/toolchain/quantization/1.1_Introdution_to_Post-training_Quantization.md +++ b/docs/toolchain/quantization/1.1_Introdution_to_Post-training_Quantization.md @@ -3,6 +3,6 @@ Post-training quantization (PTQ) uses a batch of calibration data to calibrate the trained model, and directly converts the trained FP32 model into a fixed-point computing model without any training on the original model. The quantization process can be completed by adjusting only a few hyperparameters, and it is simple and fast because no training is involved. Therefore, this method has been widely used in a large number of device-side and cloud-side deployment scenarios. We recommend that you try the PTQ method first to see if it meets your requirements.
- +

Figure 1. PTQ Chart

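A minimal sketch of the core PTQ step the paragraph describes: calibration data is used to estimate value ranges, and FP32 values are then mapped to fixed-point. The symmetric per-tensor int8 scheme and the function names here are assumptions for illustration, not the toolchain's actual implementation:

```python
import numpy as np

# Minimal PTQ sketch: pick a scale from calibration activations,
# then quantize FP32 values to int8. A symmetric per-tensor scheme
# is assumed for illustration; real toolchains vary.
np.random.seed(0)

def calibrate_scale(calib_batches):
    # Largest absolute value observed over all calibration batches.
    max_abs = max(float(np.max(np.abs(b))) for b in calib_batches)
    return max_abs / 127.0

def quantize(x, scale):
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def dequantize(q, scale):
    return q.astype(np.float32) * scale

calib = [np.random.randn(8, 16).astype(np.float32) for _ in range(4)]
scale = calibrate_scale(calib)
x = calib[0]
err = float(np.max(np.abs(dequantize(quantize(x, scale), scale) - x)))
print(err <= scale)  # rounding error stays within one quantization step
```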
diff --git a/docs/toolchain/quantization/1.3_Optimizing_Quantization_Modes.md b/docs/toolchain/quantization/1.3_Optimizing_Quantization_Modes.md index 6c17de7..6c0fd2b 100644 --- a/docs/toolchain/quantization/1.3_Optimizing_Quantization_Modes.md +++ b/docs/toolchain/quantization/1.3_Optimizing_Quantization_Modes.md @@ -112,7 +112,7 @@ bie_path = km.analysis(
- +

Figure 1. SNR-FPS Chart

@@ -134,7 +134,7 @@ export MIXBW_DEBUG=True
- +

Figure 2. Sensitivity Analysis
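The SNR the analysis checks can be sketched with the standard signal-to-noise definition (this is the common form in decibels, given for intuition only; the toolchain's exact metric is not specified here):

```python
import numpy as np

# Illustrative SNR between a float reference output and a
# fixed-point (dequantized) output, in dB. Higher is better:
# signal power over quantization-noise power.
def snr_db(reference, quantized):
    signal = np.sum(reference.astype(np.float64) ** 2)
    noise = np.sum((reference.astype(np.float64) - quantized) ** 2)
    return 10.0 * np.log10(signal / noise)

# Toy check: a reference signal plus small seeded noise.
ref = np.linspace(-1.0, 1.0, 1000)
noisy = ref + np.random.default_rng(0).normal(0, 0.01, ref.shape)
print(round(snr_db(ref, noisy), 1))
```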