diff --git a/docs/toolchain/appendix/app_flow_manual.md b/docs/toolchain/appendix/app_flow_manual.md
index 8735f14..cd4716a 100644
--- a/docs/toolchain/appendix/app_flow_manual.md
+++ b/docs/toolchain/appendix/app_flow_manual.md
@@ -91,7 +91,7 @@ The memory layout for the output node data after CSIM inference is different bet
Let's look at an example where c = 4, h = 12, and w = 12. Indexing starts at 0 for this example.
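The layout difference can be made concrete with a small index calculation. The sketch below is a generic illustration of channel-first (CHW) versus channel-last (HWC) flat offsets for the c = 4, h = 12, w = 12 example above; it is not a description of the exact CSIM buffer format.

```python
c, h, w = 4, 12, 12  # dimensions from the example above

def chw_offset(ci, hi, wi):
    # channel-first: the whole h*w plane of channel 0, then channel 1, ...
    return ci * h * w + hi * w + wi

def hwc_offset(ci, hi, wi):
    # channel-last: the c values of each pixel are stored contiguously
    return hi * w * c + wi * c + ci

# The same element (channel 1, row 0, column 2) lands at different flat offsets:
print(chw_offset(1, 0, 2))  # 146
print(hwc_offset(1, 0, 2))  # 9
```

Both orderings are bijections onto the same `c * h * w` buffer; only the traversal order differs.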
diff --git a/docs/toolchain/appendix/converters.md b/docs/toolchain/appendix/converters.md
index ed1108e..aa2e199 100644
--- a/docs/toolchain/appendix/converters.md
+++ b/docs/toolchain/appendix/converters.md
@@ -436,7 +436,7 @@ the output, then the model is transposed into channel first. We can use the mode
[section 6](#6-onnx-to-onnx-onnx-optimization).
diff --git a/docs/toolchain/appendix/fx_report.md b/docs/toolchain/appendix/fx_report.md
index b64d1e6..970dc65 100644
--- a/docs/toolchain/appendix/fx_report.md
+++ b/docs/toolchain/appendix/fx_report.md
@@ -9,22 +9,22 @@ information table.
The summary will show the IP evaluator information. Below are some examples of the report:
@@ -56,22 +56,22 @@ The summary will show the IP evaluator information. Below are some examples of r
## Node information table
+
Figure 8. Node details for platform 730, mode 2 (with fixed-point model generated and SNR check).
+
Notes:
diff --git a/docs/toolchain/appendix/yolo_example.md b/docs/toolchain/appendix/yolo_example.md
index 9f93f3e..94ad2c4 100644
--- a/docs/toolchain/appendix/yolo_example.md
+++ b/docs/toolchain/appendix/yolo_example.md
@@ -95,7 +95,7 @@ Now, we go through the whole toolchain flow with KTC (Kneron Toolchain) using the Python
* Run "python" or "ipython" to open the Python shell:
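The rest of the example drives the flow through the KTC Python package. As a quick sanity check once the shell is open (the module name `ktc` is assumed from the toolchain docs, and it should only resolve inside the toolchain docker), you can probe for the package without importing it:

```python
import importlib.util

# "ktc" is the assumed module name of the Kneron Toolchain Python API;
# find_spec locates it without actually importing it.
spec = importlib.util.find_spec("ktc")
if spec is None:
    print("ktc not found - make sure you are inside the toolchain docker")
else:
    print("ktc available at", spec.origin)
```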
@@ -381,14 +381,14 @@ We leverage the provided example code in Kneron PLUS to run our YOLO NEF.
2. Modify `kneron_plus/python/example/KL720DemoGenericInferencePostYolo.py` line 20. Change the input image from "bike_cars_street_224x224.bmp" to "bike_cars_street_416x416.bmp".
3. Modify line 105. Change the normalization method in the preprocess config from "Kneron" mode to "Yolo" mode.
@@ -402,7 +402,7 @@ We leverage the provided example code in Kneron PLUS to run our YOLO NEF.
Then, you should see that the YOLO NEF detection result is saved to "./output_bike_cars_street_416x416.bmp":
diff --git a/docs/toolchain/appendix/yolo_example_InModelPreproc_trick.md b/docs/toolchain/appendix/yolo_example_InModelPreproc_trick.md
index 719dcf7..57d3c54 100644
--- a/docs/toolchain/appendix/yolo_example_InModelPreproc_trick.md
+++ b/docs/toolchain/appendix/yolo_example_InModelPreproc_trick.md
@@ -72,7 +72,7 @@ Now, we go through the whole toolchain flow with KTC (Kneron Toolchain) using the Python
* Run "python" to open the Python shell:
diff --git a/docs/toolchain/manual_1_overview.md b/docs/toolchain/manual_1_overview.md
index f22e058..bf66af7 100644
--- a/docs/toolchain/manual_1_overview.md
+++ b/docs/toolchain/manual_1_overview.md
@@ -1,5 +1,5 @@
# 1. Toolchain Overview
@@ -39,7 +39,7 @@ In the following parts of this page, you can go through the basic toolchain work
Below is a brief diagram showing the workflow for generating the binary from a floating-point model using the toolchain.
diff --git a/docs/toolchain/quantization/1.1_Introdution_to_Post-training_Quantization.md b/docs/toolchain/quantization/1.1_Introdution_to_Post-training_Quantization.md
index 8c47fce..3365d26 100644
--- a/docs/toolchain/quantization/1.1_Introdution_to_Post-training_Quantization.md
+++ b/docs/toolchain/quantization/1.1_Introdution_to_Post-training_Quantization.md
@@ -3,6 +3,6 @@
Post-training quantization (PTQ) uses a batch of calibration data to calibrate the trained model and directly converts the trained FP32 model into a fixed-point computing model, without any retraining of the original model. The quantization process can be completed by adjusting only a few hyperparameters, making it simple and fast. This method is therefore widely used in device-side and cloud-side deployment scenarios. We recommend trying the PTQ method first to see if it meets your accuracy requirements.
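The calibration step can be illustrated with a generic affine-quantization sketch: derive a scale and zero-point from the calibration data's min/max, then map FP32 values onto uint8. This is a minimal, self-contained example of the idea, not the Kneron toolchain's actual quantization scheme.

```python
import numpy as np

def quantize_uint8(x):
    """Affine quantization of a float array to uint8, with scale and
    zero-point derived from the observed (calibration) min/max."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 255.0
    if scale == 0.0:  # constant tensor: avoid division by zero
        scale = 1.0
    zero_point = int(round(-lo / scale))
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

# "Calibration" batch standing in for real activation statistics:
calib = np.array([-1.0, -0.5, 0.0, 0.5, 1.0], dtype=np.float32)
q, s, zp = quantize_uint8(calib)
recovered = dequantize(q, s, zp)
# Round-trip error is bounded by one quantization step (the scale).
```

The accuracy of such a scheme hinges on how representative the calibration batch is of real inputs, which is why PTQ flows ask for a batch of calibration data rather than a single sample.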
diff --git a/docs/toolchain/quantization/1.3_Optimizing_Quantization_Modes.md b/docs/toolchain/quantization/1.3_Optimizing_Quantization_Modes.md
index 6c17de7..6c0fd2b 100644
--- a/docs/toolchain/quantization/1.3_Optimizing_Quantization_Modes.md
+++ b/docs/toolchain/quantization/1.3_Optimizing_Quantization_Modes.md
@@ -112,7 +112,7 @@ bie_path = km.analysis(
@@ -134,7 +134,7 @@ export MIXBW_DEBUG=True