Convert Models to Edge TPU TFLite Format for Google Coral
Introduction
The Coral M.2 Accelerator is an M.2 module with a dual Edge TPU that adds an Edge TPU coprocessor to existing systems and products through an M.2 E-key slot. TensorFlow and PyTorch are the most popular deep-learning frameworks, so to use the Edge TPU we need to compile models into the Edge TPU format.
This document walks you through compiling a model and running it on the Google Coral TPU, so you can take advantage of its performance for machine-learning applications.
Prepare the Hardware
Install the Hardware
Convert the Model
Before you start, make sure you have installed the Google Coral TPU on your Raspberry Pi 5 by following the installation guide.
- For TensorFlow models
- For PyTorch models
- For YOLO models
All of the procedures below have been tested with Python 3.11.9.
Install TensorFlow
pip install tensorflow
Check tflite_convert
tflite_convert -h
The result should look like this:
2024-07-23 10:41:03.750087: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-07-23 10:41:04.276520: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
usage: tflite_convert [-h] --output_file OUTPUT_FILE [--saved_model_dir SAVED_MODEL_DIR | --keras_model_file KERAS_MODEL_FILE] [--saved_model_tag_set SAVED_MODEL_TAG_SET]
[--saved_model_signature_key SAVED_MODEL_SIGNATURE_KEY] [--enable_v1_converter] [--experimental_new_converter [EXPERIMENTAL_NEW_CONVERTER]]
[--experimental_new_quantizer [EXPERIMENTAL_NEW_QUANTIZER]]
Command line tool to run TensorFlow Lite Converter.
optional arguments:
-h, --help show this help message and exit
--output_file OUTPUT_FILE
Full filepath of the output file.
--saved_model_dir SAVED_MODEL_DIR
Full path of the directory containing the SavedModel.
--keras_model_file KERAS_MODEL_FILE
Full filepath of HDF5 file containing tf.Keras model.
--saved_model_tag_set SAVED_MODEL_TAG_SET
Comma-separated set of tags identifying the MetaGraphDef within the SavedModel to analyze. All tags must be present. In order to pass in an empty tag set, pass in "". (default "serve")
--saved_model_signature_key SAVED_MODEL_SIGNATURE_KEY
Key identifying the SignatureDef containing inputs and outputs. (default DEFAULT_SERVING_SIGNATURE_DEF_KEY)
--enable_v1_converter
Enables the TensorFlow V1 converter in 2.0
--experimental_new_converter [EXPERIMENTAL_NEW_CONVERTER]
Experimental flag, subject to change. Enables MLIR-based conversion instead of TOCO conversion. (default True)
--experimental_new_quantizer [EXPERIMENTAL_NEW_QUANTIZER]
Experimental flag, subject to change. Enables MLIR-based quantizer instead of flatbuffer conversion. (default True)
Convert a TensorFlow Model to a TFLite Model
tflite_convert --saved_model_dir=YOUR_MODEL_PATH --output_file=YOUR_MODEL_NAME.tflite
Convert the TFLite Model to an Edge TPU Model
Before converting the TFLite model into an Edge TPU model, you should optimize your model; see Optimize the TensorFlow Model.
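If you prefer to convert from Python rather than the command line, the sketch below shows post-training full-integer quantization with the tf.lite.TFLiteConverter API, which produces the fully integer-quantized model the Edge TPU compiler requires. YOUR_MODEL_PATH, the 1x224x224x3 input shape, and the random calibration data are placeholders; in practice, feed a representative dataset drawn from your real inputs.
# Minimal sketch: post-training full-integer quantization with TFLiteConverter.
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Yield ~100 calibration samples shaped like the model's real input.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("YOUR_MODEL_PATH")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# The Edge TPU compiler only accepts fully integer-quantized models.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
with open("YOUR_MODEL_NAME.tflite", "wb") as f:
    f.write(tflite_model)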
Install the Edge TPU Compiler
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
sudo apt-get update
sudo apt-get install edgetpu-compiler
Convert the TFLite Model to an Edge TPU Model
edgetpu_compiler YOUR_MODEL_NAME.tflite
You should then get a new file named YOUR_MODEL_NAME_edgetpu.tflite.
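To run the compiled model on the Coral TPU, load it with the Edge TPU delegate. Below is a minimal sketch using the tflite_runtime package, assuming the Edge TPU runtime (libedgetpu) is installed; the model path and the dummy input are only illustrative.
# Minimal sketch: run an Edge TPU compiled model with tflite_runtime.
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path="YOUR_MODEL_NAME_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]
# Feed a dummy tensor with the shape and dtype the model expects.
dummy = np.zeros(input_details["shape"], dtype=input_details["dtype"])
interpreter.set_tensor(input_details["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details["index"]).shape)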
We do not recommend this method, because in practice many of the required packages conflict with one another. In addition, TensorFlow Lite supports only a limited set of operators, so some PyTorch operations may not be supported.
Convert a PyTorch Model to a TFLite Model
Install Dependencies
pip install -r https://github.com/google-ai-edge/ai-edge-torch/releases/download/v0.1.1/requirements.txt
pip install ai-edge-torch==0.1.1
Convert
import ai_edge_torch
import numpy
import torch
import torchvision
# Load a pretrained ResNet-18 and switch it to evaluation mode.
resnet18 = torchvision.models.resnet18(weights=torchvision.models.ResNet18_Weights.IMAGENET1K_V1).eval()
sample_inputs = (torch.randn(1, 3, 224, 224),)
torch_output = resnet18(*sample_inputs)
# Convert the PyTorch model to a TFLite flatbuffer and save it.
edge_model = ai_edge_torch.convert(resnet18.eval(), sample_inputs)
edge_model.export('resnet18.tflite')
You will get resnet18.tflite.
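As a quick sanity check, you can run the converted model in Python and compare it with the PyTorch output; the converted edge_model object is callable, as shown in the library's getting-started example. Continuing from the script above:
# Compare the TFLite output with the PyTorch output (continues the script above).
edge_output = edge_model(*sample_inputs)
if numpy.allclose(torch_output.detach().numpy(), edge_output, atol=1e-5):
    print("PyTorch and TFLite outputs match within tolerance")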
Check tflite_convert
Before converting the TFLite model into an Edge TPU model, you should optimize your model; see Optimize the TensorFlow Model.
tflite_convert -h
The result should look like this:
2024-07-23 10:41:03.750087: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-07-23 10:41:04.276520: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
usage: tflite_convert [-h] --output_file OUTPUT_FILE [--saved_model_dir SAVED_MODEL_DIR | --keras_model_file KERAS_MODEL_FILE] [--saved_model_tag_set SAVED_MODEL_TAG_SET]
[--saved_model_signature_key SAVED_MODEL_SIGNATURE_KEY] [--enable_v1_converter] [--experimental_new_converter [EXPERIMENTAL_NEW_CONVERTER]]
[--experimental_new_quantizer [EXPERIMENTAL_NEW_QUANTIZER]]
Command line tool to run TensorFlow Lite Converter.
optional arguments:
-h, --help show this help message and exit
--output_file OUTPUT_FILE
Full filepath of the output file.
--saved_model_dir SAVED_MODEL_DIR
Full path of the directory containing the SavedModel.
--keras_model_file KERAS_MODEL_FILE
Full filepath of HDF5 file containing tf.Keras model.
--saved_model_tag_set SAVED_MODEL_TAG_SET
Comma-separated set of tags identifying the MetaGraphDef within the SavedModel to analyze. All tags must be present. In order to pass in an empty tag set, pass in "". (default "serve")
--saved_model_signature_key SAVED_MODEL_SIGNATURE_KEY
Key identifying the SignatureDef containing inputs and outputs. (default DEFAULT_SERVING_SIGNATURE_DEF_KEY)
--enable_v1_converter
Enables the TensorFlow V1 converter in 2.0
--experimental_new_converter [EXPERIMENTAL_NEW_CONVERTER]
Experimental flag, subject to change. Enables MLIR-based conversion instead of TOCO conversion. (default True)
--experimental_new_quantizer [EXPERIMENTAL_NEW_QUANTIZER]
Experimental flag, subject to change. Enables MLIR-based quantizer instead of flatbuffer conversion. (default True)
Convert the TFLite Model to an Edge TPU Model
Install the Edge TPU Compiler
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
sudo apt-get update
sudo apt-get install edgetpu-compiler
Convert the TFLite Model to an Edge TPU Model
edgetpu_compiler resnet18.tflite
You should then get a new file named resnet18_edgetpu.tflite.
Install Ultralytics
pip install ultralytics
Convert a YOLO Model to an Edge TPU Model
# For example, convert yolov8s.pt into yolov8s_full_integer_quant_edgetpu.tflite
yolo export model=yolov8s.pt format=edgetpu int8=True
The result should look like this:
jiahao@PC:~/yolov8s_saved_model$ ls
assets saved_model.pb yolov8s_float32.tflite yolov8s_full_integer_quant.tflite
fingerprint.pb variables yolov8s_full_integer_quant_edgetpu.log yolov8s_int8.tflite
metadata.yaml yolov8s_float16.tflite yolov8s_full_integer_quant_edgetpu.tflite yolov8s_integer_quant.tflite
The file yolov8s_full_integer_quant_edgetpu.tflite is the model you need.
You can convert other TFLite models into Edge TPU models with the following command:
# For example, you can convert yolov8s_int8.tflite into an Edge TPU model
edgetpu_compiler yolov8s_int8.tflite
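Ultralytics can also run the compiled Edge TPU model directly, which is a convenient way to verify it. A minimal sketch is shown below; the sample image URL is just an example asset from the Ultralytics docs.
from ultralytics import YOLO

# Load the Edge TPU compiled model and run a prediction on a sample image.
model = YOLO("yolov8s_full_integer_quant_edgetpu.tflite")
results = model.predict("https://ultralytics.com/images/bus.jpg")
print(results[0].boxes)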
Tech Support & Product Discussion
Thank you for choosing our products! We are here to provide you with different kinds of support to ensure that your experience with our products is as smooth as possible. We offer several communication channels to cater to different preferences and needs.