onnx2tf
Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The purpose of this tool is to solve the massive Transpose extrapolation problem in onnx-tensorflow (onnx-tf). I don't need a Star, but give me a pull request. Since I am adding challenging model optimizations and fixing bugs almost daily, I frequently introduce potential bugs that slip through CI's regression testing. Therefore, if you encounter new problems, I recommend trying a package that is a few versions older, or the latest package that will be released in a few days.
Incidentally, I have never used this tool in practice myself since I started working on it, but that doesn't matter.
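For orientation, here is a minimal conversion sketch using the tool's CLI. `model.onnx` is a placeholder input file, and `-i`/`-o` are the input-file and output-folder flags; installation is covered under Sample Usage below.

```bash
# Convert an NCHW ONNX file into an NHWC TensorFlow saved_model
# (TFLite flatbuffers are emitted into the same output folder).
onnx2tf -i model.onnx -o saved_model
```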
Note
- The TorchScript-based `torch.onnx.export` has already been moved to maintenance mode, and moving to the FX-graph-based `torch.onnx.dynamo_export` is recommended starting with PyTorch v2.2.0.
- The greatest advantage of ONNX generated by `torch.onnx.dynamo_export` is that it directly references the PyTorch implementation, allowing the conversion of any OP that was previously difficult to convert to ONNX.
- The maintainers of ONNX and PyTorch have assured us that they will not add new OPs after `opset=18` to the existing `torch.onnx.export`.
- https://pytorch.org/docs/stable/onnx_dynamo.html#torch.onnx.dynamo_export
- Models can be converted directly into an ONNX graph with Pythonic code using `onnxscript`.
- For future model versatility, it would be a good idea to consider moving to `torch.onnx.dynamo_export` at an early stage; a minimal usage sketch follows.
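A minimal sketch of the FX-graph-based export path, assuming PyTorch >= 2.2 and torchvision are installed; the model choice and file name are placeholders for illustration:

```python
import torch
import torchvision

# Load a pretrained ResNet-18 and switch to evaluation mode before export.
model = torchvision.models.resnet18(
    torchvision.models.ResNet18_Weights.IMAGENET1K_V1
).eval()
sample_input = torch.randn(1, 3, 224, 224)

# dynamo_export traces through the FX graph and returns an ONNXProgram;
# save() serializes it to an .onnx file.
onnx_program = torch.onnx.dynamo_export(model, sample_input)
onnx_program.save("resnet18.onnx")
```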
- Google AI Edge Torch: AI Edge Torch is a Python library that supports converting PyTorch models into a .tflite format, which can then be run with TensorFlow Lite and MediaPipe. This enables applications for Android, iOS, and IoT that can run models completely on-device. AI Edge Torch offers broad CPU coverage, with initial GPU and NPU support. AI Edge Torch seeks to closely integrate with PyTorch, building on top of torch.export() and providing good coverage of Core ATen operators.
https://github.com/google-ai-edge/ai-edge-torch?tab=readme-ov-file#pytorch-converter
```python
import torch
import torchvision
import ai_edge_torch

# Use resnet18 with pre-trained weights.
resnet18 = torchvision.models.resnet18(torchvision.models.ResNet18_Weights.IMAGENET1K_V1)
sample_inputs = (torch.randn(1, 3, 224, 224),)

# Convert and serialize PyTorch model to a tflite flatbuffer. Note that we
# are setting the model to evaluation mode prior to conversion.
edge_model = ai_edge_torch.convert(resnet18.eval(), sample_inputs)
edge_model.export("resnet18.tflite")
```
- Google for Developers Blog, May 14, 2024: AI Edge Torch: High Performance Inference of PyTorch Models on Mobile Devices
- Considering the compatibility of Pythonic code with TensorFlow/Keras/TFLite and the beauty of the conversion workflow, nobuco is the best choice going forward; a brief sketch of its API follows.
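A brief sketch of converting a PyTorch module with nobuco, assuming its documented `pytorch_to_keras` entry point and `ChannelOrder` enum; the module and dummy input here are placeholders, not part of this README:

```python
import torch
import nobuco
from nobuco import ChannelOrder

# Placeholder PyTorch module and dummy input used only for tracing.
pytorch_module = torch.nn.Conv2d(3, 16, kernel_size=3).eval()
x = torch.randn(1, 3, 224, 224)

# Trace the module and emit a Keras model with NHWC (TensorFlow)
# channel ordering on both inputs and outputs.
keras_model = nobuco.pytorch_to_keras(
    pytorch_module,
    args=[x],
    inputs_channel_order=ChannelOrder.TENSORFLOW,
    outputs_channel_order=ChannelOrder.TENSORFLOW,
)
```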
- The role of `onnx2tf` will end within the next one to two years. I don't intend to stop the maintenance of `onnx2tf` itself anytime soon, and I will continue to maintain it little by little as long as there is demand for it from everyone. The end of `onnx2tf` will come when `TensorRT` and other runtimes support porting from FX-graph-based models.
Model Conversion Status
https://github.com/PINTO0309/onnx2tf/wiki/model_status
Supported layers
- :heavy_check_mark: : Supported
- :white_check_mark: : Partial support
- Help wanted : Pull Requests are welcome
See the list of supported layers
| OP | Status |
|:---|:------:|
| Abs | :heavy_check_mark: |
| Acosh | :heavy_check_mark: |
| Acos | :heavy_check_mark: |
| Add | :heavy_check_mark: |
| And | :heavy_check_mark: |
| ArgMax | :heavy_check_mark: |
| ArgMin | :heavy_check_mark: |
| Asinh | :heavy_check_mark: |
| Asin | :heavy_check_mark: |
| Atanh | :heavy_check_mark: |
| Atan | :heavy_check_mark: |
| AveragePool | :heavy_check_mark: |
| BatchNormalization | :heavy_check_mark: |
| Bernoulli | :heavy_check_mark: |
| BitShift | :heavy_check_mark: |
| BitwiseAnd | Help wanted |
| BitwiseNot | Help wanted |
| BitwiseOr | Help wanted |
| BitwiseXor | Help wanted |
| Cast | :heavy_check_mark: |
| Ceil | :heavy_check_mark: |
| Celu | :heavy_check_mark: |
| CenterCropPad | Help wanted |
| Clip | :heavy_check_mark: |
| Col2Im | :white_check_mark: |
| Compress | :heavy_check_mark: |
| ConcatFromSequence | :heavy_check_mark: |
| Concat | :heavy_check_mark: |
| ConstantOfShape | :heavy_check_mark: |
| Constant | :heavy_check_mark: |
| Conv | :heavy_check_mark: |
| ConvInteger | :white_check_mark: |
| ConvTranspose | :heavy_check_mark: |
| Cosh | :heavy_check_mark: |
| Cos | :heavy_check_mark: |
| CumSum | :heavy_check_mark: |
| DeformConv | Help wanted |
| DepthToSpace | :heavy_check_mark: |
| Det | :heavy_check_mark: |
| DequantizeLinear | :heavy_check_mark: |
| DFT | Help wanted |
| Div | :heavy_check_mark: |
| Dropout | :heavy_check_mark: |
| DynamicQuantizeLinear | :heavy_check_mark: |
| Einsum | :heavy_check_mark: |
| Elu | :heavy_check_mark: |
| Equal | :heavy_check_mark: |
| Erf | :heavy_check_mark: |
| Expand | :heavy_check_mark: |
| Exp | :heavy_check_mark: |
| EyeLike | :heavy_check_mark: |
| Flatten | :heavy_check_mark: |
| Floor | :heavy_check_mark: |
| FusedConv | :heavy_check_mark: |
| GatherElements | :heavy_check_mark: |
| GatherND | :heavy_check_mark: |
| Gather | :heavy_check_mark: |
| Gelu | :heavy_check_mark: |
| Gemm | :heavy_check_mark: |
| GlobalAveragePool | :heavy_check_mark: |
| GlobalLpPool | :heavy_check_mark: |
| GlobalMaxPool | :heavy_check_mark: |
| GreaterOrEqual | :heavy_check_mark: |
| Greater | :heavy_check_mark: |
| GridSample | :white_check_mark: |
| GroupNormalization | Help wanted |
| GRU | :heavy_check_mark: |
| HammingWindow | :white_check_mark: |
| HannWindow | :white_check_mark: |
| Hardmax | :heavy_check_mark: |
| HardSigmoid | :heavy_check_mark: |
| HardSwish | :heavy_check_mark: |
| Identity | :heavy_check_mark: |
| If | :heavy_check_mark: |
| Input | :heavy_check_mark: |
| InstanceNormalization | :heavy_check_mark: |
| Inverse | :heavy_check_mark: |
| IsInf | :heavy_check_mark: |
| IsNaN | :heavy_check_mark: |
| LayerNormalization | :heavy_check_mark: |
| LeakyRelu | :heavy_check_mark: |
| LessOrEqual | :heavy_check_mark: |
| Less | :heavy_check_mark: |
| Log | :heavy_check_mark: |
| LogSoftmax | :heavy_check_mark: |
| Loop | Help wanted |
| LpNormalization | :heavy_check_mark: |
| LRN | :heavy_check_mark: |
| LSTM | :heavy_check_mark: |
| MatMul | :heavy_check_mark: |
| MatMulInteger | :heavy_check_mark: |
| MaxPool | :heavy_check_mark: |
| Max | :heavy_check_mark: |
| MaxRoiPool | Help wanted |
| MaxUnpool | :heavy_check_mark: |
| Mean | :heavy_check_mark: |
| MeanVarianceNormalization | :heavy_check_mark: |
| MelWeightMatrix | :heavy_check_mark: |
| Min | :heavy_check_mark: |
| Mish | :heavy_check_mark: |
| Mod | :heavy_check_mark: |
| Mul | :heavy_check_mark: |
| Multinomial | :heavy_check_mark: |
| Neg | :heavy_check_mark: |
| NonMaxSuppression | :heavy_check_mark: |
| NonZero | :heavy_check_mark: |
| Optional | Help wanted |
| OptionalGetElement | :heavy_check_mark: |
| OptionalHasElement | :heavy_check_mark: |
| Not | :heavy_check_mark: |
| OneHot | :heavy_check_mark: |
| Or | :heavy_check_mark: |
| Pad | :heavy_check_mark: |
| Pow | :heavy_check_mark: |
| PRelu | :heavy_check_mark: |
| QLinearAdd | :heavy_check_mark: |
| QLinearConcat | :heavy_check_mark: |
| QLinearConv | :heavy_check_mark: |
| QLinearLeakyRelu | :heavy_check_mark: |
| QLinearMatMul | :heavy_check_mark: |
| QLinearMul | :heavy_check_mark: |
| QLinearSigmoid | :heavy_check_mark: |
| QLinearSoftmax | :heavy_check_mark: |
| QuantizeLinear | :heavy_check_mark: |
| RandomNormalLike | :heavy_check_mark: |
| RandomNormal | :heavy_check_mark: |
| RandomUniformLike | :heavy_check_mark: |
| RandomUniform | :heavy_check_mark: |
| Range | :heavy_check_mark: |
| Reciprocal | :heavy_check_mark: |
| ReduceL1 | :heavy_check_mark: |
| ReduceL2 | :heavy_check_mark: |
| ReduceLogSum | :heavy_check_mark: |
| ReduceLogSumExp | :heavy_check_mark: |
| ReduceMax | :heavy_check_mark: |
| ReduceMean | :heavy_check_mark: |
| ReduceMin | :heavy_check_mark: |
| ReduceProd | :heavy_check_mark: |
| ReduceSum | :heavy_check_mark: |
| ReduceSumSquare | :heavy_check_mark: |
| Relu | :heavy_check_mark: |
| Reshape | :heavy_check_mark: |
| Resize | :heavy_check_mark: |
| ReverseSequence | :heavy_check_mark: |
| RNN | :heavy_check_mark: |
| RoiAlign | :heavy_check_mark: |
| Round | :heavy_check_mark: |
| ScaleAndTranslate | :heavy_check_mark: |
| Scatter | :heavy_check_mark: |
| ScatterElements | :heavy_check_mark: |
| ScatterND | :heavy_check_mark: |
| Scan | Help wanted |
| Selu | :heavy_check_mark: |
| SequenceAt | :heavy_check_mark: |
| SequenceConstruct | :heavy_check_mark: |
| SequenceEmpty | :heavy_check_mark: |
| SequenceErase | :heavy_check_mark: |
| SequenceInsert | :heavy_check_mark: |
| SequenceLength | :heavy_check_mark: |
| Shape | :heavy_check_mark: |
| Shrink | :heavy_check_mark: |
| Sigmoid | :heavy_check_mark: |
| Sign | :heavy_check_mark: |
| Sinh | :heavy_check_mark: |
| Sin | :heavy_check_mark: |
| Size | :heavy_check_mark: |
| Slice | :heavy_check_mark: |
| Softmax | :heavy_check_mark: |
| Softplus | :heavy_check_mark: |
| Softsign | :heavy_check_mark: |
| SpaceToDepth | :heavy_check_mark: |
| Split | :heavy_check_mark: |
| SplitToSequence | :heavy_check_mark: |
| Sqrt | :heavy_check_mark: |
| Squeeze | :heavy_check_mark: |
| STFT | :white_check_mark: |
| StringNormalizer | :white_check_mark: |
| Sub | :heavy_check_mark: |
| Sum | :heavy_check_mark: |
| Tanh | :heavy_check_mark: |
| Tan | :heavy_check_mark: |
| TfIdfVectorizer | Help wanted |
| ThresholdedRelu | :heavy_check_mark: |
| Tile | :heavy_check_mark: |
| TopK | :heavy_check_mark: |
| Transpose | :heavy_check_mark: |
| Trilu | :heavy_check_mark: |
| Unique | :heavy_check_mark: |
| Unsqueeze | :heavy_check_mark: |
| Upsample | :heavy_check_mark: |
| Where | :heavy_check_mark: |
| Xor | :heavy_check_mark: |
Demo
The video is played back approximately 50 times slower than actual speed.
Environment
- Linux / Windows
- onnx==1.16.1
- onnxruntime==1.18.1
- onnx-simplifier==0.4.33, or 0.4.30 if the following error occurs: `onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] (op_type:Slice, node name: /xxxx/Slice): [ShapeInferenceError] Inferred shape and existing shape differ in rank: (x) vs (y)`
- onnx_graphsurgeon
- simple_onnx_processing_tools
- tensorflow==2.17.0 (special bugs: #436)
- psutil==5.9.5
- ml_dtypes==0.3.2
- flatbuffers-compiler (Optional. Required only when using the `-coion` option; an executable named `flatc`.)
- flatbuffers>=23.5.26
```bash
# Custom flatc v23.5.26 binary for Ubuntu 20.04+
# https://github.com/PINTO0309/onnx2tf/issues/196
wget https://github.com/PINTO0309/onnx2tf/releases/download/1.16.31/flatc.tar.gz \
  && tar -zxvf flatc.tar.gz \
  && sudo chmod +x flatc \
  && sudo mv flatc /usr/bin/
```
Sample Usage
1. Install
Note:
1. If you are using TensorFlow v2.13.0 or earlier, use onnx2tf v1.17.5 or earlier; onnx2tf v1.17.6 or later will not work properly due to changes in TensorFlow's API.
**2. The latest onnx2tf implementation is based on Keras API 3 and will not work properly if you install TensorFlow v2.15.0 or earlier.**
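A minimal install sketch matching the Environment list above. The package set and pins are taken from that list as assumptions; the repository provides the authoritative, version-pinned instructions and Docker images.

```bash
# Install the host packages listed under Environment, then onnx2tf itself.
# Versions mirror the Environment section; adjust as that list evolves.
pip install -U onnx==1.16.1 onnxruntime==1.18.1 onnx-simplifier==0.4.33 \
  onnx_graphsurgeon simple_onnx_processing_tools psutil==5.9.5 \
  ml_dtypes==0.3.2 tensorflow==2.17.0 "flatbuffers>=23.5.26"
pip install -U onnx2tf
```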