
Tensorflow-bin

Prebuilt Tensorflow Lite binaries for RaspberryPi, with support for XNNPACK and half-precision inference

Provides prebuilt Tensorflow Lite binaries for RaspberryPi with support for XNNPACK and half-precision inference. The binaries cover multiple operating systems and Python versions, and span Tensorflow v1 through v2. With the provided install scripts, users can quickly deploy and run Tensorflow models for efficient on-device inference.


Older versions of Wheel files can be obtained from the Previous version download script (GoogleDrive).

Prebuilt binaries with Tensorflow Lite enabled, for RaspberryPi. Since the 64-bit OS for RaspberryPi has been officially released, I have stopped building armhf Wheels. If you need armhf Wheels, please use TensorflowLite-bin instead.

  • Support for Flex Delegate.
  • Support for XNNPACK.
  • Support for XNNPACK half-precision inference, which doubles on-device inference performance.
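
These features are used through the standard tf.lite Python API once one of the wheels from this repository is installed. The following is only an illustrative sketch, not part of the original instructions: "model.tflite" is a placeholder path, XNNPACK is linked into these builds and is applied to float models by default in recent Tensorflow versions, and Flex (SELECT_TF_OPS) kernels are available because the full tensorflow package bundles the Flex delegate.

[Sample Code] minimal multi-threaded interpreter usage
import numpy as np
import tensorflow as tf

# "model.tflite" is a placeholder; point this at a real converted model.
interpreter = tf.lite.Interpreter(model_path="model.tflite", num_threads=4)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed dummy data of the expected shape/dtype and run a single inference.
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']).shape)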

Python API packages

| Device | OS | Distribution | Architecture | Python ver | Note |
|:--|:--|:--|:--|:--|:--|
| RaspberryPi3/4 | Raspbian/Debian | Stretch | armhf / armv7l | 3.5.3 | 32bit, glibc2.24 |
| RaspberryPi3/4 | Raspbian/Debian | Buster | armhf / armv7l | 3.7.3 / 2.7.16 | 32bit, glibc2.28 |
| RaspberryPi3/4 | RaspberryPiOS/Debian | Buster | aarch64 / armv8 | 3.7.3 | 64bit, glibc2.28 |
| RaspberryPi3/4 | Ubuntu 18.04 | Bionic | aarch64 / armv8 | 3.6.9 | 64bit, glibc2.27 |
| RaspberryPi3/4 | Ubuntu 20.04 | Focal | aarch64 / armv8 | 3.8.2 | 64bit, glibc2.31 |
| RaspberryPi3/4,PiZero | Ubuntu 21.04/Debian/RaspberryPiOS | Hirsute/Bullseye | aarch64 / armv8 | 3.9.x | 64bit, glibc2.33/glibc2.31 |
| RaspberryPi3/4 | Ubuntu 22.04 | Jammy | aarch64 / armv8 | 3.10.x | 64bit, glibc2.35 |
| RaspberryPi4/5,PiZero | Debian/RaspberryPiOS | Bookworm | aarch64 / armv8 | 3.11.x | 64bit, glibc2.36 |
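
To pick a wheel from the table above, the architecture, Python version, and glibc version of the device can be read directly from Python. This is a convenience sketch of my own, not part of the original instructions.

[Sample Code] check the values used to select a wheel
import platform
import sys

print("Architecture:", platform.machine())        # e.g. aarch64 or armv7l
print("Python      :", sys.version.split()[0])    # e.g. 3.11.2
print("glibc       :", platform.libc_ver()[1])    # e.g. 2.36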

Minimal configuration stand-alone installer for Tensorflow Lite. https://github.com/PINTO0309/TensorflowLite-bin.git

Binary type

Python 2.x / 3.x + Tensorflow v1.15.0

| .whl | 4Threads | Note |
|:--|:--|:--|
| tensorflow-1.15.0-cp35-cp35m-linux_armv7l.whl | | Raspbian/Debian Stretch, glibc 2.24 |
| tensorflow-1.15.0-cp27-cp27mu-linux_armv7l.whl | | Raspbian/Debian Buster, glibc 2.28 |
| tensorflow-1.15.0-cp37-cp37m-linux_armv7l.whl | | Raspbian/Debian Buster, glibc 2.28 |
| tensorflow-1.15.0-cp37-cp37m-linux_aarch64.whl | | Debian Buster, glibc 2.28 |

Python 3.x + Tensorflow v2

*FD = FlexDelegate, **XP = XNNPACK Float16 boost, ***MP = MediaPipe CustomOP, ****NP = Numpy

| .whl | FD | XP | MP | NP | Note |
|:--|:--|:--|:--|:--|:--|
| tensorflow-2.15.0.post1-cp39-none-linux_aarch64.whl | | | | 1.26 | Ubuntu 21.04 glibc 2.33, Debian Bullseye glibc 2.31 |
| tensorflow-2.15.0.post1-cp310-none-linux_aarch64.whl | | | | 1.26 | Ubuntu 22.04 glibc 2.35 |
| tensorflow-2.15.0.post1-cp311-none-linux_aarch64.whl | | | | 1.26 | Debian Bookworm glibc 2.36 |

【Appendix】 C Library + Tensorflow v1.x.x / v2.x.x

The behavior is unconfirmed because I do not have C language implementation skills. See the official tutorial on Tensorflow C binding generation.

Appx1. C-API build procedure: Native build procedure of the Tensorflow v2.0.0 C API for RaspberryPi / arm64 devices (armhf / aarch64)

Appx2. C-API Usage

$ wget https://raw.githubusercontent.com/PINTO0309/Tensorflow-bin/main/C-library/2.2.0-armhf/install-buster.sh
$ ./install-buster.sh

| Version | Binary | Note |
|:--|:--|:--|
| v1.15.0 | C-library/1.15.0-armhf/install-buster.sh | Raspbian/Debian Buster, glibc 2.28 |
| v1.15.0 | C-library/1.15.0-aarch64/install-buster.sh | Raspbian/Debian Buster, glibc 2.28 |
| v2.2.0 | C-library/2.2.0-armhf/install-buster.sh | Raspbian/Debian Buster, glibc 2.28 |
| v2.3.0 | C-library/2.3.0-aarch64/install-buster.sh | RaspberryPiOS/Raspbian/Debian Buster, glibc 2.28 |
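
Although the C library itself is unconfirmed as noted above, a quick sanity check from Python is possible with ctypes once one of the install scripts has been run. TF_Version() is part of the official Tensorflow C API; the sketch below assumes libtensorflow.so was installed to a directory on the dynamic-loader search path (for example /usr/local/lib followed by ldconfig).

[Sample Code] query the installed C library version via ctypes
import ctypes

# Assumes the install script placed libtensorflow.so where the loader can find it.
lib = ctypes.CDLL("libtensorflow.so")
lib.TF_Version.restype = ctypes.c_char_p  # const char* TF_Version(void)
print("libtensorflow version:", lib.TF_Version().decode())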

Usage

Example of Python 3.x + Tensorflow v1 series

$ sudo apt-get install -y \
    libhdf5-dev libc-ares-dev libeigen3-dev gcc gfortran \
    libgfortran5 libatlas3-base libatlas-base-dev \
    libopenblas-dev libopenblas-base libblas-dev \
    liblapack-dev cython3 openmpi-bin libopenmpi-dev \
    libatlas-base-dev python3-dev
$ sudo pip3 install pip --upgrade
$ sudo pip3 install keras_applications==1.0.8 --no-deps
$ sudo pip3 install keras_preprocessing==1.1.0 --no-deps
$ sudo pip3 install h5py==2.9.0
$ sudo pip3 install pybind11
$ pip3 install -U --user six wheel mock
$ sudo pip3 uninstall tensorflow
$ wget "https://raw.githubusercontent.com/PINTO0309/Tensorflow-bin/master/previous_versions/download_tensorflow-1.15.0-cp37-cp37m-linux_armv7l.sh"
$ ./download_tensorflow-1.15.0-cp37-cp37m-linux_armv7l.sh
$ sudo pip3 install tensorflow-1.15.0-cp37-cp37m-linux_armv7l.whl

Example of Python 3.x + Tensorflow v2 series

##### Bullseye, Ubuntu 22.04
sudo apt update && sudo apt upgrade -y && \
sudo apt install -y \
    libhdf5-dev \
    unzip \
    pkg-config \
    python3-pip \
    cmake \
    make \
    git \
    python-is-python3 \
    wget \
    patchelf && \
pip install -U pip && \
pip install numpy==1.26.2 && \
pip install keras_applications==1.0.8 --no-deps && \
pip install keras_preprocessing==1.1.2 --no-deps && \
pip install h5py==3.6.0 && \
pip install pybind11==2.9.2 && \
pip install packaging && \
pip install protobuf==3.20.3 && \
pip install six wheel mock gdown
##### Bookworm (--break-system-packages is required because Bookworm's system Python is externally managed, PEP 668)
sudo apt update && sudo apt upgrade -y && \
sudo apt install -y \
    libhdf5-dev \
    unzip \
    pkg-config \
    python3-pip \
    cmake \
    make \
    git \
    python-is-python3 \
    wget \
    patchelf && \
pip install -U pip --break-system-packages && \
pip install numpy==1.26.2 --break-system-packages && \
pip install keras_applications==1.0.8 --no-deps --break-system-packages && \
pip install keras_preprocessing==1.1.2 --no-deps --break-system-packages && \
pip install h5py==3.10.0 --break-system-packages && \
pip install pybind11==2.9.2 --break-system-packages && \
pip install packaging --break-system-packages && \
pip install protobuf==3.20.3 --break-system-packages && \
pip install six wheel mock gdown --break-system-packages
pip uninstall tensorflow

TFVER=2.15.0.post1

PYVER=39
or
PYVER=310
or
PYVER=311

ARCH=`python -c 'import platform; print(platform.machine())'`
echo CPU ARCH: ${ARCH}

pip install \
--no-cache-dir \
https://github.com/PINTO0309/Tensorflow-bin/releases/download/v${TFVER}/tensorflow-${TFVER}-cp${PYVER}-none-linux_${ARCH}.whl

Operation check

Example of Python 3.x series

$ python -c 'import tensorflow as tf;print(tf.__version__)'
2.15.0.post1
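
Beyond printing the version, a slightly fuller smoke test converts a one-layer Keras model to TFLite in memory and runs it through tf.lite.Interpreter with four threads. This is a sketch of my own, not part of the original check.

[Sample Code] TFLite conversion and inference smoke test
import numpy as np
import tensorflow as tf

# Build a trivial Keras model and convert it to a TFLite flatbuffer in memory.
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Run one inference with the multi-threaded interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_model, num_threads=4)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp['index'], np.random.rand(1, 8).astype(np.float32))
interpreter.invoke()
print("output shape:", interpreter.get_tensor(out['index']).shape)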

Sample of MultiThread x4

  • Preparation of test environment
$ cd ~;mkdir test
$ curl https://raw.githubusercontent.com/tensorflow/tensorflow/master/tensorflow/lite/examples/label_image/testdata/grace_hopper.bmp > ~/test/grace_hopper.bmp
$ curl https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_1.0_224_frozen.tgz | tar xzv -C ~/test mobilenet_v1_1.0_224/labels.txt
$ mv ~/test/mobilenet_v1_1.0_224/labels.txt ~/test/
$ curl http://download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224_quant.tgz | tar xzv -C ~/test
$ cp tensorflow/tensorflow/contrib/lite/examples/python/label_image.py ~/test
[Sample Code] label_image.py
import argparse
import numpy as np
import time

from PIL import Image

# Tensorflow -v1.12.0
#from tensorflow.contrib.lite.python import interpreter as interpreter_wrapper

# Tensorflow v1.13.0+, v2.x.x
from tensorflow.lite.python import interpreter as interpreter_wrapper

def load_labels(filename):
  my_labels = []
  input_file = open(filename, 'r')
  for l in input_file:
    my_labels.append(l.strip())
  return my_labels
if __name__ == "__main__":
  floating_model = False
  parser = argparse.ArgumentParser()
  parser.add_argument("-i", "--image", default="/tmp/grace_hopper.bmp", \
    help="image to be classified")
  parser.add_argument("-m", "--model_file", \
    default="/tmp/mobilenet_v1_1.0_224_quant.tflite", \
    help=".tflite model to be executed")
  parser.add_argument("-l", "--label_file", default="/tmp/labels.txt", \
    help="name of file containing labels")
  parser.add_argument("--input_mean", default=127.5, help="input_mean")
  parser.add_argument("--input_std", default=127.5, \
    help="input standard deviation")
  parser.add_argument("--num_threads", default=1, help="number of threads")
  args = parser.parse_args()

  ### Tensorflow -v2.2.0
  #interpreter = interpreter_wrapper.Interpreter(model_path=args.model_file)
  ### Tensorflow v2.3.0+
  interpreter = interpreter_wrapper.Interpreter(model_path=args.model_file, num_threads=int(args.num_threads))

  interpreter.allocate_tensors()
  input_details = interpreter.get_input_details()
  output_details = interpreter.get_output_details()
  # check the type of the input tensor
  if input_details[0]['dtype'] == np.float32:
    floating_model = True
  # NxHxWxC, H:1, W:2
  height = input_details[0]['shape'][1]
  width = input_details[0]['shape'][2]
  img = Image.open(args.image)
  img = img.resize((width, height))
  # add N dim
  input_data = np.expand_dims(img, axis=0)
  if floating_model:
    input_data = (np.float32(input_data) - args.input_mean) / args.input_std

  ### Tensorflow -v2.2.0
  #interpreter.set_num_threads(int(args.num_threads))
  interpreter.set_tensor(input_details[0]['index'], input_data)

  start_time = time.time()
  interpreter.invoke()
  stop_time = time.time()

  output_data = interpreter.get_tensor(output_details[0]['index'])
  results = np.squeeze(output_data)
  top_k = results.argsort()[-5:][::-1]
  labels = load_labels(args.label_file)
  for i in top_k:
    if floating_model:
      print('{0:08.6f}'.format(float(results[i]))+":", labels[i])
    else:
      print('{0:08.6f}'.format(float(results[i]/255.0))+":", labels[i])

  print("time: ", stop_time - start_time)

  • Run test
$ cd ~/test
$ python3 label_image.py \
--num_threads 1 \
--image grace_hopper.bmp \
--model_file mobilenet_v1_1.0_224_quant.tflite \
--label_file labels.txt

0.415686: 653:military uniform
0.352941: 907:Windsor tie
0.058824: 668:mortarboard
0.035294: 458:bow tie, bow-tie, bowtie
0.035294: 835:suit, suit of clothes
time:  0.4152982234954834
$ cd ~/test
$ python3 label_image.py \
--num_threads 4 \
--image grace_hopper.bmp \
--model_file mobilenet_v1_1.0_224_quant.tflite \
--label_file labels.txt

0.415686: 653:military uniform
0.352941: 907:Windsor tie
0.058824: 668:mortarboard
0.035294: 458:bow tie, bow-tie, bowtie
0.035294: 835:suit, suit of clothes
time:  0.1647195816040039

Sample of MultiThread x4 - Real-time inference with a USB camera

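As a rough outline of this USB camera demo, the sketch below is my own illustration, assuming OpenCV is installed and that mobilenet_v1_1.0_224_quant.tflite and labels.txt from the test preparation above are in the current directory.

[Sample Code] usbcam_label_image.py (illustrative sketch)
import cv2
import numpy as np
import tensorflow as tf

# 4-thread interpreter with the quantized MobileNet model from the test setup.
interpreter = tf.lite.Interpreter(
    model_path="mobilenet_v1_1.0_224_quant.tflite", num_threads=4)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
height, width = inp['shape'][1], inp['shape'][2]
labels = [l.strip() for l in open("labels.txt")]

cap = cv2.VideoCapture(0)  # first USB camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize to the model input size and convert BGR (OpenCV) to RGB.
    rgb = cv2.cvtColor(cv2.resize(frame, (width, height)), cv2.COLOR_BGR2RGB)
    interpreter.set_tensor(inp['index'], np.expand_dims(rgb, axis=0))
    interpreter.invoke()
    scores = np.squeeze(interpreter.get_tensor(out['index']))
    top = int(scores.argmax())
    cv2.putText(frame, labels[top], (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("tflite", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()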

Build Parameter

============================================================

Tensorflow v1.11.0

============================================================

Python2.x - Bazel 0.17.2

$ sudo apt-get install -y openmpi-bin libopenmpi-dev libhdf5-dev

$ cd ~
$ git clone https://github.com/tensorflow/tensorflow.git
$ cd tensorflow
$ git checkout -b v1.11.0
$ ./configure

Please specify the location of python. [Default is /usr/bin/python]:


Found possible Python library paths:
  /usr/local/lib/python2.7/dist-packages
  /usr/local/lib
  /home/pi/tensorflow/tensorflow/contrib/lite/tools/make/gen/rpi_armv7l/lib
  /usr/lib/python2.7/dist-packages
  /opt/movidius/caffe/python
Please input the desired Python library path to use.  Default is [/usr/local/lib/python2.7/dist-packages]

Do you wish to build TensorFlow with jemalloc as malloc support? [Y/n]: y
No jemalloc as malloc support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Google Cloud Platform support? [Y/n]: n
No Google Cloud Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Hadoop File System support? [Y/n]: n
No Hadoop File System support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Amazon AWS Platform support? [Y/n]: n
No Amazon AWS Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Apache Kafka Platform support? [Y/n]: n
No Apache Kafka Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with XLA JIT support? [y/N]: n
No XLA JIT support will be enabled for TensorFlow.

Do you wish to build TensorFlow with GDR support? [y/N]: n
No GDR support will be enabled for TensorFlow.

Do you wish to build TensorFlow with VERBS support? [y/N]: n
No VERBS support will be enabled for TensorFlow.

Do you wish to build TensorFlow with nGraph support? [y/N]: n
No nGraph support will be enabled for TensorFlow.

Do you wish to