<div style="font-size: 16px;">
<p>Machine Learning, Deployment, Embedded</p>
<h2 id="introduction">Introduction</h2>
<p>Thanks to libraries such as Pandas, scikit-learn, and Matplotlib, it is relatively easy to start exploring datasets and make some first predictions using simple Machine Learning (ML) algorithms in Python. However, to make these trained models useful in the real world, they need to be made available to serve predictions either on the web or on portable devices.</p>
<p>In two of my previous articles, I explained how to create and deploy a simple Machine Learning model using <a href="https://towardsdatascience.com/flask-and-heroku-for-online-machine-learning-deployment-425beb54a274">Heroku/Flask</a> and <a href="https://towardsdatascience.com/online-machine-learning-with-tensorflow-js-2ae232352901">Tensorflow.js</a>. Today, I will instead explain to you how to deploy Machine Learning models on Smartphones and Embedded Devices using TensorFlow Lite.</p>
<h2 id="tensorflow-lite">TensorFlow Lite</h2>
<p>TensorFlow Lite is a platform developed by Google for running Machine Learning models on mobile, IoT (Internet of Things), and embedded devices.</p>
<p>Using TensorFlow Lite, the whole workflow is executed on the device itself, which avoids having to send data back and forth to a server. Some of the main advantages of this approach are:</p>
<ul><li><p>Increased privacy, since the data never has to leave the device (this also makes it possible to apply techniques such as <a href="https://towardsdatascience.com/ai-differential-privacy-and-federated-learning-523146d46b85">Differential Privacy and Federated Learning</a>).</p></li><li><p>Reduced power consumption, because no internet connection is required.</p></li><li><p>Decreased latency, since there is no round trip to a server.</p></li></ul>
<p>TensorFlow Lite offers API support for several languages, such as Python, Java, Swift, and C++.</p>
<p>A typical workflow using TensorFlow Lite consists of:</p>
<ol><li><p>Creating and training a Machine Learning model in Python using TensorFlow.</p></li><li><p>Converting the model into a format suitable for TensorFlow Lite using the <a href="https://www.tensorflow.org/lite/convert/index">TensorFlow Lite converter</a>.</p></li><li><p>Deploying the model on a mobile device using the <a href="https://www.tensorflow.org/lite/guide/inference">TensorFlow Lite interpreter</a>.</p></li><li><p>Optimising the model's memory consumption and accuracy.</p></li></ol>
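<p>As a minimal sketch of steps 1&#8211;3 above, the following trains a toy Keras regression model, converts it, and runs it through the TensorFlow Lite interpreter. The model and data here are illustrative assumptions, not taken from the original article; on an actual phone or embedded board, step 3 would run in the on-device TensorFlow Lite runtime rather than in Python.</p>

```python
import numpy as np
import tensorflow as tf

# 1. Create and train a tiny model in Python using TensorFlow/Keras.
x = np.array([[0.0], [1.0], [2.0], [3.0]], dtype=np.float32)
y = 2.0 * x + 1.0  # learn the line y = 2x + 1
model = tf.keras.Sequential([tf.keras.Input(shape=(1,)),
                             tf.keras.layers.Dense(1)])
model.compile(optimizer="sgd", loss="mse")
model.fit(x, y, epochs=200, verbose=0)

# 2. Convert the trained model into the TensorFlow Lite flat-buffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# 3. Run inference with the TensorFlow Lite interpreter
#    (on a device, this binary would be loaded by the mobile runtime).
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.array([[10.0]], dtype=np.float32))
interpreter.invoke()
prediction = interpreter.get_tensor(out["index"])
print(prediction)  # approximately 2 * 10 + 1 = 21
```

<p>The converted <code>tflite_model</code> bytes are what you would ship inside an Android/iOS app or flash onto an embedded device.</p>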
<p>Several techniques have been developed over the last few years to reduce the memory consumption of Machine Learning models [1]. One example is Model Quantization.</p>
<p>Model Quantization aims to reduce: </p>
<ol><li>The precision representation of Artificial Neural Network weights (e.g. converting 34.3456657
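<p>To illustrate the idea behind weight quantization, here is a small pure-NumPy sketch of an 8-bit affine scheme: each float32 weight is mapped onto an integer in [0, 255], and only the integers plus a scale and offset are stored. The helper names and the exact scheme are illustrative assumptions; TensorFlow Lite's built-in quantization is more sophisticated.</p>

```python
import numpy as np

def quantize(weights, bits=8):
    """Affine quantization: map float32 weights onto uint8 levels."""
    lo, hi = float(weights.min()), float(weights.max())
    scale = (hi - lo) / (2 ** bits - 1)       # size of one integer step
    q = np.round((weights - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover approximate float32 weights from the stored integers."""
    return q.astype(np.float32) * scale + lo

w = np.array([34.3456657, -1.25, 0.5, 12.0], dtype=np.float32)
q, scale, lo = quantize(w)
w_hat = dequantize(q, scale, lo)

# uint8 storage is 4x smaller than float32, at the cost of a small
# per-weight error bounded by half an integer step (scale / 2).
print(w.nbytes, q.nbytes)
```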