Deep Learning with TensorFlow

Learn how to train your first TensorFlow model and how to use it on Android

What is TensorFlow?

TensorFlow is a multipurpose machine learning framework. TensorFlow can be used anywhere from training huge models across clusters in the cloud to running models locally on an embedded system like your phone.

TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google’s Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well.

Let’s see what we can do with it. We will start by running an existing demo.

Running our first TensorFlow app

Google’s open source TensorFlow project includes a wonderfully documented demo Android app (GitHub). The quickest way to get started is to download and install tensorflow_demo.apk from their last successful nightly build.

The demo app is really three apps (the README has more info), but we’re going to focus on the “TF Classify” one here.

TF Classify opens your camera and classifies whatever objects you show it. The really mind-blowing thing is that this works totally offline — you don’t need an internet connection.

It prints out the object classification along with a confidence level (1.000 for perfect confidence, 0.000 for zero confidence). When your object fills most of the image, it often does pretty well.
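Those 0.000–1.000 confidence values come from the network's final softmax layer. As a minimal sketch (the demo's actual post-processing may differ), this is how raw network outputs become confidences that sum to 1:

```python
import math

def softmax(logits):
    """Convert raw network outputs into confidences between 0 and 1 that sum to 1."""
    m = max(logits)                                   # subtract max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [v / total for v in exps]

# Hypothetical raw scores for three candidate labels
confidences = softmax([4.0, 1.0, 0.5])
print(confidences[0])  # highest score gets the highest confidence
```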

Check the following examples:

The “TF Classify” Android demo app uses the Google Inception model. According to the docs, Inception v3: “achieves 21.2% top-1 and 5.6% top-5 error for single frame evaluation”

That means it should be correct about 79% of the time, and the correct classification should appear in its top 5 choices about 94% of the time.
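The arithmetic behind those numbers is just accuracy = 1 − error, and "top-5" means the correct label is anywhere in the model's five highest-confidence guesses. A quick sketch (the label names in the example ranking are made up for illustration):

```python
def top_k_correct(ranked_labels, true_label, k):
    """True if the correct label appears in the model's k highest-confidence guesses."""
    return true_label in ranked_labels[:k]

# Error rate -> accuracy: 21.2% top-1 error means ~78.8% top-1 accuracy,
# 5.6% top-5 error means ~94.4% top-5 accuracy.
top1_accuracy = 1 - 0.212
top5_accuracy = 1 - 0.056

# Illustrative ranking where the correct answer is 3rd: top-1 misses, top-5 hits
ranked = ["tabby cat", "tiger cat", "coffee mug", "goblet", "cup"]
print(top_k_correct(ranked, "coffee mug", 1))  # False
print(top_k_correct(ranked, "coffee mug", 5))  # True
```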

But what objects can it identify, and what images were used to train it? Let's dig further into this:

The TensorFlow image recognition tutorial tells us the following:

“Inception-v3 is trained for the ImageNet Large Visual Recognition Challenge using the data from 2012. This is a standard task in computer vision, where models try to classify entire images into 1000 classes”

So if we follow the link above we can see what classes the model can recognize. You can see that "coffee mug" is present in that list. If you click on it, you will see all the photos that were used to train the model to recognize a coffee mug.

For our next step let’s try to retrain this model ourselves.

Training our own model  

If we launch the original Android "TF Classify" demo app and show it some pictures of a rose, we see the following:

As you can see, it says this is "velvet", which is obviously incorrect. That is because the original model was not trained on any of these flower species.

So let’s try to train the model to recognize some flowers.

This part of the article is heavily based on the excellent TensorFlow for Poets codelab that I attended at Google Developer Days in Krakow, so if you get stuck you can check it there.

The first step is to install TensorFlow (for example, with pip install tensorflow).

The next step is to clone the git repo used in the codelab:

git clone
cd tensorflow-for-poets-2

Before you start any training, you'll need a set of images to teach the model about the new classes you want to recognize.

The following command downloads an archive of creative-commons licensed flower photos and extracts it into tf_files:

curl \
    | tar xz -C tf_files

You should now have a copy of the flower photos in your working directory. Confirm the contents of your working directory by issuing the following command:

ls tf_files/flower_photos

The preceding command should list the five flower directories used for training (daisy, dandelion, roses, sunflowers, tulips), along with a LICENSE.txt file.
The retrain script can retrain either the Inception V3 model or a MobileNet. In this article, we will use a MobileNet. The principal difference is that Inception V3 is optimized for accuracy, while the MobileNets are optimized to be small and efficient, at the cost of some accuracy.

Inception V3 has a first-choice accuracy of 78% on ImageNet, but the model is 85 MB and requires many times more processing than even the largest MobileNet configuration, which achieves 70.5% accuracy with just a 19 MB download.

Pick the following configuration options:

  • Input image resolution: 128, 160, 192, or 224 px. Unsurprisingly, feeding in a higher resolution image takes more processing time, but results in better classification accuracy. We recommend 224 as an initial setting.
  • The relative size of the model as a fraction of the largest MobileNet: 1.0, 0.75, 0.50, or 0.25. We recommend 0.5 as an initial setting. The smaller models run significantly faster, at a cost of accuracy.

With the recommended settings, retraining typically takes only a couple of minutes on a laptop. You will pass the settings in as shell variables. Set those shell variables as follows:
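Assuming the recommended settings above (224 px input, the 0.50 model size), the variables would look like this, following the codelab's mobilenet_&lt;size&gt;_&lt;resolution&gt; naming convention:

```shell
# Recommended starting point: 224 px input resolution, 0.50 relative model size
IMAGE_SIZE=224
ARCHITECTURE="mobilenet_0.50_${IMAGE_SIZE}"
```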


Before starting the training, launch TensorBoard in the background. TensorBoard is a monitoring and inspection tool included with TensorFlow. You will use it to monitor the training progress.

tensorboard --logdir tf_files/training_summaries &

The retrain script is part of the tensorflow repo, but it is not installed as part of the pip package. So for simplicity, I’ve included it in the codelab repository.

Imagenet models are networks with millions of parameters that can differentiate a large number of classes. We’re only training the final layer of that network, so training will end in a reasonable amount of time.
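The idea of retraining only the final layer can be sketched in plain NumPy: the "bottleneck" features produced by the frozen network stay fixed, and only a new softmax layer's weights are updated by gradient descent. All shapes and data below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n, features, classes = 64, 16, 5                    # e.g. 5 flower classes
bottlenecks = rng.standard_normal((n, features))    # frozen-network outputs, never updated
labels = rng.integers(0, classes, n)
one_hot = np.eye(classes)[labels]

W = np.zeros((features, classes))                   # the only trainable parameters

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def loss(W):
    p = softmax(bottlenecks @ W)
    return -np.log(p[np.arange(n), labels]).mean()

initial = loss(W)
for _ in range(200):                                # plain gradient descent on the final layer
    grad = bottlenecks.T @ (softmax(bottlenecks @ W) - one_hot) / n
    W -= 0.1 * grad

assert loss(W) < initial                            # the new final layer is learning
```

The retrain script does the same thing at scale: it caches each image's bottleneck vector once, then trains only the new classification layer on top of them.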

Start your retraining with one big command (note the --summaries_dir option, which sends training progress reports to the directory tensorboard is monitoring):

python -m scripts.retrain \
  --bottleneck_dir=tf_files/bottlenecks \
  --how_many_training_steps=500 \
  --model_dir=tf_files/models/ \
  --summaries_dir=tf_files/training_summaries/"${ARCHITECTURE}" \
  --output_graph=tf_files/retrained_graph.pb \
  --output_labels=tf_files/retrained_labels.txt \
  --architecture="${ARCHITECTURE}" \
  --image_dir=tf_files/flower_photos

This script downloads the pre-trained model, adds a new final layer, and trains that layer on the flower photos you've downloaded. This step took 22 minutes on my MacBook Pro (2.3 GHz, 16 GB RAM).

Classifying an image

We will be working in that same git directory. Ensure that it is your current working directory:

cd tensorflow-for-poets-2

This directory should contain three subdirectories:

  • The android/ directory contains all the files necessary to build a simple Android app that classifies images as it reads them from the camera. The only files missing from the app are those defining the image classification model, which you will create in this tutorial.
  • The scripts/ directory contains the python scripts you’ll be using throughout the tutorial. These include scripts to prepare, test and evaluate the model.
  • The tf_files/ directory contains the files you generated in the first part. At a minimum you should have the following files containing the retrained TensorFlow model:
ls tf_files/

retrained_graph.pb  retrained_labels.txt

If something failed in the first part and you don't have the files from the training step, do the following:

Clone this Google codelab:

git clone
cd tensorflow-for-poets-2
git checkout end_of_first_codelab

The clone's directory is where you will be working next.

The repo contains three directories: android/, scripts/, and tf_files/.

Verify that the retrained model works:

Next, verify that the model produces sane results before you start modifying it.

The scripts/ directory contains a simple command-line script, label_image.py, to test the network. Now we'll test it on this picture of some daisies:

Now test the model.

python -m scripts.label_image \
  --graph=tf_files/retrained_graph.pb

The script will print the probability the model has assigned to each flower type. Something like this:

daisy 0.94237
roses 0.0487475
sunflowers 0.00510139
dandelion 0.00343337
tulips 0.00034759
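Turning that printed output into a single prediction is just a sort over the scores. A minimal sketch, using the numbers copied from the sample output above:

```python
# Scores as printed by label_image above
scores = {
    "daisy": 0.94237,
    "roses": 0.0487475,
    "sunflowers": 0.00510139,
    "dandelion": 0.00343337,
    "tulips": 0.00034759,
}

# Rank labels by descending confidence; the first entry is the prediction
ranked = sorted(scores, key=scores.get, reverse=True)
best = ranked[0]
print(best)  # daisy
```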

Add your model to the project

The demo project is configured to look for files named graph.pb and labels.txt in the android/assets directory. Copy the retrained graph and labels you just created into the expected locations:

cp tf_files/retrained_graph.pb android/assets/graph.pb
cp tf_files/retrained_labels.txt android/assets/labels.txt

Open the project with Android Studio by taking the following steps:

  1. Open Android Studio.
  2. After it loads, select "Open an existing Android Studio project".
  3. In the file selector, choose tensorflow-for-poets-2/android from your working directory.
  4. The first time you open the project, you will get a "Gradle Sync" popup asking about using the gradle wrapper. Click "OK".

Change the output_name

The app is currently set up to run the baseline MobileNet, but the output node of our retrained model has a different name. Update the OUTPUT_NAME variable:

private static final String INPUT_NAME = "input";
private static final String OUTPUT_NAME = "final_result";

Optimize the model for mobile

Mobile devices have significant limitations, so any pre-processing that can be done to reduce an app’s footprint is worth considering.

One way the TensorFlow library is kept small, for mobile, is by only supporting the subset of operations that are commonly used during inference. This is a reasonable approach, as training is rarely conducted on mobile platforms.

Optimize for inference

To avoid problems caused by unsupported training ops, the TensorFlow installation includes a tool, optimize_for_inference, that removes all nodes that aren't needed for a given set of inputs and outputs.

The script also does a few other optimizations that help speed up the model, such as merging explicit batch normalization operations into the convolutional weights to reduce the number of calculations. This can give a 30% speed up, depending on the input model. Here’s how you run the script:

python -m tensorflow.python.tools.optimize_for_inference \
  --input=tf_files/retrained_graph.pb \
  --output=tf_files/optimized_graph.pb \
  --input_names="input" \
  --output_names="final_result"

Running this script creates a new file at tf_files/optimized_graph.pb.
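The batch-norm folding mentioned above can be verified numerically. A minimal NumPy sketch using per-channel 1×1 weights as a stand-in for a real convolution (all values here are made up; real conv layers fold per output channel the same way):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4))          # 8 "pixels", 4 channels
w = rng.standard_normal(4)               # per-channel 1x1 conv weights
gamma, beta = rng.standard_normal(4), rng.standard_normal(4)
mean, var, eps = rng.standard_normal(4), rng.random(4) + 0.5, 1e-3

# Original graph: convolution followed by an explicit batch-norm op
y_bn = gamma * ((x * w - mean) / np.sqrt(var + eps)) + beta

# Folded graph: batch-norm constants merged into the weights and a bias,
# so the batch-norm op (and its calculations) disappear at inference time
scale = gamma / np.sqrt(var + eps)
w_folded = w * scale
b_folded = beta - mean * scale
y_folded = x * w_folded + b_folded

assert np.allclose(y_bn, y_folded)       # identical outputs, fewer ops
```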