Build an Image Classification App with TensorFlow Lite on Android – Part 1

This is the first post in the image classification with TensorFlow Lite on Android series. Check out Part 2 once you have completed this post. In this post, I explain what TensorFlow is and how to build a TensorFlow Lite model in simple steps.

What is TensorFlow?

TensorFlow is Google’s open-source machine learning framework for dataflow programming across a range of tasks. Nodes in the graph represent mathematical operations, while the graph edges represent the multi-dimensional data arrays communicated between them.

TensorFlow architecture

Tensors are just multidimensional arrays, an extension of 2-dimensional tables to data with higher dimensions. TensorFlow has many features that make it appropriate for deep learning, and its core open-source library helps you develop and train ML models.

Use cases of TensorFlow

TensorFlow use cases

What is Image Classification?

The intent of image classification is to categorize an entire image into one of several predefined classes. The predicted labels can then be used to organize, search, or act on collections of images.

TensorFlow image classification

So, without wasting any time let’s jump into TensorFlow Image Classification.

We can build a TensorFlow Lite model for Android in 5 steps:

  • Install TensorFlow 2.0 alpha on Colab
  • Dataset Preparation
  • Build model for transfer learning
  • Compile and Train the model
  • Convert the Keras model to TFLite format

1. Install TensorFlow 2.0 alpha on Colab

Google Colaboratory makes it really easy to set up Python notebooks in the cloud. With free access to a GPU for up to 12 hours at a time, Colab has quickly become my go-to platform for performing machine learning experiments.

Let’s install the TensorFlow 2.0 alpha release (GPU version) on a Colab notebook via pip.

!pip install tensorflow-gpu==2.0.0-alpha0

To verify it installed properly:

import tensorflow as tf
print(tf.__version__)
# Output: 2.0.0-alpha0

2. Dataset Preparation

We can load the images progressively using the Keras ImageDataGenerator class and flow_from_directory() API. This will be slower to execute but will run on more machines.

This API prefers data to be divided into separate train/ and test/ directories, with a subdirectory under each for every class.

Images are then organized under those class subdirectories.
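As a quick sanity check before training, you can list the class subdirectories that flow_from_directory() will discover. The helper below is hypothetical (not part of the original tutorial) and builds a toy train/test layout to demonstrate; replace root with your own dataset path:

```python
import os
import tempfile

# Hypothetical helper: list the class subdirectories that
# flow_from_directory() expects (one subdirectory per class)
def list_classes(data_dir):
    return sorted(
        d for d in os.listdir(data_dir)
        if os.path.isdir(os.path.join(data_dir, d))
    )

# Build a toy layout to demonstrate (replace with your own paths)
root = tempfile.mkdtemp()
for split in ('train', 'test'):
    for cls in ('cats', 'dogs', 'birds'):
        os.makedirs(os.path.join(root, split, cls))

print(list_classes(os.path.join(root, 'train')))
# ['birds', 'cats', 'dogs']
```

The sorted class names here also match the order used later when writing the labels file for the Android app.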

Now we need to upload the train and test files on Google Drive. There are other methods as well (link) of importing data to the Google Colab environment, however, we have chosen this for its ease of use. Now let’s see how this works.

Once you have uploaded the train and test files, the first step is to mount your drive folder into the Colab environment:

from google.colab import drive

# Mount Google Drive so the dataset is visible inside Colab
drive.mount('/content/drive')

train_dir = '/content/drive/My Drive/tensorflow/document/Train'
validation_dir = '/content/drive/My Drive/tensorflow/document/Test'

image_size = 128
batch_size = 32

# Stream images in batches from the class subdirectories
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator()
train_generator = train_datagen.flow_from_directory(
    directory=train_dir,
    target_size=(image_size, image_size),
    batch_size=batch_size)

validation_datagen = tf.keras.preprocessing.image.ImageDataGenerator()
validation_generator = validation_datagen.flow_from_directory(
    directory=validation_dir,
    target_size=(image_size, image_size),
    batch_size=batch_size)

3. Build model for transfer learning

Let’s use TensorFlow 2.0’s high-level Keras API to quickly build our image classification model. For transfer learning, we can use a pre-trained MobileNetV2 model as the feature detector.

MobileNetV2 is the second iteration of MobileNet released by Google with the goal of being smaller and more lightweight than models like ResNet and Inception for running on mobile devices.

Let’s load the MobileNetV2 model pre-trained on ImageNet without the top layer, freeze its weights, and add a new classification head.

IMG_SHAPE = (image_size, image_size, 3)

# Pre-trained MobileNetV2 feature extractor, without its classification head
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights='imagenet')
base_model.trainable = False  # freeze the pre-trained weights

model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation='softmax')  # 3 classes in this example
])
model.summary()

4. Compile and Train the model

Once we have defined the neural network architecture we will now compile it and train the model to check its performance on the validation set:

import numpy as np

model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

epochs = 25
# Number of batches per epoch (round up so no samples are dropped)
steps_per_epoch = int(np.ceil(train_generator.n / batch_size))
validation_steps = int(np.ceil(validation_generator.n / batch_size))

history = model.fit_generator(generator=train_generator,
                              steps_per_epoch=steps_per_epoch,
                              epochs=epochs,
                              validation_data=validation_generator,
                              validation_steps=validation_steps)

I encourage you to experiment further, for example:

  • Increase the number of epochs
  • Add more layers

This will help you to get an even better score on the validation set.
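For example, one way to add more layers is to insert an extra Dense layer and a Dropout layer between the pooling layer and the output. This is only a sketch of such a head (the layer sizes and dropout rate are hypothetical; tune them for your dataset). The 4x4x1280 input shape matches the MobileNetV2 feature maps produced for 128x128 input images:

```python
import tensorflow as tf

# A deeper classification head: extra hidden Dense layer plus Dropout
head = tf.keras.Sequential([
    tf.keras.layers.GlobalAveragePooling2D(input_shape=(4, 4, 1280)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),  # helps reduce overfitting with more epochs
    tf.keras.layers.Dense(3, activation='softmax')
])
```

You would use this head in place of the pooling-plus-Dense layers in the Sequential model above, keeping the frozen base_model as the first layer.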

5. Keras models to TFLite format

What is TensorFlow Lite?

TensorFlow Lite is the lightweight version of TensorFlow, designed specifically for mobile platforms and embedded devices. It provides machine learning solutions for mobile with low latency and small binary size.

TensorFlow Lite supports a set of core operators which have been tuned for mobile platforms. It also supports custom operations in models.

TensorFlow Lite defines a new file format based on FlatBuffers, an open-source cross-platform serialization library. It includes a new mobile interpreter designed to keep apps small and fast.

TensorFlow Lite consists of two main components:

The TensorFlow Lite interpreter, which runs specially optimized models on many different hardware types, including mobile phones, embedded Linux devices, and micro controllers.

The TensorFlow Lite converter, which converts TensorFlow models into an efficient form for use by the interpreter, and can introduce optimizations to improve binary size and performance.

The trained TensorFlow model on disk is converted into the TensorFlow Lite file format (.tflite) using the TensorFlow Lite converter. We can then use the converted file in the mobile application.

Exporting Keras models to TFLite format

saved_model_dir = '/content/drive/My Drive/tensorflow/sample 3/TFLite/assets'

# Export the trained Keras model as a SavedModel, then convert it to TFLite
tf.saved_model.save(model, saved_model_dir)
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()

with open('model17.tflite', 'wb') as f:
    f.write(tflite_model)

# Write the class labels (one per line) for use in the Android app
labels = '\n'.join(sorted(train_generator.class_indices.keys()))
with open('labels17.txt', 'w') as f:
    f.write(labels)

Now we have our own trained TensorFlow Lite model.

Next, check out the Android implementation in Part 2 of this series.


Conclusion

Thanks for reading. This is my first attempt at TensorFlow Lite image classification on Android. Please try building a model with TensorFlow Lite yourself and let me know your feedback in the comments.
