Deep Learning for Humans — Possible With TensorFlow and Keras Ecosystem
A tour of the TensorFlow and Keras ecosystem: a set of tools and resources that makes it possible to build and deploy end-to-end machine learning systems!
Building deep learning models from scratch is not only time-consuming, it is also hard. We need a full ecosystem that helps us build models easily and deploy them.
TensorFlow, along with its sister libraries, allows us to develop Machine Learning, Computer Vision, and Natural Language Processing (NLP) models with ease.
In this write-up, I want to take you through the TensorFlow ecosystem. Let’s start with TensorFlow.
TensorFlow for easy deep learning modeling
TensorFlow is an end-to-end open-source platform for building deep learning models.
It has gained a lot of popularity in job postings and across the research community. Keras author @fchollet recently shared an overview of key adoption metrics for deep learning frameworks in 2020. On the job-posting side, TensorFlow leads most deep learning frameworks.
Image source: twitter.com/fchollet, Keras Author
Not only does it power Google apps such as YouTube (recommendations and video search) and Google Photos (face recognition, landmark detection, etc.), it is also adopted by other major tech companies such as Twitter, LinkedIn, DeepMind, Dropbox, PayPal, Lenovo, AMD, Airbus, and Airbnb. There are also case studies that show how some of these companies use TensorFlow.
The two major advantages of TensorFlow are easy model building and the different options for deploying models: on the web, on edge devices such as microcontrollers, and on mobile devices.
Thanks to its flexibility, there are three APIs for building models: the Sequential API, the Functional API, and Model Subclassing. The Sequential API lets you stack neural network layers from the input to the output. It is not suitable for models with multiple inputs or outputs, nor is it the right choice when you have residual connections or a multi-branch model, which is why it can't be used for object detection. Below is an example of a simple sequential model.
from tensorflow import keras
from tensorflow.keras import layers
model = keras.Sequential([
    layers.Dense(16, activation='relu'),
    layers.Dense(32, activation='relu'),
    layers.Dense(1, activation='sigmoid')
])
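To make the snippet concrete, here is a minimal sketch of compiling and training it; the random data, input width of 8 features, and training settings below are assumptions added for illustration and are not part of the original example.
import numpy as np

# Illustrative random data: 100 samples with 8 features and binary labels (assumed shapes).
x = np.random.random((100, 8)).astype("float32")
y = np.random.randint(0, 2, size=(100, 1))

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x, y, epochs=2, batch_size=16)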
For complex models such as object detection or image segmentation, where you have multiple inputs and outputs (for example, images and bounding boxes in object detection), the Functional API is the proper choice!
from tensorflow import keras
# The input shape here is an assumption added so the snippet runs as written.
inputs = keras.Input(shape=(8,))
x = keras.layers.Dense(16, activation='relu')(inputs)
x = keras.layers.Dense(32, activation='relu')(x)
output = keras.layers.Dense(1, activation='sigmoid')(x)
model = keras.Model(inputs, output)
You can't build an object detection model with the Sequential API, but you can with the Functional API! Source.
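To illustrate the kind of model the Sequential API can't express, here is a minimal sketch of a two-input, two-output Functional model; the input shapes, layer sizes, and names are assumptions chosen for illustration only.
from tensorflow import keras
from tensorflow.keras import layers

# Two inputs: an image-like tensor and a small metadata vector (illustrative shapes).
image_input = keras.Input(shape=(64, 64, 3), name='image')
meta_input = keras.Input(shape=(10,), name='metadata')

x = layers.Conv2D(16, 3, activation='relu')(image_input)
x = layers.GlobalAveragePooling2D()(x)
x = layers.concatenate([x, meta_input])

# Two outputs: a class prediction and an auxiliary score.
class_output = layers.Dense(5, activation='softmax', name='class')(x)
score_output = layers.Dense(1, name='score')(x)

model = keras.Model(inputs=[image_input, meta_input],
                    outputs=[class_output, score_output])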
Model Subclassing gives you the ability to write your own custom models and layers. Unless you want full control over every step of model building, the Sequential and Functional APIs can do everything you want to do with TensorFlow!
Here is an example of implementing a custom model, taken from the TensorFlow documentation for illustration purposes. First, let's make a simple model with the Functional API, then build the same model as a custom subclass.
from tensorflow import keras
from tensorflow.keras import layers
inputs = keras.Input(shape=(32,))
x = layers.Dense(64, activation='relu')(inputs)
outputs = layers.Dense(10)(x)
mlp = keras.Model(inputs, outputs)
This is the same as (but less complex than) the following subclassed model:
class MLP(keras.Model):
    def __init__(self, **kwargs):
        super(MLP, self).__init__(**kwargs)
        self.dense_1 = layers.Dense(64, activation='relu')
        self.dense_2 = layers.Dense(10)

    def call(self, inputs):
        x = self.dense_1(inputs)
        return self.dense_2(x)

# Instantiate the model.
mlp = MLP()
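As a quick, illustrative sanity check (the random batch below is an assumption, not part of the documentation example), the subclassed model can be called like any other Keras model:
import numpy as np

# Run a random batch of 4 samples with 32 features through the subclassed model.
x = np.random.random((4, 32)).astype("float32")
y = mlp(x)
print(y.shape)  # (4, 10)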
TensorFlow Models Deployment
This section covers the tools used to deploy models built with TensorFlow across different devices.
TensorFlow Lite for Mobile and Edge Devices
TensorFlow Lite is a library for deploying TF models on mobile devices (iOS and Android) and embedded devices such as microcontrollers. The workflow is straightforward: take an already built model (or build a new one), convert and compress it with the TF Lite Converter, and deploy it on a device. There is full documentation on this, which you can read here.
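As a rough sketch of that workflow, here is how a Keras model can be converted with the TF Lite Converter; the optimization flag and the output file name are illustrative choices, not requirements.
import tensorflow as tf

# 'model' stands for any tf.keras model you have already built or loaded.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional: shrink the model
tflite_model = converter.convert()

# Save the converted model so it can be bundled with a mobile or embedded app.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)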
At the time of writing this article, the following models are available for iOS, Android, and embedded devices:
Examples of Computer Vision models available in TF Lite:
Image classification
Object detection
Pose Estimation
Gesture recognition
Segmentation
Digit Classifier
Style Transfer
Super resolution image generation
Examples of Natural Language Processing (NLP) models:
Speech Recognition
Text Classification
Natural Language Answering
Smart reply
Sound classification
Other models:
On device recommendation
The TensorFlow team adds more models regularly, so check back often if you are interested in using some of them in your applications; it keeps getting easier!
TensorFlow for Javascript
TensorFlow.js allows us to develop models in JavaScript and deploy them in a web browser. There is a whole page on this if you want to learn more about tensorflow.js.
If there is one thing I like about TensorFlow and JavaScript, it is the demos built by the TF community: they are very exciting and make deep learning fun! Below is me experimenting with Move Mirror. You can check out more available demos here.
TensorFlow Extended (TFX) for Model Deployment
When you are ready to scale your TensorFlow models, TFX gives you a fully supported framework for creating end-to-end machine learning pipelines. The illustration below shows what happens inside TFX; it was taken from this video, ML engineering for production ML deployments with TFX (TensorFlow Fall 2020 Updates).
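To give a flavor of what a pipeline looks like in code, here is a minimal, hedged sketch in the style of the TFX tutorials; the paths, the pipeline name, and the choice of components are placeholders, and a real pipeline would add components such as a Trainer, Evaluator, and Pusher.
from tfx import v1 as tfx

# Placeholder paths (assumptions for illustration).
DATA_ROOT = "data/"           # directory containing CSV files
PIPELINE_ROOT = "pipelines/"  # where TFX stores pipeline artifacts
METADATA_PATH = "metadata/metadata.db"

# Ingest raw CSV data and compute statistics over it.
example_gen = tfx.components.CsvExampleGen(input_base=DATA_ROOT)
statistics_gen = tfx.components.StatisticsGen(examples=example_gen.outputs['examples'])

pipeline = tfx.dsl.Pipeline(
    pipeline_name='demo_pipeline',
    pipeline_root=PIPELINE_ROOT,
    metadata_connection_config=tfx.orchestration.metadata.sqlite_metadata_connection_config(METADATA_PATH),
    components=[example_gen, statistics_gen],
)

# Run the pipeline locally; production deployments typically use an orchestrator
# such as Kubeflow Pipelines or Apache Airflow instead.
tfx.orchestration.LocalDagRunner().run(pipeline)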
You can learn more about TFX from its official documentation.
Swift for TensorFlow (Beta)
This project is still in its early stages, but it will allow ML engineers and researchers to build highly scalable machine learning systems by leveraging the advantages of the Swift language, such as its powerful compiler and design. I am very interested in Swift for TensorFlow!
For more about this beta project, you can learn more from its repository.
Models and Datasets
TensorFlow Hub
TF Hub is a repository of trained models that are ready to be deployed. There are four domains in which you can search for models: Image, Text, Video, and Sound.
Since these models are already trained, you can use them directly on mobile devices or on the web.
Examples of ready-to-use models you can find in TF Hub are image classification, image segmentation, object detection, text embeddings, and speech recognition. For more about available models, check the TF Hub site.
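As a brief sketch, a pre-trained TF Hub model can be dropped into a Keras model as a layer; the module URL below is just one example handle from tfhub.dev, so swap in whichever model fits your task.
import tensorflow as tf
import tensorflow_hub as hub

# A pre-trained text-embedding layer pulled from TF Hub (example handle).
embedding = hub.KerasLayer("https://tfhub.dev/google/nnlm-en-dim50/2",
                           input_shape=[], dtype=tf.string, trainable=False)

model = tf.keras.Sequential([
    embedding,
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])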
Model Garden
Model Garden is an official GitHub repository of TensorFlow models built with high-level APIs. The available models cover Computer Vision, Natural Language Processing, and recommendation systems.
The Garden is updated regularly with state-of-the-art models, so if you plan to use it, keep an eye on the repository!
TensorFlow Datasets
Datasets are the primary raw material of machine learning, and good, well-processed datasets are a key ingredient of any machine learning project.
TensorFlow Datasets is a collection of datasets across different categories such as images and text. These datasets are ready to use, so it is straightforward to load any dataset as long as it is available. Below is an example of using TFDS to load the MNIST data.
# Loading MNIST from TF Datasets
import tensorflow_datasets as tfds
mnist_data = tfds.load("mnist")
mnist_train, mnist_test = mnist_data["train"], mnist_data["test"]
It also offers further functions for data processing, and there are hundreds (if not thousands) of datasets available to use.
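For instance, here is a minimal sketch of such processing with the tf.data API; the batch size, shuffle buffer, and normalization step are illustrative choices.
import tensorflow as tf
import tensorflow_datasets as tfds

# Load MNIST as (image, label) pairs and build a ready-to-train input pipeline.
ds_train = tfds.load("mnist", split="train", as_supervised=True, shuffle_files=True)
ds_train = (ds_train
            .map(lambda image, label: (tf.cast(image, tf.float32) / 255.0, label))
            .shuffle(10_000)
            .batch(32)
            .prefetch(tf.data.experimental.AUTOTUNE))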
Other Libraries, Tools, and Extensions built to support TensorFlow
What If Tool (WIT)
What if you wanted a visual way to understand your data and your model within your environment? WIT is designed exactly for that purpose. It can help you understand your dataset and the output of your model.
The models supported by WIT are:
- TensorFlow models
- Cloud AI Platform models
- Models from other ML frameworks (or any model that can be wrapped in a Python function)
You can use WIT within these platforms and integrations:
- Colab notebooks
- Jupyter notebooks
- Cloud AI notebooks
- TensorBoard
WIT supports common data types such as tabular, image, and text data, which makes it useful in both Natural Language Processing and Computer Vision. Below are the different types of machine learning tasks WIT can be used for (a minimal notebook sketch follows the list):
- Regression problems
- Binary classification
- Multi-class classification
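Here is a rough notebook sketch of wiring WIT up with a custom prediction function; the predict function and the empty example list are placeholders you would replace with your own model and tf.Example data.
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

def predict_fn(examples):
    # Placeholder: return one score per example from your own model here.
    return [[0.5] for _ in examples]

test_examples = []  # replace with a list of tf.Example protos from your dataset

config_builder = WitConfigBuilder(test_examples).set_custom_predict_fn(predict_fn)
WitWidget(config_builder, height=600)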
For more about WIT, here is your guide.
TensorBoard
Machine learning is collaborative (TF Dev Summit '20). TensorBoard provides engineers and scientists with visualization and debugging capabilities for their machine learning models.
With TensorBoard, you can:
- Track and visualize loss and accuracy metrics
- View weights, biases, and other parameters in plot formats such as histograms or line plots
- Display image, audio, and text data
- See which hyperparameters help the model converge
With TensorBoard.dev, you can also share your experiments as a link with other engineers or collaborators. Some research papers have used it to share their results.
Here is a simple model that shows how easy it is to add TensorBoard to training:
# This example is taken from 'tensorflow.org/tensorboard'
# Load the TensorBoard notebook extension
%load_ext tensorboard

import tensorflow as tf
import datetime

# Clear any logs from previous runs
!rm -rf ./logs/

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

def create_model():
    return tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(512, activation='relu'),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10, activation='softmax')
    ])

model = create_model()
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)

model.fit(x=x_train,
          y=y_train,
          epochs=5,
          validation_data=(x_test, y_test),
          callbacks=[tensorboard_callback])

%tensorboard --logdir logs/fit
Results:
Here is a Colab link that you can use to run the above code.
Google Colaboratory Notebooks
I am pretty sure the deep learning community appreciates Colab a lot. It has nothing to do with deep learning modeling itself, but it provides a flexible and powerful environment for building models.
Colab not only provides a zero-configuration environment (you don't have to install frameworks, you just use them), it also accelerates the training of machine learning models with its free GPU (Graphics Processing Unit).
If you made it to this point, you may already have used Colab but if this is your first time, here is a link to get started.
TensorFlow Playground
Up to this point, we have seen how diverse and powerful the TensorFlow ecosystem is for end-to-end deep learning.
I thought a fun way to end this article would be to invite you to play with neural networks using the TensorFlow Playground, made with tensorflow.js.
You will see how deep learning starts to make sense by playing with things like activation functions, regularization techniques, different types of input data, layers and neurons, and different problem types (regression and classification).
Play with a neural network; you won't break it!
Source of the cover image: blog.tensorflow.org