Docker + TensorFlow + Google Cloud Platform

Mayank Chourasia
4 min read · Apr 27, 2020


In this article, I will walk you through why we want Docker, why machine learning and deep learning models are deployed in containers, why we want the cloud for all of this, and finally the steps to deploy a model.

Deep learning commonly requires a GPU, and not every workstation is suitable for training deep learning models.

That’s why the cloud comes into the picture: we can create a virtual machine with exactly the GPU, RAM, storage, and so on that our model needs. We can build a VM to match our specifications.
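For example, on GCP a GPU-backed VM can be created from the command line. This is only a minimal sketch; the instance name, zone, machine type, GPU model, and image family below are example choices, not requirements:

```bash
# Create a GPU-backed Deep Learning VM (all names and sizes here are examples)
gcloud compute instances create dl-vm \
    --zone=us-central1-a \
    --machine-type=n1-standard-8 \
    --accelerator=type=nvidia-tesla-t4,count=1 \
    --maintenance-policy=TERMINATE \
    --image-family=common-cu110 \
    --image-project=deeplearning-platform-release
```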

You can also easily swap this VM for your own laptop or desktop, as long as it has an NVIDIA GPU.

GPUs are one of the main sources of power in High-Performance Computing (HPC). As one of the most effective GPU manufacturers, and the leading one in providing HPC solutions for deep learning applications, NVIDIA has made a footprint in the ML development space with its CUDA framework.
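If you already have Docker and the NVIDIA Container Toolkit installed, a quick sanity check confirms that containers can see the GPU (the CUDA image tag below is just an example):

```bash
# List the GPUs visible inside a CUDA container (requires the NVIDIA Container Toolkit)
docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
```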

Coming to Docker: it is everywhere in the software industry today. Best known as a DevOps tool, Docker has stolen the hearts of many developers, system administrators, and engineers, among others.

So what exactly is Docker?

“It is a tool that helps users to exploit operating-system-level virtualization to develop and deliver software in packages called containers.”

This technical definition may sound complicated, but all you need to understand is that Docker gives you a complete environment in which to build and deploy software.

One of the best things about Docker is that you can move a container across platforms and it will still run without you installing a single dependency, because all you need is the Docker Engine.
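As a minimal sketch of that workflow (the image name here is hypothetical): build the image once on your machine, then run it unchanged on any host that has the Docker Engine:

```bash
# Package the application once (assumes a Dockerfile in the current directory)
docker build -t my-ml-app:1.0 .

# Run the very same image on any machine with the Docker Engine -- nothing else to install
docker run --rm my-ml-app:1.0
```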

Conventional deployment methods require setting up the production environment to match the testing and development environments, so that the application runs seamlessly across all instances.

But this is not an easy task. It is entirely possible to miss some dependency in one environment for any number of reasons, and in production it may be too late by the time you discover it.

What is TensorFlow?

It’s an open-source software library for Machine Intelligence.

More precisely, TensorFlow is an open-source software library for numerical computation using data flow graphs. It was originally developed by researchers and engineers working on the Google Brain team within Google’s Machine Intelligence research organization, for the purposes of conducting machine learning and deep neural network research.
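Conveniently, Docker also makes it easy to try TensorFlow without installing it locally. A minimal sketch using the official tensorflow/tensorflow image:

```bash
# Run a TensorFlow one-liner inside the official image -- no local TensorFlow install
docker run -it --rm tensorflow/tensorflow:latest \
    python -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([2, 2])))"
```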

Why TensorFlow Serving?

It implements a server that processes incoming requests and forwards them to a model; this server will most likely be running somewhere at your cloud provider. TensorFlow Serving is a flexible, high-performance serving system for machine learning models, designed for production environments. It provides out-of-the-box integration with TensorFlow models, but can easily be extended to serve other kinds of models and data.
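The quickest way to try it is the official tensorflow/serving Docker image. In this sketch, the model path and the model name my_model are placeholders for your own SavedModel:

```bash
# Serve a SavedModel over REST on port 8501 (path and model name are placeholders)
docker run -p 8501:8501 \
    --mount type=bind,source=/path/to/my_model,target=/models/my_model \
    -e MODEL_NAME=my_model -t tensorflow/serving

# Query the model's REST endpoint with a prediction request
curl -d '{"instances": [[1.0, 2.0, 3.0]]}' \
    -X POST http://localhost:8501/v1/models/my_model:predict
```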

For ML applications specifically, this is an enormous advantage, since an application can be broken into modules and deployed across different machines, all communicating with one another and managed through one interface. This adds flexibility and scalability, as more machines or containers can be added effortlessly.

Containers also have built-in solutions that allow external and distributed data access, which can be used to leverage common data-oriented interfaces supporting many data models.

To manage containers at scale, we need an orchestrator like Kubernetes. Kubernetes is a vendor-agnostic cluster and container management tool, open-sourced by Google in 2014. It provides a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”.

Note: Kubernetes is not a containerization platform. It is a multi-container management solution.

Google Kubernetes Engine

It is Google Cloud Platform’s (GCP’s) managed Kubernetes service. With this service, GCP customers can create and maintain their own Kubernetes clusters without having to manage the Kubernetes platform itself. Kubernetes runs containers on a cluster of virtual machines (VMs): it determines where to run containers, monitors their health, and manages the complete lifecycle of the VM instances.
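As a rough sketch (the cluster name, zone, and sizes are example values), creating a GKE cluster and running the TensorFlow Serving image on it could look like this; in practice you would also mount or bake a model into the image:

```bash
# Create a small GKE cluster and fetch credentials for kubectl
gcloud container clusters create demo-cluster --num-nodes=2 --zone=us-central1-a
gcloud container clusters get-credentials demo-cluster --zone=us-central1-a

# Deploy the serving container, scale it out, and expose it behind a load balancer
kubectl create deployment tf-serving --image=tensorflow/serving
kubectl scale deployment tf-serving --replicas=3
kubectl expose deployment tf-serving --type=LoadBalancer --port=8501
```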

Finally, here is an overview of how we can deploy all of this, using a Makefile with a target for each step (a sketch of the underlying gcloud commands follows the list).

1. Create an instance with make create-instance.

2. Run Jupyter on the instance with make run-jupyter.

This may take about 5 minutes.

3. Install Python libraries with make pip-install.

Put the libraries you want to install in ./requirements.txt.

4. Upload files to the instance with make upload-files.

5. Make an SSH tunnel to the instance with make ssh-tunnel.

6. Access Jupyter via your web browser.

Default: http://localhost:8080

7. Download outputs with make download-outputs.

8. Delete the instance with make delete-instance.
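The actual Makefile is not shown here, but targets like these typically wrap plain gcloud commands. A rough, hypothetical sketch (instance name, zone, machine type, and paths are all assumptions):

```bash
# Create the instance (wrapped by make create-instance)
gcloud compute instances create dl-instance --zone=us-central1-a --machine-type=n1-standard-4

# Copy files to the instance (make upload-files)
gcloud compute scp ./notebook.ipynb dl-instance:~/ --zone=us-central1-a

# Open an SSH tunnel so the remote Jupyter shows up at http://localhost:8080 (make ssh-tunnel)
gcloud compute ssh dl-instance --zone=us-central1-a -- -L 8080:localhost:8080

# Copy outputs back (make download-outputs), then delete the VM to stop paying for it
gcloud compute scp --recurse dl-instance:~/outputs ./outputs --zone=us-central1-a
gcloud compute instances delete dl-instance --zone=us-central1-a
```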

So in this blog, we learned how DevOps helps data science.

Docker has changed the way I work.

Stay tuned till the next blog!

If you want to connect with me:

LinkedIn: https://www.linkedin.com/in/mayank-chourasia-38421a134/

Twitter: https://twitter.com/ChourasiaMayank

