Continuous Machine Learning with Kubeflow
US$ 19.95
Language
English
ISBN
9789389898507
Cover Page
Title Page
Copyright Page
Dedication Page
About the Author
About the Reviewer
Acknowledgement
Preface
Errata
Table of Contents
1. Introduction to Kubeflow & Kubernetes Cloud Architecture
Structure
Objectives
1.1 Understanding Docker
1.1.1 Dockerfile
1.2 Kubernetes Architecture
1.2.1 What is Kubernetes?
1.2.2 Why do we need Kubernetes?
1.2.3 What are the Advantages of Kubernetes?
1.2.4 How does Kubernetes work?
1.3 Kubernetes components
1.3.1 Types of Services
1.4 Introduction to Kubeflow Orchestration for ML Deployment
1.5 Components of Kubeflow
1.5.1 Central Dashboard
1.5.2 Registration Flow
1.5.3 Metadata
1.5.4 Jupyter Notebook server
1.5.5 Katib
1.6 Getting Started in GCP Kubeflow setup
1.6.1 Install and Set Up kubectl
1.6.2 Install and Set Up the gcloud SDK
1.6.3 Set Up OAuth from Cloud IAP
1.6.4 Set Up Docker
1.6.5 Set Up Kubeflow in Kubernetes Cluster in GCP
1.6.6 Connect to cluster and Deploy Grafana
1.6.7 Jupyter Notebook server setup in Kubeflow
1.7 Optional: PVC setup for Jupyter Notebook
1.8 Conclusion
1.9 Reference
2. Developing Kubeflow Pipeline in GCP
Structure
Objectives
2.1 Problem statement
2.2 Getting started in GCP Kubeflow setup
2.3 Breakdown technique to build a production pipeline
2.4 Building the Kubeflow Pipeline components for TensorFlow model
2.4.1 Data Extraction or Ingestion Component
2.4.2 Data pre-processing component
2.4.3 Training model component
2.4.4 Evaluation component
2.5 Serving the Model with KF Serving
2.6 Building the pipeline end to end
2.7 Monitoring the performance with Grafana dashboard
2.8 Conclusion
2.9 Reference
3. Designing Computer Vision Model in Kubeflow
Structure
Objectives
3.1 Problem statement
3.2 Getting started in GCP Kubeflow setup
3.3 Analytics behind the problem statement
3.4 Building the Kubeflow pipeline components for Computer Vision (CNN) TensorFlow model
3.4.1 Data extraction or Ingestion component
3.4.2 Data pre-processing component
3.4.3 Training model component
3.4.4 Evaluation component
3.5 Serving the Model with KF Serving
3.6 Building the pipeline end to end
3.7 Auto-Scaling of the Serving Endpoint
3.8 Conclusion
3.9 Reference
4. Building TFX Pipeline
Structure
Objectives
4.1 Problem statement
4.2 Architecture of TFX components
4.3 TFX environment setup
4.4 TFX pipeline components
4.4.1 ExampleGen
4.4.2 StatisticsGen
4.4.3 SchemaGen
4.4.4 ExampleValidator
4.4.5 Transform
4.4.6 Tuner and Trainer
4.4.7 Evaluator
4.4.7.1 Fairness and TFMA Visualization
4.4.8 Pusher
4.5 Serve the Model with TF Serving
4.6 Building Kubeflow Pipeline Orchestrator
4.7 Conclusion
4.8 Reference
5. ML Model Explainability & Interpretability
Structure
Objectives
5.1 Problem
5.2 General idea and concept behind SHAP
5.3 Getting Started with Python library Installation and Data loading in Colab
5.4 Feature transformation for Training Model
5.5 LightGBM Model training
5.6 Model Analysis with Advanced Visualization using the SHAP Tool
5.6.1 Basic decision plot features
5.6.2 Force Plots Analysis
5.7 TensorFlow Estimator Model Framework Building
5.7.1 TensorFlow Estimator Model
5.8 Advanced Visualization for TensorFlow Model with TensorBoard & What-If Tool
5.8.1 TensorBoard
5.8.2 What-If Tool
5.9 Conclusion
5.10 References
6. Building a Weights & Biases Pipeline
Structure
Objectives
6.1 Problem statement
6.2 Setup of project requirements in GCP & Wandb
6.2.1 Kubeflow Cluster in GCP and Docker setup
6.2.2 Kaggle API setup for downloading data
6.2.3 Weights & Biases API Key
6.3 Introduction to using Weights & Biases
6.4 Modeling and training the LightGBM Model for Equity Data
6.4.1 Get the latest version of Weights & Biases Dependency & Kaggle Setup
6.4.2 Weights & Biases Dependency & Kaggle API Setup
6.4.3 Loading and Extracting of Data
6.4.4 Exploratory Data Analysis
6.4.5 Utility Metrics Function
6.4.6 Training model (using Weights & Biases) with LightGBM Framework
6.5 Serving the model with KF Serving
6.6 Monitoring the performance with Grafana Dashboard
6.7 Conclusion
6.8 References
7. Applied ML with AWS SageMaker
Structure
Objectives
7.1 Problem
7.2 Getting started in AWS SageMaker setup
7.3 Getting Started with JupyterLab Notebook Instances and SDK & S3 Bucket
7.3.1 Create an S3 Bucket
7.3.2 Create an Amazon SageMaker Notebook Instance
7.4 Getting Started by Launching Notebook and loading data to S3
7.5 Load, Analyse, and Transform the Training Data
7.5.1 Data Loading from S3 and Library
7.5.2 Feature Engineering
7.5.2.1 Finding Categorical & Numerical Columns
7.5.2.2 Checking the missing values sum
7.5.2.3 Log transformation of dependent feature
7.5.2.4 Correlation & Scatter Plots
7.5.2.5 Outlier Detection
7.5.3 Feature Transformation
7.6 Amazon SageMaker Training Model
7.6.1 Splitting Data into Train/Validation and push to S3
7.6.2 Train with the SageMaker XGBoost API, which maintains the algorithm container
7.7 Amazon SageMaker model deployment
7.8 Conclusion
7.9 References
8. Web App Development with Streamlit & Heroku
Structure
Objectives
8.1 Problem statement
8.2 Setup of project requirements in GCP & Heroku
8.3 Introduction to the components of Streamlit
8.3.1 Main concepts
8.4 Building the Framework for Streamlit for OpenCV models
8.5 Creating the components for Heroku Deployment
8.6 Deploying the Streamlit code by containerizing in Kubernetes cluster
8.7 Summary
8.8 References
Index