Using Deployment Configurations
Estimated completion time: 11m.
This tutorial is relevant only for Kubernetes installations of Hydrosphere. Please refer to How to Install Hydrosphere on a Kubernetes Cluster.
In this tutorial, you will learn how to configure deployed Applications.
By the end of this tutorial you will know how to:
- Create an Application from an uploaded model version with a previously created deployment configuration
- Examine settings of a Kubernetes cluster
In this section, we describe the resources required to create and upload an example model used in further sections. If you have no prior experience with uploading models to the Hydrosphere platform, we suggest that you visit the Getting Started Tutorial.
Here are the resources used to train `sklearn.ensemble.GradientBoostingClassifier` and upload it to the Hydrosphere cluster:

- `requirements.txt`
- `serving.yaml`
- `train.py`
- `func_main.py`
requirements.txt

`requirements.txt` is a list of Python dependencies used during the process of building the model image.

```
numpy~=1.18
scipy==1.4.1
scikit-learn~=0.23
pandas==1.3.1
```
serving.yaml

`serving.yaml` is a resource definition that describes how the model should be built and uploaded to the Hydrosphere platform.

```yaml
kind: Model
name: my-model
runtime: hydrosphere/serving-runtime-python-3.7:3.0.0
install-command: pip install -r requirements.txt
payload:
  - src/
  - requirements.txt
  - model.joblib
contract:
  name: infer
  inputs:
    x:
      shape: [30]
      type: double
      profile: numerical
  outputs:
    y:
      shape: scalar
      type: int64
      profile: numerical
```
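For reference, a payload that satisfies this contract looks like the following. This is a minimal NumPy sketch of the declared shapes and types only, not an actual Hydrosphere client call:

```python
import numpy as np

# "x" is declared as shape [30], type double
x = np.random.rand(30).astype(np.float64)
assert x.shape == (30,)

# "y" is declared as a scalar of type int64 -- for example:
y = np.int64(2)
assert y.shape == ()
```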
train.py

`train.py` is used to generate `model.joblib`, which is loaded from `func_main.py` during model serving. Run `python train.py` to generate `model.joblib`.
```python
import joblib
import pandas as pd
from sklearn.datasets import make_blobs
from sklearn.ensemble import GradientBoostingClassifier

# Initialize data
X, y = make_blobs(n_samples=3000, n_features=30)

# Create a model
model = GradientBoostingClassifier(n_estimators=200)
model.fit(X, y)

# Save training data and model
pd.DataFrame(X).to_csv("training_data.csv", index=False)
joblib.dump(model, "model.joblib")
```
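A quick way to sanity-check the artifact before uploading is to load it back and score a row, the same way the serving script will. This is a minimal sketch that uses a much smaller model than the tutorial's so it runs in seconds; the file name matches `train.py`:

```python
import joblib
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.ensemble import GradientBoostingClassifier

# Train and dump a small stand-in model, mirroring train.py
X, y = make_blobs(n_samples=300, n_features=30, random_state=0)
model = GradientBoostingClassifier(n_estimators=10)
model.fit(X, y)
joblib.dump(model, "model.joblib")

# Load it back the way func_main.py will, and predict a single row
restored = joblib.load("model.joblib")
pred = restored.predict(X[0][np.newaxis])
assert pred.shape == (1,)
```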
func_main.py

`func_main.py` is a script which serves requests and produces responses.
```python
import joblib
import numpy as np

# Load the model once, at startup
model = joblib.load("/model/files/model.joblib")

def infer(x):
    # Make a prediction; the model expects a 2-D batch, so add a batch axis
    y = model.predict(x[np.newaxis])
    # Return the scalar representation of y
    return {"y": y.item()}
```
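The `x[np.newaxis]` step turns the incoming 30-element vector into the 1×30 batch that scikit-learn models expect, and `.item()` collapses the 1-element prediction array back into a plain Python scalar for the response. A standalone sketch of just that reshaping, with no model involved:

```python
import numpy as np

x = np.arange(30, dtype=np.float64)   # incoming vector, shape (30,)
batch = x[np.newaxis]                 # shape (1, 30): a batch of one row
assert batch.shape == (1, 30)

y = np.array([3], dtype=np.int64)     # a model's single-element prediction
assert y.item() == 3                  # plain Python int, ready for the response dict
```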
Our folder structure should look like this:

```
dep_config_tutorial
├── model.joblib
├── train.py
├── requirements.txt
├── serving.yaml
└── src
    └── func_main.py
```
Do not forget to run `python train.py` to generate `model.joblib`!

After we have made sure that all files are placed correctly, we can upload the model to the Hydrosphere platform by running `hs apply` from the command line:

```
hs apply -f serving.yaml
```
Next, we are going to create and upload an instance of Deployment Configuration to the Hydrosphere platform.
Deployment Configurations describe the Kubernetes settings with which Hydrosphere should deploy servables. You can specify Pod Affinity and Tolerations, the number of desired pods in a deployment, ResourceRequirements and Environment Variables for the model container, and HorizontalPodAutoscaler settings.
Created Deployment Configurations can be attached to Servables and Model Variants inside an Application.
Deployment Configurations are immutable and cannot be changed after they've been uploaded to the Hydrosphere platform.
You can create and upload Deployment Configuration to Hydrosphere via YAML Resource definition or via Python SDK.
For this tutorial, we'll create a deployment configuration with 2 initial pods per deployment, an HPA, and a `FOO` environment variable with the value `bar`.

YAML Resource Definition
Python SDK
Create the deployment configuration resource definition:
deployment_configuration.yaml
```yaml
kind: DeploymentConfiguration
name: my-dep-config
deployment:
  replicaCount: 2
hpa:
  minReplicas: 2
  maxReplicas: 4
  cpuUtilization: 70
container:
  env:
    FOO: bar
```
To upload it to the Hydrosphere platform, run:
```
hs apply -f deployment_configuration.yaml
```
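The `hpa` block above maps onto standard Kubernetes HorizontalPodAutoscaler behavior: the controller scales toward `ceil(currentReplicas * currentUtilization / targetUtilization)`, clamped to the `[minReplicas, maxReplicas]` range. A rough sketch of that arithmetic for this config (the formula comes from the Kubernetes HPA documentation, not from Hydrosphere-specific code):

```python
import math

def desired_replicas(current_replicas, current_utilization,
                     target_utilization=70, min_replicas=2, max_replicas=4):
    """Approximate HPA scaling decision for the configuration above."""
    desired = math.ceil(current_replicas * current_utilization / target_utilization)
    # Clamp to the configured replica range
    return max(min_replicas, min(max_replicas, desired))

print(desired_replicas(2, 20))    # well under target: stays at the minimum -> 2
print(desired_replicas(2, 150))   # sustained overload: capped at maxReplicas -> 4
```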
```python
from hydrosdk import Cluster, DeploymentConfigurationBuilder

cluster = Cluster("http://localhost")

dep_config = DeploymentConfigurationBuilder("my-dep-config-sdk") \
    .with_replicas(replica_count=2) \
    .with_env({"FOO": "bar"}) \
    .with_hpa(max_replicas=4,
              min_replicas=2,
              target_cpu_utilization_percentage=70) \
    .build(cluster)
```
YAML Resource Definition
Python SDK
Create the application resource definition:
application.yaml
```yaml
kind: Application
name: my-app-with-config
singular:
  model: my-model:1
  deployment-config: my-dep-config
```
To upload it to the Hydrosphere platform, run:
```
hs apply -f application.yaml
```
```python
from hydrosdk import ModelVersion, Cluster, DeploymentConfiguration
from hydrosdk.application import ApplicationBuilder, ExecutionStageBuilder

cluster = Cluster("http://localhost")

my_model = ModelVersion.find(cluster, "my-model", 1)
my_config = DeploymentConfiguration.find(cluster, "my-dep-config")

stage = ExecutionStageBuilder().with_model_variant(model_version=my_model,
                                                   weight=100,
                                                   deployment_configuration=my_config).build()
app = ApplicationBuilder("my-app-with-config-sdk").with_stage(stage).build(cluster)
```
You can check whether `with_replicas` was successful by calling `kubectl get deployment -A -o wide` and checking the READY column.

To check whether `with_hpa` was successful, get a list of all created HorizontalPodAutoscaler resources by calling `kubectl get hpa -A`.
The output is similar to:
```
NAME                       REFERENCE                                              TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
my-model-1-tumbling-star   CrossVersionObjectReference/my-model-1-tumbling-star   20%/70%   2         4         2          1d
```
To list all environment variables, run `kubectl exec -it my-model-1-tumbling-star -- /bin/bash` and then execute the `printenv` command, which prints all environment variables.

The output is similar to:
```
MY_MODEL_1_TUMBLING_STAR_SERVICE_PORT_GRPC=9091
...
FOO=bar
```