Overview

What is Hydrosphere?

Hydrosphere is an open-source MLOps platform for deploying, managing, and monitoring ML models in production with Kubernetes.

Hydrosphere supports all major machine learning frameworks, including TensorFlow, Keras, PyTorch, XGBoost, scikit-learn, fastai, etc. The platform is designed to effectively measure the performance and health metrics of your production models, making it possible to spot early signs of performance drops and data drifts and to get insights into why they happen.

Hydrosphere offers immediate value to ML-based products:

  • Covers all aspects of the production ML lifecycle: model versioning & deployment, traffic & contract management, data monitoring, and gaining insights.

  • Easy & fast management of production models that brings models to production in minutes by reducing time to upload, update, and roll your models into production.

  • Allows you to create reproducible, observable, and explainable machine learning pipelines.

  • Provides understanding and control of models’ performance in production via data and target metrics analysis.

  • Adds in-depth observability for your production models and data flowing through them.

  • Improves business metrics of ML-based products by reducing MTTR and MTTD for incidents related to ML models, thanks to early alerts once data drifts happen.

Why Hydrosphere?

Production ML is a dangerous place where numerous things can and usually do go wrong, making issues hard to discover and fix. Hydrosphere automates MLOps in the production part of the ML lifecycle, combining best practices of CI/CD and DevOps and putting special emphasis on monitoring the performance of ML models after their deployment.

MLOps problems Hydrosphere addresses:

  • Non-interpretable, biased models

  • Integration between the tools of each step of the production ML lifecycle

  • Long time to find & debug issues with production ML Models

  • Monitoring for Model Degradation and Performance Loss

  • Understanding the reasons behind wrong predictions

What Hydrosphere is not

Hydrosphere is not an ML model training framework. Before using Hydrosphere, you need to train your models with one of many existing frameworks for ML model training.

We suggest you use one of the orchestrators, such as Kubeflow or Airflow, to deliver your model to the Hydrosphere.

Hydrosphere Components

The Hydrosphere platform covers all steps of a production ML model lifecycle: Versioning, Deployment, Monitoring, and Maintenance. This combination lets you use a single tool to build an observable, reproducible, and scalable workflow and start getting early warnings once anything goes wrong. These steps of the ML lifecycle are divided between three components that make up the Hydrosphere platform: Serving, Monitoring, and Interpretability.

Serving

Hydro Serving is responsible for framework-agnostic model deployment and management. It allows Data Scientists to upload, version, combine into linear pipelines, and deploy ML models trained in any ML framework to a Docker/Kubernetes cluster, exposing HTTP/gRPC/Kafka APIs to other parties.

Monitoring

Hydro Monitoring tracks model performance over time, raising alerts in case of detected issues. It provides a real-time updated UI, where you can monitor your models to see service health and usage. This constant monitoring of model health is crucial for any ML-based business as it’s tied to business and financial metrics.

Hydrosphere is capable of monitoring model quality with or without additional labeled data. Labeled data is often used in production for drawing conclusions about the quality of a model's predictions. Sometimes it is hard to obtain labeled data in production in a timely and cost-effective manner, especially when you deal with large volumes of complex data. Hydrosphere circumvents this issue by analyzing the data that flows through a model as a proxy for evaluating model quality, detecting whether ML models start to degrade and make unreliable predictions because production data drifts from the training data.

Interpretability

Hydrosphere Interpretability provides human-readable explanations of the predictions made by your ML models, as well as explanations of the monitoring analytics made by Hydrosphere Monitoring. It helps to evaluate and analyze models and understand what features influence their decisions. The Interpretability component demystifies your ML process, providing a new level of confidence about the reasons behind your models' decisions and a level of trust that the business can rely on.

Hydrosphere Monitoring is not available as an open-source solution. If you are interested in this component, you can contact us via Gitter or our website.

Hydrosphere Interpretability is not available as an open-source solution. If you are interested in this component, you can contact us via Gitter or our website.
Hydrosphere Platform Components

Hydrosphere

Platform for deploying your Machine Learning to production

Hydrosphere is a platform for deploying, versioning, and monitoring your machine learning models in production. It is language-agnostic and framework-agnostic, with support for all major programming languages and frameworks - Python, Java, TensorFlow, PyTorch, etc.

What to do next?

⭐️ Star the Hydrosphere repo on GitHub

💦 Explore our Getting Started tutorial

🥳 Join the Hydrosphere Slack Community

Serving

Gateway

Gateway is a service responsible for routing requests to/from or between Servables and Applications and validating that these requests match the Model's or Application's signature.

The Gateway maps a model's name to a corresponding container. Whenever it receives a request via HTTP, gRPC, or Kafka Streams, it communicates with that container via the gRPC protocol.

Manager

Manager is responsible for:

  • Building a Docker image from your ML model for future deployment

  • Storing these images inside a Docker registry deployed alongside the Manager service

  • Versioning these images as Model Versions

  • Creating running instances of these Model Versions, called Servables, inside a Kubernetes cluster

  • Combining multiple Model Versions into a linear graph with a single endpoint, called an Application

Key Features

Features that make up the Hydrosphere Platform:

Serving

  • Model Registry

  • Inference Pipelines

  • A/B Model Version Deployment

  • Traffic Shadowing

  • Language-Agnostic Deployment

Monitoring

  • Automatic Outlier Detection

  • Data Drift Report

  • Monitoring Dashboard and Data Health Metrics

  • Alerts

Interpretability

  • Prediction Explanations

  • Data Projection

Third-Party Integrations

  • Kubeflow Components

  • AWS Sagemaker

Monitoring

Automatic Outlier Detection

Sonar

The Sonar service is responsible for managing metrics, storing training and production data, calculating profiles, and shadowing data to the Model Versions that are used as outlier detection metrics.

Drift Report

Concepts

There are a few concepts that you should be familiar with before starting to work with the Hydrosphere platform.

Resource definitions

Serving

Models & Model Versions

A Model is a machine learning model or a processing function that consumes provided inputs and produces predictions or transformations.

Within the Hydrosphere platform, we break a model down into its versions. Each Model Version represents a single Docker image containing all the artifacts that you have uploaded to the platform. Consequently, a Model is a group of Model Versions with the same name.

Runtimes

A Runtime is a Docker image with a predefined gRPC interface that loads and serves your model.

Servable

A Servable is a deployed instance of a Model Version combined with a Runtime. It exposes a gRPC endpoint that can be used to send requests.

Users should not use Servables as-is, since they are designed to be building blocks, rather than inference endpoints. Hydrosphere provides a better alternative to deploy a Model version — Application.

Applications

An Application is a pipeline of one or more stages, each consisting of one or multiple Model Versions. Data sent to an application stage is shadowed to all of its model versions. The output of a stage is picked randomly with respect to weights.

When a user creates an Application, the Manager service automatically deploys appropriate Servables. The Application handles monitoring of your models and can perform A/B traffic splits.

Each Application has publicly available HTTP and gRPC endpoints that you can send requests to.

Deployment Configurations

A Deployment Configuration is a collection of Kubernetes settings that you can set for your Servables and Model Versions used inside of Application stages.

Deployment Configuration covers:

  • Horizontal Pod Autoscaler specs

  • Container Specs

    • Resource requirements: limits and requests

  • Pod Specs

    • Node Selectors

    • Affinity

    • Tolerations

  • Deployment Specs

    • Replicas count

Model's Signature

A Model's Signature is a specification of your model's computation, which identifies the name of a function together with its inputs and outputs, including their names, shapes, and data types.

Example of a signature defined in a YAML file:

contract:
  name: predict
  inputs:
    x:
      shape: [-1, 2]
      type: double
      profile: numerical
  outputs:
    y:
      shape: scalar
      type: int
      profile: categorical

Field

A Field is a basic element of a Model's signature. It has a name, shape, data type, and profile.

Example of a model's signature field defined in a YAML file:

x:
  shape: [-1, 2]
  type: double
  profile: numerical

Field's Profile

A Profile is a special tag that tells how Hydrosphere should interpret the field's data.

There are multiple available tags: Numerical, Categorical, Image, Text, Audio, Video, etc.

Monitoring

Metrics

Data coming through deployed Model Versions can be monitored with metrics.

A Metric is itself a Model Version: it takes a combination of inputs and outputs from another (monitored) Model Version, receives every request and response from the monitored model, produces a single value, and compares it with a threshold to determine whether the request was healthy.

Every request is evaluated against all metrics assigned to the model.

Checks

A Check is a boolean condition associated with a field of a Model Version's signature; for every request, it shows whether the field value is acceptable.

For example, Min/Max checks ensure that a field value is in an acceptable range which is inferred from training data values.

Automatic Outlier Detection

For each model with uploaded training data, Hydrosphere creates an outlier detection (Auto OD) metric, which assigns an outlier score to each request. A request is labeled as an outlier if the outlier score is greater than the 97th percentile of training data outlier scores distribution.

You can observe those models deployed as metrics in your monitoring dashboard. These metrics provide you with information about how novel/anomalous your data is.

If the values of this metric deviate significantly from the average, it may indicate a data drift, and you need to re-evaluate your ML pipeline to check for errors.

Supported Models

Right now, the Auto OD feature works only for models with numerical scalar fields and uploaded training data.

Traffic Shadowing

A/B Deployment

Users can specify the likelihood that a model's output will be selected as the application stage output by using the weight argument.

Traffic Shadowing

Hydrosphere shadows traffic to all model versions inside of an application stage.

If you want to shadow traffic to a model version without producing output from it, simply set its weight parameter to 0. This way the model version will receive all incoming traffic, but its output will never be chosen as the output of the application stage.
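For example, with the Python SDK (the same ApplicationBuilder API used in the A/B Analysis tutorial later in this documentation), a shadowed stage can be sketched roughly as follows; the model name my-model and application name my-shadow-app are illustrative only:

from hydrosdk import Cluster, ModelVersion
from hydrosdk.application import ApplicationBuilder, ExecutionStageBuilder

cluster = Cluster("http://localhost")

champion = ModelVersion.find(cluster, "my-model", 1)  # serves 100% of the responses
shadow = ModelVersion.find(cluster, "my-model", 2)    # receives all traffic, output never returned

stage = ExecutionStageBuilder() \
    .with_model_variant(model_version=champion, weight=100) \
    .with_model_variant(model_version=shadow, weight=0) \
    .build()

app = ApplicationBuilder(cluster, "my-shadow-app").with_stage(stage).build()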

Platform Architecture

Hydrosphere is composed of several microservices, united to efficiently serve and monitor machine learning models in production. Hydrosphere features are divided between multiple services. You can learn more about each of them in this section.

UI / nginx

Interpretability

Interpretability provides EDA (Exploratory Data Analysis) and explanations for predictions made by your models to make predictions understandable and actionable. It also produces explanations for monitoring metrics to let you know why a particular request was marked as an outlier. The component consists of 2 services:

  • Explanations

  • Data Projections

Both services are built with Celery to run asynchronous tasks from apps and consist of a client, a worker, and a broker that mediates between them. A client generates a task and initiates it by adding a message to a queue, a broker delivers it to a worker, and the worker executes the task.

Interpretability services use MongoDB as both a Celery broker and backend storage to save task results. To save and retrieve model training and production data, the Interpretability component uses S3 storage.

When Explanations or Data Projections receives a task, it creates a new temporary Servable specifically for the model it needs to explain, runs data through it to make new predictions, and deletes it afterward.

Prediction Explanations

Prediction Explanations generate explanations of model predictions to help you understand them. Depending on the type of data your model uses, it provides an explanation as either a set of logical predicates (if your data is in a tabular format) or a saliency map (if your data is in an image format). A saliency map is a heat map that highlights the parts of a picture that a prediction was based on.

Data Projections

Data Projection visualizes high-dimensional data in a 2D scatter plot with an automatically trained UMAP transformer to let you evaluate data structure and spot clusters, outliers, novel data, or any other patterns. It is especially helpful if your model works with high-dimensional data, such as images or text embeddings.

Hydrosphere Monitoring is not available as an open-source solution. If you are interested in this component, you can contact us via Gitter or our website.

Resource definitions describe Models, Applications, and Deployment Configurations in the YAML format. You can learn more about them in the How to write resource definitions section.

We provide a few pre-made runtimes, which you can use in your own projects.

Auto OD Metric is an automatically generated Outlier Detection metric. More details are described in the Automatic Outlier Detection section.

Hydrosphere users can use multiple model versions inside of the same Application stage. Hydrosphere shadows traffic to all model versions inside an application stage.

Hydrosphere Interpretability is not available as an open-source solution. If you are interested in this component, you can contact us via Gitter or our website.


Model Registry

Hydrosphere has an internal Model Registry that serves as centralized storage for Model Versions. When you build a Dockerized model and upload it to Hydrosphere or create new model versions, they are stored in the configured model registry in the form of Docker images. This organizes and simplifies model management across the platform and the production lifecycle.

A/B Model Deployments

Hydrosphere allows you to A/B test your ML models in production.

A/B testing is a great way of measuring how well your models perform or which of your model versions is more effective and taking data-driven decisions upon this knowledge.

Production ML applications always have specific goals, for example driving as many users as possible to perform some action. To achieve these goals, it's necessary to run online experiments and compare model versions using metrics to measure your progress. This approach allows you to track whether your development efforts lead to the desired outcomes.

To perform a basic A/B experiment on an application consisting of 2 variants of a model, you need to train and upload both versions to Hydrosphere, create an application with a single execution stage from them, invoke it by simulating production data flow, then analyze production data using metrics of your choice.

Learn how to set up an A/B application in the A/B Analysis for a Recommendation Model tutorial.

Inference Pipelines

A Hydrosphere user can create a linear inference pipeline from multiple model versions. Such pipelines are called Applications.

Prediction Explanation

Prediction Explanation service is designed to help Hydrosphere users understand the underlying causes of changes in predictions coming from their models.

Prediction Explanation generates explanations of predictions produced by your models and tells you why a model made a particular prediction. Depending on the type of data your model uses, Prediction Explanation provides an explanation as either a set of logical predicates (if your data is in a tabular format) or a saliency map (if your data is in the image format). A saliency map is a heat map that highlights parts of a picture that a prediction was based on.

Hydrosphere uses model-agnostic methods for explaining your model predictions. Such methods can be used on any machine learning model after it has been uploaded to the platform.

As of now, Hydrosphere supports explaining tabular and image data with the Anchor and RISE tools, respectively.

Monitoring Dashboard

The Monitoring Dashboard plots all requests streaming through a model version, colored according to how "healthy" they are. On the horizontal axis, data is grouped by batches, and on the vertical axis, by signature fields. In this plot, each cell is determined by its batch and field. Cells are colored from green to red, depending on the average request health inside the batch.

Data Drift Report

The Drift Report service creates a statistical report based on a comparison of training and production data distributions. It compares these two sets of data with a set of statistical tests and finds deviations.

The Drift Report uses multiple different tests with p=.95 for different kinds of features:

Numerical features:

  • Levene's test with a trimmed mean

  • Welch's t-test

  • Mood's test

  • Kolmogorov–Smirnov test

Categorical features:

  • Chi-Square test

  • Unseen categories

Supported Models

Right now, the Drift Report feature works only for models with numerical scalar fields.

The Monitoring Dashboard lets you track your performance metrics and get a high-level view of your data health.

Language-Agnostic

Hydrosphere is a language-agnostic platform. You can use it with models written in any language and trained in any framework. Your ML models can come from any background, with no restrictions on your choice of ML model development tools.

In Hydrosphere you operate ML models as Runtimes, which are Docker containers packed with predefined dependencies and a gRPC interface for loading and serving a model on the platform. All models that you upload to Hydrosphere must have a corresponding runtime.

Runtimes are created by building a Docker container with the dependencies required for the language that matches your model. You can either use our pre-made runtimes or create your own runtime.

The Hydrosphere component responsible for building Docker images from models for deployment, storing them in the registry, versioning them, and more is Manager.

Installation

The Hydrosphere platform can be installed in the following orchestrators:

  • Docker Compose

  • Kubernetes

Docker installation

To install Hydrosphere using docker-compose, you should have the following prerequisites installed on your machine:

  • Docker 18.0+

  • Docker Compose 1.23+

Install from releases

  1. Download the latest 2.4.0 release from the releases page:

export HYDROSPHERE_RELEASE=$released_version$
wget -O hydro-serving-${HYDROSPHERE_RELEASE}.tar.gz https://github.com/Hydrospheredata/hydro-serving/archive/${HYDROSPHERE_RELEASE}.tar.gz

  2. Unpack the tar ball:

tar -xvf hydro-serving-${HYDROSPHERE_RELEASE}.tar.gz

  3. Set up an environment:

cd hydro-serving-${HYDROSPHERE_RELEASE}
docker-compose up

Install from source

  1. Clone the serving repository:

git clone https://github.com/Hydrospheredata/hydro-serving
cd hydro-serving

  2. Set up an environment:

docker-compose up -d

To check the installation, open http://localhost/. By default, Hydrosphere UI is available at port 80.

Kubernetes installation

To install Hydrosphere on a Kubernetes cluster, you should have the following prerequisites fulfilled:

  • Helm 2.9+

  • Kubernetes 1.14+ with v1 API

  • PV support on the underlying infrastructure (if persistence is required)

  • Docker registry with pull/push access (if the built-in one is not used)

Install from charts repository

  1. Add the Hydrosphere charts repository:

helm repo add hydrosphere https://hydrospheredata.github.io/hydro-serving/helm

  2. Install the chart from the repo to the cluster:

helm install --name serving --namespace hydrosphere hydrosphere/serving

Install from source

  1. Clone the repository:

git clone https://github.com/Hydrospheredata/hydro-serving.git
cd hydro-serving/helm

  2. Build dependencies:

helm dependency build serving

  3. Install the chart:

helm install --namespace hydrosphere serving

After the chart has been installed, you have to expose the ui component outside of the cluster. For the sake of simplicity, we will just port-forward it locally:

kubectl port-forward -n hydrosphere svc/hydro-serving-ui-serving 8080:9090

To check the installation, open http://localhost:8080/.

Alerts

Overview

Users can manage alerts by setting up AlertManager for Prometheus on Kubernetes. This can be helpful when you get too many alerts from certain models and need to filter, group, or partially silence them. AlertManager can take care of grouping, inhibiting, and silencing alerts, and of routing them to the receiver integration of your choice. To configure alerts, modify the prometheus-am-configmap-<release_name> ConfigMap.

Kubeflow Components

Serving components

Deploy

The Deploy component allows you to upload a model trained in a Kubeflow Pipelines workflow to the Hydrosphere platform.

Release

The Release component allows you to create an Application from a model previously uploaded to the Hydrosphere platform. This application will be capable of serving prediction requests via HTTP or gRPC.

Data Projection

Data Projection is a service that visualizes high-dimensional data in a 2D scatter plot with an automatically trained transformer to let you evaluate the data structure and spot clusters, outliers, novel data, or any other patterns. This is especially helpful if your model works with high-dimensional data, such as images or text embeddings.

Data Projection is an important tool that helps to describe complex things in a simple way. One good visualization can show more than text or data. Monitoring and interpreting machine learning models are hard tasks that require analyzing a lot of raw data: training data, production requests, and model outputs.

Essentially, this data is just numbers that, in their original form of vectors and matrices, carry little meaning, since it is hard to extract insight from thousands of numeric vectors. In Hydrosphere we want to make monitoring easier and clearer, which is why we created the Data Projection service that can visualize your data in a single plot.

Usage

To start working with Data Projection you need to create a model that has an output field with an embedding of your data. Embeddings are real-valued vectors that represent the input features in a lower dimensionality.

  1. Create a model with an embedding field

    The Data Projection service delegates the creation of embeddings to the user. It expects that the model will create an embedding from the input features and pass it as an output vector. Thus, the embedding field is required; models without this field are not supported. Data Projection also expects that the output labels field is called class and the model confidence field is called confidence. Other outputs are ignored.

  2. Send data through your model

  3. Check Data Projection service inside the Model Details menu

Each point in the plot represents a request. Requests with similar features are close to each other. You can select a specific request point and inspect what it consists of.

Above the plot, there are several scores: global score, stability score, MSID score, etc. These scores reflect the quality of the projection of multidimensional requests into 2D. To interpret these scores, refer to the technical documentation of the Data Projection service.

In the Colorize menu, you can choose how to colorize model requests: by class, by monitoring metric, or by confidence. Data Projection searches specifically for the output scalars class and confidence.

In the Accent Points menu, you can highlight the points nearest to the selected one in the original space by picking the nearest variant. Counterfactuals will show you the points nearest to the selected one but with a different predicted label.

Hydrosphere Alerts about failed data checks and other issues with models are not available in the open-source version. If you are interested in this component, please contact us via Gitter or our website.

Sonar sends data about any failed health checks of live production models and applications to Prometheus AlertManager. Once a user deploys a model to production, adds training data, and starts sending production requests, these requests start getting checked by Sonar. If Sonar detects an anomaly (for example, a data check failed, or a metric value exceeded the threshold), AlertManager sends an appropriate alert.

For more information about Prometheus AlertManager, please refer to its official documentation.

Hydrosphere Serving Components for Kubeflow Pipelines provide integration between Hydrosphere model serving and Kubeflow orchestration capabilities. This allows launching training jobs as well as serving the same models in Kubernetes within a single pipeline.

Examples of sample pipelines are available here.

For more information, check the Hydrosphere Deploy Kubeflow Component.

For more information, check the Hydrosphere Release Kubeflow Component.

Inside the Data Projection service, you can see your request features projected onto a 2D space:


Python SDK

The Python SDK offers a simple and convenient way of integrating a user's workflow scripts with the Hydrosphere API.

Installation

You can use pip to install hydrosdk:

pip install hydrosdk

Usage

You can access the locally deployed Hydrosphere platform from the previous steps by running the following code:

from hydrosdk import Cluster, Application
import pandas as pd

# Connect to the Hydrosphere cluster via its HTTP and gRPC endpoints
cluster = Cluster("http://localhost", grpc_address="localhost:9090")

# Find a deployed application by name and create a predictor for it
app = Application.find(cluster, "my-model")
predictor = app.predictor()

# Send every row of a CSV file to the application for prediction
df = pd.read_csv("path/to/data.csv")
for row in df.itertuples(index=False):
    predictor.predict(row)

Source code: https://github.com/Hydrospheredata/hydro-serving-sdk
PyPI: https://pypi.org/project/hydrosdk/

You can learn more about hydrosdk in its documentation.

CLI

Hydrosphere CLI, or hs, is a command-line interface designed to work with the Hydrosphere platform.

Installation

Use pip to install hs:

pip install hs

Check the installation:

hs --version

Usage

hs cluster

This command lets you operate cluster instances. A cluster points to your Hydrosphere instance. You can use this command to work with different Hydrosphere instances.

See hs cluster --help for more information.

hs upload

This command lets you upload models to the Hydrosphere platform. During the upload, hs looks for a serving.yaml file in the current directory. This file must contain a resource definition of the model (see How to write resource definitions).

See hs upload --help for more information.

hs apply

This command is an extended version of the hs upload command that also allows you to operate applications and host selector resources.

See hs apply --help for more information.
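For example, to apply a model resource definition from the current directory (the same serving.yaml file used in the tutorials below):

hs apply -f serving.yaml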

hs profile

This command lets you upload your training data to build profiles.

  • $ hs profile push - upload training data to compute its profiles.

  • $ hs profile status - show profiling status for a given model.

See hs profile --help for more information.

hs app

This command provides information about available applications.

  • $ hs app list - list all existing applications.

  • $ hs app rm - remove a certain application.

See hs app --help for more information.

hs model

This command provides information about available models.

  • $ hs model list - list all existing models.

  • $ hs model rm - remove a certain model.

See hs model --help for more information.

Source code: https://github.com/Hydrospheredata/hydro-serving-cli
PyPI: https://pypi.org/project/hs/

A/B Analysis for a Recommendation Model

Estimated completion time: 14 min.

Overview

In this tutorial, you will learn how to retrospectively compare the behavior of two different models.

By the end of this tutorial you will know how to:

  • Set up an A/B application

  • Analyze production data

Prerequisites

Set Up an A/B Application

Prepare a model for uploading

requirements.txt
lightfm==1.15
numpy~=1.18
joblib~=0.15
train_model.py
import sys

import joblib
from lightfm import LightFM
from lightfm.datasets import fetch_movielens

if __name__ == "__main__":
    no_components = int(sys.argv[1])
    print(f"Number of components is set to {no_components}")

    # Load the MovieLens 100k dataset. Only five
    # star ratings are treated as positive.
    data = fetch_movielens(min_rating=5.0)

    # Instantiate and train the model
    model = LightFM(no_components=no_components, loss='warp')
    model.fit(data['train'], epochs=30, num_threads=2)

    # Save the model
    joblib.dump(model, "model.joblib")
src/func_main.py
import joblib
import numpy as np
from lightfm import LightFM

# Load model once
model: LightFM = joblib.load("/model/files/model.joblib")

# Get all item ids
item_ids = np.arange(0, 1682)


def get_top_rank_item(user_id):
    # Calculate scores per item id
    y = model.predict(user_ids=[user_id], item_ids=item_ids)

    # Pick top 3
    top_3 = y.argsort()[:-4:-1]

    # Return {'top_1': ..., 'top_2': ..., 'top_3': ...}
    return dict([(f"top_{i + 1}", item_id) for i, item_id in enumerate(top_3)])
setup_runtime.sh
apt install --yes gcc
pip install -r requirements.txt
serving.yaml
kind: Model
name: movie_rec
runtime: hydrosphere/serving-runtime-python-3.7:2.3.2
install-command: chmod a+x setup_runtime.sh && ./setup_runtime.sh
payload:
  - src/
  - requirements.txt
  - model.joblib
  - setup_runtime.sh
contract:
  name: get_top_rank_item
  inputs:
    user_id:
      shape: scalar
      type: int64
  outputs:
    top_1:
      shape: scalar
      type: int64
    top_2:
      shape: scalar
      type: int64
    top_3:
      shape: scalar
      type: int64

Upload Model A

We train and upload our model with 5 components as movie_rec:v1

python train_model.py 5
hs upload

Upload Model B

Next, we train and upload a new version of our original model with 20 components as movie_rec:v2

python train_model.py 20
hs upload

We can check that we have multiple versions of our model by running:

hs model list

Create an Application

The following code will create an application with a single execution stage containing both model versions with equal traffic weights:

from hydrosdk import ModelVersion, Cluster
from hydrosdk.application import ApplicationBuilder, ExecutionStageBuilder

cluster = Cluster('http://localhost')

model_a = ModelVersion.find(cluster, "movie_rec", 1)
model_b = ModelVersion.find(cluster, "movie_rec", 2)

stage_builder = ExecutionStageBuilder()
stage = stage_builder.with_model_variant(model_version=model_a, weight=50). \
    with_model_variant(model_version=model_b, weight=50). \
    build()

app = ApplicationBuilder(cluster, "movie-ab-app").with_stage(stage).build()

Invoking movie-ab-app

We'll simulate production data flow by repeatedly asking our model for recommendations.

import numpy as np
from hydrosdk import Cluster, Application
from tqdm.auto import tqdm

cluster = Cluster("http://localhost", grpc_address="localhost:9090")

app = Application.find(cluster, "movie-ab-app")
predictor = app.predictor()

user_ids = np.arange(0, 943)

for uid in tqdm(np.random.choice(user_ids, 2000, replace=True)):
    result = predictor.predict({"user_id": uid})

Analyze production data

Read Data from parquet

Each request-response pair is stored in S3 (or in Minio if deployed locally) as parquet files. We'll use the fastparquet package to read these files and the s3fs package to connect to S3.

import fastparquet as fp
import s3fs

s3 = s3fs.S3FileSystem(client_kwargs={'endpoint_url': 'http://localhost:9000'},
                       key='minio', secret='minio123')

# The data is stored in `feature-lake` bucket by default 
# Lets print files in this folder
s3.ls("feature-lake/")

The only file in the feature-lake folder is ['feature-lake/movie_rec']. Data in S3 is stored under the following path: feature-lake/MODEL_NAME/MODEL_VERSION/YEAR/MONTH/DAY/*.parquet

# We fetch all parquet files with glob
version_1_paths = s3.glob("feature-lake/movie_rec/1/*/*/*/*.parquet")
version_2_paths = s3.glob("feature-lake/movie_rec/2/*/*/*/*.parquet")

myopen = s3.open

# use s3fs as the filesystem to read parquet files into a pandas dataframe
fp_obj = fp.ParquetFile(version_1_paths, open_with=myopen)
df_1 = fp_obj.to_pandas()

fp_obj = fp.ParquetFile(version_2_paths, open_with=myopen)
df_2 = fp_obj.to_pandas()

Now that we have loaded the data, we can start analyzing it.

Compare production data with new labeled data

To compare differences between model versions we'll use two metrics:

  1. Latency - we compare the time delay between the request received and the response produced.

  2. Mean Top-3 Hit Rate - we compare recommendations to the items the user has rated. If they match, we increase the hit rate by 1. We do this for the complete test set to get the hit rate.

Latencies

Let's calculate the 95th percentile of our latency distributions per model version and plot them. Latencies are stored in the _hs_latency column in our dataframes.

latency_v1 = df_1._hs_latency
latency_v2 = df_2._hs_latency

p95_v1 =  latency_v1.quantile(0.95)
p95_v2 = latency_v2.quantile(0.95)
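
# Print the computed percentiles (values will vary between runs)
print(f"p95 latency v1: {p95_v1} ms, v2: {p95_v2} ms")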

In our case, the output was 13.0ms against 12.0ms. Results may differ.

Furthermore, we can visualize our data. To plot latency distribution we'll use the Matplotlib library.

import matplotlib.pyplot as plt

# Resize the canvas
plt.gcf().set_size_inches(10, 5)

# Plot latency histograms
plt.hist(latency_v1, range=(0, 20),
 density=True, bins=20, alpha=0.6, label="Latency Model v1")
plt.hist(latency_v2, range=(0, 20),
 density=True, bins=20, alpha=0.6, label="Latency Model v2")

# Plot previously computed percentiles
plt.vlines(p95_v1, 0, 0.1, color="#1f77b4",
 label="95th percentile for model version 1")
plt.vlines(p95_v2, 0, 0.1, color="#ff7f0e",
 label="95th percentile for model version 2")

plt.legend()
plt.title("Latency Comparison between v1 and v2")

Mean Top-3 Hit Rate

Next, we'll calculate hit rates. To do so, we need new labeled data. For recommender systems, this data is usually available after a user has clicked/watched/liked/rated the item we've recommended to them. We'll use the test part of MovieLens as labeled data.

To measure how well our models recommend movies, we'll use a hit rate metric. It calculates how many of the 3 recommended movies the user has watched and rated with a 4 or 5.

from lightfm.datasets import fetch_movielens

test_data = fetch_movielens(min_rating=5.0)['test']
test_data = test_data.toarray()

# Dict with model version as key and mean hit rate as value
mean_hit_rate = {}
for version, df in {"v1": df_1, "v2": df_2}.items():

    # Dict with user id as key and hit rate as value
    hit_rates = {}
    for x in df.itertuples():
        hit_rates[x.user_id] = 0

        for top_x in ("top_1", "top_2", "top_3"):
            hit_rates[x.user_id] += test_data[x.user_id, getattr(x, top_x)] >= 4

    mean_hit_rate[version] = round(sum(hit_rates.values()) / len(hit_rates), 3)

In our case, the mean_hit_rate variable is {'v1': 0.137, 'v2': 0.141}, which means that the second model version is better in terms of hit rate.

You have successfully completed the tutorial! 🚀

Now you know how to read and analyze automatically stored data.

Getting Started

This is an entry-point tutorial to the Hydrosphere platform. Estimated completion time: 13 min.

Overview

In this tutorial, you will learn the basics of working with Hydrosphere. We will prepare an example model for serving, deploy it to Hydrosphere, turn it into an application, invoke it locally, and use monitoring. As an example model, we will take a simple logistic regression model fit with randomly generated data, with some noise added to it.

By the end of this tutorial you will know how to:

  • Prepare a model for Hydrosphere

  • Serve a model on Hydrosphere

  • Create an Application

  • Invoke an Application

  • Use basic monitoring

Prerequisites

For this tutorial, you need to have Hydrosphere Platform deployed and Hydrosphere CLI (hs) along with Python SDK (hydrosdk) installed on your local machine. If you don't have them yet, please follow these guides first:

To let hs know where the Hydrosphere platform runs, configure a new cluster entity:

hs cluster add --name local --server http://localhost
hs cluster use local

Before you start

In the next two sections, we will prepare a model for deployment to Hydrosphere. It is important to stick to a specific folder structure during this process to let hs parse and upload the model correctly. Make sure that the structure of your local model directory looks like this by the end of the model preparation section:

logistic_regression
├── model.joblib
├── train.py
├── requirements.txt
├── serving.yaml
└── src
    └── func_main.py
  • train.py - a training script for our model

  • requirements.txt - provides dependencies for our model

  • model.joblib - a model artifact that we get as a result of model training

  • src/func_main.py - an inference script that defines a function for making model predictions

  • serving.yaml - a resource definition file to let Hydrosphere know which function to call from the func_main.py script and let the model manager understand model’s inputs and outputs.

Training a model

While Hydrosphere is a post-training platform, let's start with basic training steps to have a shared context.

First, create a directory for your model and create a new train.py inside:

mkdir logistic_regression
cd logistic_regression
touch train.py

Put the following code for your model in the train.py file:

train.py
import joblib
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

# initialize data
X, y = make_blobs(n_samples=300, n_features=2, centers=[[-5, 1],[5, -1]])

# create a model
model = LogisticRegression()
model.fit(X, y)

joblib.dump(model, "model.joblib")

Next, we need to install all the necessary libraries for our model. In your logistic_regression folder, create a requirements.txt file and provide dependencies inside:

requirements.txt
numpy~=1.18
scipy==1.4.1
scikit-learn~=0.23
joblib~=0.15

Install all the dependencies to your local environment:

$ pip install -r requirements.txt

Train the model:

$ python train.py

As soon as the script finishes, you will get the model saved to a model.joblib file.

Model preparation

Every model in the Hydrosphere cluster is deployed as an individual container. After a request is sent from the client application, it is passed to the appropriate Docker container with your model deployed on it. An important detail is that all model files are stored in the /model/files directory inside the container, so we will look there to load the model.

To run our model we will use a Python runtime that can execute any Python code you provide. Model preparation is pretty straightforward, but you have to create a specific folder structure described in the "Before you start" section.

Provide the inference script

Let's create the main file func_main.py in the src folder of your model directory:

mkdir src
cd src
touch func_main.py

To do inference you have to define a function that will be invoked every time Hydrosphere handles a request and passes it to the model. Inside that function, you have to call a predict (or similar) method of your model and return your predictions:

func_main.py
import joblib
import numpy as np

# Load a model once
model = joblib.load("/model/files/model.joblib")

def infer(x1, x2):

    # Make a prediction
    y = model.predict([[x1, x2]])

    # Return the scalar representation of y
    return {"y": y.item()}

Inside func_main.py we initialize our model outside of the serving function infer. This way, model loading is not triggered every time a new request comes in.

The infer function takes the actual request, unpacks it, makes a prediction, packs the answer, and returns it. There is no strict rule for naming this function, it just has to be a valid Python function name.

Provide a resource definition file

To let Hydrosphere know which function to call from the func_main.py file, we have to provide a resource definition file. This file will define a function to be called, inputs and outputs of a model, a signature function, and some other metadata required for serving.

Create a resource definition file serving.yaml in the root of your model directory logistic_regression:

cd ..
touch serving.yaml

Inside serving.yaml we also provide requirements.txt and model.joblib as payload files to our model:

serving.yaml
kind: Model
name: logistic_regression
runtime: hydrosphere/serving-runtime-python-3.7:2.3.2
install-command: pip install -r requirements.txt
payload:
  - src/
  - requirements.txt
  - model.joblib

contract:
  name: infer
  inputs:
    x1:
      shape: scalar
      type: double
      profile: numerical
    x2:
      shape: scalar
      type: double
      profile: numerical
  outputs:
    y:
      shape: scalar
      type: int64
      profile: categorical

At this point make sure that the overall structure of your local model directory looks as shown in the "Before you start" section.

Although we have train.py inside the model directory, it will not be uploaded to the cluster since we are not listing it under payload in the resource definition file.

Serving a Model

Now we are ready to upload our model to Hydrosphere. To do so, inside the logistic_regression model directory run:

hs upload

If your newly uploaded model is not yet listed on your models page, it is probably still in the building stage. Wait until the model changes its status to Released, then you can use it.

Creating an Application
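
One way to create an application for the uploaded model is with the Python SDK, using the same ApplicationBuilder shown in the A/B tutorial above; this is only a sketch of a single-stage application that routes all traffic to the first model version. The application must be named logistic_regression, since that is the name the next section invokes:

from hydrosdk import Cluster, ModelVersion
from hydrosdk.application import ApplicationBuilder, ExecutionStageBuilder

cluster = Cluster("http://localhost")

# Find the first version of the model uploaded in the previous step
mv = ModelVersion.find(cluster, "logistic_regression", 1)

# A single execution stage that sends 100% of the traffic to this version
stage = ExecutionStageBuilder().with_model_variant(model_version=mv, weight=100).build()

app = ApplicationBuilder(cluster, "logistic_regression").with_stage(stage).build()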

Invoking an application

Invoking applications is available via different interfaces. For this tutorial, we will cover calling the created Application via gRPC using our Python SDK.

To install SDK run:

pip install hydrosdk

Define a gRPC client on your side and make a call from it:

send_data.py
from sklearn.datasets import make_blobs
from hydrosdk import Cluster, Application

cluster = Cluster("http://localhost", grpc_address="localhost:9090")

app = Application.find(cluster, "logistic_regression")
predictor = app.predictor()

X, _ = make_blobs(n_samples=300, n_features=2, centers=[[-5, 1],[5, -1]])
for sample in X:
    y = predictor.predict({"x1": sample[0], "x2": sample[1]})
    print(y)

Getting Started with Monitoring

Hydrosphere Platform has multiple tools for data drift monitoring:

  1. Data Drift Report

  2. Automatic Outlier Detection

  3. Profiling

In this tutorial, we'll look at the monitoring dashboard and Automatic Outlier Detection feature.

Hydrosphere Monitoring relies heavily on training data. Users must provide training data to enable monitoring features.

Provide training data

To provide training data, users need to add the training-data=<path_to_csv> field to the serving.yaml file. Run the following script to save the training data used in previous steps as a training_data.csv file:

save_training_data.py
import pandas as pd
from sklearn.datasets import make_blobs

# Create training data
X, y = make_blobs(n_samples=300, n_features=2, centers=[[-5, 1],[5, -1]])

# Create pandas.DataFrame from it
df = pd.DataFrame(X, columns=['x1', 'x2'])
df['y'] = y

# Save it as .csv
df.to_csv("training_data.csv", index=False)

Next, add the training data field to the model definition inside the serving.yaml file:

serving.yaml
kind: Model
name: logistic_regression
runtime: hydrosphere/serving-runtime-python-3.7:2.3.2
install-command: pip install -r requirements.txt
training-data: training_data.csv
payload:
  - src/
  - requirements.txt
  - model.joblib
contract:
  name: infer
  inputs:
    x1:
      shape: scalar
      type: double
      profile: numerical
    x2:
      shape: scalar
      type: double
      profile: numerical
  outputs:
    y:
      shape: scalar
      type: int64
      profile: categorical

Upload a model

Now we are ready to upload our model. Run the following command to create a new version of the logistic_regression model:

hs upload

For each model with uploaded training data, Hydrosphere creates an outlier detection metric, which assigns an outlier score to each request. This metric labels a request as an outlier if the outlier score is greater than the 97th percentile of training data outlier scores distribution.

Update an Application

Let's send some data to our new model version. To do so, we need to update our logistic_regression application. To update it, we can go to the Application tab and click the "Update" button:

Send data to Application

After updating our Application, we can reuse our old code to send some data:

send_data.py
from sklearn.datasets import make_blobs
from hydrosdk import Cluster, Application

cluster = Cluster("http://localhost", grpc_address="localhost:9090")

app = Application.find(cluster, "logistic_regression")
predictor = app.predictor()

X, _ = make_blobs(n_samples=300, n_features=2, centers=[[-5, 1],[5, -1]])
for sample in X:
    y = predictor.predict({"x1": sample[0], "x2": sample[1]})
    print(y)

Monitor data quality

You can monitor your data quality in the Monitoring Dashboard:

The Monitoring dashboard plots all requests streaming through a model version as rectangles colored according to how "healthy" they are. On the horizontal axis, we group our data by batches and on the vertical axis, we group data by signature fields. In this plot, cells are determined by their batch and field. Cells are colored from green to red, depending on the average request health inside this batch.

Check data drift detection

To check whether our metric will be able to detect data drifts, let's simulate one and send data from another distribution. To do so, let's slightly modify our code:

send_bad_data.py
from sklearn.datasets import make_blobs
from hydrosdk import Cluster, Application

cluster = Cluster("http://localhost", grpc_address="localhost:9090")

app = Application.find(cluster, "logistic_regression")
predictor = app.predictor()

# Change make_blobs arguments to simulate different distribution 
X, _ = make_blobs(n_samples=300, n_features=2, centers=[[-10, 10],[0, 0]])
for sample in X:
    y = predictor.predict({"x1": sample[0], "x2": sample[1]})
    print(y)

You can validate that your model was able to detect data drifts on the monitoring dashboard.

Tutorials

Contents

Overview

This section contains tutorials to help you get started with the Hydrosphere platform. A tutorial shows how to accomplish a goal rather than a single basic task.

Typically, a tutorial has several sections. When a tutorial section has several pieces of code to illustrate it, they can be shown as a group of tabs that you can switch between.

For guides on performing more basic technical steps, please look in the How-To section.

Using Deployment Configurations

Estimated completion time: 11m.

Overview

In this tutorial, you will learn how to configure deployed Applications.

By the end of this tutorial you will know how to:

  • Examine settings of a Kubernetes cluster

Prerequisites

Upload a Model

Here are the resources used to train sklearn.ensemble.GradientBoostingClassifier and upload it to the Hydrosphere cluster.

requirements.txt is a list of Python dependencies used during the process of building model image.

train.py is used to generate a model.joblib which is loaded from func_main.py during model serving.

Run python train.py to generate model.joblib

func_main.py is a script which serves requests and produces responses.

Our folder structure should look like this:

Do not forget to run python train.py to generate model.joblib!

After we have made sure that all files are placed correctly, we can upload the model to the Hydrosphere platform by running hs upload from the command line.

Create a Deployment Configuration

Created Deployment Configurations can be attached to Servables and Model Variants inside of Applications.

Deployment Configurations are immutable and cannot be changed after they've been uploaded to the Hydrosphere platform.

For this tutorial, we'll create a deployment configuration with 2 initial pods per deployment, HPA, and FOO environment variable with value bar.

Create the deployment configuration resource definition:

To upload it to the Hydrosphere platform, run:
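Alternatively, the same configuration can be sketched with the Python SDK; with_replicas and with_hpa are the methods checked later in this tutorial, while the builder class name, its import path, and the with_env call are assumptions and may differ between hydrosdk versions:

from hydrosdk import Cluster
from hydrosdk.deployment_configuration import DeploymentConfigurationBuilder  # import path is an assumption

cluster = Cluster("http://localhost")

# 2 initial pods, an HPA, and a FOO=bar environment variable (exact parameter names are assumptions)
config = DeploymentConfigurationBuilder("my-deployment-config", cluster) \
    .with_replicas(replica_count=2) \
    .with_hpa(min_replicas=2, max_replicas=4, target_cpu_utilization_percentage=80) \
    .with_env({"FOO": "bar"}) \
    .build()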

Create an Application

Create the application resource definition:

To upload it to the Hydrosphere platform, run:

Examine Kubernetes Settings

Replicas

You can check whether with_replicas was successful by calling kubectl get deployment -A -o wide and checking the READY column.

HPA

To check whether with_hpa was successful, get a list of all created Horizontal Pod Autoscaler resources by calling kubectl get hpa -A.

The output is similar to:

Environment Variables

To list all environment variables, run kubectl exec my-model-1-tumbling-star -it /bin/bash and then execute the printenv command, which prints all system variables.

The output is similar to:

Train & Deploy Census Income Classification Model

Overview

By the end of this tutorial you will know how to:

  • Prepare data

  • Train a model

  • Deploy a model with SDK

  • Explore models via UI

  • Deploy a model with CLI and resource definition

Prerequisites

For this tutorial, you need to have Hydrosphere Platform deployed and Hydrosphere CLI (hs) along with Python SDK (hydrosdk) installed on your local machine. If you don't have them yet, please follow these guides first:

For this tutorial, you can use a local cluster. To ensure that, run hs cluster in your terminal. This command will show the name and server address of a cluster you’re currently using. If it shows that you're not using a local cluster, you can configure one with the following commands:
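
hs cluster add --name local --server http://localhost
hs cluster use local

These are the same commands used in the Getting Started tutorial.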

Data preparation

Model training always requires some amount of initial preparation, most of which is data preparation. The Adult Dataset consists of 14 descriptors, 5 of which are numerical and 9 categorical, including the class column.

Categorical features are usually presented as strings. This is not an appropriate data type for sending into a model, so we need to transform it first. We also remove rows that contain question marks in some samples. Once the preprocessing is complete, you can delete the DataFrame (df):
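
A minimal sketch of this preprocessing, assuming the dataset is available locally as adult.csv and that the target column is named income (both are assumptions):

import pandas as pd
from sklearn.preprocessing import LabelEncoder

df = pd.read_csv("adult.csv")

# Drop rows that contain the '?' placeholder used for missing values
df = df[~(df == "?").any(axis=1)]

# Encode categorical (string) columns as integers
for col in df.select_dtypes(include="object").columns:
    df[col] = LabelEncoder().fit_transform(df[col])

X, y = df.drop(columns=["income"]), df["income"]

# The original DataFrame is no longer needed
del df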

Training a model

There are many classifiers that you could potentially use for this step. In this example, we'll apply a Random Forest classifier. After preprocessing, the dataset is split into train and test subsets. The test set will be used to check whether our deployed model can process requests on the cluster. After the training step, we save the model with joblib.dump() into a model/ folder.
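
A sketch of the training step under the same assumptions (hyperparameters are illustrative):

import joblib
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Split the data; the test subset is later used to send requests to the cluster
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

clf = RandomForestClassifier(n_estimators=100)
clf.fit(X_train, y_train)

# Save the trained model into the model/ folder
joblib.dump(clf, "model/model.joblib")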

Deploy a model with SDK

The code in the func_main.py should be as follows:

It's important to make sure that the variables are in the right order after we transform our dictionary for a prediction. So in cols we preserve the column names as a list sorted by the order of their appearance in the DataFrame.

To start working with the model in a cluster, we need to install the necessary libraries used in func_main.py. Create a requirements.txt in the folder with your model and add the following libraries to it:

After this, your model directory with all necessary dependencies should look as follows:

Now we are ready to upload our model to the cluster.

Use X.dtypes to check what types of data you have for each column. You can use int64 fields for all variables, including income, which is our dependent variable; we can name it 'y' in the signature for further prediction.

Besides, you can specify the type of profiling for each variable using ProfilingType, so that Hydrosphere knows what each variable is about and can analyze it accordingly. For this purpose, we can create a dictionary with variables as keys and profiling types as values. Otherwise, you can describe them one by one as a parameter in the input.

Finally, we can complete our signature with the .build() method.
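
A rough sketch of building such a signature with hydrosdk; the SignatureBuilder import path, the method signatures, and the signature name "predict" are assumptions and may differ between SDK versions:

from hydrosdk.contract import SignatureBuilder, ProfilingType  # import path is an assumption

# Map each column to how Hydrosphere should profile it (one entry per column; values are examples)
profiling = {"age": ProfilingType.NUMERICAL, "workclass": ProfilingType.CATEGORICAL}

signature_builder = SignatureBuilder("predict")

# 'cols' is the ordered column list described above; every field is an int64 scalar
for col in cols:
    signature_builder.with_input(col, "int64", "scalar", profiling.get(col, ProfilingType.CATEGORICAL))

# The dependent variable is exposed as 'y'
signature = signature_builder.with_output("y", "int64", "scalar", ProfilingType.CATEGORICAL).build()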

Next, we need to specify which files will be uploaded to the cluster. We use path to define the root model folder and payload to point out paths to all files that we need to upload.

At this point, we can combine all our efforts into the LocalModel object. LocalModels are models before they get uploaded to the cluster. They contain all the information required to instantiate a ModelVersion in a Hydrosphere cluster. We’ll name this model adult_model.

Now we are ready to upload our model to the cluster. This process consists of several steps:

  1. Once LocalModel is prepared we can apply the upload method to upload it.

  2. Then we can lock any interaction with the model until it has been successfully uploaded.

  3. ModelVersion helps to check whether our model was successfully uploaded to the platform by looking for it.

Predictors provide a predict method which we can use to send our data to the model. We can try to make predictions for our test set, which has been converted to a list of dictionaries beforehand. You can check the results using the name you have used for the output of the Signature and preserve them in any format you prefer. Before making a prediction, don't forget to pause briefly to let all necessary loading finish.

Explore the UI

If you want to interact with your model via Hydrosphere UI, you can go to http://localhost. Here you can find all your models. Click on a model to view information about it: versions, building logs, created applications, model's environments, and other services associated with deployed models.

🎉 You have successfully finished this tutorial! 🎉

Next Steps

Next, you can:

  1. Go to the next tutorial and learn how to create a custom Monitoring Metric and attach it to your deployed model:

  1. Explore the extended part of this tutorial to learn how to use YAML resource definitions to upload a ModelVersion and create an Application.

Deploy a model with CLI and Resource Definitions

Model deployment with a resource definition repeats all the steps done with the SDK, but in one file. A considerable advantage of using a resource definition is that, besides describing your model, it allows you to create an application by simply adding an object after the separation line at the bottom of the contract. Just name your application and provide the name and version of the model you want to tie to it.

To start uploading, run hs apply -f serving.yaml. To monitor your model you can use Hydrosphere UI as was previously shown.

Monitoring Anomalies with a Custom Metric

Estimated Completion Time: 18m.

Overview

In this tutorial, you will learn how to create a custom anomaly detection metric for a specific use case.

By the end of this tutorial you will know how to:

  • Train a monitoring model

  • Deploy a monitoring model with SDK

  • Manage custom metrics with UI

  • Upload a monitoring model with CLI

Prerequisites

For this tutorial, you need to have Hydrosphere Platform deployed and Hydrosphere CLI (hs) along with Python SDK (hydrosdk) installed on your local machine. If you don't have them yet, please follow the Platform Installation, CLI, and SDK guides first.

This tutorial is a sequel to the previous tutorial. Please complete it first to have a prepared dataset and a trained model deployed to the cluster:

Train a Monitoring Model

We start with the steps we used for the common model. First, let's create a directory structure for our monitoring model with an /src folder containing an inference script func_main.py:

To make sure that our monitoring model will see the same data as our prediction model, we will fit it on the training data that we saved earlier for this purpose.

This is what the distribution of our inliers looks like. By choosing the contamination parameter we can adjust the threshold that separates inliers from outliers. Choose it carefully to avoid critical prediction mistakes; alternatively, you can leave it at 'auto'. To create a monitoring metric, we have to deploy that IsolationForest model as a separate model on the Hydrosphere platform. Let's save the trained model for serving.

Deploy a Monitoring Model with SDK

First, let's create a new directory where we will store our inference script with declared serving function and its definitions. Put the following code inside the src/func_main.py file:

Next, we need to install the necessary libraries. Create a requirements.txt and add the following libraries to it:

Just like with common models, we can use SDK to upload our monitoring model and bind it to the trained one. The steps are almost the same, but with some slight differences. First, since we want to predict the anomaly score instead of sample class, we need to change the type of output field from 'int64' to 'float64'.

Secondly, we need to apply a couple of new methods to create a metric. MetricSpec is responsible for creating a metric for a specific model, with specific MetricSpecConfig.

Managing Custom Metrics with UI

Go to the UI to observe and manage all your models. Here you will find 3 models on the left panel:

  • adult_model - the model we trained for prediction in the previous tutorial

  • adult_monitoring_model - our monitoring model

  • adult_model_metric - a model that was created by Automatic Outlier Detection

During the prediction, you will get anomaly scores for each sample in the form of a chart with two lines. The curved line shows the scores, while the horizontal dotted one is our threshold. When the curve intersects the threshold, it may be a sign of a potential anomaly. However, this is not always the case, since many factors can affect this, so be careful about your final interpretation.

Uploading a Monitoring model with CLI

Just like in the case with all other types of models, we can define and upload a monitoring model using a resource definition. We have to pack our model with a model definition, like in the previous tutorial.

Inputs of this model are the inputs of the target monitored model plus the outputs of that model. We will use the value field as an output for the monitoring model. The final directory structure should look like this:

From that folder, upload the model to the cluster:

Now we have to attach the deployed Monitoring model as a custom metric. Let's create a monitoring metric for our pre-deployed classification model in the UI:

  1. From the Models section, select the target model you would like to deploy and select the desired model version.

  2. Open the Monitoring tab.

  3. At the bottom of the page click the Configure Metric button.

  4. From the opened window click the Add Metric button.

    1. Specify the name of the metric.

    2. Choose the monitoring model.

    3. Choose the version of the monitoring model.

    4. Select a comparison operator Greater. This means that if you have a metric value greater than a specified threshold, an alarm should be fired.

    5. Set the threshold value. In this case, it should be equal to the value of monitoring_model.threshold_.

    6. Click the Add Metric button.

That's it. Now you have a monitored income classifier deployed on the Hydrosphere platform.

To create an A/B deployment we need to create an Application with a single execution stage consisting of two model variants. These model variants are our Model A and Model B correspondingly.

As mentioned before, we will use the logistic regression model sklearn.LogisticRegression. For data generation, we will use the sklearn.datasets.make_regression() method.
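A minimal sketch of this setup is shown below; the parameter values are arbitrary, and the regression target is binarized so it can be used for classification:

from sklearn.datasets import make_regression
from sklearn.linear_model import LogisticRegression

# generate synthetic data and turn the continuous target into two classes
X, y = make_regression(n_samples=1000, n_features=5, noise=0.3, random_state=42)
y = (y > y.mean()).astype(int)

model = LogisticRegression().fit(X, y)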

Hydrosphere communicates with the model using TensorProto messages. If you want to perform a transformation or inference on the received TensorProto message, you have to retrieve its contents, perform the transformation, and pack the result back into a TensorProto message. The pre-built Python runtime automatically converts TensorProto messages to Numpy arrays, so the end user doesn't need to interact with TensorProto messages directly.
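For example, a func_main.py for the pre-built Python runtime can work with plain Numpy values; the input name x and the output name y below are assumptions and must match your model's signature:

import numpy as np

def infer(x):
    # x already arrives as a Numpy value, no TensorProto handling is required
    return {"y": np.asarray(x).sum()}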

To see your uploaded model, open http://localhost/models.

Once you have opened your model in the UI, you can create an application for it. Basically, an application represents an endpoint to your model, so you can invoke it from anywhere. To learn more about advanced features, go to the Applications page.

Creating an Application from the uploaded model

Open http://localhost/applications and press the Add New Application button. In the opened window select the logistic_regression model, name your application logistic_regression and click the "Add Application" button.

Open the http://localhost/models page to see that there are now two versions of the logistic_regression model.

Upgrading an application stage to a newer version

This tutorial is relevant only for Kubernetes installations of Hydrosphere. Please refer to How to Install Hydrosphere on Kubernetes cluster.

Train and upload an example model

Create a Deployment Configuration

Create an Application from the uploaded model version with the previously created deployment configuration


In this section, we describe the resources required to create and upload an example model used in further sections. If you have no prior experience with uploading models to the Hydrosphere platform, we suggest that you visit the Getting Started Tutorial first.

serving.yaml is a resource definition that describes how the model should be built and uploaded to the Hydrosphere platform.

Next, we are going to create and upload an instance of Deployment Configuration to the Hydrosphere platform.

Deployment Configurations describe with which Kubernetes settings Hydrosphere should deploy servables. You can specify Pod Affinity and Tolerations, the number of desired pods in a deployment, ResourceRequirements and Environment Variables for the model container, and HorizontalPodAutoScaler settings.

You can create and upload a Deployment Configuration to Hydrosphere via the CLI with a YAML resource definition or via the Python SDK.

In this tutorial, you will learn how to train and deploy a model for a classification task based on the Adult Dataset. The main steps of this process are data preparation, training a model, uploading the model to the cluster, and making a prediction on test samples.

The easiest way to upload a model to your cluster is by using the Hydrosphere SDK. The SDK allows Python developers to configure and manage the model lifecycle on the Hydrosphere platform. Before uploading a model, you need to connect to your cluster:

Next, we need to create an inference script to be uploaded to the Hydrosphere platform. This script will be executed each time you are instantiating a model servable. Let's name our function file func_main.py and store it in the src folder inside the directory where your model is stored. Your directory structure should look like this:

Hydrosphere Serving has a strictly typed inference engine, so before uploading our model we need to specify its signature with SignatureBuilder. A signature contains information about which method inside func_main.py should be called, as well as the shapes and types of its inputs and outputs.

Additionally, we need to specify the environment in which our model will run. Such environments are called Runtimes. In this tutorial, we will use the default Python 3.7 runtime. This runtime uses the src/func_main.py script as an entry point, which is the reason we organized our files the way we did.

One more parameter that you can define is a path to the training data of your model. It is required if you want to utilize additional services of Hydrosphere (for example, Automatic Outlier Detection).

To deploy a model you should create an Application - a linear pipeline of ModelVersions with monitoring and other benefits. Applications provide Predictor objects, which should be used for data inference purposes.

You might notice that after some time an additional model appears with the metric postfix at the end of its name. This is your automatically created monitoring model for outlier detection. Learn more about the Automatic Outlier Detection feature in the documentation.

Another way to upload your model is to apply a resource definition. This process repeats all the previous steps like data preparation and training. The difference is that instead of the SDK, we use the CLI to apply a resource definition.

A resource definition is a file that defines the inputs and outputs of a model, a signature function, and some other metadata required for serving. Go to the root directory of the model and create a serving.yaml file. You should get the following file structure:

Let's take the problem described in the previous tutorial as a use case and the census income dataset as a data source. We will monitor a model that classifies whether the income of a given person exceeds $50,000 per year.

As a monitoring metric, we will use IsolationForest. You can learn how it works in its documentation.

Anomaly scores are obtained through traffic shadowing inside Hydrosphere's engine after a Servable is created, so you don't need to perform any additional manipulations.


Click on the trained model and then on Monitoring. On the monitoring dashboard you now have two external metrics: the first one is auto_od_metric, which was automatically generated by Automatic Outlier Detection, and the new one is custom_metric, which we have just created. You can also change settings for existing metrics and configure new ones in the Configure Metrics section.

numpy~=1.18
scipy==1.4.1
scikit-learn~=0.23
serving.yaml
kind: Model
name: my-model
runtime: hydrosphere/serving-runtime-python-3.7:2.3.2
install-command: pip install -r requirements.txt
payload:
  - src/
  - requirements.txt
  - model.joblib
contract:
  name: infer
  inputs:
    x:
      shape: [30]
      type: double
  outputs:
    y:
      shape: scalar
      type: int64
train.py
import joblib
import pandas as pd
from sklearn.datasets import make_blobs
from sklearn.ensemble import GradientBoostingClassifier

# initialize data
X, y = make_blobs(n_samples=3000, n_features=30)

# create a model
model = GradientBoostingClassifier(n_estimators=200)
model.fit(X, y)

# Save training data and model
pd.DataFrame(X).to_csv("training_data.csv", index=False)
joblib.dump(model, "model.joblib")
func_main.py
import joblib
import numpy as np

# Load model once
model = joblib.load("/model/files/model.joblib")


def infer(x):
    # Make a prediction
    y = model.predict(x[np.newaxis])

    # Return the scalar representation of y
    return {"y": np.asscalar(y)}
dep_config_tutorial
├── model.joblib
├── train.py
├── requirements.txt
├── serving.yaml
└── src
    └── func_main.py
hs upload
deployment_configuration.yaml
kind: DeploymentConfiguration
name: my-dep-config
deployment:
  replicaCount: 2
hpa:
  minReplicas: 2
  maxReplicas: 4
  cpuUtilization: 70
container:
  env:
    FOO: bar
hs apply -f deployment_configuration.yaml
from hydrosdk import Cluster, DeploymentConfigurationBuilder

cluster = Cluster("http://localhost")

dep_config_builder = DeploymentConfigurationBuilder("my-dep-config", cluster)
dep_config = dep_config_builder. \
    with_replicas(replica_count=2). \
    with_env({"FOO":"bar"}). \
    with_hpa(max_replicas=4,
             min_replicas=2,
             target_cpu_utilization_percentage=70).build()
application.yaml
kind: Application
name: my-app-with-config
pipeline:
  - - model: my-model:1
      weight: 100
      deploymentConfiguration: my-dep-config
hs apply -f application.yaml
from hydrosdk.application import ApplicationBuilder, ExecutionStageBuilder
from hydrosdk import ModelVersion, Cluster, DeploymentConfiguration

cluster = Cluster('http://localhost')
my_model = ModelVersion.find(cluster, "my-model", 1)
my_config = DeploymentConfiguration.find(cluster, "my-dep-config")

stage = ExecutionStageBuilder().with_model_variant(model_version=my_model,
                                                   weight=100,
                                                   deployment_configuration=my_config).build()

app = ApplicationBuilder(cluster, "my-app-with-config").with_stage(stage).build()
NAME                        REFERENCE                                            TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
my-model-1-tumbling-star    CrossVersionObjectReference/my-model-1-tumbling-star 20%/70%    2         4         2          1d
MY_MODEL_1_TUMBLING_STAR_SERVICE_PORT_GRPC=9091
...
FOO=bar
hs cluster add --name local --server http://localhost
hs cluster use local
import pandas as pd
import numpy as np
from sklearn.preprocessing import LabelEncoder  

df = pd.read_csv('adult.csv', sep = ',').replace({'?':np.nan}).dropna()

categorical_encoder = LabelEncoder()
categorical_features = ["workclass", "education", "marital-status", 
                        "occupation", "relationship", "race", "gender", 
                        "capital-gain", "capital-loss", "native-country", 'income']

numerical_features = ['age', 'fnlwgt', 'educational-num', 
                      'capital-gain', 'capital-loss', 'hours-per-week']

for column in categorical_features:
    df[column] = categorical_encoder.fit_transform(df[column])

X, y = df.drop('income', axis = 1), df['income']

del df
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import joblib

random_seed = 42  # fixed seed to make the split and the model reproducible

train_X, test_X, train_y, test_y = train_test_split(X, y.astype(int),
                                                    stratify=y,
                                                    test_size=0.2,
                                                    random_state=random_seed)
clf = RandomForestClassifier(n_estimators=20,
                             max_depth=10,
                             n_jobs=5,
                             random_state=random_seed).fit(train_X, train_y)

joblib.dump(clf, 'model/model.joblib')
from hydrosdk.contract import SignatureBuilder, ModelContract
from hydrosdk.cluster import Cluster
from grpc import ssl_channel_credentials

cluster = Cluster("http-cluster-address",
                  grpc_address="grpc-cluster-address", ssl=True,
                  grpc_credentials=ssl_channel_credentials())
.
└── model
    └── model.joblib
    └── src
        └── func_main.py
func_main.py
import pandas as pd
from joblib import load


clf = load('/model/files/model.joblib')

cols = ['age', 'workclass', 'fnlwgt',
 'education', 'educational-num', 'marital-status',
 'occupation', 'relationship', 'race', 'gender',
 'capital-gain', 'capital-loss', 'hours-per-week',
 'native-country']

def predict(**kwargs):
    X = pd.DataFrame.from_dict({'input': kwargs}, 
                               orient='index', columns = cols)
    predicted = clf.predict(X)

    return {"y": predicted[0]}
pandas==1.0.5
scikit-learn==0.23.2
joblib==0.16.0
.
└── model
    └── model.joblib
    └── requirements.txt
    └── src
        └── func_main.py
from hydrosdk.contract import SignatureBuilder, ModelContract, ProfilingType as PT

signature = SignatureBuilder('predict') 

col_types = {
  **dict.fromkeys(numerical_features, PT.NUMERICAL), 
  **dict.fromkeys(categorical_features, PT.CATEGORICAL)}

for i in X.columns:
    signature.with_input(i, 'int64', 'scalar', col_types[i])

signature = signature.with_output('y', 'int64', 'scalar', PT.NUMERICAL).build()
from hydrosdk.modelversion import LocalModel
from hydrosdk.image import DockerImage

path = "model/"
payload = ['src/func_main.py', 'requirements.txt', 'model.joblib']
contract = ModelContract(predict=signature)

local_model = LocalModel(name="adult_model", 
                         install_command = 'pip install -r requirements.txt',
                         contract=contract, payload=payload,
                         runtime=DockerImage("hydrosphere/serving-runtime-python-3.7", "2.3.2", None),
                         path=path, training_data = 'data/train.csv')
from hydrosdk.modelversion import ModelVersion

uploaded_model = local_model.upload(cluster)
uploaded_model.lock_till_released()
uploaded_model.upload_training_data()

# Check that model was uploaded successfully
adult_model = ModelVersion.find(cluster, name="adult_model", 
                               version=uploaded_model.version)
from hydrosdk.application import ExecutionStageBuilder, Application, ApplicationBuilder

stage = ExecutionStageBuilder().with_model_variant(adult_model).build()
app = ApplicationBuilder(cluster, "adult-app").with_stage(stage).build()

predictor = app.predictor()
results = []
for x in test_X.to_dict('records'):
    result = predictor.predict(x)
    results.append(result['y'])
print(results[:10])
.
└── model
    └── model.joblib
    └── serving.yaml
    └── requirements.txt
    └── src
        └── func_main.py
kind: Model
name: "adult_model"
payload:
  - "model/src/"
  - "model/requirements.txt"
  - "model/classification_model.joblib"
runtime: "hydrosphere/serving-runtime-python-3.6:0.1.2-rc0"
install-command: "pip install -r requirements.txt"
training-data: data/profile.csv
contract:
  name: "predict"
  inputs:
    age:
      shape: scalar
      type: int64
      profile: numerical
    workclass:
      shape: scalar
      type: int64
      profile: categorical
    fnlwgt:
      shape: scalar
      type: int64
      profile: numerical
    education:
      shape: scalar
      type: int64
      profile: categorical
    educational-num:
      shape: scalar
      type: int64
      profile: numerical
    marital_status:
      shape: scalar
      type: int64
      profile: categorical
    occupation:
      shape: scalar
      type: int64
      profile: categorical
    relationship:
      shape: scalar
      type: int64
      profile: categorical
    race:
      shape: scalar
      type: int64
      profile: categorical
    sex:
      shape: scalar
      type: int64
      profile: categorical
    capital_gain:
      shape: scalar
      type: int64
      profile: numerical
    capital_loss:
      shape: scalar
      type: int64
      profile: numerical
    hours_per_week:
      shape: scalar
      type: int64
      profile: numerical
    country:
      shape: scalar
      type: int64
      profile: categorical
  outputs:
    class:
      shape: scalar
      type: int64
      profile: numerical
---
kind: Application
name: adult_application
singular:
  model: adult_model:1
mkdir -p monitoring_model/src
cd monitoring_model
touch src/func_main.py
import joblib
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import IsolationForest

X_train = pd.read_csv('data/train.csv', index_col=0)

monitoring_model = IsolationForest(contamination=0.04)

train_pred = monitoring_model.fit_predict(X_train) 

train_scores = monitoring_model.decision_function(X_train)

plt.hist(
    train_scores,
    bins=30, 
    alpha=0.5,
    density=True, 
    label="Train data outlier scores"
)

plt.vlines(monitoring_model.threshold_, 0, 1.9, label = "Threshold for marking outliers")
plt.gcf().set_size_inches(10, 5)
plt.legend()

joblib.dump(monitoring_model, "monitoring_model/monitoring_model.joblib")
func_main.py
import numpy as np
from joblib import load

monitoring_model = load('/model/files/monitoring_model.joblib')

features = ['age', 'workclass', 'fnlwgt',
            'education', 'educational-num', 'marital-status',
            'occupation', 'relationship', 'race', 'gender',
            'capital-gain', 'capital-loss', 'hours-per-week',
            'native-country']

def predict(**kwargs):
    x = np.array([kwargs[feature] for feature in features]).reshape(1, len(features))
    predicted = monitoring_model.decision_function(x)

    return {"value": predicted.item()}
joblib==0.13.2
numpy==1.16.2
scikit-learn==0.23.1
from hydrosdk.monitoring import MetricSpec, MetricSpecConfig, ThresholdCmpOp

path_mon = "monitoring_model/"
payload_mon = ['src/func_main.py', 
               'monitoring_model.joblib', 'requirements.txt']

monitoring_signature = SignatureBuilder('predict') 
for i in X_train.columns:
    monitoring_signature.with_input(i, 'int64', 'scalar')
monitor_signature = monitoring_signature.with_output('value', 'float64', 'scalar').build()

monitor_contract = ModelContract(predict=monitor_signature)

monitoring_model_local = LocalModel(name="adult_monitoring_model", 
                              install_command = 'pip install -r requirements.txt',
                              contract=monitor_contract,
                              runtime=DockerImage("hydrosphere/serving-runtime-python-3.7", "2.3.2", None),
                              payload=payload_mon,
                              path=path_mon)
monitoring_upload = monitoring_model_local.upload(cluster)
monitoring_upload.lock_till_released()

metric_config = MetricSpecConfig(monitoring_upload.id, monitoring_model.threshold_, ThresholdCmpOp.LESS)
metric_spec = MetricSpec.create(cluster, "custom_metric", adult_model.id, metric_config)
kind: Model
name: "adult_monitoring_model"
payload:
  - "src/"
  - "requirements.txt"
  - "monitoring_model.joblib"
runtime: "hydrosphere/serving-runtime-python-3.7:2.3.2"
install-command: "pip install -r requirements.txt"
contract:
  name: "predict"
  inputs:
    age:
      shape: scalar
      type: int64
    workclass:
      shape: scalar
      type: int64
    education:
      shape: scalar
      type: int64
    marital_status:
      shape: scalar
      type: int64
    occupation:
      shape: scalar
      type: int64
    relationship:
      shape: scalar
      type: int64
    race:
      shape: scalar
      type: int64
    sex:
      shape: scalar
      type: int64
    capital_gain:
      shape: scalar
      type: int64
    capital_loss:
      shape: scalar
      type: int64
    hours_per_week:
      shape: scalar
      type: int64
    country:
      shape: scalar
      type: int64
    classes:
      shape: scalar
      type: int64
  outputs:
    value:
      shape: scalar
      type: float64
.
├── monitoring_model.joblib
├── requirements.txt
├── serving.yaml
└── src
    └── func_main.py
hs apply -f serving.yaml

Use private pip repositories

To use a private pip repository, add a customized pip.conf file pointing to your custom PyPI repository.

For example, your custom pip.conf file can look like this:

[global]
timeout = 60
index-url = http://pypi.python.org/simple/

If you need to specify a certificate to use during pip install, provide the path to it in the pip.conf file, e.g.:

[global]
timeout = 60
index-url = http://pypi.python.org/simple/
cert = /model/files/cert.pem

You can tell pip to use this pip.conf file via the install-command field inside serving.yaml:

kind: Model
name: linear_regression
runtime: "hydrosphere/serving-runtime-python-3.7:$released_version$"
install-command: "PIP_CONFIG_FILE=pip.conf pip install -r requirements.txt"
payload:
  - "src/"
  - "requirements.txt"
  - "pip.conf"  # location of your pip.conf
  - "cert.pem"  # location of your certificate. It'll be available under /model/files/cert.pem
  - "model.h5"
contract:
  name: infer
  inputs:
    x:
      shape: [-1, 2]
      type: double
  outputs:
    y:
      shape: [-1]
      type: double

How-To

This section offers guides that address technical aspects of working with the Hydrosphere platform.

Invoke applications

You can send inference requests to applications using any of the methods described below.

Hydrosphere UI

To send a sample request using Hydrosphere UI, open the desired application and press the Test button at the upper right corner. Hydrosphere will generate dummy inputs based on your model's contract and send an HTTP request to the model's endpoint.

HTTP Inference

POST /gateway/application/<application_name>

To send an HTTP request, you should send a POST request to the /gateway/application/<applicationName> endpoint with the JSON body containing your request data, composed with respect to the model's contract.

Path Parameters

  • application_name (string): Name of the application

Request Body

  • object: Request data, composed with respect to the model's contract.
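For example, with the Python requests library (a sketch; the host, application name, and input field are assumptions and must match your own application's contract):

import requests

data = {"x": [[1.0, 1.0]]}  # request body composed with respect to the contract
response = requests.post(
    "http://localhost/gateway/application/my-application", json=data)
print(response.json())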

gRPC

To send a gRPC request you need to create a specific client.

import grpc 
import hydro_serving_grpc as hs  # pip install hydro-serving-grpc

# connect to your Hydrosphere instance
channel = grpc.insecure_channel("<host>")
stub = hs.PredictionServiceStub(channel)

# 1. define a model, that you'll use
model_spec = hs.ModelSpec(name="model")

# 2. define tensor_shape for Tensor instance
tensor_shape = hs.TensorShapeProto(
    dim=[hs.TensorShapeProto.Dim(size=-1), hs.TensorShapeProto.Dim(size=2)])

# 3. define tensor with needed data
tensor = hs.TensorProto(dtype=hs.DT_DOUBLE, tensor_shape=tensor_shape, double_val=[1,1,1,1])

# 4. create PredictRequest instance
request = hs.PredictRequest(model_spec=model_spec, inputs={"x": tensor})

# call Predict method
result = stub.Predict(request)
import com.google.protobuf.Int64Value;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.hydrosphere.serving.tensorflow.DataType;
import io.hydrosphere.serving.tensorflow.TensorProto;
import io.hydrosphere.serving.tensorflow.TensorShapeProto;
import io.hydrosphere.serving.tensorflow.api.Model;
import io.hydrosphere.serving.tensorflow.api.Predict;
import io.hydrosphere.serving.tensorflow.api.PredictionServiceGrpc;

import java.util.Random;

public class HydrosphereClient {

    private final String modelName;         // Actual model name, registered within Hydrosphere platform
    private final Int64Value modelVersion;  // Model version of the registered model within Hydrosphere platform
    private final ManagedChannel channel;
    private final PredictionServiceGrpc.PredictionServiceBlockingStub blockingStub;

    public HydrosphereClient(String target, String modelName, long modelVersion) {
        this(ManagedChannelBuilder.forTarget(target).build(), modelName, modelVersion);
    }

    HydrosphereClient(ManagedChannel channel, String modelName, long modelVersion) {
        this.channel = channel;
        this.modelName = modelName;
        this.modelVersion = Int64Value.newBuilder().setValue(modelVersion).build();
        this.blockingStub = PredictionServiceGrpc.newBlockingStub(this.channel);
    }

    private Model.ModelSpec getModelSpec() {
        /*
        Helper method to generate ModelSpec.
         */
        return Model.ModelSpec.newBuilder()
                .setName(this.modelName)
                .setVersion(this.modelVersion)
                .build();
    }

    private TensorProto generateDoubleTensorProto() {
        /*
        Helper method generating random TensorProto object for double values.
        */
        return TensorProto.newBuilder()
                .addDoubleVal(new Random().nextDouble())
                .setDtype(DataType.DT_DOUBLE)
                .setTensorShape(TensorShapeProto.newBuilder().build())  // Empty TensorShape indicates scalar shape
                .build();
    }

    public Predict.PredictRequest generatePredictRequest() {
        /*
        PredictRequest is used to define the data passed to the model for inference.
        */
        return Predict.PredictRequest.newBuilder()
                .putInputs("in", this.generateDoubleTensorProto())
                .setModelSpec(this.getModelSpec())
                .build();
    }


    public Predict.PredictResponse predict(Predict.PredictRequest request) {
        /*
        The actual use of RPC method Predict of the PredictionService to invoke prediction.
        */
        return this.blockingStub.predict(request);
    }

    public static void main(String[] args) throws Exception {
        HydrosphereClient client = new HydrosphereClient("<host>", "example", 2);
        Predict.PredictRequest request = client.generatePredictRequest();
        Predict.PredictResponse response = client.predict(request);
        System.out.println(response);
    }
}

Python SDK

import hydrosdk as hs

hs_cluster = hs.Cluster(http_address='{HTTP_CLUSTER_ADDRESS}',
                         grpc_address='{GRPC_CLUSTER_ADDRESS}',)

app = hs.Application.find(hs_cluster, "{APP_NAME}")

predictor = app.predictor()

data  = ...  # your data
predictor.predict(data)

Develop runtimes

Sometimes our runtime images are not flexible enough. In that case, you might want to implement one yourself.

The key things you need to know to write your own runtime are:

  • How to implement a predefined gRPC service for a dedicated language

  • How our contracts' protobufs work to describe entry points, such as inputs and outputs

  • How to create your own Docker image and publish it to an open registry

Generate GRPC code

$ git clone https://github.com/Hydrospheredata/hydro-serving-protos
$ mkdir runtime

To generate the gRPC code we need to install additional packages:

$ pip install grpcio-tools googleapis-common-protos

Our custom runtime will require contracts and tf protobuf messages. Let's generate them:

$ python -m grpc_tools.protoc --proto_path=./hydro-serving-protos/src/ --python_out=./runtime/ --grpc_python_out=./runtime/ $(find ./hydro-serving-protos/src/hydro_serving_grpc/contract/ -type f -name '*.proto')
$ python -m grpc_tools.protoc --proto_path=./hydro-serving-protos/src/ --python_out=./runtime/ --grpc_python_out=./runtime/ $(find ./hydro-serving-protos/src/hydro_serving_grpc/tf/ -type f -name '*.proto')
$ cd runtime
$ find ./hydro_serving_grpc -type d -exec touch {}/__init__.py \;

The structure of the runtime should now be as follows:

runtime
└── hydro_serving_grpc
    ├── __init__.py
    ├── contract
    │   ├── __init__.py
    │   ├── model_contract_pb2.py
    │   ├── model_contract_pb2_grpc.py
    │   ├── model_field_pb2.py
    │   ├── model_field_pb2_grpc.py
    │   ├── model_signature_pb2.py
    │   └── model_signature_pb2_grpc.py
    └── tf
        ├── __init__.py
        ├── api
        │   ├── __init__.py
        │   ├── model_pb2.py
        │   ├── model_pb2_grpc.py
        │   ├── predict_pb2.py
        │   ├── predict_pb2_grpc.py
        │   ├── prediction_service_pb2.py
        │   └── prediction_service_pb2_grpc.py
        ├── tensor_pb2.py
        ├── tensor_pb2_grpc.py
        ├── tensor_shape_pb2.py
        ├── tensor_shape_pb2_grpc.py
        ├── types_pb2.py
        └── types_pb2_grpc.py

Implement Service

Now that we have everything set up, let's implement a runtime. Create a runtime.py file and put in the following code:

from hydro_serving_grpc.tf.api.predict_pb2 import PredictRequest, PredictResponse
from hydro_serving_grpc.tf.api.prediction_service_pb2_grpc import PredictionServiceServicer, add_PredictionServiceServicer_to_server
from hydro_serving_grpc.tf.types_pb2 import *
from hydro_serving_grpc.tf.tensor_pb2 import TensorProto
from hydro_serving_grpc.contract.model_contract_pb2 import ModelContract
from concurrent import futures

import os
import time
import grpc
import logging
import importlib


class RuntimeService(PredictionServiceServicer):
    def __init__(self, model_path, contract):
        self.contract = contract
        self.model_path = model_path
        self.logger = logging.getLogger(self.__class__.__name__)

    def Predict(self, request, context):
        self.logger.info(f"Received inference request: {request}")

        module = importlib.import_module("func_main")
        executable = getattr(module, self.contract.predict.signature_name)
        result = executable(**request.inputs)

        if not isinstance(result, PredictResponse):
            self.logger.warning(f"Type of a result ({result}) is not `PredictResponse`")
            context.set_code(grpc.StatusCode.OUT_OF_RANGE)
            context.set_details(f"Type of a result ({result}) is not `PredictResponse`")
            return PredictResponse()
        return result


class RuntimeManager:
    def __init__(self, model_path, port):
        self.logger = logging.getLogger(self.__class__.__name__)
        self.port = port
        self.model_path = model_path
        self.server = None

        with open(os.path.join(model_path, 'contract.protobin'), 'rb') as file:
            contract = ModelContract()
            contract.ParseFromString(file.read())
        self.servicer = RuntimeService(os.path.join(self.model_path, 'files'), contract)

    def start(self):
        self.logger.info(f"Starting PythonRuntime at {self.port}")
        self.server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
        add_PredictionServiceServicer_to_server(self.servicer, self.server)
        self.server.add_insecure_port(f'[::]:{self.port}')
        self.server.start()

    def stop(self, code=0):
        self.logger.info(f"Stopping PythonRuntime at {self.port}")
        self.server.stop(code)

Let's quickly review what we have here. RuntimeManager simply manages our service, i.e. starts it, stops it, and holds all necessary data. RuntimeService is a service that actually implements the Predict(PredictRequest) RPC function.

The model will be stored inside the /model directory in the Docker container. The structure of /model is as follows:

model
├── contract.protobin
└── files
    ├── ...
    └── ...

The files directory contains all the files of your model.

To run this service, let's create another file, main.py.

from runtime import RuntimeManager

import os
import time
import logging

logging.basicConfig(level=logging.INFO)

if __name__ == '__main__':
    runtime = RuntimeManager('/model', port=int(os.getenv('APP_PORT', "9090")))
    runtime.start()

    try:
        while True:
            time.sleep(60 * 60 * 24)
    except KeyboardInterrupt:
        runtime.stop()

Publish Runtime

Before we can use the runtime, we have to package it into a container.

To add requirements for installing dependencies, create a requirements.txt file and put inside:

grpcio==1.12.1 
googleapis-common-protos==1.5.3

Create a Dockerfile to build our image:

FROM python:3.6.5 

ADD . /app
RUN pip install -r /app/requirements.txt

ENV APP_PORT=9090

VOLUME /model 
WORKDIR /app

CMD ["python", "main.py"]

APP_PORT is an environment variable used by Hydrosphere. When Hydrosphere invokes the Predict method, it does so via the defined port.

The structure of the runtime folder should now look like this:

runtime
├── Dockerfile
├── hydro_serving_grpc
│   ├── __init__.py
│   ├── contract
│   │   ├── __init__.py
│   │   ├── model_contract_pb2.py
│   │   ├── model_contract_pb2_grpc.py
│   │   ├── model_field_pb2.py
│   │   ├── model_field_pb2_grpc.py
│   │   ├── model_signature_pb2.py
│   │   └── model_signature_pb2_grpc.py
│   └── tf
│       ├── __init__.py
│       ├── api
│       │   ├── __init__.py
│       │   ├── model_pb2.py
│       │   ├── model_pb2_grpc.py
│       │   ├── predict_pb2.py
│       │   ├── predict_pb2_grpc.py
│       │   ├── prediction_service_pb2.py
│       │   └── prediction_service_pb2_grpc.py
│       ├── tensor_pb2.py
│       ├── tensor_pb2_grpc.py
│       ├── tensor_shape_pb2.py
│       ├── tensor_shape_pb2_grpc.py
│       ├── types_pb2.py
│       └── types_pb2_grpc.py
├── main.py
├── requirements.txt
└── runtime.py

Build and push the Docker image:

$ docker build -t {username}/python-runtime-example .
$ docker push {username}/python-runtime-example

Remember that the registry has to be accessible to the Hydrosphere platform so it can pull the runtime whenever it has to run a model with this runtime.

Write definitions

A resource definition describes an entity that you want to create on the Hydrosphere platform: a model, an application, or a deployment configuration. Each definition is represented by a .yaml file.

Base definition

Every definition must include the following fields:

  • kind: defines the type of a resource

  • name: defines the name of a resource

The only valid options for kind are:

  • Model

  • Application

  • DeploymentConfiguration

kind: Model

A model definition must contain the following fields:

  • contract: an object defining the inputs and outputs of a model.

  • runtime: a string defining the runtime Docker image in which the model will run.

A model definition can contain the following fields:

  • payload: a list of files that should be added to the container.

  • install-command: a string defining a command that should be executed during the container build.

  • training-data: a string defining a path to the file that will be uploaded to Hydrosphere and used as a training data reference. It can be either a local file or a URI to an S3 object. At the moment we only support .csv files.

  • metadata: an object defining additional user metadata that will be displayed on the Hydrosphere UI.

The example below shows how a model can be defined on the top level.

serving.yaml
kind: "Model"
name: "sample_model"
training-data: "s3://bucket/train.csv" | "/temp/file.csv"
runtime: "hydrosphere/serving-runtime-python-3.6:$released_version$"
install-command: "sudo apt install jq && pip install -r requirements.txt" 
payload: 
  - "./requirements.txt"
contract:
  ...
metadata:
  ...

Contract object

contract object must contain the following fields:

  • inputs: an object, defining all inputs of a model

  • outputs: an object, defining all outputs of a model

contract object can contain the following fields:

  • name: a string defining the signature of the model that should be used to process requests

Field object

field object must contain the following fields:

  • shape: either "scalar" or a list of integers, defining the shape of your data. If a shape is defined as a list of integers, it can have -1 value at the very beginning of the list, indicating that this field has an arbitrary number of "entities". -1 cannot be put anywhere aside from the beginning of the list.

  • type: a string defining the type of data.

field object can contain the following fields:

  • profile: a string, defining the profile type of your data.

The only valid options for type are:

  • bool — Boolean

  • string — String in bytes

  • half — 16-bit half-precision floating-point

  • float16 — 16-bit half-precision floating-point

  • float32 — 32-bit single-precision floating-point

  • double — 64-bit double-precision floating-point

  • float64 — 64-bit double-precision floating-point

  • uint8 — 8-bit unsigned integer

  • uint16 — 16-bit unsigned integer

  • uint32 — 32-bit unsigned integer

  • uint64 — 64-bit unsigned integer

  • int8 — 8-bit signed integer

  • int16 — 16-bit signed integer

  • int32 — 32-bit signed integer

  • int64 — 64-bit signed integer

  • qint8 — Quantized 8-bit signed integer

  • quint8 — Quantized 8-bit unsigned integer

  • qint16 — Quantized 16-bit signed integer

  • quint16 — Quantized 16-bit unsigned integer

  • complex64 — 64-bit single-precision complex

  • complex128 — 128-bit double-precision complex

The only valid options for profile are:

  • text — monitoring such fields will be done with text-oriented algorithms.

  • image — monitoring such fields will be done with image-oriented algorithms.

  • numerical — monitoring such fields will be done with numerical-oriented algorithms.

  • categorical — monitoring such fields will be done with categorical-oriented algorithms.

The example below shows how a contract can be defined on the top level.

name: "infer"
inputs:
  input_field_1:
    shape: [-1, 1]
    type: string
    profile: text
  input_field_2:
    shape: [200, 200]
    type: int32
    profile: categorical
outputs: 
  output_field_1:
    shape: scalar
    type: int32 
    profile: numerical

Metadata object

metadata object can represent any arbitrary information specified by the user. The structure of the object is not strictly defined. The only constraint is that the object must have a key-value structure, where a value can only be of a simple data type (string, number, boolean).

The example below shows, how metadata can be defined.

metadata:
  experiment: "demo"
  environment: "kubernetes"

The example below shows a complete definition of a sample model.

kind: "Model"
name: "sample_model"
training-data: "s3://bucket/train.csv" | "/temp/file.csv"
runtime: "hydrosphere/serving-runtime-python-3.6:$released_version$"
install-command: "sudo apt install jq && pip install -r requirements.txt" 
payload: 
  - "./*"
contract:
  name: "infer"
  inputs:
    input_field_1:
      shape: [-1, 1]
      type: string
      profile: text
    input_field_2:
      shape: [-1, 1]
      type: int32
      profile: numerical
  outputs: 
    output_field_1:
      shape: scalar
      type: int32 
      profile: numerical
metadata:
  experiment: "demo"
  environment: "kubernetes"

kind: Application

The application definition must contain one of the following fields:

  • singular: An object, defining a single-model application;

  • pipeline: A list of objects, defining an application as a pipeline of models.

Singular object

singular object represents an application consisting only of one model. The object must contain the following fields:

  • model: A string, defining a model version. It is expected to be in the form model-name:model-version.

The example below shows how a singular application can be defined.

kind: "Application"
name: "sample_application"
singular:
  model: "sample_model:1"

Pipeline object

pipeline represents a list of stages; each stage is a list of model variants.

stage object must contain the following fields:

  • model: A string defining a model version. It is expected to be in the form model-name:model-version.

stage object can contain the following fields:

  • weight: A number defining the weight of the model. All models' weights in a stage must add up to 100.

The example below shows how a pipeline application can be defined.

kind: Application
name: sample-claims-app
pipeline:
  - - model: "claims-preprocessing:1"
  - - model: "claims-model:1"
      weight: 80
    - model: "claims-model:2"
      weight: 20

In this application, 100% of the traffic will be forwarded to the claims-preprocessing:1 model version and the output will be fed into claims-model. 80% of the traffic will go to the claims-model:1 model version, 20% of the traffic will go to the claims-model:2 model version.

kind: DeploymentConfiguration

The DeploymentConfiguration resource definition can contain the following fields:

  • container: An object defining settings applied on a container level

  • deployment: An object defining settings applied on a deployment level

  • hpa: An object defining HorizontalPodAutoscaler settings

  • pod: An object defining settings applied on a pod level

HPA object

The hpa object must contain:

  • minReplicas : integer, lower limit for the number of replicas to which the autoscaler can scale down.

  • maxReplicas : integer, upper limit for the number of pods that can be set by the autoscaler; cannot be smaller than minReplicas.

  • cpuUtilization : integer from 1 to 100, target average CPU utilization (represented as a percentage of requested CPU) over all the pods; if not specified the default autoscaling policy will be used.

Container object

The container object can contain:

  • env : object with string keys and string values which is used to set environment variables.

Pod object

The pod object can contain:

  • nodeSelector : a map of node labels used to select the nodes on which pods can be scheduled.

  • affinity : Pod Affinity settings.

  • tolerations : Pod Tolerations settings.

Deployment object

The deployment object must contain:

  • replicaCount : integer, number of desired pods. This is a pointer to distinguish between explicit zero and not specified. Defaults to 1.

Example

The example below shows how a deployment configuration can be defined.

kind: DeploymentConfiguration
name: cool-deployment-config
hpa:
  minReplicas: 2
  maxReplicas: 10
  cpuUtilization: 80
deployment:
  replicaCount: 4
container:
  resources:
    limits:
      cpu: 500m
      memory: 4G
    requests:
      cpu: 250m
      memory: 2G
  env:
    foo: bar
pod:
  nodeSelector:
    im: a map
    foo: bar
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: exp1
            operator: Exists
          matchFields:
          - key: fields1
            operator: Exists
      preferredDuringSchedulingIgnoredDuringExecution:
      - preference:
          matchExpressions:
          - key: exp2
            operator: NotIn
            values:
            - aaaa
            - bvzv
            - czxc
          matchFields:
          - key: fields3
            operator: NotIn
            values:
            - aaa
            - cccc
            - zxcc
        weight: 100
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: value
            operator: Exists
          - key: key
            operator: NotIn
            values:
            - a
            - b
        namespaces:
        - namespace1
        topologyKey: top
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              key: a
            matchExpressions:
            - key: key1
              operator: In
              values:
              - a
              - b
            - key: value2
              operator: NotIn
              values:
              - b
          namespaces:
          - namespace2
          topologyKey: topo_valur
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: value
            operator: Exists
          - key: key2
            operator: NotIn
            values:
            - a
            - b
          - key: key3
            operator: DoesNotExist
        namespaces:
        - namespace1
        topologyKey: top
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              key: a
            matchExpressions:
            - key: key
              operator: In
              values:
              - a
              - b
            - key: key2
              operator: NotIn
              values:
              - b
          namespaces:
          - namespace2
          topologyKey: toptop
  tolerations:
  - effect: PreferNoSchedule
    key: equalToleration
    tolerationSeconds: 30
    operator: Equal
    value: kek
  - key: equalToleration
    operator: Exists
    effect: PreferNoSchedule
    tolerationSeconds: 30

Monitoring External Models

Overview

Monitoring can be used to track the behavior of external models running outside of the Hydrosphere platform. This tutorial describes how to register an external model, trigger analysis over your requests, and retrieve results.

By the end of this tutorial you will know how to:

  • Register a model

  • Upload training data

  • Assign custom metrics

  • Invoke analysis

  • Retrieve metrics

Prerequisites

For this tutorial, you need to have Hydrosphere Platform deployed on your local machine with Sonar component enabled. If you don't have it yet, please follow this guide first:

You also need a running external model, capable of producing predictions. Inputs and outputs of that model will be fed into Hydrosphere for monitoring purposes.

Model registration

First, you have to register an external model. To do that, submit a JSON document, defining your model.

Request document structure

This section describes the structure of the JSON document used to register external models within the platform.

Top-level members

The document must contain the following top-level members, describing the interface of your model:

  • name: the name of the registered model. This name uniquely identifies a collection of model versions, registered within the Hydrosphere platform.

  • contract: the interface of the registered model. This member describes inputs and outputs of the model, as well as other complementary metadata, such as model signature, and data profile for each field.

A document may contain additional top-level members, describing other details of your model.

  • metadata: the metadata of the registered model. The structure of the object is not strictly defined. The only constraint is that the object must have a key-value structure, where a value can only be of a simple data type (string, number, boolean).

  • monitoringConfiguration: monitoring configuration to be used for this model.

This example shows how a model can be defined at the top level:
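A minimal sketch of such a document, assembled from the members described in this section, is shown below as a Python dictionary so that it can be reused in the registration request later; all names and values are illustrative:

model_definition = {
    "name": "external-model-example",
    "metadata": {"author": "ds-team", "environment": "production"},
    "monitoringConfiguration": {"batchSize": 100},
    "contract": {
        "modelName": "external-model-example",
        "predict": {
            "signatureName": "predict",
            "inputs": [{
                "name": "in",
                "dtype": "DT_DOUBLE",
                "profile": "NUMERICAL",
                "shape": {"dim": [{"size": 200, "name": "in"}], "unknownRank": False}
            }],
            "outputs": [{
                "name": "out",
                "dtype": "DT_DOUBLE",
                "profile": "NUMERICAL",
                "shape": {"dim": [], "unknownRank": False}
            }]
        }
    }
}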

MonitoringConfiguration object

monitoringConfiguration object defines a monitoring configuration to be used for the model version. The object must contain the following members:

  • batchSize: size of the batch to be used for aggregations.

An example of a monitoringConfiguration object can be seen in the complete model definition sketch above.

Contract object

The contract object appears in the document to define the interface of the model. The contract object must contain the following members:

  • modelName: the original name of the model. It should be the same as the name of the registered model, defined on the level above;

  • predict: the signature of the model. It defines the inputs and the outputs of the model.

An example of a contract object can be seen in the complete model definition sketch above.

Predict object

predict object describes the signature of the model. The signature object must contain the following members:

  • signatureName: The signature of the model, used to process the request;

  • inputs: A collection of fields, defining the inputs of the model. Each item in the collection describes a single data entry, its type, shape, and profile. A collection must contain at least one item;

  • outputs: A collection of fields, defining the outputs of the model. Each item in the collection describes a single data entry, its type, shape, and profile. A collection must contain at least one item.

An example of a predict object can be seen in the complete model definition sketch above.

Field object

Items in the inputs / outputs collections are collectively called "fields". The field object must contain the following members:

  • name: Name of the field;

  • dtype: Data type of the field.

  • profile: Data profile of the field.

  • shape: Shape of the field.

The only valid options for dtype are:

  • DT_STRING;

  • DT_BOOL;

  • DT_VARIANT;

  • DT_HALF;

  • DT_FLOAT;

  • DT_DOUBLE;

  • DT_INT8;

  • DT_INT16;

  • DT_INT32;

  • DT_INT64;

  • DT_UINT8;

  • DT_UINT16;

  • DT_UINT32;

  • DT_UINT64;

  • DT_QINT8;

  • DT_QINT16;

  • DT_QINT32;

  • DT_QUINT8;

  • DT_QUINT16;

  • DT_COMPLEX64;

  • DT_COMPLEX128;

The only valid options for profile are:

  • NONE

  • NUMERICAL

  • TEXT

  • IMAGE

Example field objects can be seen in the inputs and outputs collections of the complete model definition sketch above.

Shape object

shape object defines the shape of the data that the model is processing. The shape object must contain the following members:

  • dim: A collection of items, describing each dimension. A collection may be empty — in that case, the tensor will be interpreted as a scalar value.

  • unknownRank: Boolean value. Identifies whether the defined shape is of unknown rank.

An example shape object can be seen in the complete model definition sketch above.

Dim object

dim object defines a dimension of the field. The dim object must contain the following members:

  • size: Size of the dimension.

  • name: Name of the dimension.

An example dim object can be seen in the complete model definition sketch above.

Registering external model

A model can be registered by sending a POST request to the /api/v2/externalmodel endpoint. The request must include a model definition as primary data.

The request below shows an example of an external model registration.
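A sketch of such a registration call with the Python requests library; the host is an assumption, and model_definition is a document like the one shown earlier:

import requests

response = requests.post(
    "http://localhost/api/v2/externalmodel", json=model_definition)
registered = response.json()
MODEL_VERSION_ID = registered["id"]  # used later for training data upload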

As a response, the server will return a JSON object with complementary metadata, identifying a registered model version.

Response document structure

The response object from the external model registration request contains the following fields:

  • id: Model version ID, uniquely identifying a registered model version within Hydrosphere platform;

  • model: An object, representing a model collection, registered in Hydrosphere platform;

  • modelVersion: Model version number in the model collection;

  • modelContract: Contract of the model, similar to the one defined in the request section above;

  • metadata: Metadata of the model, similar to the one defined in the request section above;

  • monitoringConfiguration: MonitoringConfiguration of the model, similar to the one defined in the request section above;

  • created: Timestamp, indicating when the model was registered.

Note the id field. It will be referred to as MODEL_VERSION_ID later throughout the article.

Model object

model object represents a collection of model versions, registered in the platform. The response model object contains the following fields:

  • id: ID of the model collection;

  • name: Name of the model collection.

The example below shows a sample server response from an external model registration request.
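A sketch of what such a response might look like, based on the fields described above; the modelContract member is abbreviated and all values are illustrative:

{
  "id": 1,
  "model": {"id": 1, "name": "external-model-example"},
  "modelVersion": 1,
  "modelContract": {"modelName": "external-model-example"},
  "metadata": {"author": "ds-team", "environment": "production"},
  "monitoringConfiguration": {"batchSize": 100},
  "created": "2021-01-01T00:00:00.000Z"
}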

Training data upload

To let Hydrosphere calculate the metrics of your requests, you have to submit the training data. You can do so either with the CLI or via an HTTP endpoint, as described below.

Currently, we support uploading training data as .csv files and utilizing it for NUMERICAL, CATEGORICAL, and TEXT profiles only.

Upload using CLI

Switch to the cluster, suitable for your current flow.

If you don't have a defined cluster yet, create one using the following command.
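For example, for a local installation:

hs cluster add --name local --server http://localhost
hs cluster use local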

Make sure you have a local copy of the training data that you want to submit.

Submit the training data. You must specify two parameters:

  • --model-version: A string indicating the model version to which you want to submit the data. The string should be formatted in the following way <model-name>:<model-version>;

  • --filename: Path to a filename, that you want to submit.

If you already have your training data uploaded to S3, you can specify a path to that object URI using --s3path parameter instead of --filename. The object behind this URI should be available to the Hydrosphere instance.

Depending on the size of your data, you will have to wait for the data to be uploaded. If you don't want to wait, you can use the --async flag.
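A sketch of the submission command is shown below; the hs profile push subcommand name is an assumption, while the flags are the ones listed above:

hs profile push --model-version adult_model:1 --filename data/train.csv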

Upload using an HTTP endpoint

To upload your data using an HTTP endpoint, stream it to the /monitoring/profiles/batch/<MODEL_VERSION_ID> endpoint.

In the code snippets below you can see how data can be uploaded using sample HTTP clients.
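For instance, with the Python requests library (a sketch; the host and the file path are assumptions, and MODEL_VERSION_ID is the id returned at registration):

import requests

MODEL_VERSION_ID = 1  # id of the registered model version

with open("data/train.csv", "rb") as training_data:
    response = requests.post(
        f"http://localhost/monitoring/profiles/batch/{MODEL_VERSION_ID}",
        data=training_data)
print(response.status_code)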

Custom metrics assignment

This step is optional. If you wish to assign a custom monitoring metric to a model, you can do it by:

  • using Hydrosphere UI

  • using HTTP endpoint

Using Hydrosphere UI

To assign a custom metric from the UI, open the target model version, switch to the Monitoring tab, and click the Configure Metric button, as described in the Monitoring Anomalies with a Custom Metric tutorial.

Using HTTP endpoint

To assign metrics using HTTP endpoint, you will have to submit a JSON document, defining a monitoring specification.

Top-level members

The document must contain the following top-level members.

  • name: The name of the monitoring metric;

  • modelVersionId: Unique identifier of the model to which you want to assign a metric;

  • config: Object, representing a configuration of the metric, which will be applied to the model.

A complete metric definition can be seen in the request example at the end of this section.

Config object

config object defines a configuration of the monitoring metric that will monitor the model. The object must contain the following members:

  • modelVersionId: Unique identifier of the model that will monitor requests;

  • threshold: Threshold value, against which monitoring values will be compared using a comparison operator;

  • thresholdCmpOperator: Object, representing a comparison operator.

The example below shows how the config object can be defined.

ThresholdCmpOperator object

thresholdCmpOperator object defines the kind of comparison operator that will be used when comparing a value produced by the metric against the threshold. The object must contain the following members:

  • kind: Kind of comparison operator.

The only valid options for kind are:

  • Eq;

  • NotEq;

  • Greater;

  • Less;

  • GreaterEq;

  • LessEq.

For example, with kind set to LessEq, a metric check passes only when the produced metric value is less than or equal to the threshold. The example below shows how the thresholdCmpOperator object can be defined.

The request below shows an example of assigning a monitoring metric. By this moment, both the monitoring model and the actual prediction model should be registered/uploaded to the platform.
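For illustration, the same request can be sent with a short Python sketch. This is a minimal example, assuming the Hydrosphere HTTP API is reachable at the placeholder address below and that model version 1 is monitored by model version 2; adjust the values to your setup.

import requests

# Placeholder address of your Hydrosphere instance (assumption for this sketch).
HYDROSPHERE_HTTP_URI = "http://<hydrosphere>"

# Monitoring specification: metric values produced by model version 2 must be
# less than or equal to 0.5 for a request to be considered healthy.
metric_spec = {
    "name": "custom-metric-example",
    "modelVersionId": 1,
    "config": {
        "modelVersionId": 2,
        "threshold": 0.5,
        "thresholdCmpOperator": {"kind": "LessEq"},
    },
}

response = requests.post(HYDROSPHERE_HTTP_URI + "/monitoring/metricspec", json=metric_spec)
print(response.status_code, response.text)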

Analysis invocation

In the code snippets below you can see how analysis can be triggered with sample gRPC clients.

Metrics retrieval

A request must contain the following parameters:

  • limit: how many requests to fetch;

  • offset: which offset to make from the beginning.

An example request is shown below.

Calculated metrics have a dynamic structure, which is dependent on the model interface.
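As a complementary sketch, the same request can be made from Python; the base URL and model version ID below are placeholders, not values defined by this document.

import requests

# Placeholder values (assumptions for this sketch).
HYDROSPHERE_HTTP_URI = "http://<hydrosphere>"
MODEL_VERSION_ID = 1

# Fetch one check document, starting from the beginning of the stored requests.
response = requests.get(
    HYDROSPHERE_HTTP_URI + "/monitoring/checks/all/{}".format(MODEL_VERSION_ID),
    params={"limit": 1, "offset": 0},
)
checks = response.json()  # a list of request documents with calculated checks
print(checks)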

Response object structure

A response object contains the original data submitted for prediction, the model's response, calculated metrics, and other supplementary metadata. Every field produced by Hydrosphere is prefixed with _hs_.

  • _id: ID of the request, generated internally by Hydrosphere;

  • _hs_request_id: ID of the request, specified by user;

  • _hs_model_name: Name of the model that processed a request;

  • _hs_model_incremental_version: Version of the model that processed a request;

  • _hs_model_version_id: ID of the model version, which processed a request;

  • _hs_raw_checks: Raw checks calculated by Hydrosphere based on the training data;

  • _hs_metric_checks: Metrics produced by monitoring models;

  • _hs_latency: Latency, indicating how long it took to process a request;

  • _hs_error: Error message that occurred during request processing;

  • _hs_score: The number of all successful checks divided by the number of all checks;

  • _hs_overall_score: The number of all successful metric values (those not exceeding the specified threshold) divided by the total number of metric values (see the small worked example after this field list);

  • _hs_timestamp: Timestamp in nanoseconds, when the object was generated;

  • _hs_year: Year when the object was generated;

  • _hs_month: Month when the object was generated;

  • _hs_day: Day when the object was generated;

Apart from the fields defined above, each object will have additional fields specific to the particular model version and its interface.

  • _hs_<field_name>_score: The number of all successful checks calculated for this specific field divided by the total number of all checks calculated for this specific field;

  • <field_name>: The value of the field.
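To make the score fields concrete, here is a small worked example with made-up numbers; it only illustrates the ratios described above.

# Hypothetical request: 4 raw field checks were calculated and all 4 passed,
# 1 metric check was calculated and it passed as well.
passed_checks, total_checks = 4, 4
passed_metric_values, total_metric_values = 1, 1

hs_score = passed_checks / total_checks                        # _hs_score -> 1.0
hs_overall_score = passed_metric_values / total_metric_values  # _hs_overall_score -> 1.0
print(hs_score, hs_overall_score)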

Raw checks object

_hs_raw_checks object contains all fields, for which checks have been calculated.

The example below shows how the _hs_raw_checks object can be defined.

Check object

check object declares the check that has been calculated for a particular field. The following members will be present in the object.

  • check: Boolean value indicating whether the check has been passed;

  • description: Description of the check that has been calculated;

  • threshold: Threshold of the check;

  • value: Value of the field;

  • metricSpecId: Metric specification ID. For each check object this value will be set to null.

The example below shows how the check object can be defined.

Metrics object

_hs_metric_checks object contains all fields for which metrics have been calculated.

The example below shows how the _hs_metric_checks object can be defined.

Metric object

metric object declares the metric that has been calculated for a particular field. The following members will be present in the object.

  • check: Boolean value indicating whether the metric check has passed, i.e. the metric has not been fired;

  • description: Name of the metric that has been calculated;

  • threshold: Threshold of the metric;

  • value: Value of the metric;

  • metricSpecId: Metric specification ID.

The example below shows how the metric object can be defined.

The example below shows a fully composed server response.

Reference

This section of the Hydrosphere documentation contains references.

Contribution

Manager, Gateway, and Sonar services are written in Scala while other services are written in Python.

You can explore GitHub issues with the good-first-issue tag to find out where to start.

Consider sharing your experience with the Hydrosphere team

Our team is constantly conducting user interviews to learn about what problems our users have and how they solve them.

These typically involve a 30-minute Zoom call. Your experience is of extreme value to us, so please consider participating by scheduling a time slot to talk with the Hydrosphere team.

Repositories open for contribution

Check the CONTRIBUTING.md file inside the repository you want to contribute to for any additional information.

Umbrella

  • hydro-serving

Main components

  • hydro-serving-manager

  • hydro-serving-gateway

  • hydro-serving-kafka-gateway

  • hydro-serving-ui

Runtimes

  • hydro-serving-python

Interfaces

  • hydro-serving-protos

  • hydro-serving-sdk

  • hydro-serving-cli

Examples

  • hydro-serving-example

Contributing Pull Requests

This guide is written for contributing to documentation. It doesn't contain any instructions on installing software prerequisites. If your intended contribution requires any software installations, please refer to their respective official documentation.

Prerequisites

  • Git installed on your local machine

  • GitHub account

Contents

  1. PR Contribution Workflow

  2. Basic Workflow Example

  3. PR Acceptance policy

PR Contribution Workflow

  1. Fork and clone this repository (git clone)

  2. Create a feature branch against master (git checkout -b featurename)

  3. Make changes in the feature branch

  4. Commit your changes (git commit -am "Add a feature")

  5. Push your changes to GitHub (git push origin feature)

  6. Open a Pull Request and wait for your PR to get reviewed

  7. Edit your PR to address the feedback (if any)

  8. See your PR getting merged

1. Fork and Clone this Repository

In order to contribute, you need to make your own copy of the repository you're going to contribute to. You do this by forking the repository to your GitHub account and then cloning the fork to your local machine.

  1. Clone the fork and switch to the project directory by running in your terminal:

2. Create a New Branch

It is important to make all your changes in a separate branch created off the master branch.

Before any modifications to the repository that you've just cloned, create a new branch off of the master branch.

Create a new branch off of the current one and switch to it:

To switch between branches, use the same command without the -b flag. For example, to switch back to the master branch:

This way you can switch between multiple branches when you work on multiple features at once.

Branch Naming Conventions

Give your branch a descriptive name so that others working on the project understand what you are working on. The branch name should include the name of the module that you're contributing to.

Name your branch according to the following template, replacing nginx with the name of the module you're contributing to:

3. Make Changes

Make changes you want to propose. Make sure you do this in a dedicated branch based on the master branch.

4. Commit Changes

Commit changes often to avoid accidental data loss. Make sure to provide your commits with descriptive comments.

Or add and commit all changed files with one command:

5. Push Changes to GitHub

Push your local changes to your fork on GitHub.

For example, if your remote repository is called origin and you want to push a branch named docs/fix:

6. Open a Pull Request

Navigate to your fork on GitHub. Press the "New pull request" button in the upper-left part of the page. Add a title and a comment. Once you press the "Create pull request" button, the maintainers of this repository will receive your PR.

7. Address Feedback

After you submit the PR, one or several of the Hydrosphere repository reviewers will provide you with actionable feedback. Edit your PR to address all of the comments. Reviewers do their best to provide feedback and approval in a timely fashion but note that response time may vary based on circumstances.

8. Your PR Gets Merged

Once your PR is approved by a reviewer, it gets accepted and merged with the main repository. Merged PRs will get included in the next Hydrosphere release.

Basic Workflow Example

PR Acceptance Policy

What will make your PR more likely to get accepted:

  • Having your fixes on a dedicated branch

  • Proper branch naming

  • Descriptive commit messages

  • PR title describing what changed

  • PR comment describing why/where it changed in <80 chars

  • Texts checked for spelling and typos (you can use Grammarly)

  • Code snippets checked with linters (when applicable)

PR Title and Comment Conventions

A PR title should describe what has changed. A PR comment should describe why and what/where. If your changes relate to a particular issue, a PR comment should contain an issue number. Please keep PR comments below 80 characters for readability.

PR title example:

PR comment example:

Minor edits (typos, spelling, formatting, adding small text pieces) may get waved through. More substantial changes normally require more time, reviewers, and back-and-forths, and you might be asked to resubmit your PR or divide the changes into more than one PR. Usually, PRs get merged right after approval.

Runtimes

Python

Tensorflow

Spark

Troubleshooting

This section is to help users solve common known issues.

If you want to ask questions live, you are free to do so in our Hydrosphere Slack community channel.

Distribution of outlier scores

You can learn more about our Python SDK in its documentation.

There are different approaches to generating client and server gRPC code in different languages. Let's have a look at how to do that in Python.

First, let's clone our protos repository and prepare a folder for the generated code:
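The exact shell commands are not reproduced here. As a rough sketch, assuming you have cloned the hydro-serving-protos repository into the current directory and installed the grpcio-tools package, the Python stubs could be generated programmatically as follows (the src and generated paths are assumptions):

import glob
import os

import pkg_resources
from grpc_tools import protoc

# Assumed layout: .proto files live under ./hydro-serving-protos/src,
# generated Python modules go to ./generated.
PROTO_ROOT = "hydro-serving-protos/src"
OUT_DIR = "generated"
os.makedirs(OUT_DIR, exist_ok=True)

# Include path for the well-known Google proto types bundled with grpcio-tools.
WELL_KNOWN_PROTOS = pkg_resources.resource_filename("grpc_tools", "_proto")

for proto_file in glob.glob(os.path.join(PROTO_ROOT, "**", "*.proto"), recursive=True):
    protoc.main([
        "grpc_tools.protoc",
        "-I{}".format(PROTO_ROOT),
        "-I{}".format(WELL_KNOWN_PROTOS),
        "--python_out={}".format(OUT_DIR),
        "--grpc_python_out={}".format(OUT_DIR),
        proto_file,
    ])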

The contract.protobin file will be created by the Manager service. It contains a binary representation of the ModelContract message.

That's it. You have just created a simple runtime that you can use in your own projects. It is an almost identical version of our Python runtime implementation. You can always look up details there.

Resource definitions describe Hydrosphere entities.

runtime: a string defining the runtime Docker image that will be used to run a model. You can learn more about runtimes in the Runtimes section.

hpa: An object defining the horizontal pod autoscaler settings.

The hpa object closely resembles the Kubernetes HorizontalPodAutoscalerSpec object.

resources: an object with limits and requests fields. Closely resembles the Kubernetes ResourceRequirements object.

The pod object is similar to the Kubernetes PodSpec object.

nodeSelector: a selector which must match a node's labels for the pod to be scheduled on that node.

affinity: the pod's scheduling constraints, represented by an Affinity object.

tolerations: an array of Tolerations.

In each case, your training data should be represented as a CSV document containing fields named exactly as in the interface of your model.

You can acquire MODEL_VERSION_ID by sending a GET request to the /model/version/<MODEL_NAME>/<MODEL_VERSION> endpoint. The response document will have a structure similar to the one already defined above.
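For example, a minimal Python sketch of such a lookup could look like this; the base URL, model name, and version below are placeholders:

import requests

# Placeholder values (assumptions for this sketch).
HYDROSPHERE_HTTP_URI = "http://<hydrosphere>"
MODEL_NAME = "external-model-example"
MODEL_VERSION = 1

response = requests.get(
    HYDROSPHERE_HTTP_URI + "/model/version/{}/{}".format(MODEL_NAME, MODEL_VERSION)
)
model_version_id = response.json()["id"]  # the id field is the MODEL_VERSION_ID
print(model_version_id)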

To find out how to assign metrics using the Hydrosphere UI, refer to the corresponding documentation page.

To send a request for analysis, you have to use the gRPC endpoint. We have already predefined ProtoBuf messages for reference.

Create an ExecutionMetadata message that contains metadata information about the model used to process a given request:

Create a PredictRequest message that contains the original request passed to the serving model for prediction:

Create a PredictResponse message that contains the inferred output of the model:

Assemble an ExecutionInformation message from the above-created messages.

Submit the ExecutionInformation proto to Sonar for analysis. Use the Analyze RPC method of the MonitoringService to calculate metrics.

Once triggered, the method does not return anything. To fetch calculated metrics from the model version, you have to make a GET request to the /monitoring/checks/all/<MODEL_VERSION_ID> endpoint.

The Hydrosphere platform consists of multiple microservices described in the Platform Architecture section.

Fork this GitHub repository: on GitHub, navigate to the main page of the repository and click the Fork button in the upper-right area of the screen. This will create a fork (a copy of this repository) in your GitHub account.

If you are using a framework for which a runtime is not yet implemented, you can open an issue in our GitHub.

Code is available on GitHub.

Code is available on GitHub.

Code is available on GitHub.

{  
    "name": "external-model-example",
    "metadata": {
        "architecture": "Feed-forward neural network",
        "description": "Sample external model example",
        "author": "Hydrosphere.io",
        "training-data": "s3://bucket/external-model-example/data/",
        "endpoint": "http://example.com/api/external-model/"
    },
    "monitoringConfiguration": {
        "batchSize": 100
    },
    "contract": {
        ...
    }
}
{
    "batchSize": 100,
}
{
    "modelName": "external-model-example",
    "predict": {
        ...
    }
}
{
    "signatureName": "predict",
    "inputs": [
        ...
    ],
    "outputs": [
        ...
    ]
}
{
    "name": "age",
    "dtype": "DT_INT32",
    "profile": "NUMERICAL",
    "shape": {
        ...
    }
}
{
    "dim": [
        ...
    ],
    "unknownRank": false
}
{
    "size": 10,
    "name": "example"
}
POST /api/v2/externalmodel HTTP/1.1
Content-Type: application/json
Accept: application/json

{
    "name": "external-model-example",
    "metadata": {
        "architecture": "Feed-forward neural network",
        "description": "Sample external model example",
        "author": "Hydrosphere.io",
        "training-data": "s3://bucket/external-model-example/data/",
        "endpoint": "http://example.com/api/external-model/"
    },
    "monitoringConfiguration": {
        "batchSize": 100
    },
    "contract": {
        "modelName": "external-model-example",
        "predict": {
            "signatureName": "predict",
            "inputs": [
                {
                    "name": "in",
                    "dtype": "DT_DOUBLE",
                    "profile": "NUMERICAL",
                    "shape": {
                        "dim": [],
                        "unknownRank": false
                    }
                }
            ],
            "outputs": [
                {
                    "name": "out",
                    "dtype": "DT_DOUBLE",
                    "profile": "NUMERICAL",
                    "shape": {
                        "dim": [],
                        "unknownRank": false
                    }
                }
            ]
        }
    }
}
HTTP/1.1 200 OK
Content-Type: application/json

{
    "id": 1,
    "model": {
        "id": 1,
        "name": "external-model-example"
    },
    "modelVersion": 1,
    "created": "2020-01-09T16:25:02.915Z",
    "modelContract": { 
        "modelName": "external-model-example",
        "predict": {
            "signatureName": "predict",
            "inputs": [
                {
                    "name": "in",
                    "dtype": "DT_DOUBLE",
                    "profile": "NUMERICAL",
                    "shape": {
                        "dim": [],
                        "unknownRank": false
                    }
                }
            ],
            "outputs": [
                {
                    "name": "out",
                    "dtype": "DT_DOUBLE",
                    "profile": "NUMERICAL",
                    "shape": {
                        "dim": [],
                        "unknownRank": false
                    }
                }
            ]
        }
    },
    "metadata": { 
        "architecture": "Feed-forward neural network",
        "description": "Sample external model example",
        "author": "Hydrosphere.io",
        "training-data": "s3://bucket/external-model-example/data/",
        "endpoint": "http://example.com/api/external-model/"
    },
    "monitoringConfiguration": {
        "batchSize": 100
    }
}
$ hs cluster use example-cluster
Switched to cluster '{'cluster': {'server': '<hydrosphere>'}, 'name': 'example-cluster'}'
$ hs cluster add --server <hydrosphere> --name example-cluster
Cluster 'example-cluster' @ <hydrosphere> added successfully
$ hs cluster use example-cluster
$ head external-model-data.csv
in,out
0.8744973,0.74737076
0.35367096,0.68612554
0.12600919,0.23873545
0.22988156,0.01602719
0.09958467,0.81491237
0.50324137,0.23527377
0.02184051,0.37468397
0.23937149,0.66311923
0.48611933,0.65467976
0.98475208,0.28292798
$ hs profile push \
    --model-version external-model-example:1 \
    --filename external-model-data.csv
from argparse import ArgumentParser
from urllib.parse import urljoin

import requests


def read_in_chunks(filename, chunk_size=1024):
    """ Generator to read a file peace by peace. """
    with open(filename, "rb") as file:
        while True:
            data = file.read(chunk_size)
            if not data:
                break
            yield data


if __name__ == "__main__": 
    parser = ArgumentParser()
    parser.add_argument("--hydrosphere", type=str, required=True)
    parser.add_argument("--model-version-id", type=int, required=True)
    parser.add_argument("--filename", required=True)
    parser.add_argument("--chunk-size", default=1024)
    args, unknown = parser.parse_known_args()
    if unknown:
        print("Parsed unknown arguments: %s", unknown)

    endpoint_uri = "/monitoring/profiles/batch/{}".format(args.model_version_id)
    endpoint_uri = urljoin(args.hydrosphere, endpoint_uri) 

    gen = read_in_chunks(args.filename, chunk_size=args.chunk_size)
    response = requests.post(endpoint_uri, data=gen, stream=True)
    if response.status_code != 200:
        print("Got error:", response.text)
    else:
        print("Uploaded data:", response.text)
import com.google.common.io.Files;

import java.io.*;
import java.net.*;


public class DataUploader {
    private String endpointUrl = "/monitoring/profiles/batch/";

    private String composeUrl(String base, long modelVersionId) throws java.net.URISyntaxException {
        return new URI(base).resolve(this.endpointUrl + modelVersionId).toString();
    }

    public int upload(String baseUrl, String filePath, long modelVersionId) throws Exception {
        String composedUrl = this.composeUrl(baseUrl, modelVersionId);
        HttpURLConnection connection = (HttpURLConnection) new URL(composedUrl).openConnection();
        connection.setRequestMethod("POST");
        connection.setDoOutput(true);
        connection.setChunkedStreamingMode(4096);

        OutputStream output = connection.getOutputStream();
        Files.copy(new File(filePath), output);
        output.flush();

        return connection.getResponseCode();
    }

    public static void main(String[] args) throws Exception {
        DataUploader dataUploader = new DataUploader();
        int responseCode = dataUploader.upload(
            "http://<hydrosphere>/", "/path/to/data.csv", 1);
        System.out.println(responseCode);
    }
}
{
    "name": "string",
    "modelVersionId": 1,
    "config": {
        ...
    }
}
{
    "modelVersionId": 2,
    "threshold": 0.5,
    "thresholdCmpOperator": {
        ...
    }
}
{
    "kind": "LessEq"
}
POST /monitoring/metricspec HTTP/1.1
Content-Type: application/json
Accept: application/json

{
    "name": "string",
    "modelVersionId": 1,
    "config": {
        "modelVersionId": 2,
        "threshold": 0.5,
        "thresholdCmpOperator": {
            "kind": "LessEq"
        }
    }
}
import uuid
import grpc
import random
import hydro_serving_grpc as hs

# Address of your Hydrosphere instance, e.g. "<hydrosphere>:443" (placeholder).
HYDROSPHERE_INSTANCE_GRPC_URI = "<hydrosphere>:443"

use_ssl_connection = True
if use_ssl_connection:
    creds = grpc.ssl_channel_credentials()
    channel = grpc.secure_channel(HYDROSPHERE_INSTANCE_GRPC_URI, credentials=creds)
else:
    channel = grpc.insecure_channel(HYDROSPHERE_INSTANCE_GRPC_URI) 
monitoring_stub = hs.MonitoringServiceStub(channel)

# 1. Create an ExecutionMetadata message. ExecutionMetadata is used to define, 
# which model, registered within Hydrosphere platform, was used to process a 
# given request.
trace_id = str(uuid.uuid4())  # uuid used as an example
execution_metadata_proto = hs.ExecutionMetadata(
    model_name="external-model-example",
    model_version_id=2,
    model_version=3,
    signature_name="predict",
    request_id=trace_id,
    latency=0.014,
)

# 2. Create a PredictRequest message. PredictRequest is used to define the data 
# passed to the model for inference.
predict_request_proto = hs.PredictRequest(
    model_spec=hs.ModelSpec(
        name="external-model-example",
        signature_name="predict", 
    ),
    inputs={
        "in": hs.TensorProto(
            dtype=hs.DT_DOUBLE, 
            double_val=[random.random()], 
            tensor_shape=hs.TensorShapeProto()
        ),
    }, 
)

# 3. Create a PredictResponse message. PredictResponse is used to define the 
# outputs of the model inference.
predict_response_proto = hs.PredictResponse(
    outputs={
        "out": hs.TensorProto(
            dtype=hs.DT_DOUBLE, 
            double_val=[random.random()], 
            tensor_shape=hs.TensorShapeProto()
        ),
    },
)

# 4. Create an ExecutionInformation message. ExecutionInformation contains all 
# request data and all auxiliary information about request execution, required 
# to calculate metrics.
execution_information_proto = hs.ExecutionInformation(
    request=predict_request_proto,
    response=predict_response_proto,
    metadata=execution_metadata_proto,
)

# 5. Use the Analyze RPC method of the MonitoringService to calculate metrics
monitoring_stub.Analyze(execution_information_proto)
import io.hydrosphere.serving.monitoring.MonitoringServiceGrpc;
import io.hydrosphere.serving.monitoring.MonitoringServiceGrpc.MonitoringServiceBlockingStub;
import io.hydrosphere.serving.monitoring.Metadata.ExecutionMetadata;
import io.hydrosphere.serving.monitoring.Api.ExecutionInformation;
import io.hydrosphere.serving.tensorflow.api.Predict.PredictRequest;
import io.hydrosphere.serving.tensorflow.api.Predict.PredictResponse;
import io.hydrosphere.serving.tensorflow.TensorProto;
import io.hydrosphere.serving.tensorflow.TensorShapeProto;
import io.hydrosphere.serving.tensorflow.DataType;

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

import java.util.Random;
import java.util.UUID;
import java.util.concurrent.TimeUnit;


public class HydrosphereClient {

    private final String modelName;         // Actual model name, registered within Hydrosphere platform
    private final long modelVersion;        // Model version of the registered model within Hydrosphere platform
    private final long modelVersionId;      // Model version Id, which uniquely identifies any model within Hydrosphere platform
    private final ManagedChannel channel;
    private final MonitoringServiceBlockingStub blockingStub;

    public HydrosphereClient(String target, String modelName, long modelVersion, long modelVersionId) {
        this(ManagedChannelBuilder.forTarget(target).build(), modelName, modelVersion, modelVersionId);
    }

    HydrosphereClient(ManagedChannel channel, String modelName, long modelVersion, long modelVersionId) {
        this.channel = channel;
        this.modelName = modelName;
        this.modelVersion = modelVersion;
        this.modelVersionId = modelVersionId;
        this.blockingStub = MonitoringServiceGrpc.newBlockingStub(channel);
    }

    public void shutdown() throws InterruptedException {
        channel.shutdown().awaitTermination(5, TimeUnit.SECONDS);
    }

    private double getLatency() {
        /*
        Random value is used as an example. Acquire the actual latency
        value, during which a model processed a request.
        */
        return new Random().nextDouble();
    }

    private String getTraceId() {
        /*
        UUID used as an example. Use this value to track down your
        requests within Hydrosphere platform.
        */
        return UUID.randomUUID().toString();
    }

    private TensorProto generateDoubleTensorProto() {
        /*
        Helper method generating TensorProto object with random double values.
        */
        return TensorProto.newBuilder()
                .addDoubleVal(new Random().nextDouble())
                .setDtype(DataType.DT_DOUBLE)
                .setTensorShape(TensorShapeProto.newBuilder().build())  // Empty TensorShape indicates scalar shape
                .build();
    }

    private PredictRequest generatePredictRequest() {
        /*
        PredictRequest is used to define the data passed to the model for inference.
        */
        return PredictRequest.newBuilder()
                .putInputs("in", this.generateDoubleTensorProto()).build();
    }

    private PredictResponse generatePredictResponse() {
        /*
        PredictResponse is used to define the outputs of the model inference.
        */
        return PredictResponse.newBuilder()
                .putOutputs("out", this.generateDoubleTensorProto()).build();
    }

    private ExecutionMetadata generateExecutionMetadata() {
        /*
        ExecutionMetadata is used to define, which model, registered within Hydrosphere
        platform, was used to process a given request.
        */
        return ExecutionMetadata.newBuilder()
                .setModelName(this.modelName)
                .setModelVersion(this.modelVersion)
                .setModelVersionId(this.modelVersionId)
                .setSignatureName("predict")                // Use default signature of the model
                .setLatency(this.getLatency())              // Get latency for a given request
                .setRequestId(this.getTraceId())            // Get traceId to track a given request within Hydrosphere platform
                .build();
    }

    public ExecutionInformation generateExecutionInformation() {
        /*
        ExecutionInformation contains all request data and all auxiliary information
        about request execution, required to calculate metrics.
        */
        return ExecutionInformation.newBuilder()
                .setRequest(this.generatePredictRequest())
                .setResponse(this.generatePredictResponse())
                .setMetadata(this.generateExecutionMetadata())
                .build();
    }

    public void analyzeExecution(ExecutionInformation executionInformation) {
        /*
        The actual use of RPC method Analyse of the MonitoringService to invoke
        metrics calculation.
        */
        this.blockingStub.analyze(executionInformation);
    }

    public static void main(String[] args) throws Exception {
        /*
        Test client functionality by sending randomly generated data for analysis.
        */
        HydrosphereClient client = new HydrosphereClient("<hydrosphere>", "external-model-example", 1, 1);
        try {
            int requestAmount = 10;
            System.out.printf("Analysing %d randomly generated samples\n", requestAmount);
            for (int i = 0; i < requestAmount; i++) {
                ExecutionInformation executionInformation = client.generateExecutionInformation();
                client.analyzeExecution(executionInformation);
            }
        } finally {
            System.out.println("Shutting down client");
            client.shutdown();
        }
    }
}
GET /monitoring/checks/all/1?limit=1&offset=0 HTTP/1.1
Accept: application/json
{
    "<field_name>": [
        ...
    ]
}
{
    "check": true,
    "description": "< max",
    "threshold": 0.9321230184950273,
    "value": 0.2081205412912307,
    "metricSpecId": null
}
{
    "<field_name>": {
        ...
    }
}
{
    "check": true, 
    "description": "string", 
    "threshold": 5.0,
    "value": 4.0,
    "metricSpecId": "bbb34c1f-13e1-4d1c-ad29-6e27c5461c37"
}
HTTP/1.1 200 OK
Content-Type: application/json

[
    {
        "_id": "5e1717f687a34b00086f58d8",
        "in": 0.2081205412912307,
        "out": 0.5551249161117925,
        "_hs_in_score": 1.0,
        "_hs_out_score": 1.0,
        "_hs_raw_checks": {
            "in": [
                {
                    "check": true,
                    "description": "< max",
                    "threshold": 0.9321230184950273,
                    "value": 0.2081205412912307,
                    "metricSpecId": null
                },
                {
                    "check": true,
                    "description": "> min",
                    "threshold": 0.0001208467391203,
                    "value": 0.2081205412912307,
                    "metricSpecId": null
                }
            ],
            "out": [
                {
                    "check": true,
                    "description": "< max",
                    "threshold": 0.9921230184950273,
                    "value": 0.5551249161117925,
                    "metricSpecId": null
                },
                {
                    "check": true,
                    "description": "> min",
                    "threshold": 0.0201208467391203,
                    "value": 0.5551249161117925,
                    "metricSpecId": null
                }
            ]
        },
        "_hs_metric_checks": {
            "string": {
                "check": true, 
                "description": "KNN", 
                "threshold": 5.0,
                "value": 4.0,
                "metricSpecId": "bbb34c1f-13e1-4d1c-ad29-6e27c5461c37"
            }
        },
        "_hs_latency": 0.7166033601366634,
        "_hs_error": "string",
        "_hs_score": 1.0,
        "_hs_overall_score": 1.0,
        "_hs_model_version_id": 1,
        "_hs_model_name": "external-model-example",
        "_hs_model_incremental_version": 1,
        "_hs_request_id": "395ae721-5e68-46e1-8ed6-74e360c614c1",
        "_hs_timestamp": 1578571766000,
        "_hs_year": 2020,
        "_hs_month": 1,
        "_hs_day": 9
    }
]
git clone https://github.com/Hydrospheredata/hydro-serving.git
cd hydro-serving
git checkout -b <your-branch-name>
git checkout master
feature/docs_nginx
git add .
git commit -m "Add description"
git commit -am "Add description"
git push <repo-name> <branch-name>
git push origin docs/fix
git clone https://github.com/Hydrospheredata/hydro-serving.git
cd hydro-serving
git checkout -b docs/fix
git status
git commit -am "Add description"
git push origin docs/fix
Updated docs: README.md and CONTRIBUTING.md.
Updated README.md (minor changes, fixed all typos). 
Updated CONTRIBUTING.md (added a paragraph about 
using linters, added sections: 
"Use Linter to ensure correct syntax and formatting", 
"Push your changes to GitHub" to close issue 42. 

Issue #42

Python runtime images:

  • 3.8: hydrosphere/serving-runtime-python-3.8:2.4.0 (available on Docker Hub)

  • 3.7: hydrosphere/serving-runtime-python-3.7:2.4.0 (available on Docker Hub)

  • 3.6: hydrosphere/serving-runtime-python-3.6:2.4.0 (available on Docker Hub)

Tensorflow runtime images:

  • 1.13.1: hydrosphere/serving-runtime-tensorflow-1.13.1:2.4.0 (available on Docker Hub)

  • 1.12.0: hydrosphere/serving-runtime-tensorflow-1.12.0:2.4.0 (available on Docker Hub)

  • 1.11.0: hydrosphere/serving-runtime-tensorflow-1.11.0:2.4.0 (available on Docker Hub)

  • 1.10.0: hydrosphere/serving-runtime-tensorflow-1.10.0:2.4.0 (available on Docker Hub)

  • 1.9.0: hydrosphere/serving-runtime-tensorflow-1.9.0:2.4.0 (available on Docker Hub)

  • 1.8.0: hydrosphere/serving-runtime-tensorflow-1.8.0:2.4.0 (available on Docker Hub)

  • 1.7.0: hydrosphere/serving-runtime-tensorflow-1.7.0:2.4.0 (available on Docker Hub)

Spark runtime images:

  • 2.2.0: hydrosphere/serving-runtime-spark-2.2.0:2.4.0 (available on Docker Hub)

  • 2.1.2: hydrosphere/serving-runtime-spark-2.1.2:2.4.0 (available on Docker Hub)

  • 2.0.2: hydrosphere/serving-runtime-spark-2.0.2:2.4.0 (available on Docker Hub)