A/B Analysis for a Recommendation Model

Estimated completion time: 14 min.

Overview

In this tutorial, you will learn how to retrospectively compare the behavior of two different models.

By the end of this tutorial you will know how to:

  • Set up an A/B application

  • Analyze production data

Prerequisites

Set Up an A/B Application

Prepare a model for uploading

requirements.txt
lightfm==1.15
numpy~=1.18
joblib~=0.15
train_model.py
import sys

import joblib
from lightfm import LightFM
from lightfm.datasets import fetch_movielens

if __name__ == "__main__":
    no_components = int(sys.argv[1])
    print(f"Number of components is set to {no_components}")

    # Load the MovieLens 100k dataset. Only five-star
    # ratings are treated as positive.
    data = fetch_movielens(min_rating=5.0)

    # Instantiate and train the model
    model = LightFM(no_components=no_components, loss='warp')
    model.fit(data['train'], epochs=30, num_threads=2)

    # Save the model
    joblib.dump(model, "model.joblib")
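
Running, for example, python train_model.py 5 trains a model with 5 components and saves it as model.joblib in the current directory.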

Upload Model A

We train a model with 5 components and upload it as movie_rec:v1.
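
The exact upload steps are environment-specific, so here is a minimal sketch. It trains the first variant and uploads it with a CLI call; the hs upload command is an assumption based on the Hydrosphere CLI, so substitute your own platform's upload command if it differs.

import subprocess

# Train the first variant with 5 components; train_model.py saves model.joblib
subprocess.run(["python", "train_model.py", "5"], check=True)

# Upload the artifact as movie_rec:v1. The "hs upload" command is an
# assumption; replace it with your serving platform's upload routine.
subprocess.run(["hs", "upload"], check=True)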

Upload Model B

Next, we train a new version of our model with 20 components and upload it as movie_rec:v2.
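
The steps mirror Model A; only the number of components changes (same assumptions as the sketch above).

import subprocess

# Train the second variant with 20 components and upload it as movie_rec:v2
subprocess.run(["python", "train_model.py", "20"], check=True)
subprocess.run(["hs", "upload"], check=True)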

We can check that we now have multiple versions of our model by querying the cluster. The endpoint and response shape in the sketch below are assumptions modeled on an HTTP model-registry API; consult your platform's API reference:
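
import requests

# Assumed endpoint; replace with your platform's actual model-listing route.
resp = requests.get("http://localhost/api/v2/model/version", timeout=10)
versions = [v for v in resp.json() if v["model"]["name"] == "movie_rec"]
print(sorted(v["modelVersion"] for v in versions))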

Create an Application

To create an A/B deployment, we need to create an Application with a single execution stage consisting of two model variants. These model variants are our Model A and Model B, respectively.

The following sketch creates such an application. It assumes the hydrosdk Python client; the class names and call signatures (Cluster, ModelVersion.find, ExecutionStageBuilder, ApplicationBuilder) are assumptions that may differ in your SDK version:
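
from hydrosdk import Cluster
from hydrosdk.application import ApplicationBuilder, ExecutionStageBuilder
from hydrosdk.modelversion import ModelVersion

cluster = Cluster("http://localhost")

# Look up the two uploaded versions of movie_rec
model_a = ModelVersion.find(cluster, "movie_rec", 1)
model_b = ModelVersion.find(cluster, "movie_rec", 2)

# One execution stage with both variants, splitting traffic 50/50
stage = ExecutionStageBuilder() \
    .with_model_variant(model_a, weight=50) \
    .with_model_variant(model_b, weight=50) \
    .build()

app = ApplicationBuilder(cluster, "movie-ab-app").with_stage(stage).build()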

Invoking movie-ab-app

We'll simulate production data flow by repeatedly asking our application for recommendations.
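
A minimal traffic-generator sketch, assuming the application is reachable over HTTP; the gateway URL and request body shape are assumptions to adjust to your application's actual signature.

import random
import requests

URL = "http://localhost/gateway/application/movie-ab-app"  # assumed gateway route

for _ in range(1000):
    user_id = random.randint(1, 943)  # MovieLens 100k has 943 users
    requests.post(URL, json={"user_id": user_id}, timeout=10)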

Analyze production data

Read Data from parquet

Each request-response pair is stored in S3 (or in MinIO if deployed locally) as parquet files. We'll use the fastparquet package to read these files and the s3fs package to connect to S3.

Listing the feature-lake folder shows a single entry: ['feature-lake/movie_rec']. Data is stored in S3 under the following path: feature-lake/MODEL_NAME/MODEL_VERSION/YEAR/MONTH/DAY/*.parquet
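
A minimal sketch of loading each version's data, assuming a local MinIO deployment; the endpoint, credentials, and exact key layout are assumptions to adapt to your setup.

import s3fs
from fastparquet import ParquetFile

# When deployed locally, point s3fs at MinIO; the endpoint and credentials
# here are assumptions for a default local setup.
fs = s3fs.S3FileSystem(
    key="minio", secret="minio123",
    client_kwargs={"endpoint_url": "http://localhost:9000"},
)

def read_version(version):
    # Layout assumption: feature-lake/MODEL_NAME/MODEL_VERSION/YEAR/MONTH/DAY/*.parquet
    paths = fs.glob(f"feature-lake/movie_rec/{version}/*/*/*/*.parquet")
    return ParquetFile(paths, open_with=fs.open).to_pandas()

df_v1 = read_version(1)
df_v2 = read_version(2)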

Now that we have loaded the data, we can start analyzing it.

Compare production data with new labeled data

To compare differences between model versions we'll use two metrics:

  1. Latency - we compare the delay between the moment a request is received and the moment a response is produced.

  2. Mean Top-3 Hit Rate - we compare the recommended movies with those the user has actually rated. Each match increases the hit rate by 1; doing this over the complete test set and averaging gives the mean hit rate.

Latencies

Let's calculate the 95th percentile of our latency distributions per model version and plot them. Latencies are stored in the _hs_latency column in our dataframes.
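
For example, with NumPy (df_v1 and df_v2 are the dataframes loaded above):

import numpy as np

p95 = {
    "v1": np.percentile(df_v1["_hs_latency"], 95),
    "v2": np.percentile(df_v2["_hs_latency"], 95),
}
print(p95)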

In our case, the output was 13.0 ms against 12.0 ms. Your results may differ.

Furthermore, we can visualize our data. To plot the latency distributions we'll use the Matplotlib library.
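
A minimal plotting sketch, reusing the dataframes loaded above:

import matplotlib.pyplot as plt

plt.hist(df_v1["_hs_latency"], bins=30, alpha=0.5, label="movie_rec:v1")
plt.hist(df_v2["_hs_latency"], bins=30, alpha=0.5, label="movie_rec:v2")
plt.xlabel("Latency, ms")
plt.ylabel("Number of requests")
plt.legend()
plt.show()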

Mean Top-3 Hit Rate

Next, we'll calculate hit rates. To do so, we need newly labeled data. For recommender systems, such data usually becomes available after a user has clicked/watched/liked/rated an item we recommended to them. We'll use the test split of MovieLens as labeled data.

To measure how well our models recommend movies, we'll use a hit rate metric: out of the 3 movies recommended to a user, it counts how many the user has actually watched and rated with 4 or 5.
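
One possible implementation is sketched below. It re-fetches MovieLens with min_rating=4.0 so that ratings of 4 and 5 count as positives; the response column names user_id and top_1..top_3 are assumptions, so inspect df.columns for the actual field names.

import numpy as np
from lightfm.datasets import fetch_movielens

# Labeled data: the MovieLens test split; ratings of 4 and 5 count as positives
test = fetch_movielens(min_rating=4.0)["test"].tocsr()

def mean_top3_hit_rate(df):
    rates = []
    for row in df.itertuples():
        # Column names (user_id, top_1..top_3) are assumptions; inspect
        # df.columns for the actual request/response field names. LightFM
        # indexes users from 0, so make sure production ids match.
        liked = set(test[row.user_id].indices)
        recommended = {row.top_1, row.top_2, row.top_3}
        rates.append(len(liked & recommended) / 3)  # hits out of 3 recommendations
    return float(np.mean(rates))

mean_hit_rate = {"v1": mean_top3_hit_rate(df_v1), "v2": mean_top3_hit_rate(df_v2)}
print(mean_hit_rate)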

In our case, the mean_hit_rate variable is {'v1': 0.137, 'v2': 0.141}, which means that the second model version is better in terms of hit rate.

You have successfully completed the tutorial! 🚀

Now you know how to read and analyze automatically stored data.
