`hs`) along with the Python SDK (`hydrosdk`) installed on your local machine. If you don't have them yet, please follow the corresponding installation guides first.

To let `hs` know where the Hydrosphere platform runs, configure a new `cluster` entity:
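A minimal sketch, assuming the platform runs locally and is reachable at `http://localhost` (both the cluster name and the address are placeholders for your own setup):

```sh
# Register a cluster entity pointing at the platform
hs cluster add --name local --server http://localhost
# Make it the active cluster for subsequent commands
hs cluster use local
```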
`hs` relies on this layout to parse and upload the model correctly. Make sure that the structure of your local model directory looks like this by the end of the model preparation section:

* `train.py` - a training script for our model
* `requirements.txt` - provides dependencies for our model
* `model.joblib` - a model artifact that we get as a result of model training
* `src/func_main.py` - an inference script that defines a function for making model predictions
* `serving.yaml` - a resource definition file that lets Hydrosphere know which function to call from the `func_main.py` script and lets the model manager understand the model's inputs and outputs.
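For reference, a sketch of the expected layout (the directory name `logistic_regression` is the one used throughout this guide):

```
logistic_regression
├── train.py
├── requirements.txt
├── model.joblib
├── serving.yaml
└── src
    └── func_main.py
```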
As an example, we will train a simple logistic regression model, `sklearn.LogisticRegression`. For data generation, we will use the `sklearn.datasets.make_regression` method.

First, create a directory for your model and create a new `train.py` file inside it.
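For example:

```sh
mkdir logistic_regression
cd logistic_regression
touch train.py
```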
Put the training code in the `train.py` file.
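The original listing was lost in this copy; a minimal sketch might look like this (thresholding the continuous `make_regression` targets into two classes is an assumption of this sketch, since `LogisticRegression` is a classifier):

```python
import joblib
from sklearn.datasets import make_regression
from sklearn.linear_model import LogisticRegression

# Generate a small synthetic dataset with two features
X, y = make_regression(n_samples=300, n_features=2, random_state=42)

# make_regression returns continuous targets, while LogisticRegression
# expects class labels, so threshold the targets into two classes
# (an assumption made for this sketch)
y = (y > y.mean()).astype(int)

# Fit the model and save the artifact next to this script
model = LogisticRegression()
model.fit(X, y)
joblib.dump(model, "model.joblib")
```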
Next, in the `logistic_regression` folder, create a `requirements.txt` file and provide the dependencies inside it.
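The exact versions were not preserved in this copy; an illustrative, unpinned set (pin versions in practice):

```
numpy
scikit-learn
joblib
```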
Install the dependencies and run the training script; as soon as it finishes, you will get the model saved to a `model.joblib` file.

On the cluster, all model files are stored in the `/model/files` directory inside the serving container, so we will look there to load the model.
Next, create the inference script `func_main.py` in the `/src` folder of your model directory. Inside it, load the saved model, call the `predict` (or similar) method of your model, and return your predictions.
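A sketch of `src/func_main.py`, assuming two scalar inputs named `x1` and `x2` and one output named `y` (these names are assumptions and must match the resource definition below):

```python
import joblib

# Model files are stored under /model/files inside the serving
# container, so load the artifact from there, once, at startup
model = joblib.load("/model/files/model.joblib")

def infer(x1, x2):
    # Wrap the two scalar inputs into the 2D shape the model expects
    prediction = model.predict([[x1, x2]])
    # Return a dict keyed by the output field declared in serving.yaml
    return {"y": int(prediction[0])}
```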
At the top of `func_main.py`, we initialize our model outside of the serving function `infer`, so loading will not be triggered every time a new request comes in. The `infer` function takes the actual request, unpacks it, makes a prediction, packs the answer, and returns it. There is no strict rule for naming this function; it just has to be a valid Python function name.

Once we have a `func_main.py` file, we have to provide a resource definition file. This file defines the function to be called, the inputs and outputs of the model, a signature function, and some other metadata required for serving. Create a `serving.yaml` file in the root of your model directory `logistic_regression`.
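A sketch of the resource definition, assuming the `infer` signature above; the runtime image tag is a placeholder for the Python runtime version you use:

```yaml
kind: Model
name: logistic_regression
runtime: hydrosphere/serving-runtime-python-3.7:<release-tag>
install-command: pip install -r requirements.txt
payload:
  - src/
  - requirements.txt
  - model.joblib
contract:
  name: infer
  inputs:
    x1:
      shape: scalar
      type: double
      profile: numerical
    x2:
      shape: scalar
      type: double
      profile: numerical
  outputs:
    y:
      shape: scalar
      type: int64
      profile: numerical
```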
In the `serving.yaml` we also provide `requirements.txt` and `model.joblib` as payload files for our model. Although we have `train.py` inside the model directory, it will not be uploaded to the cluster, since we are not listing it under `payload` in the resource definition file.

Now we are ready to upload the model to the cluster.
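Inside the `logistic_regression` model directory, run the upload command of the `hs` CLI:

```sh
hs upload
```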
When the model appears in the Hydrosphere UI with a `Released` status, you can use it.

To deploy the model as an application, click the "Add New Application" button. In the opened window, select the `logistic_regression` model, name your application `logistic_regression`, and click the "Add Application" button.

To enable data monitoring, we will add a `training-data=<path_to_csv>` field to the `serving.yaml` file. Run the following script to save the training data used in the previous steps as a `training_data.csv` file.
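A sketch that regenerates the same data as in the training script above (same `random_state` and the same thresholding assumption) and writes it with column names matching the signature fields:

```python
import pandas as pd
from sklearn.datasets import make_regression

# Recreate the training data (same parameters as in train.py)
X, y = make_regression(n_samples=300, n_features=2, random_state=42)
y = (y > y.mean()).astype(int)

# Column names match the fields declared in serving.yaml
df = pd.DataFrame({"x1": X[:, 0], "x2": X[:, 1], "y": y})
df.to_csv("training_data.csv", index=False)
```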
Next, add the `training-data` field to the `serving.yaml` file.
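Based on the `training-data=<path_to_csv>` field named above, the addition might look like this (its exact placement within the file is an assumption):

```yaml
# Path is relative to the model directory
training-data: training_data.csv
```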
Now we can upload a new version of the `logistic_regression` model by running `hs upload` in the model directory again. Open the Models page to see the two versions of the `logistic_regression` model.

The final step is to update the serving application. Since we want requests to go to the new model version, we need to update the `logistic_regression` application. To update it, we can go to the Application tab and click the "Update" button.