Deploying your ML Model with TorchServe
In this talk, Brad Heintz walks through how to use TorchServe to deploy trained models at scale without writing custom code.

Deploying and managing models in production is often the most difficult part of the machine learning process. TorchServe is a flexible, easy-to-use tool for serving PyTorch models.
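As a quick sketch of the workflow the talk covers, the commands below package a trained model and serve it with TorchServe's CLI. The model name, weights file, handler, and input image (`my_model`, `model.pt`, `image_classifier`, `kitten.jpg`) are placeholders for your own artifacts, not values from the talk.

```shell
# Install TorchServe and its model archiver (assumes PyTorch is already installed)
pip install torchserve torch-model-archiver

# Package the trained model into a .mar archive.
# model.pt and the built-in image_classifier handler are placeholders.
mkdir -p model_store
torch-model-archiver \
  --model-name my_model \
  --version 1.0 \
  --serialized-file model.pt \
  --handler image_classifier \
  --export-path model_store

# Start the server and register the archived model
torchserve --start --model-store model_store --models my_model=my_model.mar

# Send an inference request to the default inference port (8080)
curl http://localhost:8080/predictions/my_model -T kitten.jpg

# Stop the server when done
torchserve --stop
```

No custom serving code is needed for common cases because TorchServe ships built-in handlers (image classification, object detection, text classification); a custom handler is only required for nonstandard pre- or post-processing.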


Similar projects
BentoML
BentoML is an open-source framework for high-performance ML model serving.
Cortex
Build machine learning APIs.
TensorFlow Serving
A flexible, high-performance serving system for machine learning models, designed for production environments.
Efficient Serverless Deployment of PyTorch Models on Azure
A tutorial for serving models cost-effectively at scale using Azure Functions and ONNX Runtime.