Projects

ONNX T5
Summarization, translation, question answering, text generation, and more at high speed using a T5 implementation exported to ONNX.
onnx pytorch transformers t5
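As a rough illustration of the ONNX T5 idea, the sketch below runs an exported T5 encoder through ONNX Runtime. The file name t5-small-encoder.onnx and the graph input names are assumptions about how the encoder was exported; the project's own API may differ.

```python
import onnxruntime as ort
from transformers import T5Tokenizer

# Hypothetical path and input names; they depend on how the T5 encoder was
# exported to ONNX (this assumes inputs named "input_ids" and "attention_mask").
session = ort.InferenceSession("t5-small-encoder.onnx")
tokenizer = T5Tokenizer.from_pretrained("t5-small")

inputs = tokenizer(
    "translate English to French: The house is wonderful.",
    return_tensors="np",
)

# Run the encoder graph only; a full generation loop over the decoder is omitted.
encoder_hidden_states = session.run(
    None,
    {"input_ids": inputs["input_ids"], "attention_mask": inputs["attention_mask"]},
)[0]
print(encoder_hidden_states.shape)
```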
How to Serve ML Models with TensorFlow Serving and Docker
In this tutorial, I show how to serve ML models using TensorFlow Serving, a flexible, high-performance serving system for machine learning models.
machine-learning deep-learning tensorflow tensorflow-lite
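To give a feel for the tutorial above, here is a minimal client sketch against TensorFlow Serving's REST API. It assumes a model named my_model is already being served (for example via the tensorflow/serving Docker image) on the default REST port 8501; the input values are illustrative.

```python
import json
import requests

# Assumes TensorFlow Serving is already running with a model named "my_model"
# exposed on the default REST port 8501; the input shape below is illustrative.
url = "http://localhost:8501/v1/models/my_model:predict"
payload = {"instances": [[1.0, 2.0, 5.0]]}

response = requests.post(url, data=json.dumps(payload))
response.raise_for_status()
print(response.json()["predictions"])
```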
BentoML
BentoML is an open-source framework for high-performance ML model serving.
serving ci-cd bentoml production
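A minimal sketch of what a BentoML service can look like, assuming the 1.x Service/runner API and a scikit-learn model previously saved under the hypothetical tag iris_clf; the API differs across BentoML releases.

```python
import bentoml
from bentoml.io import JSON, NumpyNdarray

# Hypothetical model tag; assumes a scikit-learn model was previously saved
# with bentoml.sklearn.save_model("iris_clf", model). This follows the
# BentoML 1.x Service/runner API and may differ in other releases.
runner = bentoml.sklearn.get("iris_clf:latest").to_runner()
svc = bentoml.Service("iris_classifier", runners=[runner])

@svc.api(input=NumpyNdarray(), output=JSON())
def classify(features):
    # Delegate inference to the runner, which BentoML can scale independently.
    result = runner.predict.run(features)
    return {"prediction": result.tolist()}
```

During development such a service is typically started with `bentoml serve service.py:svc`.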
TensorFlow.js + Firebase
Use Firebase Hosting to deploy and host a machine learning model at scale.
production serving tensorflow tensorflow-js
Deploying your ML Model with TorchServe
In this talk, Brad Heintz walks through how to use TorchServe to deploy trained models at scale without writing custom code.
serving production torchserve tutorial
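As a quick illustration of the talk's topic, the sketch below sends a prediction request to a running TorchServe instance. It assumes a model registered as densenet161 (the name used in the TorchServe examples) on the default inference port 8080; the image path is a placeholder.

```python
import requests

# Assumes TorchServe is running with a model registered as "densenet161"
# and the default inference API on port 8080; the image path is illustrative.
url = "http://localhost:8080/predictions/densenet161"

with open("kitten.jpg", "rb") as f:
    response = requests.post(url, data=f)

response.raise_for_status()
print(response.json())
```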
Basic TensorFlow Serving Example
A basic example of serving a TensorFlow CNN model for hand detection using TensorFlow Serving.
serving production object-detection computer-vision
Serving PyTorch models in production with Amazon SageMaker
TorchServe is now natively supported in Amazon SageMaker as the default model server for PyTorch inference.
production serving sagemaker aws
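A hedged sketch of the SageMaker side of this, using the SageMaker Python SDK's PyTorchModel; the S3 artifact, IAM role, entry point, and version strings are placeholders.

```python
from sagemaker.pytorch import PyTorchModel

# All values below are placeholders: the S3 artifact, IAM role, entry point
# script, and framework/Python versions depend on your account and model.
model = PyTorchModel(
    model_data="s3://my-bucket/model/model.tar.gz",
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    entry_point="inference.py",
    framework_version="1.6.0",
    py_version="py3",
)

# SageMaker hosts the model behind a real-time HTTPS endpoint, using
# TorchServe as the underlying model server for PyTorch inference.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
)
print(predictor.endpoint_name)
```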