From Hours to Seconds: 100x Faster Boosting, Bagging, & Stacking
Achieve 100x faster boosting, bagging, and stacking by pairing RAPIDS cuML estimators with scikit-learn's ensembling APIs.

In this post, we’ll walk through how you can now use RAPIDS cuML with scikit-learn’s ensemble model APIs to achieve more than 100x faster boosting, bagging, stacking, and more. This is possible because of the well-defined interfaces and use of duck typing in the scikit-learn codebase. Using cuML estimators as drop-in replacements means data scientists can have their cake and eat it, too.
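As a minimal sketch of what this drop-in pattern looks like, the example below builds a scikit-learn `StackingClassifier` from ordinary CPU estimators. Because scikit-learn relies on duck typing, any object exposing `fit`/`predict` can serve as a base estimator, so on a machine with an NVIDIA GPU and RAPIDS installed you could swap in the cuML equivalents (shown in the comments) without changing the ensembling code. The dataset and model choices here are illustrative, not from the original post.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# On a GPU machine with RAPIDS cuML installed, these imports could
# replace the scikit-learn ones above (illustrative swap):
# from cuml.linear_model import LogisticRegression
# from cuml.neighbors import KNeighborsClassifier

# Synthetic classification data for demonstration purposes.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# scikit-learn's stacking API only requires that each base estimator
# implement fit/predict -- which cuML estimators do, thanks to duck typing.
stack = StackingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier()),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_train, y_train)
print(f"held-out accuracy: {stack.score(X_test, y_test):.2f}")
```

The same substitution works for bagging and boosting wrappers such as `BaggingClassifier` and `AdaBoostClassifier`, since they interact with base estimators through the same `fit`/`predict` interface.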

