Improving on our solution iteratively over time.
Intuition
We don't want to spend months developing a complicated solution only to learn that the entire problem has changed. The main idea here is to close the loop, which involves:
- Creating a minimum viable product (MVP) that satisfies a baseline performance.
- Iterating on your solution using the feedback.
- Constantly reassessing to ensure your objective hasn't changed.

Creating the MVP for solutions that require machine learning often involves going manual before ML.
- deterministic, high interpretability, low-complexity MVP (ex. rule based)
- establish baselines for objective comparisons
- allows you to ship quickly and get feedback from users (a minimal sketch follows this list)
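For example, a rule-based MVP for a text-tagging task can be as simple as predicting a tag whenever its name appears in the input text. Below is a minimal sketch; the `TAGS` set and the exact matching logic are illustrative assumptions, not a prescribed implementation.

```python
from typing import List

# Hypothetical set of candidate tags (assumption for illustration).
TAGS = {"natural-language-processing", "computer-vision", "mlops"}

def rule_based_predict(text: str, tags: set = TAGS) -> List[str]:
    """Predict tags via exact keyword matching (deterministic and interpretable)."""
    text = text.lower()
    # Predict a tag if its (dash-stripped) name appears verbatim in the text.
    return [tag for tag in sorted(tags) if tag.replace("-", " ") in text]

print(rule_based_predict("Using transformers for natural language processing."))
# ['natural-language-processing']
```

Because the logic is fully deterministic, it also doubles as an objective baseline to compare subsequent ML models against.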
Deploying a solution is actually quite easy (from an engineering POV), but maintaining and iterating upon it is quite the challenge:
- collect signals from UI/UX to best approximate how your deployed model is performing
- determine window / rolling performances on overall and key slices of data (a sketch follows this list)
- monitor (performance, concept drift, etc.) to know when to update
- constantly reassess your objective
- iteration bottlenecks (ex. data quality checks)
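As a concrete sketch of the window / slice monitoring mentioned above, the snippet below computes an F1 score over a recent window of predictions as well as per slice; the prediction log, column names, and slice labels are all assumptions for illustration.

```python
import pandas as pd
from sklearn.metrics import f1_score

# Hypothetical log of deployed predictions (columns are assumptions).
log = pd.DataFrame({
    "timestamp": pd.date_range("2022-01-01", periods=6, freq="D"),
    "slice": ["nlp", "cv", "nlp", "nlp", "cv", "nlp"],
    "y_true": [1, 1, 1, 0, 0, 1],
    "y_pred": [1, 1, 0, 0, 1, 1],
})

# Rolling performance over the most recent window (here: the last 4 predictions).
window = log.tail(4)
print("window f1:", round(f1_score(window["y_true"], window["y_pred"]), 2))

# Performance on key slices of data, which can regress even when
# the overall metric looks healthy.
for name, group in log.groupby("slice"):
    print(f"{name} f1:", round(f1_score(group["y_true"], group["y_pred"]), 2))
```

In production, the same logic would run against a metrics store on a schedule, with alerts firing when a window or slice drops below its threshold.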
Application
For our solution, we'll establish an initial set of baselines: we'll start with a rule-based approach and then slowly add complexity (regression → CNN → Transformers).
Note
For the purpose of this course, even our MVP will be an ML model; however, we would normally deploy the rule-based approach first, as long as it satisfies a performance threshold.
As for monitoring and iterating on our solution, we'll be looking at things like overall performance, class-specific performances, the number of relevant tags, etc. We'll also create workflows to inspect new data for anomalies, apply active learning, ease the annotation process, etc.
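As a sketch of what that evaluation could look like for a multilabel tagging task, the snippet below computes overall and class-specific performance along with the number of predicted tags per sample; the tag names and label matrices are assumptions for illustration.

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

# Hypothetical multilabel labels/predictions (rows = samples, columns = tags).
tags = ["nlp", "cv", "mlops"]
y_true = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 1]])
y_pred = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 1]])

# Overall (micro-averaged) performance across all tags.
precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="micro")
print(f"overall: precision={precision:.2f}, recall={recall:.2f}, f1={f1:.2f}")

# Class-specific performance to surface tags the model struggles with.
_, _, f1_per_class, _ = precision_recall_fscore_support(y_true, y_pred, average=None)
for tag, score in zip(tags, f1_per_class):
    print(f"{tag}: f1={score:.2f}")

# Number of relevant (predicted) tags per sample.
print("tags per sample:", y_pred.sum(axis=1))
```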
Resources
To cite this lesson, please use: