Using a command line interface (CLI) application to organize our application's processes.
Intuition
We want to enable others to interact with our application without having to dig into the code and execute functions one at a time. One method is to build a CLI application that allows for interaction via any shell. It should be designed such that we can see all possible operations, as well as the appropriate help for configuring options and other arguments for each of those operations. Let's see what a CLI looks like for our application, which has many different commands (training, prediction, etc.).
Application
The app that we defined inside our cli.py script is created using Typer, an open-source tool for building command line interface (CLI) applications. It starts by initializing the app and then adding the appropriate decorator to each function we wish to use as a CLI command.
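Roughly, this is what that looks like (a minimal sketch; the command parameters and bodies are omitted here):

```python
# cli.py (sketch): initialize the Typer app and register functions as CLI commands
import typer

# Initialize the CLI application
app = typer.Typer()

# Decorating a function registers it as a CLI command
# (Typer converts underscores to hyphens, so this becomes `download-data`)
@app.command()
def download_data():
    """Download data from online to local drive."""
    ...

@app.command()
def train_model():
    """Train a model using the specified parameters."""
    ...
```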
Note
We're combining console scripts (from setup.py) and our Typer app to create a CLI application, but there are many different ways to use Typer as well. We're going to have other programs use our application, so this approach works best.
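For reference, a console script entry point in setup.py might look something like the snippet below (a sketch; the exact package name and module path are assumptions based on a tagifai/cli.py layout, and since a typer.Typer() instance is callable, it can be exposed directly):

```python
# setup.py (relevant snippet only): expose the Typer app as the `tagifai` console command
from setuptools import find_packages, setup

setup(
    name="tagifai",
    packages=find_packages(),
    entry_points={
        "console_scripts": [
            "tagifai = tagifai.cli:app",  # the Typer app instance is callable, so it can serve as the entry point
        ],
    },
)
```

After installing the package (for example with pip install -e .), the tagifai command becomes available in the shell.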
Commands
In the cli.py script, we have defined the following commands:
download-data: download data from online to local drive.
optimize: optimize a subset of hyperparameters towards an objective.
train-model: train a model using the specified parameters.
predict-tags: predict tags for a given input text using a trained model.
We can list all the CLI commands for our application like so:
# View all Typer commands
$ tagifai --help
Usage: tagifai [OPTIONS] COMMAND [ARGS]
Commands:
  download-data  Download data from online to local drive.
  optimize       Optimize a subset of hyperparameters towards ...
  predict-tags   Predict tags for a given input text using a ...
  train-model    Train a model using the specified parameters.
...
Arguments
With Typer, a function's input arguments automatically get rendered as command line options. For example, our predict_tags function consumes text and an optional model_dir as inputs, which automatically become arguments for the predict-tags CLI command.
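As a sketch of the idea (the main.predict helper and the assumed module layout are hypothetical, while the defaults come from the docstring shown in the help output below):

```python
# cli.py (sketch, continued): parameters with default values become CLI options (--text, --model-dir)
from pathlib import Path

import typer

from tagifai import config, main  # assumed module layout

app = typer.Typer()  # the same app initialized earlier in cli.py

@app.command()
def predict_tags(
    text: str = "Transfer learning with BERT.",
    model_dir: Path = config.MODEL_DIR,
) -> None:
    """Predict tags for a given input text using a trained model.
    Make sure that you have a trained model first!
    """
    prediction = main.predict(texts=[text], model_dir=model_dir)  # hypothetical helper
    typer.echo(prediction)
```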
# Help for a specific command
$ tagifai predict-tags --help
Usage: tagifai predict-tags [OPTIONS]
Predict tags for a given input text using a trained model. Make sure that you have a trained model first!
Args:
text (str, optional):
Input text to predict tags for.
Defaults to "Transfer learning with BERT.".
model_dir (Path, optional):
Location of model artifacts.
Defaults to config.MODEL_DIR.
Returns:
Predicted tags for input text.
Options:
--text TEXT [default: Transfer learning with BERT.]
--model-dir TEXT [default: ]
--help Show this message and exit.
Executing
And we can easily use our CLI app to execute these commands with the appropriate options:
# Prediction
$ tagifai predict-tags --text "Transfer learning with BERT."
{
"input_text": "Transfer learning with BERT.",
"preprocessed_text": "transfer learning bert",
"predicted_tags": [
"attention",
"natural-language-processing",
"transfer-learning",
"transformers"
]
}
Note
You'll most likely be using the CLI application to optimize and train your models. We'll cover how to train using compute instances on the cloud from Amazon Web Services (AWS) or Google Cloud Platform (GCP) in a later lesson. But in the meantime, if you don't have access to GPUs, check out the optimize.ipynb notebook for how to train on Google Colab and transfer the results to your local machine. We essentially run optimization, then train the best model and download and transfer its artifacts.