
Data Preprocessing


Preprocessing our dataset through preparations and transformations so that it's ready for training.
Goku Mohandas
Repository · Notebook


Intuition

Data preprocessing can be categorized into two types of processes: preparation and transformation. We'll explore common preprocessing techniques and then walk through the relevant processes for our specific application.

Warning

Certain preprocessing steps are global (they don't depend on our dataset, ex. lowercasing text, removing stop words, etc.) and others are local (constructs are learned only from the training split, ex. vocabulary, standardization, etc.). For the local, dataset-dependent preprocessing steps, we want to ensure that we split the data before preprocessing to avoid data leaks.
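
For example, a local step like standardization should learn its statistics from the training split only and then be applied, not refit, to the other splits. A minimal sketch, assuming scikit-learn and already-split feature matrices X_train, X_val and X_test:

# Standardize using statistics learned from the training split only
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
# Apply (don't refit) the same constructs to the other splits
X_val = scaler.transform(X_val)
X_test = scaler.transform(X_test)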

Preparing

Preparing the data involves organizing and cleaning the data.

Joins

Performing SQL joins with existing data tables to organize all the relevant data we need into one view. This makes working with our dataset a whole lot easier.

SELECT * FROM A
INNER JOIN B ON A.id = B.id

Warning

We need to be careful to perform point-in-time valid joins to avoid data leaks. For example, Table B may have features for objects in Table A that were not available at the time inference would have occurred.
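
A minimal sketch of a point-in-time join in pandas using merge_asof, where the events and features frames (and their timestamp columns) are hypothetical:

# For each event, only join the latest feature value available at (or before) the event time
import pandas as pd
events = pd.DataFrame({"id": [1, 1], "event_ts": pd.to_datetime(["2021-01-05", "2021-02-05"])})
features = pd.DataFrame({"id": [1, 1], "feature_ts": pd.to_datetime(["2021-01-01", "2021-02-01"]), "value": [10, 20]})
joined = pd.merge_asof(
    events.sort_values("event_ts"), features.sort_values("feature_ts"),
    left_on="event_ts", right_on="feature_ts", by="id", direction="backward")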

Missing values

First, we'll have to identify the rows with missing values and once we do, there are several approaches to dealing with them.

  • omit samples with missing values (if only a small subset are missing it)

    # Drop a row (sample) by index
    df.drop([4, 10, ...])
    # Conditionally drop rows (samples)
    df = df[df.value > 0]
    # Drop samples with any missing feature
    df = df[~df.isnull().any(axis=1)]
    

  • omit the entire feature (if too many samples are missing the value)

    # Drop a column (feature)
    df.drop(["A"], axis=1)
    

  • fill in missing values for features (using domain knowledge, heuristics, etc.)

    # Fill in missing values with mean
    df.A = df.A.fillna(df.A.mean())
    

  • missing values may not always appear as "missing" (ex. 0, null, NA, etc.)

    # Replace zeros with NaNs
    import numpy as np
    df.A = df.A.replace({"0": np.nan, 0: np.nan})
    

Outliers (anomalies)

  • craft assumptions about what is a "normal" expected value
    # Ex. Feature value must be within 2 standard deviations
    import numpy as np
    df[np.abs(df.A - df.A.mean()) <= (2 * df.A.std())]
    
  • be careful not to remove important outliers (ex. fraud)
  • values may not be outliers when we apply a transformation (ex. power law)
  • anomalies can be global (point), contextual (conditional) or collective (individual points are not anomalous but the group as a whole is)

Feature engineering

  • combine features in unique ways to draw out signal
    # Combine existing features into a new one
    df["C"] = df.A + df.B
    

Tip

Feature engineering can be done in collaboration with domain experts who can guide us on what features to engineer and use.

We can use techniques such as SHAP (SHapley Additive exPlanations) or LIME (Local Interpretable Model-agnostic Explanations) to inspect feature importance. On a high level, these techniques learn which features have the most signal by assessing the performance in their absence. These inspections can be done on a model's single prediction or at a coarse-grained, overall level.
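
A minimal sketch of this kind of inspection using SHAP; the trained model and feature matrix X here are hypothetical placeholders:

# Inspect feature importance with SHAP (model and X are placeholders)
import shap
explainer = shap.TreeExplainer(model)  # assumes a tree-based model (ex. random forest)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)  # coarse-grained, overall view of feature importance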

Cleaning

  • use domain expertise and EDA
  • apply constraints via filters
  • ensure data type consistency
  • images (crop, resize, clip, etc.)
    # Resize
    import cv2
    dims = (width, height)  # cv2.resize expects dsize as (width, height)
    resized_img = cv2.resize(src=img, dsize=dims, interpolation=cv2.INTER_LINEAR)
    
  • text (lower, stem, lemmatize, regex, etc.)
    # Lower case the text
    text = text.lower()
    

Transformations

Transforming the data involves feature encoding and engineering.

Scaling

  • required for models where the scale of the input affects the learning process (ex. gradient-based or distance-based methods)
  • learn constructs from train split and apply to other splits (local)
  • don't blindly scale features (ex. categorical features)

  • standardization: rescale values to mean 0, std 1

    # Standardization
    import numpy as np
    x = np.random.random(4) # values between 0 and 1
    print ("x:\n", x)
    print (f"mean: {np.mean(x):.2f}, std: {np.std(x):.2f}")
    x_standardized = (x - np.mean(x)) / np.std(x)
    print ("x_standardized:\n", x_standardized)
    print (f"mean: {np.mean(x_standardized):.2f}, std: {np.std(x_standardized):.2f}")
    
    x: [0.36769939 0.82302265 0.9891467  0.56200803]
    mean: 0.69, std: 0.24
    x_standardized: [-1.33285946  0.57695671  1.27375049 -0.51784775]
    mean: 0.00, std: 1.00
    

  • min-max: rescale values between a min and max

    # Min-max
    import numpy as np
    x = np.random.random(4) # values between 0 and 1
    print ("x:", x)
    print (f"min: {x.min():.2f}, max: {x.max():.2f}")
    x_scaled = (x - x.min()) / (x.max() - x.min())
    print ("x_scaled:", x_scaled)
    print (f"min: {x_scaled.min():.2f}, max: {x_scaled.max():.2f}")
    
    x: [0.20195674 0.99108855 0.73005081 0.02540603]
    min: 0.03, max: 0.99
    x_scaled: [0.18282479 1.         0.72968575 0.        ]
    min: 0.00, max: 1.00
    

  • binning: convert a continuous feature into categorical using bins

    # Binning
    import numpy as np
    x = np.random.random(4) # values between 0 and 1
    print ("x:", x)
    bins = np.linspace(0, 1, 5) # bins between 0 and 1
    print ("bins:", bins)
    binned = np.digitize(x, bins)
    print ("binned:", binned)
    
    x: [0.54906364 0.1051404  0.2737904  0.2926313 ]
    bins: [0.   0.25 0.5  0.75 1.  ]
    binned: [3 1 2 2]
    

  • and many more!

Encoding

  • allows for representing data efficiently (maintains signal) and effectively (learns patterns, ex. one-hot vs embeddings)

  • label: unique index for categorical value

    # Label encoding
    label_encoder.class_to_index = {
        "attention": 0,
        "autoencoders": 1,
        "convolutional-neural-networks": 2,
        "data-augmentation": 3,
        ... }
    label_encoder.transform(["attention", "data-augmentation"])

    array([0, 3])
    

  • one-hot: representation as binary vector

    # One-hot encoding
    one_hot_encoder.transform(["attention", "data-augmentation"])
    
    array([1, 0, 0, 1, 0, ..., 0])
    

  • embeddings: dense representations capable of representing context

    # Embeddings (inside a PyTorch model, where nn is torch.nn)
    self.embeddings = nn.Embedding(
        embedding_dim=embedding_dim, num_embeddings=vocab_size)
    x_in = self.embeddings(x_in)
    print (x_in.shape)
    
    (len(X), embedding_dim)
    

  • and many more!

We can also encode our data with hashing or by using its attributes instead of the exact entity itself. For example, we can represent a user by their location and favorites as opposed to their user ID. These methods are great when we want to use features that suffer from the curse of dimensionality (lots of feature values for a feature but not enough data samples for each one) or in online learning scenarios.
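
A minimal sketch of the hashing approach with scikit-learn's FeatureHasher; the user attribute strings here are hypothetical:

# Hash high-cardinality categorical features into a fixed-size vector
from sklearn.feature_extraction import FeatureHasher
hasher = FeatureHasher(n_features=8, input_type="string")
users = [["location=paris", "favorite=transformers"], ["location=tokyo", "favorite=gans"]]
hashed = hasher.transform(users)
print (hashed.toarray().shape)

(2, 8)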

Extraction

  • signal extraction from existing features
  • combine existing features
  • transfer learning: using a pretrained model as a feature extractor and fine-tuning on its results
  • autoencoders: learn to encode inputs for compressed knowledge representation

  • principal component analysis (PCA): linear dimensionality reduction to project data onto a lower dimensional space.

    # PCA
    import numpy as np
    from sklearn.decomposition import PCA
    X = np.array([[-1, -1, 3], [-2, -1, 2], [-3, -2, 1]])
    pca = PCA(n_components=2)
    pca.fit(X)
    print (pca.transform(X))
    print (pca.explained_variance_ratio_)
    print (pca.singular_values_)
    
    [[-1.44245791 -0.1744313 ]
     [-0.1148688   0.31291575]
     [ 1.55732672 -0.13848446]]
    [0.96838847 0.03161153]
    [2.12582835 0.38408396]
    

  • counts (ngram): sparse representation of text as a matrix of token counts; useful if feature values have lots of meaningful, separable signal.

    # Counts (ngram)
    from sklearn.feature_extraction.text import CountVectorizer
    y = [
        "acetyl acetone",
        "acetyl chloride",
        "chloride hydroxide",
    ]
    vectorizer = CountVectorizer()
    y = vectorizer.fit_transform(y)
    print (vectorizer.get_feature_names())
    print (y.toarray())
    # 💡 Repeat above with char-level ngram vectorizer
    # vectorizer = CountVectorizer(analyzer='char', ngram_range=(1, 3)) # uni, bi and trigrams
    
    ['acetone', 'acetyl', 'chloride', 'hydroxide']
    [[1 1 0 0]
     [0 1 1 0]
     [0 0 1 1]]
    

  • similarity: similar to count vectorization but based on similarities in tokens (see the sketch after this list)

  • and many more!
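
As referenced above, here's a minimal sketch of similarity-based extraction: represent each text by its cosine similarity to a few reference phrases (the references and texts here are hypothetical):

# Similarity-based features: cosine similarity between texts and reference phrases
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
references = ["convolutional neural networks", "natural language processing"]
texts = ["CNNs for image classification", "transformers for language modeling"]
vectorizer = TfidfVectorizer().fit(references + texts)
similarity_features = cosine_similarity(
    vectorizer.transform(texts), vectorizer.transform(references))
print (similarity_features.shape)

(2, 2)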

Often, teams will want to reuse the same features for different tasks, so how can we avoid duplicating effort? A solution is a feature store, which enables sharing of features and the workflows around feature pipelines. We'll cover feature stores during Production.

Application

For our application, we'll be implementing a few of these preprocessing steps that are relevant for our dataset.

Feature engineering

We can combine existing input features to create new meaningful signal (helping the model learn). However, there's usually no simple way to know if certain feature combinations will help or not without empirically experimenting with the different combinations. Here, we could use a project's title and description separately as features but we'll combine them to create one input feature.

# Combine title and description into one input feature
df["text"] = df.title + " " + df.description

And since we're dealing with text data, we can apply some of the common preparation processes:

  1. lower (conditional)
    text = text.lower()
    
  2. remove stopwords (from NLTK package)

    import re
    # Remove stopwords (ex. NLTK's English stopword list)
    if len(stopwords):
        pattern = re.compile(r"\b(" + r"|".join(stopwords) + r")\b\s*")
        text = pattern.sub("", text)
    

  3. filters and spacing

    # Separate filters attached to tokens
    filters = r"([-;;.,!?<=>])"
    text = re.sub(filters, r" \1 ", text)
    
    # Remove non alphanumeric chars
    text = re.sub("[^A-Za-z0-9]+", " ", text)
    
    # Remove multiple spaces
    text = re.sub(" +", " ", text)
    
    # Strip white space at the ends
    text = text.strip()
    

    Note

    We could definitely try to include emojis, punctuation, etc. because they do carry a lot of signal for the task, but it's best to simplify the initial feature set to just what we think is most influential and then slowly introduce other features and assess their utility.

    Warning

    We'll want to introduce less frequent features only as they become more frequent, or encode them in a clever way (ex. binning, extracting general attributes, common n-grams, mean encoding using other feature values, etc.), so that we can mitigate the feature value dimensionality issue until we're able to collect more data.

  4. remove URLs using regex (discovered during EDA)

    text = re.sub(r"http\S+", "", text)
    

  5. stemming (conditional)
    text = " ".join([porter.stem(word) for word in text.split(" ")])
    

We can apply our preprocessing steps to our text feature in the dataframe by wrapping all these processes under a function.

# Define preprocessing function
def preprocess(text, lower=True, stem=False):
    ...
    return text
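
Below is a fuller sketch of what this function might look like, stitching together the conditional lowercasing, stopword removal, filters and spacing, URL removal and optional stemming from above (an illustrative sketch, not necessarily the exact implementation):

import re
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

STOPWORDS = stopwords.words("english")  # assumes the NLTK stopwords corpus is downloaded
porter = PorterStemmer()

def preprocess(text, lower=True, stem=False, stopword_list=STOPWORDS):
    """Conditionally lower, remove stopwords and URLs, clean filters/spacing, optionally stem."""
    if lower:
        text = text.lower()
    if len(stopword_list):
        pattern = re.compile(r"\b(" + r"|".join(stopword_list) + r")\b\s*")
        text = pattern.sub("", text)
    text = re.sub(r"http\S+", "", text)              # remove URLs
    text = re.sub(r"([-;;.,!?<=>])", r" \1 ", text)  # separate filters attached to tokens
    text = re.sub("[^A-Za-z0-9]+", " ", text)        # remove non alphanumeric chars
    text = re.sub(" +", " ", text).strip()           # remove multiple and trailing spaces
    if stem:
        text = " ".join([porter.stem(word) for word in text.split(" ")])
    return text
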
# Apply to dataframe
original_df = df.copy()
df.text = df.text.apply(preprocess, lower=True, stem=False)
print (f"{original_df.text.values[0]}\n{df.text.values[0]}")
Comparison between YOLO and RCNN on real world videos Bringing theory to experiment is cool. We can easily train models in colab and find the results in minutes.
comparison yolo rcnn real world videos bringing theory experiment cool easily train models colab find results minutes

Transformations

Many of the transformations we're going to do are model specific. For example, for our simple baselines we may do label encoding → tf-idf while for the more involved architectures we may do label encoding → one-hot encoding → embeddings. So we'll cover these in the next suite of lessons as we implement each of the baselines.

In the next section we'll perform exploratory data analysis (EDA) on our preprocessed dataset. However, the order of these steps can be reversed depending on how well the problem is defined. If we're unsure about how to prepare the data, we can use EDA to figure it out. In fact, in our dashboard lesson, we can interactively apply data processing and EDA back and forth until we've finalized our constraints.


To cite this lesson, please use:

@article{madewithml,
    author       = {Goku Mohandas},
    title        = { Preprocessing - Made With ML },
    howpublished = {\url{https://madewithml.com/}},
    year         = {2021}
}