EvoNorm layers in TensorFlow 2
Presents implementations of EvoNormB0 and EvoNormS0 layers as proposed in Evolving Normalization-Activation Layers by Liu et al.
Tags: normalization, batch-normalization, automl, batch-norm-relu, tensorflow, keras, deep-learning, article, code, paper, arxiv:2004.02967, wandb, research

  • Implements the EvoNorm B0 and S0 layers (a minimal sketch of the S0 variant appears after this list).
  • Tests them in a Mini Inception architecture on the CIFAR-10 dataset.
  • Compares against the same Mini Inception architecture on CIFAR-10 using BatchNorm-ReLU layers.
  • Runs a hyperparameter search over the groups hyperparameter of the EvoNormS0 layer (a sweep sketch follows the layer code below).

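Below is a minimal sketch of how the S0 variant could be written as a custom tf.keras layer. It assumes channels-last (NHWC) inputs and a statically known channel count; the default of 8 groups and the epsilon value are illustrative choices, not values prescribed by the report:

```python
import tensorflow as tf


class EvoNormS0(tf.keras.layers.Layer):
    """Sketch of EvoNorm-S0: y = x * sigmoid(v * x) / group_std(x) * gamma + beta."""

    def __init__(self, groups=8, epsilon=1e-5, **kwargs):
        super().__init__(**kwargs)
        self.groups = groups
        self.epsilon = epsilon

    def build(self, input_shape):
        channels = int(input_shape[-1])
        if channels % self.groups != 0:
            raise ValueError("Number of channels must be divisible by the number of groups.")
        param_shape = (1, 1, 1, channels)
        # Per-channel affine parameters gamma/beta plus the learnable v from the paper.
        self.gamma = self.add_weight(name="gamma", shape=param_shape, initializer="ones")
        self.beta = self.add_weight(name="beta", shape=param_shape, initializer="zeros")
        self.v = self.add_weight(name="v", shape=param_shape, initializer="ones")

    def _group_std(self, x):
        # GroupNorm-style standard deviation over (height, width, channels-within-group).
        shape = tf.shape(x)
        n, h, w = shape[0], shape[1], shape[2]
        c = x.shape[-1]
        grouped = tf.reshape(x, tf.stack([n, h, w, self.groups, c // self.groups]))
        _, var = tf.nn.moments(grouped, axes=[1, 2, 4], keepdims=True)
        std = tf.broadcast_to(tf.sqrt(var + self.epsilon), tf.shape(grouped))
        return tf.reshape(std, shape)

    def call(self, x):
        # Numerator fuses the nonlinearity (x * sigmoid(v * x)); denominator normalizes per group.
        return (x * tf.sigmoid(self.v * x)) / self._group_std(x) * self.gamma + self.beta
```

Such a layer would be dropped in wherever a BatchNorm-ReLU pair normally sits, e.g. directly after a bias-free Conv2D block in the Mini Inception model.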
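The groups search can be expressed as a Weights & Biases sweep. The configuration below is a hedged example: the metric name val_accuracy, the candidate group counts, the project name, and the train() entry point are assumptions about how the training script is wired up, not values taken from the report:

```python
import wandb

# Hypothetical grid sweep over the `groups` hyperparameter of EvoNormS0.
sweep_config = {
    "method": "grid",
    "metric": {"name": "val_accuracy", "goal": "maximize"},
    "parameters": {"groups": {"values": [2, 4, 8, 16, 32]}},
}


def train():
    # Assumed training entry point: reads wandb.config.groups, builds the
    # Mini Inception model with EvoNormS0(groups=...) layers, and calls model.fit().
    run = wandb.init()
    groups = run.config.groups
    ...  # build and train the model here, logging val_accuracy to wandb


sweep_id = wandb.sweep(sweep_config, project="evonorm-tf2")  # project name is illustrative
wandb.agent(sweep_id, function=train)
```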

Authors
Sayak Paul: Calling `model.fit()` @ https://pyimagesearch.com | Netflix Nerd
Similar projects
EvoNorms: Evolving Normalization-Activation Layers
We use evolution to design new layers called EvoNorms, which outperform BatchNorm-ReLU on many tasks.
Normalization Techniques for Training Very Deep Neural Networks
How can we efficiently train very deep neural network architectures? What are the best in-layer normalization options? Read on and find out.
Why Batch Norm Causes Exploding Gradients
Our beloved Batch Norm can actually cause exploding gradients, at least at initialization time.
Rethinking Batch Normalization in Transformers
We found that NLP batch statistics exhibit large variance throughout training, which leads to poor BN performance.