EvoNorms: Evolving Normalization-Activation Layers
We use evolution to design new layers called EvoNorms, which outperform BatchNorm-ReLU on many tasks.
Tags: automl · normalization · activations · activation-layers · batch-normalization · relu · evonorms · group-normalization · groupnorm · batchnorm · research · paper · arxiv:2004.02967

Key ideas: (1) start the search from low-level primitives rather than from existing layer structures, and (2) evolve the layers for generalization across multiple architectures. EvoNorms achieved promising results on ResNets, MobileNets, EfficientNets, Mask R-CNN, and BigGAN. Pseudocode is available in the appendix.
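As a concrete reference point, one of the discovered layers, EvoNorm-S0, computes y = (x · σ(v·x)) / GroupStd(x), followed by a per-channel affine transform. Below is a minimal NumPy sketch of that formula for an NHWC tensor; the function name `evonorm_s0`, the group count, and the parameter initializations are illustrative assumptions, not the paper's reference code.

```python
import numpy as np

def evonorm_s0(x, gamma, beta, v, groups=8, eps=1e-5):
    """Sketch of the EvoNorm-S0 transform for an NHWC tensor.

    y = (x * sigmoid(v * x)) / group_std(x) * gamma + beta
    gamma, beta, v are per-channel parameters of shape (C,).
    """
    n, h, w, c = x.shape
    assert c % groups == 0, "channels must be divisible by groups"
    # Group standard deviation, computed per (sample, group).
    xg = x.reshape(n, h, w, groups, c // groups)
    var = xg.var(axis=(1, 2, 4), keepdims=True)           # (n, 1, 1, groups, 1)
    std = np.sqrt(var + eps)
    std = np.broadcast_to(std, xg.shape).reshape(n, h, w, c)
    # Swish-like numerator divided by the group std, then the affine transform.
    sig = 1.0 / (1.0 + np.exp(-v * x))
    return (x * sig) / std * gamma + beta

# Example usage with hypothetical shapes and initializations.
x = np.random.randn(2, 16, 16, 32).astype(np.float32)
y = evonorm_s0(x, gamma=np.ones(32), beta=np.zeros(32), v=np.ones(32))
print(y.shape)  # (2, 16, 16, 32)
```

In a real network, gamma, beta, and v would be trainable parameters; the sketch keeps them as fixed arrays only to show the shape of the computation.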


Authors
Research Scientist @ Google Brain
Similar projects
EvoNorm layers in TensorFlow 2
Presents implementations of EvoNormB0 and EvoNormS0 layers as proposed in Evolving Normalization-Activation Layers by Liu et al.
UFOD: A Unified Framework for Object Detection
UFOD is an open-source framework that enables the training and comparison of object detection models on custom datasets using different underlying ...
ATLASS: AutoML using Transfer and Semi-Supervised Learning
This repository includes the code, application, and notebooks for the work "AutoML using Transfer and Semi-Supervised Learning". The tools presented here ...
Lazy Predict
Lazy Predict helps build many basic models with little code and shows which models work better without any parameter tuning.