We propose Neural Additive Models (NAMs), which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models. NAMs learn a linear combination of neural networks, each of which attends to a single input feature. These networks are trained jointly and can learn arbitrarily complex relationships between their input feature and the output.
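The structure described above can be sketched in a few lines: one small network per feature, with the prediction being a bias plus the sum of per-feature contributions. This is a minimal NumPy forward-pass sketch, not the repository's actual implementation; the helper names (`feature_net`, `nam_forward`) and the network sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_net(width=16):
    # Hypothetical tiny MLP for one scalar feature: 1 -> width -> 1.
    return {
        "w1": rng.normal(size=(1, width)),
        "b1": np.zeros(width),
        "w2": rng.normal(size=(width, 1)),
        "b2": np.zeros(1),
    }

def feature_forward(net, x):
    # x: (batch, 1) column holding a single feature.
    h = np.maximum(x @ net["w1"] + net["b1"], 0.0)  # ReLU hidden layer
    return h @ net["w2"] + net["b2"]                # scalar contribution

def nam_forward(nets, bias, X):
    # NAM prediction: bias plus the sum of per-feature shape functions.
    contribs = [feature_forward(net, X[:, [j]]) for j, net in enumerate(nets)]
    return bias + np.sum(contribs, axis=0)

X = rng.normal(size=(8, 3))                 # 8 samples, 3 features
nets = [feature_net() for _ in range(X.shape[1])]
y = nam_forward(nets, bias=0.0, X=X)        # predictions, shape (8, 1)
```

Because each network sees only its own feature, every learned shape function can be plotted directly against that feature, which is what makes the model interpretable; in practice the networks are trained jointly by backpropagating a standard loss through the sum.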
Official Google implementation: https://github.com/google-research/google-research/tree/master/neuraladditivemodels
Don't forget to tag @theSparta in your comment, otherwise they may not be notified.