Paper Projects


💻 Overview

What is it?

An opportunity to implement, extend, and apply trending papers in the community. All of the projects can run on a free Google Colab GPU or a local CPU, or the paper comes with a library which you can use on your own smaller datasets. That said, there are no compute restrictions, so feel free to work with whatever you have. Your projects don't have to be exact implementations (most of the papers already come with code); instead, you can do minimal implementations or creative explorations of the major/novel aspects of the paper. Also, this is not a competition but an opportunity for the community to come together, learn, and help each other. You can work in teams, and projects can include any of the following:

  • Research: reimplement the entire or parts of a paper. (example)
  • Product: create applications or libraries using the paper. (example)
  • Tutorial: simplify and extend concepts in the paper. (example)
Why should you participate?
  • Learn: one of the best ways to learn is to reimplement or extend research. You'll learn from others working on the same paper and you'll develop skills for implementing research.
  • Build: build projects with a level of completeness that will showcase your skills.
  • Share: share and receive credit for your work in the community.
  • 🔥 Spotlight: we'll spotlight your profile and showcase your work in front of industry leaders; you may also give a talk at the Salon, get a chance to become a W&B Author, have your work featured in the Gallery, and receive a Raspberry Pi.
What is the duration?

The official duration is from Friday, July 3, 2020 to Friday, July 31, 2020. This is our first time experimenting with something like this, so we may choose to extend the time based on feedback and the progress we see.

📝 TODO

  1. Explore the papers below and choose one that fits your interests. Some of them come with official implementations as well as implementations by the community (under Similar Projects), so be sure to check those out for inspiration.
  2. Define a task you want to work on and start developing. Be sure to leverage Weights and Biases to keep track of all your iterative progress.
  3. Once you're done with your project, you can add it on Made With ML. Be sure to include the link to the paper (this is how we can see all the projects for a given paper).

✅ Criteria

❓ Frequently Asked Questions (FAQ)

Join the W&B Slack (and the #paper-projects channel) if you have questions not addressed below.

  1. Can I work with others? Absolutely, we encourage collaboration, especially since some research can be quite involved.
  2. Can I leverage open-source implementations? Yes, but be sure to give proper credit and to extend beyond what was already provided.
  3. Will this event happen every month? This is our first time experimenting with something like this, so it'll depend on how interested the community is and the progress we see.
  4. Do I need GPUs? No. All of the projects can run on a free Google Colab GPU or a local CPU, or the paper comes with a library which you can use to run experiments on your own smaller datasets.

📚 Resources

This opportunity will involve a lot of self-learning and setting small goals along the way. However, we will be providing some guidance as you come across obstacles.

  1. Code: almost all of the papers we've selected below (except the first one) come with official code or community-developed implementations. Feel free to use these for inspiration and extend upon them. To see the implementations from the community, just search with the arXiv paper's link or click on its arXiv tag.
  2. Tutorials, libraries, and research: Made With ML has an automatically updating, community-curated Topics page with the best resources for each topic.
  3. Mentors + community: Weights and Biases has organized experienced mentors from the community to assist participants. You'll also be able to discuss with your fellow participants (who may be working on the same papers) via the W&B Slack (join the #paper-projects channel).

📜 Papers

Below are our suggested papers, and many of them come with articles, code, demos, etc. Feel free to use the additional resources and extend on them. You are welcome to use other papers, but they must have been published this year and we need to approve them (email us at hello@madewithml.com).

Smooth Adversarial Training
The ReLU activation function significantly weakens adversarial training due to its non-smooth nature. Hence we propose smooth adversarial training (SAT).
adversarial-training adversarial-learning relu sat
Discovering Symbolic Models from Deep Learning with Inductive Biases
A general approach to distill symbolic representations of a learned deep model by introducing strong inductive biases.
symbolic-models inductive-bias graph-neural-networks graphs
Implicit Neural Representations with Periodic Activation Functions
Leverage periodic activation functions for implicit neural representations & demonstrate that these networks, dubbed sinusoidal representation networks or ...
siren activation-functions tanh relu
Tensorflow Fourier Feature Mapping Networks
TensorFlow 2.0 implementation of Fourier feature mapping networks.
fourier-transformations tensorflow code paper
Bootstrap Your Own Latent (BYOL) in Pytorch
Practical implementation of a new state of the art (surpassing SimCLR) without contrastive learning and without having to designate negative pairs.
self-supervised-learning byol simclr code
Automatic Data Augmentation for Generalization in Deep RL
We compare three approaches for automatically finding an appropriate augmentation combined with two novel regularization terms for the policy and value ...
data-augmentation reinforcement-learning kornia pytorch
Distilling Inductive Biases
The power of knowledge distillation for transferring the effect of inductive biases from one model to another.
inductive-bias knowledge-distillation model-compression research
Q*BERT
Agents that build knowledge graphs and explore textual worlds by asking questions.
bert transformers knowledge-graphs question-generation
Adversarial Latent Autoencoders
Introducing the Adversarial Latent Autoencoder (ALAE), a general architecture that can leverage recent improvements on GAN training procedures.
autoencoders generative-adversarial-networks latent-space disentanglement