
# Vel

Bring velocity to deep-learning research by providing a large pool of
tried-and-tested prebuilt components that are known to work well together.

Having conducted a few research projects, I've gathered a small collection of
repositories with various model implementations, each suited to a particular use case.
Usually, starting a new project involved copying pieces of code from
one or more of these past experiments, then gluing, tweaking and debugging
them until the code started working in the new setting.

After repeating that pattern multiple times, I decided it was time
to bite the bullet and start organising deep learning models
into a structure designed to be reused rather than copied over.

As a goal, for the most common applications it should be enough to write a
config file that wires existing components together and defines their
hyperparameters. If that's not enough, a few bits of custom glue code
should do the job.
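To make that concrete, a config in this spirit might look like the following sketch. The field names (`model`, `optimizer`, `source`, and so on) are invented for illustration and are not Vel's actual schema:

```yaml
# Hypothetical config sketch, not Vel's actual schema.
name: mnist_example

model:
  name: simple_cnn
  channels: 32

optimizer:
  name: sgd
  lr: 0.001

source:
  name: mnist
  batch_size: 128
```

The idea is that each top-level key names a prebuilt component and supplies its hyperparameters, so the whole experiment is captured in one diffable file.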

This repository is still at an early stage of that journey, but it will grow
as I put more work into it.

If you don't want to run these services, another example file,
.velproject.dummy.yaml, is included that writes training progress
to standard output only.
To use it, just rename it to .velproject.yaml.

## Features

- Models should be runnable from configuration files that are easy to store
  in version control, generate automatically and diff.
- The codebase should be generic and not contain any model hyperparameters.
  Unless the user intervenes, it should be obvious which model was run
  with which hyperparameters and what output it gave.
- The amount of "magic" in the framework should be limited, and it should be
  easy for newcomers already comfortable with PyTorch to understand exactly
  what the model is doing.
- All state-of-the-art models should be implemented in the framework with
  accuracy matching published results. Currently I'm focusing on computer
  vision and reinforcement learning models.
- All common deep learning workflows should be fast to implement, while
  uncommon ones should be possible, at least as far as PyTorch allows.
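One simple way to get the "obvious which model was run with which hyperparameters" property, sketched here with the standard library only and not as Vel's actual mechanism, is to fingerprint the config and record that fingerprint alongside the results. The `config_fingerprint` helper below is an invented name for this illustration:

```python
import hashlib
import json


def config_fingerprint(config: dict) -> str:
    """Stable short hash of a JSON-serialisable config dictionary.

    Invented helper for illustration: keys are sorted before hashing,
    so the same hyperparameters always map to the same fingerprint,
    which can then tag output directories or result records.
    """
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:12]
```

For example, `config_fingerprint({"lr": 0.001, "epochs": 5})` and `config_fingerprint({"epochs": 5, "lr": 0.001})` return the same string, so results stored under that key trace unambiguously back to one set of hyperparameters.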

## Implemented models - Computer Vision

Several models are already implemented in the framework and have example config files
that are ready to run and easy to modify for other similar use cases:

Here, PYTORCH_DEVICE is a valid PyTorch device name, most likely cuda:0,
and the run number is a sequential number under which you wish to record your results.

If you prefer to use the library from inside your own scripts, take a look at the
examples-scripts directory; from time to time I'll be putting examples there as
well. Scripts generally don't require any MongoDB or Visdom setup, so they can be
run straight away, but their output will be less rich and less informative.
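As a rough, standard-library-only sketch of the component-wiring pattern such scripts build on (the names `REGISTRY`, `component` and `build` are invented here and are not Vel's API):

```python
# Hypothetical sketch, NOT the actual Vel API: illustrates wiring
# prebuilt components together from a config dictionary, the same
# idea a YAML config file expresses declaratively.

REGISTRY = {}


def component(name):
    """Register a component factory under a short name."""
    def decorator(factory):
        REGISTRY[name] = factory
        return factory
    return decorator


@component("sgd")
def make_sgd(lr=0.01):
    # Stand-in for an optimizer component.
    return {"kind": "sgd", "lr": lr}


@component("mnist")
def make_mnist(batch_size=32):
    # Stand-in for a data-source component.
    return {"kind": "mnist", "batch_size": batch_size}


def build(config):
    """Instantiate every component named in the config dict."""
    return {
        key: REGISTRY[spec["name"]](**spec.get("params", {}))
        for key, spec in config.items()
    }


if __name__ == "__main__":
    # The equivalent of a small config file, parsed into a dict.
    config = {
        "source": {"name": "mnist", "params": {"batch_size": 128}},
        "optimizer": {"name": "sgd", "params": {"lr": 0.001}},
    }
    parts = build(config)
    print(parts["optimizer"]["lr"])  # prints 0.001
```

A script-based run simply builds the same components in code instead of naming them in YAML, which is why the two styles stay interchangeable.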

Here is an example script running the same setup as a config file from above: