If you prefer to jump straight to the conclusion, check out this repo, where I have everything ready for you to clone and start coding your new project immediately!

Note: I’m not claiming to have the final say in any of this, and it’s an ever-evolving subject, so please let me know if you think I could be doing things better :)

Python version

With every new release of python, certain features are added, deprecated or removed, so you want to make sure you know which version of python you are working with. The best way is to use pyenv, a python version manager that lets you install and pin a different version of python for each of your projects. To pin a version, run this at the root of your project

pyenv local 3.7.2

and a file called .python-version will be created containing the text 3.7.2. Now when you type python --version you'll get Python 3.7.2, and if you modify the file to say 3.6.8, the same command will output Python 3.6.8.

Package manager & virtual environment

With confidence in the version of your interpreter, we can worry about how to download libraries and where to put them. A couple of years ago I’d have been talking about pip, virtualenv or even the standard library’s venv module, but now the coolest kid in town is pipenv, or at least PyPA says so. Pipenv handles both the virtual environment and package management. Simply run

pipenv install

and pipenv will create a virtual environment and two files: Pipfile and Pipfile.lock. If you are coming from javascript, these are very similar to package.json and package-lock.json. To see where the virtual environment was created, type pipenv --venv; this gives you a clue about how pipenv works: it automatically maps project directories to their specific virtual environments.
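For reference, a freshly generated Pipfile looks roughly like this (the exact source block may differ with your pipenv version):

```toml
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true

[packages]

[dev-packages]

[requires]
python_version = "3.7"
```

Packages you install land under [packages] (or [dev-packages] with the --dev flag), and Pipfile.lock pins their exact versions and hashes.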

Because we are creating a django project, we’ll download the package and add it to our environment with the command pipenv install django. To access the environment’s libraries, you have to prepend pipenv run to every command you run. For instance, to check the installed django version, type pipenv run python -c "import django; print(django.__version__)".

Project layout

In the picture below I show how I like to lay out my directory structure.

I don’t like the default django layout: after we are prompted to give our project a name like awesome, which also happens to be the name of our repo, we end up typing things like ~/code/awesome/awesome/settings.py, which is just awful. It makes much more sense to me to put all your configuration files in a directory called conf; your project’s name is already the name of the root directory. So, let’s start our project.
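The scaffolding command isn't shown in this post; a likely incantation, assuming django is already installed in the environment, is:

```shell
# Create the project with its settings package named conf,
# inside the current directory (note the trailing dot).
pipenv run django-admin startproject conf .
```

This way conf/settings.py lives directly under the repo root instead of a doubled awesome/awesome/ path.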

The development server should be working after we type pipenv run python manage.py runserver, and it does, but something is off: that command was super painful to write. Luckily, we can create aliases for pipenv in our Pipfile like so

[scripts]
server = "python manage.py runserver"

Try again, this time with pipenv run server. Much better, right?

Source control

Now that we have the skeleton of our app ready, it’s a good time to start tracking our changes, making sure that we don’t track secrets, development media, log files, etc. Create a file called .gitignore with the following content and you’ll be safe to initialize your repo with git init and create your first commit with git commit -am "Initialize project".
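The post's exact .gitignore isn't reproduced here; a minimal sketch covering the secrets, media and log files mentioned above would be:

```
__pycache__/
*.pyc
.env
/media/
/logs/
node_modules/
db.sqlite3
```

Adjust the list to your project; the key entries are .env (credentials), media and logs.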

Settings and environment variables

Have you read The Twelve-Factor App? Please go ahead and do so. Too lazy? Let me summarize it for you, at least the part about configuration: you should store the connection information for external services such as databases, external storage and APIs, along with credentials, in your environment. This makes it very easy to transition between the different environments your code will run in, such as dev, staging, CI, testing & production.
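In plain python, before any helper library, the idea looks like this (the env helper below is hypothetical, just to illustrate the pattern):

```python
import os


def env(name, default=None, cast=str):
    """Hypothetical helper: read a setting from the environment,
    falling back to a default, with a simple bool cast for flags."""
    raw = os.environ.get(name, default)
    if raw is None:
        return None
    if cast is bool and isinstance(raw, str):
        return raw.lower() in ('1', 'true', 'on', 'yes')
    return cast(raw)


# The same code now works in dev (defaults) and production (real env vars).
DEBUG = env('DJANGO_DEBUG', default='off', cast=bool)
DATABASE_URL = env('DATABASE_URL', default='postgres://localhost:5432/awesome')
```

Nothing secret lives in the repo; each environment just sets different variables.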

There's an excellent package called django-environ that will help us here. Go ahead and install it with pipenv install django-environ and modify conf/settings.py to read all its external services configuration and secret values from the environment
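The full settings diff isn't shown in this post; a sketch of the django-environ pattern (the exact variable set is an assumption) looks like:

```python
# conf/settings.py (sketch)
import environ

env = environ.Env(DEBUG=(bool, False))
environ.Env.read_env()  # load a .env file if one exists

SECRET_KEY = env('SECRET_KEY')
DEBUG = env('DEBUG')
DATABASES = {'default': env.db('DATABASE_URL')}
```

env.db parses a DATABASE_URL-style connection string into django's DATABASES format.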

If, like in this example, we decide to use postgresql, we need to make sure we install an adapter like psycopg2 with pipenv install psycopg2-binary.

Logging

Logging is one of those things that no one pays much attention to in a project, but it’s really nice when it’s there and well done. Let’s start off on the right foot and modify our conf/settings.py file
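The post's actual LOGGING dict isn't reproduced here; a minimal sketch of a dictConfig-style setup (handler names and format are assumptions) could be:

```python
import logging
import logging.config

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {
            'format': '{levelname} {asctime} {module} {message}',
            'style': '{',
        },
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'verbose',
        },
    },
    # Everything propagates to the root logger and goes to the console.
    'root': {'handlers': ['console'], 'level': 'INFO'},
}

logging.config.dictConfig(LOGGING)
logging.getLogger(__name__).info('logging is configured')
```

In a real settings.py you would add file handlers writing under logs/ and gate the sentry handler behind the USE_SENTRY flag.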

Notice that if we decide to turn on sentry with USE_SENTRY=on, we first need to pipenv install sentry-sdk and get our secret DSN from sentry.io after creating a new project there.

Testing

Testing code is a good idea. Found a bug? Write a test that fails, fix the code, pass the test and commit. This way you’ll never have to worry about that particular bug appearing again, even when someone else (which may well be you from the future) touches something completely unrelated that, through a Rube Goldberg-like process, affects this part of the code and makes the bug reappear. Not if you write some tests.

You can use django's TestCase or other frameworks, but I like pytest and one of my favourite features is parametrized tests. Let’s go ahead and

pipenv install pytest pytest-django --dev

Notice the --dev, this tells pipenv to keep track of certain dependencies as development only.

There are many filenames in which we can configure pytest, but I prefer to do it in one called setup.cfg, as that file is shared with other tools and this helps keep the file count lower. The following is a possible configuration

[tool:pytest]
testpaths = tests
addopts = -p no:warnings

Now you can create tests inside the tests directory and run them with

pipenv run pytest
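A parametrized test, the pytest feature mentioned above, might look like this (the file name tests/test_example.py and the add function are hypothetical):

```python
import pytest


def add(a, b):
    # In a real project this would be imported from your application code.
    return a + b


# One test function, three cases: pytest runs and reports each one separately.
@pytest.mark.parametrize('a, b, expected', [
    (1, 2, 3),
    (0, 0, 0),
    (-1, 1, 0),
])
def test_add(a, b, expected):
    assert add(a, b) == expected
```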

Code linting

Code linting means running software that analyzes your code in some fashion. I’m only going to focus on style-consistency tools, but you should know there are plenty of others to consider, like Microsoft’s pyright.

The first tool I’m going to mention is your editor itself. Whether it’s VIM, VSCode, Sublime or another, there is a project called EditorConfig, a standard specification for telling your editor how big your indents should be, which string quotes you prefer and that kind of stuff. Just add a file called .editorconfig at the root of your project with something like this
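The post's .editorconfig isn't reproduced here; a sketch (the specific values are assumptions, adjust to taste) would be:

```
root = true

[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
indent_style = space
indent_size = 4

[*.{js,json,sass,html}]
indent_size = 2
```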

After installing them with pipenv install flake8 isort --dev, we can run both of these programs with pipenv run flake8 and pipenv run isort respectively.

The final tool we are going to use is JS Beautifier, which will help us maintain order in our html, js and stylesheets. For this one, you can either pipenv install jsbeautifier or just install a plugin for your IDE and let it show you error messages (this is how I do it). To configure it, create a .jsbeautifyrc file in the root of your project with content like this (too large for this post).

Static files

All our javascript, stylesheets, images, fonts and other static files live inside the assets directory. We will use webpack to compile SASS stylesheets into CSS and ES6 into browser-compatible JS (ES5?). Webpack will pick up assets/index.js and use it as the entry point to all our static files. From this file we will import all our javascript and stylesheets, and webpack will compile, minimize and put everything in a nice bundle for us. My typical assets/index.js looks like this

import './sass/main.sass'
import './js/main.js'

Naturally, we need to install at least webpack, babel and a sass compiler. We need to create a file called package.json with the following content
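The file itself isn't shown in this post; a sketch (package names and version ranges below are assumptions for a webpack 4-era setup) could be:

```json
{
  "name": "awesome",
  "private": true,
  "scripts": {
    "start": "webpack --mode development --watch",
    "build": "webpack --mode production"
  },
  "devDependencies": {
    "webpack": "^4.29.0",
    "webpack-cli": "^3.2.0",
    "webpack-bundle-tracker": "^0.4.2",
    "babel-loader": "^8.0.0",
    "@babel/core": "^7.0.0",
    "@babel/preset-env": "^7.0.0",
    "sass-loader": "^7.1.0",
    "node-sass": "^4.11.0",
    "css-loader": "^2.1.0",
    "mini-css-extract-plugin": "^0.5.0"
  }
}
```

webpack-bundle-tracker is what writes the stats file that django-webpack-loader reads later on.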

Then run npm install, which will download all the dependencies specified in package.json into a directory called node_modules.

It’s time to configure webpack. Here’s a link to the configuration I use (too long for this post), but long story short, it defines a series of rules: when you see this kind of file, do this; for that other kind, do that. Put this configuration in a file called webpack.config.js in the root of your directory.

Now it’s time to connect webpack’s output to django templates with the best tool I’ve found for this: django-webpack-loader. As you may be used to by now we need to pipenv install django-webpack-loader to install it and then modify conf/settings.py

INSTALLED_APPS = [
    ...
    'webpack_loader',
]

filename = f'webpack-bundle.{ENV}.json'
stats_file = os.path.join(root_path('assets/'), filename)

WEBPACK_LOADER = {
    'DEFAULT': {
        'CACHE': not DEBUG,
        'BUNDLE_DIR_NAME': 'bundles/',  # must end with slash
        'STATS_FILE': stats_file,
        'POLL_INTERVAL': 0.1,
        'TIMEOUT': None,
        'IGNORE': ['.+\.hot-update.js', '.+\.map'],
    }
}

Notice that what we are doing is reading a file that contains the information about where to find webpack’s compiled files (bundles).

We can now include webpack’s compiled scripts and stylesheets in our templates using a template tag
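django-webpack-loader's render_bundle tag does the job; assuming webpack's entry is named main (the default), a template might contain:

```html
{% load render_bundle from webpack_loader %}
<!doctype html>
<html>
  <head>
    {% render_bundle 'main' 'css' %}
  </head>
  <body>
    {% render_bundle 'main' 'js' %}
  </body>
</html>
```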

You can now start running code async with pipenv run celery, which should either work or tell you that redis is not available. Don’t worry if it isn’t, we are not gonna use your system’s redis anyways.
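The celery wiring itself isn't shown in this post; a typical conf/celery.py, assuming celery was installed with a redis broker and configured through django settings, might look like:

```python
# conf/celery.py (a sketch; the module path and settings namespace
# are assumptions)
import os

from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'conf.settings')

app = Celery('conf')
# Read all CELERY_* values from django's settings module.
app.config_from_object('django.conf:settings', namespace='CELERY')
# Find tasks.py modules inside the installed apps.
app.autodiscover_tasks()
```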

Docker compose

To bring all the project pieces online, we have to turn on the web server, celery, redis, postgres & webpack. It can be quite annoying to open several terminals and type all the needed commands. To fix this problem we’ll use docker compose, which is a docker container orchestration tool.

In a nutshell, how docker works is that it “compiles” your code and all its requirements into what’s called an image. Then, to run our app, we can instantiate this image as a container, which is a running instance of the image. We will also mount our code into the containers, so that we can make changes on our local machine and see them live in the containers without having to rebuild the images.

To build the images we need, we will create two files called Dockerfile.web and Dockerfile.worker, which will serve as scripts for building our web and worker images.
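The Dockerfiles aren't reproduced in this post; a sketch of Dockerfile.web (base image and pipenv flags are assumptions) could be:

```dockerfile
FROM python:3.7-slim

ENV PYTHONUNBUFFERED=1
WORKDIR /app

# Install dependencies from the lock file into the image's system python.
RUN pip install pipenv
COPY Pipfile Pipfile.lock /app/
RUN pipenv install --system --deploy

COPY . /app/

CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```

Dockerfile.worker would look the same apart from its CMD, which starts the celery worker instead of the development server.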

Notice that web service uses Dockerfile.web, and worker service uses Dockerfile.worker.
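The compose file itself isn't shown in this post; a docker-compose.yml consistent with the services described here might look like (service details are assumptions):

```yaml
version: '3'

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.web
    env_file: [.env, .env.docker]
    volumes: ['./media:/app/media', './logs:/app/logs']
    ports: ['8000:8000']
    depends_on: [redis, postgres]
  worker:
    build:
      context: .
      dockerfile: Dockerfile.worker
    env_file: [.env, .env.docker]
    volumes: ['./media:/app/media', './logs:/app/logs']
    depends_on: [redis, postgres]
  redis:
    image: redis
  postgres:
    image: postgres
    ports: ['4321:5432']  # published on host port 4321, matching DATABASE_URL
  webpack:
    image: node
    working_dir: /app
    volumes: ['.:/app']
    command: npx webpack --mode development --watch
```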

Notice that we are passing two environment-variable files to the web and worker services. We do it this way so we can override certain things when running inside Docker. In particular, we need the database address to point to the host machine (your machine) and not to localhost (which, inside the docker container, is the container itself). To do this, create .env.docker and add the line

DATABASE_URL=postgres://<user>@host.docker.internal:4321/<db-name>

We mount ./media and ./logs inside your containers so that you can easily read the logs and check the uploaded files in your local machine. Don’t worry, they are git-ignored.

The moment we have been building for … tum tum tum

docker-compose up

Yey! Everything should be online at this point, and while it’s not yet time to drink that beer, you can start thinking about it.

When we change code in our machine, the server will autoreload as is expected with django’s development server. But when we add a new library with pipenv install, we need to rebuild the images with

docker-compose build

One final thing: we will make our images lighter by excluding some files from the build context with a file called .dockerignore like this
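The post's .dockerignore isn't reproduced here; a sketch would be:

```
.git
node_modules
media
logs
__pycache__
*.pyc
.env
```

media and logs can be excluded from the image because they are mounted into the containers at runtime anyway.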

Extra: Custom user model

I have yet to work on a project where I don’t need to modify django’s built-in auth app, either to add or modify fields, add custom behavior, or rename the urls. To have everything at the tip of our fingers, and not floating in the depths of our django dependency, we’ll create our first app, called users, and add a custom user model.

from django.contrib.auth.models import BaseUserManager


class UserManager(BaseUserManager):
    def create_user(self, email, password, **extra_fields):
        if not email:
            raise ValueError('The Email must be set')
        email = self.normalize_email(email)
        user = self.model(email=email, **extra_fields)
        user.set_password(password)
        user.save()
        return user

    def create_superuser(self, email, password, **extra_fields):
        extra_fields.setdefault('is_superuser', True)
        extra_fields.setdefault('is_staff', True)
        extra_fields.setdefault('is_active', True)
        if extra_fields.get('is_superuser') is not True:
            raise ValueError('Superuser must have is_superuser=True.')
        return self.create_user(email, password, **extra_fields)
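The User model that goes with this manager isn't shown in the post; a minimal sketch in users/models.py (the field choices are assumptions) could be:

```python
# users/models.py (sketch). Remember to point django at it with
# AUTH_USER_MODEL = 'users.User' in conf/settings.py.
from django.contrib.auth.models import AbstractBaseUser, PermissionsMixin
from django.db import models


class User(AbstractBaseUser, PermissionsMixin):
    email = models.EmailField(unique=True)
    is_staff = models.BooleanField(default=False)
    is_active = models.BooleanField(default=True)

    objects = UserManager()  # the manager defined above, same module

    USERNAME_FIELD = 'email'  # log in with email instead of a username
```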

With all this in place, we are now in full control of our users’ auth process. If you want to add registration and custom templates to the mix, check out my repo, where I have everything set up and working with django-registration-redux.

Final thoughts

I hope this guide was as helpful for you to read as it was for me to write. I encourage you to clone my repo on github and open pull requests if you feel anything can be done better.