In the demonstration package, setup.py reads the lines from requirements.txt and passes them to setuptools.setup() as the install_requires arg. When I execute python setup.py install, setuptools correctly ignores pycapnp, too.

Dear Daniel Holth, thank you for following up with your recommendations.

A great many packages prefer to keep their Python dependencies in a single file: requirements.txt. Numenta's packages are no different. requirements.txt is an easily readable single source of truth for dependencies.


I considered adding extra logic to the production package's setup.py to parse its requirements.txt, identify the lines that contain environment markers, extract the environment markers, and then pass those entries to setup() via the extras_require arg, while passing the rest via the install_requires arg.
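The extra logic being considered might look something like this sketch (the exact splitting rules are my reading of the description above; the ':marker' extras key is the form setuptools understands):

```python
def split_requirements(lines):
    """Split requirement lines into (install_requires, extras_require).

    Lines carrying an environment marker ("pkg; marker") are routed to
    extras_require under a ':marker' key; unmarked lines go to
    install_requires.
    """
    install_requires, extras_require = [], {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if ";" in line:
            req, marker = (part.strip() for part in line.split(";", 1))
            extras_require.setdefault(":" + marker, []).append(req)
        else:
            install_requires.append(line)
    return install_requires, extras_require
```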

However, it occurred to me that it doesn't make sense for everyone to be complicating the installation code of their packages with such logic. Both pip and setuptools have already made the leap to support this syntax uniformly, and it works beautifully without additional boilerplate. Therefore, it only makes sense for wheel to follow in their footsteps.
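For illustration, the marker syntax in question can be exercised with the `packaging` library (the requirement string here is a made-up example, not one from the package under discussion):

```python
from packaging.markers import Marker

# A dependency with an environment marker, in the syntax pip and setuptools accept:
requirement = "pycapnp; platform_system != 'Windows'"

# Everything after the ';' is the marker expression.
marker = Marker(requirement.split(";", 1)[1].strip())

# evaluate() accepts an environment dict that overrides the current one,
# so we can check both platforms regardless of where this runs.
on_linux = marker.evaluate({"platform_system": "Linux"})      # dependency applies
on_windows = marker.evaluate({"platform_system": "Windows"})  # dependency skipped
```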

I will take a look at the referenced source code. Daniel Holth, are you recommending that I try to submit a patch to the wheel repo to support this feature or that I try patching it in my own package's installation logic?

I think it's shocking that people expect to read requirements.txt, which can also sometimes contain pip command-line arguments, directly into install_requires = []. It is a file format that came up from nowhere, but a lot of people seem to like it. Maybe they do not know about 'pip install -e .', or maybe their project doesn't even have a setup.py. The way I use it, requirements.txt would contain the output of 'pip freeze', but install_requires = [] would contain the abstract dependencies (just the names of the direct dependencies).
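To make that distinction concrete, here is a toy sketch of recovering abstract names from 'pip freeze'-style pinned output (the helper and the example pins are hypothetical):

```python
def abstract_names(freeze_lines):
    """Strip '==' version pins from 'pip freeze' output, leaving abstract names."""
    return [line.split("==", 1)[0].strip() for line in freeze_lines if line.strip()]


# Concrete, pinned dependencies as 'pip freeze' would emit them (example pins):
concrete = ["numpy==1.11.0", "pycapnp==0.5.7"]

# The corresponding abstract dependencies for install_requires:
print(abstract_names(concrete))  # → ['numpy', 'pycapnp']
```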

Wheel has to know how to translate requirements from install_requires to the dist-info format. It is a different code path than what setuptools uses, and the markers syntax for requires.txt is newer than bdist_wheel.

Thanks, everyone, for the feedback. I am not sufficiently well-versed in Python packaging best practices and have to consider that it's entirely possible that our projects are using a technique that, while convenient for our own internal workflow, may be non-standard. I will consult with my colleagues.

Our internal workflow is to run python setup.py install (or python setup.py develop) in our projects and expect that this automatically installs the requirements first, hence the copying from requirements.txt into the setup() arg install_requires.

If you have a project that pins some of its dependencies, then your workflow should be pip install -r requirements.txt followed by pip install -e . (any unpinned deps will be installed from PyPI, but anything pinned via requirements.txt will already be there, and pip install -e . will happily keep the pinned version installed).

Optionally, you could bundle the -e . into your requirements file and name it something like requirements-dev.txt. This workflow is basically equivalent to your python setup.py develop above, except you run pip install -r requirements-dev.txt.
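As a sketch, such a requirements-dev.txt might look like this (the package names and version pins are hypothetical):

```
# requirements-dev.txt: pinned dependencies plus the project itself
numpy==1.11.0
pycapnp==0.5.7
-e .
```

Running pip install -r requirements-dev.txt then installs the pinned dependencies and the project in development mode in a single step.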

Gentlemen, I have a follow-up question about building the wheel that references the concrete (pinned-down) dependencies. As recommended, I am changing install_requires to contain the abstract dependencies for developers, while keeping the concrete, versioned dependencies in the requirements.txt that we use to test official releases of our package.

What's not clear to me now is how I can possibly build a wheel for deployment to PyPI that records the concrete dependencies, such that when users install my wheel from PyPI, the concrete dependencies would be installed (not the loosely-versioned ones). Users who install my wheel from PyPI will not (and should not need to) have access to my package's requirements.txt, after all. For example:

However, when I build my wheel via python setup.py bdist_wheel, the resulting wheel will reference only the abstract, loosely-versioned dependencies from setup.py. Thus, when users install my wheel from PyPI, the wrong versions of dependencies would be installed instead of the concrete ones that I have in requirements.txt.

I want developers to be able to use the abstract dependencies for experimenting with my package, but installs of my wheel from PyPI need to reference the concrete (pinned-down) dependencies with which my build system actually tested my package.