Piece of shit? The NYC subway system serves 1.7 billion passengers per year and runs 24/7 in nearly all weather conditions. It is one of the longest subway systems in the world and costs the same for all riders, regardless of how far they ride. All for $2.75. It's not the cleanest subway, but I wouldn't call it shitty considering it successfully serves one of the world's busiest cities.

Our subways are the shittiest. You go into the station and it's like you've descended into hell. The heat, the smell, the noise, the useless announcements, and the lack of good signage. The worst in the world. Just go to London or Paris or even fucking Boston.

The console history is really unusable. Why can't they just replicate what the IPython console does? E.g., partial matching when pressing the up arrow. Fix plotting support too. Honestly, why can't they just embed IPython? It just works.
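
For what it's worth, embedding it is nearly a one-liner. A minimal sketch (whether it slots cleanly into their console architecture is an assumption on my part):

    # Drop into a full IPython shell at this point in the program,
    # with persistent history, tab completion, and up-arrow prefix
    # matching for free.
    from IPython import embed

    embed()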

You should share more context about your application if you want good feedback on how to configure it. What kind of environment are you running this in? What bottleneck are you encountering? What are the performance characteristics of your service: QPS, CPU-bound, IO-bound, memory-bound, etc.?

Nginx is a reverse proxy, web server, load balancer, etc. etc. Whatever you want to call it and however you're using it, the important thing to understand is that it's independent of python concurrency. All it does is forward requests. If you suspect that nginx is misconfigured or the source of your bottleneck, I can go into a little more detail (I have the least experience working with this layer), but it's unlikely to be your problem.

uWSGI works by creating an instance of the python interpreter and importing the python files related to your application. If you configure it with more than one process, it will fork that interpreter instance until it has the required number of processes. This is roughly equivalent to starting that number of python interpreter instances by hand, except that uWSGI will handle incoming HTTP requests and forward them to your application. This also means that each process is memory-isolated: no state is shared, and each process gets its own GIL.
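
A rough sketch of what that pre-forking means for state, using plain os.fork() rather than uWSGI itself (Unix-only, purely illustrative):

    import os

    counter = 0  # module-level state, created before the fork

    pid = os.fork()  # from here on there are two processes
    counter += 1     # each process mutates its OWN copy

    if pid == 0:
        print(f"child  sees counter={counter}")   # -> 1
        os._exit(0)
    else:
        os.wait()
        print(f"parent sees counter={counter}")   # -> 1, not 2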

Thus, using only processes for workers will give you the best performance if your aim is just to optimize throughput. However, processes come with tradeoffs. The main problem is that if your application benefits from sharing state and resources within the process, pre-forking makes this untenable. If you want an in-process cache, for example, your cache hit ratio would be much higher if all of your workers lived in one process and could share the same cache. An important implication of this is that processes are very memory inefficient: memory isolation often means that a lot of data is duplicated. (As a sidenote, there are ways to take advantage of copy-on-write semantics by loading things at import time, but that's a story for another day.)

For this reason, uWSGI also allows your workers to live within threads in the same process. These threads solve the problems mentioned above regarding shared state: now your workers can share the same cache, for example. However, this also means they share the same GIL. When more than one thread needs CPU time, they cannot make progress in parallel. In fact, GIL contention and context switching will make your application run slower, on net.
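
You can see the parallelism ceiling on any machine with a toy benchmark (numbers will vary, but two threads won't beat one):

    import threading
    import time

    def spin(n=10_000_000):  # pure-python CPU work, holds the GIL
        while n:
            n -= 1

    start = time.perf_counter()
    spin(); spin()
    print(f"sequential:  {time.perf_counter() - start:.2f}s")

    start = time.perf_counter()
    threads = [threading.Thread(target=spin) for _ in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(f"two threads: {time.perf_counter() - start:.2f}s")  # no speedup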

For IO-bound applications where workers spend so much time waiting for IO that GIL contention is rare, this sounds like it shouldn't be a problem. And if your application is like most web applications that spend a large part of their time talking to other services or a database, it's probably IO bound. So all good, right?

In reality, thread-based uWSGI workers almost never work flawlessly for any python web application of even moderate complexity. The reason is primarily the ecosystem and the assumptions people make when writing python code: many libraries, and plenty of internal code, are flagrantly, unapologetically, and inconsolably NOT threadsafe. Even if your application is running smoothly today with thread-based workers, you'll likely run into some hard-to-debug thread-safety problem sooner rather than later.
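
A typical offender is unlocked lazy initialization (make_client here is a stand-in for any expensive, stateful setup):

    import threading
    import time

    _client = None

    def make_client():
        time.sleep(0.01)  # stand-in for expensive, stateful setup
        return object()

    def get_client():
        global _client
        if _client is None:          # several threads can pass this check...
            _client = make_client()  # ...and each builds its own client
        return _client

    clients = set()
    workers = [threading.Thread(target=lambda: clients.add(get_client()))
               for _ in range(10)]
    for t in workers: t.start()
    for t in workers: t.join()
    print(f"distinct clients created: {len(clients)}")  # frequently > 1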

Moreover, so-called "IO bound" applications spend way more time on CPU than most developers realize, especially in python. Python code executes very slowly compared to most runtimes, and it's not uncommon for a simple CRUD app to spend ~20% of its time running python code as opposed to blocking on IO. Even with two threads, that's a lot of opportunity for GIL contention and further slowdowns.

My main point is this: Whatever bottleneck you're running into likely has to do with the fact that you're running 2 threads for each process. So unless shared state or memory utilization is very important to you, consider replacing that configuration with 4x processes, 1x threads instead and see what effect it has.
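
In uWSGI ini terms that's something like this (the module value is a placeholder for your actual WSGI entry point):

    [uwsgi]
    # placeholder module:callable for your application
    module = myapp.wsgi:application
    # 4 isolated worker processes, no threads sharing a GIL
    processes = 4
    threads = 1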

Citation: Nearly 2 years debugging and tuning performance of python applications of many flavors at Uber justtrustmei'veseensomeshit.

It's a shame they didn't write this paper before the LIGO observations. Nothing in the argument for PBH merger rates appeared to depend on the LIGO observations, so it would have been a rather amazing prediction.

This is incorrect. Gravitational lensing is one method of finding planets around stars - as the planet passes in front of a background star, it can increase the star's apparent brightness by bending light towards us. Also, gravitational lensing by the Sun, measured during an eclipse, was the first experimental confirmation of Einstein's theories.
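
(For scale, the standard point-lens magnification is

    A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4))

where u is the source-lens angular separation in units of the Einstein radius; a planet around the lens star shows up as a brief secondary spike on that light curve.)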

What these scientists propose is that these black holes are MACHOs (Massive Compact Halo Objects), thought to possibly explain dark matter. Scientists have searched galactic halos for signs of these objects - via lensing - but they didn't find anything. This is my big concern with this article - we would need a lot of primordial black holes in galactic halos to explain dark matter, and we just don't see any.

I have had some issues with unit testing decorated functions. My solution has been to avoid the @ syntax, and test the decorator separately from an undecorated version of my function.
Is there a standard recipe for unit testing that also lets me keep the @ syntax?
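
(For concreteness, one candidate recipe: if the decorator applies functools.wraps, the undecorated function stays reachable as .__wrapped__, so the @ syntax can stay. logged and add below are made-up names for illustration.)

    import functools

    def logged(func):
        @functools.wraps(func)  # exposes func as wrapper.__wrapped__
        def wrapper(*args, **kwargs):
            print(f"calling {func.__name__}")
            return func(*args, **kwargs)
        return wrapper

    @logged
    def add(x, y):
        return x + y

    # Test the bare logic, bypassing the decorator:
    assert add.__wrapped__(2, 3) == 5
    # Test the decorated path end to end:
    assert add(2, 3) == 5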

That's a fair complaint. I agree that open metrics are much better to go with, especially when there are many possible ways to overfit (even with the regularization).
Nevertheless, the method itself can't be too voodoo -- it's just regularized linear regression.
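
(For anyone unfamiliar, the whole method fits in a few lines. A sketch with scikit-learn on made-up data; alpha is the regularization strength:)

    import numpy as np
    from sklearn.linear_model import Ridge

    # toy data: 100 samples, 5 features, sparse true weights plus noise
    X = np.random.rand(100, 5)
    y = X @ np.array([1.0, 2.0, 0.0, 0.0, 3.0]) + 0.1 * np.random.randn(100)

    # ridge regression = least squares + alpha * ||w||^2 penalty
    model = Ridge(alpha=1.0).fit(X, y)
    print(model.coef_)  # shrunk toward zero relative to plain OLS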

Sue me if I go too fast, but the sons of his opponents wish he was their dad
Got a wig for his wig, got a brain for his heart
He'll kick you apart
He'll kick you apart

He'll save children but not the British children
He'll save children but not the British children
He'll save children but not the British children
He'll save children but not the British children

He had a pocket full of horses, fucked the shit out of bears
He threw a knife into heaven
And could kill with a stare
He made love like an eagle falling out of the sky
Killed his sensei in a duel and he never said why

Did I mention his four nuts?
Well, he also had four dicks
If you took off his boot you'd see the dicks growing off his feet
I heard that motherf*cker had like thirty goddamn dicks
He once held the hand of one of his opponents' wives in a jar of acid at a party