As soon as a car deviates from the optimum ride height needed for the undertray to generate its effect, the downforce varies significantly. This is a problem when apex speeds are significantly higher thanks to the extra downforce created by ground effects.

Hit a bump the wrong way and lose downforce == shoot off the corner at much higher speeds into the barrier.

When we inevitably lose the battle (the government does have a tendency to get its way in these things), do we get to reap the benefits of a total information society? I mean, will there be a searchable database where I can find out where I left my keys? The link to that awesome video I saw on sometube.com that I can't remember? Whether I remembered to feed the cat?

> Which is what my OP is about, what you suggest we've been there, done that and failed. This is simple functionality that should be handled in kernel with a simple API presented to userspace.

I'd argue that in fact with OSS, we've been there and tried it, and it (the Unix write() and ioctl way) fails. (Not to mention you have no idea what I'd advocate; I've just been trying to work out how you'd tackle some of the hard problems with realtime DSP, which you've been avoiding answering.)

I'll ask again - are you saying to put floating point mixing in the kernel, or fixed point mixing? There are performance and other reasons why floating point register save/restore is avoided in the kernel (in fact, some drivers use these registers as working space, IIRC). If you're saying put fixed point mixing in the kernel, this either incurs a penalty in converting everything to that fixed point format for mixing _within the realtime processing thread_, or you force every userspace application to output fixed point to the kernel sound API.
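To make that conversion penalty concrete, here's a minimal sketch (the function name and scaling are illustrative, not any real kernel API) of what a fixed-point-only kernel mixer forces every float-based application to do, per buffer, inside its realtime path:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical sketch: the per-buffer cost an all-fixed-point kernel API
 * would impose on a floating-point application. Every sample has to be
 * clamped and scaled to s16 before it can be handed to the kernel. */
static void float_to_s16(const float *in, int16_t *out, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        float s = in[i];
        if (s > 1.0f)  s = 1.0f;   /* clip to full scale */
        if (s < -1.0f) s = -1.0f;
        out[i] = (int16_t)(s * 32767.0f);
    }
}
```

That's a branchy clamp-and-scale over every sample of every buffer, burned inside the deadline-constrained thread, purely to satisfy the API.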

Low latency timing information and scheduling is a crap shoot with OSS. How do you know when the sound you are queueing with write() will actually get output? How do you schedule your user-space applications with enough priority that they aren't parked before they get to that critical write() call?

All modern audio APIs have moved away from ioctl and write() to a callback down the stack from the interrupt, for a reason.
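The shape of that callback model can be sketched like this (a toy mock, not any real driver API - the struct and function names are invented for illustration). The point is the inversion of control: the driver pulls audio at interrupt time instead of the app pushing it blindly with write(), so the app always knows exactly which buffer is about to hit the hardware:

```c
#include <stddef.h>

/* Illustrative mock of a callback-driven audio API: instead of the app
 * queueing audio with write(), the driver invokes a user callback right
 * after the hardware interrupt, handing it the buffer due next. */
typedef void (*process_cb)(float *out, size_t frames, void *user);

struct mock_driver {
    process_cb cb;
    void *user;
};

/* What the interrupt's bottom half would do: pull fresh audio for the
 * exact buffer the hardware will consume next. */
static void driver_interrupt(struct mock_driver *d, float *hw_buf, size_t frames)
{
    d->cb(hw_buf, frames, d->user);
}

/* Trivial example callback: output silence. */
static void fill_silence(float *out, size_t frames, void *user)
{
    (void)user;
    for (size_t i = 0; i < frames; i++)
        out[i] = 0.0f;
}
```

With write(), the app has no idea how far down the queue its samples land; with the pull model, latency is exactly the buffer the callback is filling.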

> you can resample your lower-priority audio outside and feed it through a lock-free ringbuffer. This way the deadline constrained realtime audio has no penalty in practice.

So it seems we agree that handling formats and rates other than the audio graph's canonical ones within the realtime processing callback is a silly idea, right?

Now - would you agree that nearly all pro-audio applications and plugins internally work in floating point?

If you are advocating an in-kernel fixed point mixer for the hardware, you will need to perform conversions to fixed point for any output. This seems an unnecessary burn of RT time given we can pass floating point to hardware and do any mixing of it in software, all in floating point.

Also, I'll point it out again:

"if you're not using floating point you're ignoring the possibility for optimisations using SIMD/FMA instructions"

I'm having trouble understanding what it is you are proposing as a solution.

Are you asking for in kernel fixed point mixing? Out of kernel? Please let me know.

I've done a fair amount of audio programming too, and the positions of the Jack guys and Poettering are (mostly) understandable.

The bit I don't understand is why Pulseaudio wasn't a "relaxed" mode of Jack with the necessary hardware bits bolted on. Then again, the Jack project can't agree to work on one codebase or list of functionality, and Pulseaudio can't agree to actually do any low latency work (except they do low latency well enough for gaming, sure).

> Despite that most commercial hardware has for ages used fixed point math for this, and that even a simple spline interpolation would do fine for all cases since you are upsampling, most of the "audio gurus" of the linux audio development community of the time (like Paul Davis, Steve Harris, etc) convinced the kernel people that there would be a terrible quality loss and it was a bad idea to do this.

One of the problems of using varying sample formats and rates when producing low latency audio is you have now introduced format conversions and resampling inside the deadline constrained realtime audio producing routines.
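To show what's being hoisted out of the realtime path, here's a minimal linear-interpolation resampler sketch (names and signature are my own, for illustration) - the kind of conversion you'd run in a lower-priority thread and feed through a ring buffer, rather than inside the deadline-constrained routine:

```c
#include <stddef.h>

/* Minimal linear-interpolation resampler sketch. `ratio` is
 * out_rate / in_rate (e.g. 2.0 when upsampling 24kHz to 48kHz).
 * Returns the number of output samples produced. This is exactly the
 * kind of work to keep *outside* the realtime callback. */
static size_t resample_linear(const float *in, size_t in_len,
                              float *out, size_t out_max, double ratio)
{
    size_t produced = 0;
    for (size_t i = 0; i < out_max; i++) {
        double pos = (double)i / ratio;     /* position in input stream */
        size_t i0 = (size_t)pos;
        if (i0 + 1 >= in_len)               /* ran out of input pairs */
            break;
        double frac = pos - (double)i0;
        out[i] = (float)((1.0 - frac) * in[i0] + frac * in[i0 + 1]);
        produced++;
    }
    return produced;
}
```

Even this cheapest-possible interpolation costs a divide and two multiplies per output sample - which is why you don't want N clients each triggering it inside the realtime thread.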

In addition, if you're not using floating point you're ignoring the possibility for optimisations using SIMD/FMA instructions.
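For illustration, this is the shape of loop I mean (the function is a sketch of mine, not from any particular library). With contiguous float buffers it maps straight onto SIMD multiply-add; fixed point would need widening, shifting, and saturation steps that break the pattern:

```c
#include <stddef.h>

/* A gain-and-accumulate mix loop. With float data and non-aliasing
 * contiguous buffers, compilers at -O2/-O3 vectorise this into SIMD
 * fused multiply-add (FMA) instructions. */
static void mix_into(float *restrict out, const float *restrict in,
                     float gain, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] += gain * in[i];   /* one fused multiply-add per sample */
}
```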

Posted by timothy on Wednesday August 06, 2014 @06:22PM
from the no-department dept.

v3rgEz (125380) writes As part of MuckRock's Drone Census, the San Jose Police twice denied having a drone in public records requests — until the same investigation turned up not only a signed bid for a drone but also a federal grant giving them money for it. Now, almost a full year after first denying they had a drone, the department has come clean and apologized for hiding the program, promising more transparency and to pursue federal approval for the program, which the police department had, internally, claimed immunity from previously.

The only other thing I'd mention - you perhaps noticed I kept saying "threads like.." and "with regular threads" - because the design basically introduces a number of single points of failure. Due to the lack of a back channel or retransmission, things can go wrong silently (network cable failure, etc.). In an ideal world you'd double up on some of that infrastructure and networking.

I know you need to get something up and running, but it's perhaps something to bear in mind for a later iteration.

First - the problem with Python is that, because it runs in a VM, you've got a whole lot of baggage in that process that's out of your control (mutexes, mallocs, stalls for housekeeping).

Basically you've got a strict timing guarantee dictated by the fact that you have incoming UDP packets you can't afford to drop.

As such, you need a process sat on that incoming socket that doesn't block and can't be interrupted.

The way you do that is to use a realtime kernel and dedicate a CPU using process affinity to a realtime receiver thread. Make sure that the only IRQ interrupt mapped to that CPU is the dedicated network card. (Note: I say realtime receiver thread, but in fact it's just a high priority callback down stack from the IRQ interrupt).
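As a sketch of the thread-side setup (function names are mine; the CPU number and priority are illustrative), you'd build the affinity mask and SCHED_FIFO parameters, then apply them with pthread_setaffinity_np and pthread_setschedparam - note the apply step needs root or CAP_SYS_NICE, and the IRQ-to-CPU mapping itself is done separately via /proc/irq/:

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Build the pinning parameters for the realtime receiver thread:
 * a one-CPU affinity mask and a SCHED_FIFO priority. */
static void rt_receiver_params(int cpu, int prio,
                               cpu_set_t *mask, struct sched_param *sp)
{
    CPU_ZERO(mask);
    CPU_SET(cpu, mask);          /* only this CPU may run the receiver */
    sp->sched_priority = prio;
}

/* Apply them to a thread. Requires CAP_SYS_NICE (or root); returns 0
 * on success, an errno value otherwise. */
static int rt_receiver_apply(pthread_t t, const cpu_set_t *mask,
                             const struct sched_param *sp)
{
    int err = pthread_setaffinity_np(t, sizeof(*mask), mask);
    if (err)
        return err;
    return pthread_setschedparam(t, SCHED_FIFO, sp);
}
```

The matching half - pointing the NIC's IRQ at that same CPU and masking everything else off it - is done outside the program (IRQ smp_affinity plus isolating the CPU from the general scheduler).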

This realtime receiver thread should be a "complete" realtime thread - no malloc, no mutexes. Passing messages out of these realtime threads should be done via non-blocking ring buffers to high (regular) priority threads that are in charge of posting to something like zeromq.

Depending on your deadlines, you can make it fully non-blocking, but you'll need to dedicate a CPU to spin, checking that ring buffer for new messages. The second option is to calculate your upper bound on ring buffer fill and poll it every now and then. You could use semaphores to signal between the threads, but you'd need to make that other thread realtime too, to avoid a possible priority inversion situation.
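The ring buffer I keep referring to looks roughly like this - a single-producer/single-consumer lock-free queue (my own minimal sketch; capacity and element type are illustrative). The realtime receiver pushes, the regular-priority thread polls and pops; no malloc, no mutex anywhere near the realtime side:

```c
#include <stdatomic.h>
#include <stddef.h>

#define RING_SIZE 8   /* power of two; illustrative capacity */

/* Single-producer/single-consumer lock-free ring buffer. Safe to touch
 * from the realtime thread: no allocation, no locking, bounded work. */
struct ring {
    _Atomic size_t head;   /* written only by the producer */
    _Atomic size_t tail;   /* written only by the consumer */
    int buf[RING_SIZE];
};

/* Producer side (realtime receiver). Returns 0 if full - drop the
 * message or count an overrun; never block. */
static int ring_push(struct ring *r, int v)
{
    size_t h = atomic_load_explicit(&r->head, memory_order_relaxed);
    size_t t = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (h - t == RING_SIZE)
        return 0;
    r->buf[h % RING_SIZE] = v;
    atomic_store_explicit(&r->head, h + 1, memory_order_release);
    return 1;
}

/* Consumer side (regular-priority poster thread). Returns 0 if empty. */
static int ring_pop(struct ring *r, int *v)
{
    size_t t = atomic_load_explicit(&r->tail, memory_order_relaxed);
    size_t h = atomic_load_explicit(&r->head, memory_order_acquire);
    if (t == h)
        return 0;
    *v = r->buf[t % RING_SIZE];
    atomic_store_explicit(&r->tail, t + 1, memory_order_release);
    return 1;
}
```

The consumer is free to spin on ring_pop (the dedicated-CPU option) or call it periodically against your calculated fill bound; either way the producer never waits.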

> how do you then ensure that the process receiving the incoming UDP messages is high enough priority to make sure that the packets are definitely, definitely received

As mentioned: dedicate a CPU, mask everything else off from it, and make the IRQ point to it.

> what support from the linux kernel is there to ensure that this happens

With a realtime thread the only other thing that could interrupt it would be another realtime priority thread - but you should make sure that situation doesn't occur.

> is there a system call which makes sure that data received on a UDP socket *guarantees* that the process receiving it is woken up as an absolute priority over and above all else

Yes, IRQ mapping to the dedicated CPU with a realtime receiver thread.

> the message queue destination has to have locking otherwise it will be corrupted. what happens if the message queue that you wish to send the UDP packet to is locked by a *lower* priority process

You might get away with having the realtime receiver thread do the zeromq message push (for example) but the "real" way to do this would be lock-free ring buffers and another thread being the consumer of that.

> what support in the linux kernel is there to get the lower priority process to have its priority temporarily increased until it lets go of the message queue on which the higher-priority task is critically dependent

You want to avoid this. Use lock-free structures for correctness - or you may discover that having the realtime receiver thread do the post is "good enough" for your message volumes.

> to the best of my knowledge the linux kernel has absolutely no support for these kinds of very important re-prioritisation requirements

No offense, but Linux has support for this kind of scenario, you're just a little confused about how you go about it. Priority inversion means you don't want to do it this way on _any_ operating system, not just Linux.