Designing an oscilloscope software

This is a discussion on Designing an oscilloscope software within the C++ Programming forums, part of the General Programming Boards category.


Well, since my sender part and the protocol are complete, I'm starting to design my oscilloscope receiver and renderer: basically, where all the action and smart computing will take place.

I made a simple sample in Visual Basic 6, but such a processing task is far beyond what it can do (or my approach is bad).

Basically, I need something like this (this time, I'll have to develop it in C++). Data is received continuously on the serial port at 115200 baud (the highest speed for a default COM port; not sure if that number is right). This data is decoded (it contains two values per 'frame') and split into frames. Each 'frame' goes somewhere in a buffer.

Now, that's not really hard.
The hard part is - all this data has to be rendered. The values are simple bytes (0-255). They represent two values read by the device from an Analog-to-Digital converter. So the rendering will represent a waveform or something similar, depending on what is read.

My problem is, what approach should I use in rendering it? I will need an adjustable timebase, so that I can see small differences if I want to (I know there will be losses, and not everything will get rendered), but I can also choose a larger timebase, so I can see what happens over a longer time (some averaging of values might be needed here). When exactly should I start rendering from the buffer?

I was thinking of two approaches. One would be: let a normal buffer fill, then render, then empty it. The other: write into a circular buffer, and have the renderer use the last X positions (following the write pointer) every Y ms. The problem here is that data may be overwritten while rendering. What do you think? How should this be handled?
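To make the second idea concrete, here's a minimal single-threaded sketch of a ring buffer where a serial reader pushes samples and a render timer grabs the last X of them. All names (`RingBuffer`, `push`, `lastN`) are illustrative, not from any real design; a live version would also need locking or atomics, since this is exactly where the "data overwritten while rendering" problem shows up.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

// Fixed-size ring buffer: the serial thread calls push() per decoded
// sample, the render timer calls lastN() every Y ms for the newest X.
class RingBuffer {
public:
    static const std::size_t kSize = 1024;   // power of two, so & wraps

    void push(std::uint8_t sample) {
        buf_[head_ & (kSize - 1)] = sample;
        ++head_;
    }

    // Copy out the last n samples, oldest first.
    std::vector<std::uint8_t> lastN(std::size_t n) const {
        if (n > head_) n = head_;            // fewer samples written so far
        std::vector<std::uint8_t> out;
        out.reserve(n);
        for (std::size_t i = head_ - n; i != head_; ++i)
            out.push_back(buf_[i & (kSize - 1)]);
        return out;
    }

private:
    std::array<std::uint8_t, kSize> buf_{};
    std::size_t head_ = 0;                   // total samples ever written
};
```

The renderer copies its window out first and then draws from the copy, so an overwrite during drawing can at worst corrupt one frame, not crash anything.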

There's also another catch which I'm not really able to figure out. Oscilloscopes have a useful feature called "trigger and hold". Basically, if I'm watching a sinewave, at each update the sinewave will start from a different phase (position, so to say). If the signal is fast, I'll see a ton of gibberish, since a lot of sinewaves will overlap. In this case, the 'scope draws what it sees, and then waits until the current value reaches a threshold. When it does, it starts drawing again. So the wave will always start from the same position, and will look clean. Hold is pretty much what it says: it won't trigger again until the time has expired. I suppose this will require a different approach to calling the 'render' function, but I'm still not sure.

Please let me know how you would do this.
I'm planning to use wxWidgets as a graphical interface, but if you think something else is better, do let me know.

I know how to use uCs, but my C/C++ skills aren't good at all. So I might ask about other stuff, too.

When the software and device are done and in good shape, I'll be happy to send a few to the people who helped me most, if they want one.

The way an ordinary digital scope works is, as you say, by "trigger and hold", which essentially means that you set a trigger level (voltage) and direction (rising or falling edge). Advanced versions also have "trigger on a high that is shorter/longer than x time" (where x is a small fraction of a second, say 1 ms or 0.5 ms). The really fancy models also have things like "trigger on A being high and B being low (for x time)".

But just seeing the voltage pass through a voltage limit in one direction (rising) would be sufficient to make a decent start.

You should also consider a "trigger position". To begin with, you may want to stick that at a fixed distance into the "frame". If a frame is 1024 samples, make it 128 samples from the beginning of the frame.

You probably want to use a circular buffer. When you see a trigger, count elements to frame minus trigger point, and store samples until that point. Start sending data from trigger-point minus trigger position (and you can start sending data as soon as the trigger is seen, assuming it's not preventing the uC from getting the data in correctly).
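As a sketch of that "trigger position" idea: scan for a rising edge through the trigger level, then cut the frame so it starts some samples *before* the trigger, making the pre-trigger signal visible. The function and parameter names here are mine, not from the post, and a real version would walk the circular buffer rather than a flat vector.

```cpp
#include <cstdint>
#include <vector>

// Find a rising-edge crossing of `level` and return a frame of `frame`
// samples that begins `trigger_pos` samples before the trigger point
// (e.g. frame = 1024, trigger_pos = 128, as suggested above).
std::vector<std::uint8_t> captureFrame(const std::vector<std::uint8_t>& in,
                                       std::uint8_t level,
                                       std::size_t frame,
                                       std::size_t trigger_pos)
{
    for (std::size_t i = 1; i < in.size(); ++i) {
        bool rising = in[i - 1] < level && in[i] >= level;
        if (!rising) continue;
        if (i < trigger_pos) continue;                   // not enough pre-trigger yet
        if (i - trigger_pos + frame > in.size()) break;  // frame runs off the end
        return std::vector<std::uint8_t>(in.begin() + (i - trigger_pos),
                                         in.begin() + (i - trigger_pos + frame));
    }
    return {};   // no usable trigger found
}
```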

The problem with buffering is that the data may come in quicker than I can process it, and then I'll have to skip parts of the buffer, so I can miss trigger events. I was thinking of storing the data in the buffer differentially: as it comes in, it is written as the difference between the last value and the current value. This way, I can 'look ahead' for trigger conditions. Anyway..

I think my problem now is rendering the data in 'direct' mode. When exactly should I start rendering, how should the input buffer work, and how will it react to the timebase? For example, if there is a long timebase, should it be displayed several times per update interval, showing 'old' samples as they go away? Or just wait the whole update interval (while the buffer fills) and then display the whole buffer?

Sampling should be set to 1 again when the current buffer has been sent to your PC. [You could be clever and start sampling once you've sent enough data to not overwrite data you already sent, or have two different buffers for the sample data, one that is being sent and one that is currently sampled into - but you probably won't gain much from that].
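The snippet this reply refers to didn't survive the quoting, so here is a sketch of the kind of loop being described (all names illustrative): sample into a circular buffer, set a `triggered` flag on a rising edge, then count samples until the post-trigger part of the frame is full and stop sampling until the host restarts it.

```cpp
#include <cstdint>

const int BUFSIZE = 256;          // power of two, so & works for wraparound
const int FRAME = 256;
const int TRIGGER_POS = 32;       // pre-trigger samples kept in the frame
const std::uint8_t LEVEL = 128;   // trigger level

std::uint8_t buffer[BUFSIZE];
int curpos = 0;
std::uint8_t last = 0;
bool sampling = true;
bool triggered = false;
int remaining = 0;                // post-trigger samples still to collect

// Called once per ADC sample; returns true when a full frame is ready.
bool storeSample(std::uint8_t sample) {
    if (!sampling) return false;
    buffer[curpos] = sample;
    curpos = (curpos + 1) & (BUFSIZE - 1);
    if (!triggered) {
        // Rising edge only, for now.
        if (last < LEVEL && sample >= LEVEL) {
            triggered = true;
            remaining = FRAME - TRIGGER_POS;
        }
    } else if (--remaining == 0) {
        sampling = false;         // frame complete; host restarts sampling
        triggered = false;
        return true;
    }
    last = sample;
    return false;
}
```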

The above code obviously doesn't allow for more complex triggering. It wouldn't be too hard to trigger on falling edges as well. Just add another flag and another or-statement in the "if (!triggered)" line.

That's an interesting approach. Well, I'm not reading the ADC at all. The uC is sending its values as it reads the ADC, as fast as it can (but of course, it reaches only about 90% of the serial speed).

My approach was to do that in the uC code, because I suspect that can be done faster than in the PC. If you can send the data across to the PC at 90% of the 115200 bitrate, then I suppose you could try to just decode the data in the PC with a similar approach.
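The actual framing isn't shown in this thread, so the following decoder assumes a common scheme (purely my assumption, not the poster's protocol): a frame-start marker byte, two data bytes per frame, and an escape byte that prefixes any data byte that happens to collide with a marker.

```cpp
#include <cstdint>
#include <utility>
#include <vector>

const std::uint8_t FRAME_START = 0xAA;   // assumed marker values
const std::uint8_t ESC = 0xBB;

// Decode a raw serial chunk into (channel A, channel B) pairs.
std::vector<std::pair<std::uint8_t, std::uint8_t>>
decodeFrames(const std::vector<std::uint8_t>& raw) {
    std::vector<std::pair<std::uint8_t, std::uint8_t>> frames;
    std::vector<std::uint8_t> data;
    for (std::size_t i = 0; i < raw.size(); ++i) {
        std::uint8_t b = raw[i];
        if (b == FRAME_START) { data.clear(); continue; }   // resync
        if (b == ESC && i + 1 < raw.size()) b = raw[++i];   // unescape
        data.push_back(b);
        if (data.size() == 2) {
            frames.push_back(std::make_pair(data[0], data[1]));
            data.clear();
        }
    }
    return frames;
}
```

The resync-on-marker behaviour is the useful property here: if bytes are lost, decoding recovers at the next frame start instead of staying misaligned forever.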

(By the way, using (curpos+1) & (bufsize-1) assumes that bufsize is some power of two, e.g. 256, 512, 1024, etc. It's a short way to do modulo. If you want a different buffer size on the PC, just use (curpos+1) % bufsize - the speed difference between a divide instruction and an AND operation on a PC is only 10-20 times, and we don't need to care about that there. But on a uC you may need to do the divide "by separate instructions", which most likely takes hundreds or thousands of clock cycles. If so, use the approach of

Code:

curpos++;
if (curpos == bufsize) curpos = 0;

That will be much faster than a divide. The AND operation is most likely faster than both of these, though.
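For a power-of-two buffer size, all three wraparound styles give identical results; these helper names are just for illustration:

```cpp
#include <cstddef>

// AND trick: only valid when bufsize is a power of two.
std::size_t wrapAnd(std::size_t pos, std::size_t bufsize) {
    return (pos + 1) & (bufsize - 1);
}

// Modulo: works for any bufsize, costs a divide.
std::size_t wrapMod(std::size_t pos, std::size_t bufsize) {
    return (pos + 1) % bufsize;
}

// Compare-and-reset: the version from the code block above.
std::size_t wrapIf(std::size_t pos, std::size_t bufsize) {
    ++pos;
    if (pos == bufsize) pos = 0;
    return pos;
}
```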

If you don't send data across the serial port, can you read the ADC faster? [By the way, you are supposed to read the ADC at timed intervals, selectable by the user, but I'd leave that until later if you don't have a way to do that already].

So by 'reading' the ADC, you mean something like sending a byte to the uC, and then the uC sends the read value back?

The whole thing is timed pretty well, since I'm using the hardware USART. There's a read-ADC/process (moving bits to the working register)/send loop, which is triggered by a timer at fixed intervals. The problem with timing comes when I send escape characters, as you can imagine, but that's another story.

Anyway, I'm still confused about drawing the buffer. Your idea for triggering sounds cool, and I definitely never thought of using powers of two for the buffer, but I think that's a bit too far ahead.

For the moment, I just want to write a simple piece of software that can display data from the device (without any triggering) and draw wavy lines when I move a pot. I never played with a real CRO, so I can barely imagine how the lines would look there.

So - my current question - as the buffer gets filled, when exactly should I start drawing from it? This will have something to do with the timebase, and definitely with the computer's processing power (I might, for example, have to stop the uC from sending data if the buffer gets completely rewritten during a draw cycle. I doubt this will ever be the case on a modern computer, but heck knows where I'll use this device). A circular buffer sounds OK for the triggering part, but is it good for the normal part?

I've written above the two methods I was thinking of using, but I'm not sure if they're any good.

Well, I imagined that this would be done on the uC itself, and when I say "readADC", I mean reading the hardware.

If you are processing on the PC, then you read in data for as long as it takes to trigger, then wait for the buffer to fill. [This is exactly what a Tektronix DSO (Digital Storage Oscilloscope) does; it even says "filling buffer" if you run it very slowly with lots of samples (2M samples on the model I used, IIRC). A CRO is slightly different, as it's just drawing based on the input (but using the trigger to sync things).]

Again, you want to see some stuff before the trigger, so you want to save all the incoming data; then, when the trigger happens, count in X amount of data, where X is less than a full buffer.

If you do the trigger detection on the PC, just use two buffers, one that you are drawing from, and another that you are receiving into. Use two threads, one that does the drawing.

putBuffer, getNextBuffer and releaseBuffer should use some form of events to signal between threads so that, for example, putBuffer waits for the "other" buffer to be ready before it switches to the new buffer.
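One way that handoff could look, as a sketch: the three function names come from the post, but everything else (using std::mutex and std::condition_variable as the "events") is my guess at one reasonable wiring.

```cpp
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <vector>

// Two-buffer exchange: the receiver thread fills a buffer and publishes it
// with putBuffer(); the drawing thread takes it with getNextBuffer() and
// hands it back with releaseBuffer() when the draw is done.
class BufferExchange {
public:
    // Receiver thread: wait until the previous buffer was released,
    // then publish the freshly filled one.
    void putBuffer(std::vector<std::uint8_t> filled) {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !full_; });
        pending_ = std::move(filled);
        full_ = true;
        cv_.notify_all();
    }

    // Drawing thread: block until a filled buffer is available.
    std::vector<std::uint8_t> getNextBuffer() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return full_; });
        return std::move(pending_);
    }

    // Drawing thread: done drawing, let the receiver switch buffers again.
    void releaseBuffer() {
        std::lock_guard<std::mutex> lock(m_);
        full_ = false;
        cv_.notify_all();
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::vector<std::uint8_t> pending_;
    bool full_ = false;
};
```

Note that if the receiver outruns the drawing thread, putBuffer blocks; a real scope front end might instead drop the stale frame so the serial port never stalls.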

That sounds like a bottleneck... At 11.5k samples per second, you are getting 10 samples per cycle of a 1.15kHz signal, which is fairly crude for a 'scope, and I'd say that's your upper limit. (My 100MHz digital 'scope samples at 1.25GHz, which is also in the ballpark of 10 samples per cycle at the upper frequency limit.)

I would take a different approach. I would put all of the "brains" in the microcontroller, and use the PC as a display device. That way, you can just send the data needed to "draw" the waveform. You would be missing a lot of data. It would work more like a "waveform capture" device...

Capture a waveform, send it to the display.
Continue capturing to your circular buffer.
When you are done displaying the waveform, send another waveform.

I suspect my digital 'scope does this to some extent too... In reality, your computer's raster display is only refreshing at 75Hz or so anyway, so you can't display all of the data in real time like you can on an analog 'scope with a vector CRT display.

I wouldn't expect the PC's CPU to be a problem, but screen-updating can be slow. I've never used wxWidgets, so I don't know if it can help with that. On a Windows system, DirectX seems to be the way to get fast graphics, but I haven't used DirectX graphics either.

Hmm, the problem is that this little device (PIC 16F877) doesn't have enough memory to store all those samples (~256 bytes of RAM). I guess I could interface it with an external RAM chip, but then again, I could just buy a chip that has USB and use that for the sending..