This time I got it to work with actual hardware: an ATmega324, a SparkFun Bluetooth module and a Bluetooth USB dongle. Unfortunately I ran into another brick wall, but this time the problem is the same with my application, HyperTerminal and PuTTY: I connect at 38400 bps, it works for about 8 seconds and then it stops. When it stops I see the "connect" LED on the SparkFun Bluetooth module turn off and the status LED starts blinking - but Windows doesn't seem to think the port is down! I can't reopen the port from another application (until I close the first) and I get absolutely no errors (COM errors).

@reSpawn Yes, that's the format: 16 values from the sensors plus two values from the timers. The values are in decimal notation, separated by spaces, and at the end of the line there's some sort of line terminator. I'm not sure which line terminator, because my code accepts CR, LF, or any combination of the two.

And another tip: if you want to do it with direct Windows API calls, as I wanted to, either use Microsoft's compiler or don't use the direct API! That's because the smart people at Microsoft used bitfield typedefs in a very important data structure (DCB) - and the problem is that there's no standard for bitfield order/padding! I don't understand why they didn't use normal OR-ed flags; they just had to push the Microsoft compiler.

I narrowed it down. In my initial tests I was pumping data to the serial port in a continuous stream: as soon as the UDREn bit of the UCSRnA register was set I pushed a new char. By introducing a short pause between lines (25 milliseconds works great at 57.6k with an 8 MHz clock) I was able to keep the stream going for as long as I was willing to test!

I assume the SparkFun module was caching chars and wasn't able to send them as fast as I generated them. I'm not sure, though. The SparkFun thingy should have been able to send data at 38k (that's what I initially used) without problems! Another possible explanation might be that the pause in the stream allows the clock-recovery thingy in the SparkFun's module to recover after the errors I was generating (with my 8 MHz clock, at 38k I get a 0.16% error rate, at 57.6k I get 3.5% error).

With my next order to Farnell I'll get one of those 7-point-something crystals that allow perfect baud rate generation and figure this out.

I still want to know if this works with the Axon DAQ as it is right now; I assume it does. My next mission is to start making this configurable, but that's for next weekend!

Quote

I still want to know if this works with the Axon DAQ as it is right now; I assume it does. My next mission is to start making this configurable, but that's for next weekend!

Ok good news and bad news. Good news is that it worked, displayed data correctly, etc.

Bad news is when I clicked 'Bulk I/O' and then checked the box. It immediately crashed/blew up the program. Since data is coming in insanely fast anyway, maybe it should be limited to one line per 250ms or something like that?

Do you know how to show the data as a line graph? Perhaps make it look like an oscilloscope (sensor voltage vs time)?

Quote

Bad news is when I clicked 'Bulk I/O' and then checked the box. It immediately crashed/blew up the program. Since data is coming in insanely fast anyway, maybe it should be limited to one line per 250ms or something like that?

Well, that was expected considering I used a good old ListBox for the Bulk Data display! It was only supposed to be an aid during development. I replaced the ListBox with an all-custom control; this one's so efficient it doesn't need a checkbox: it's always ON and doesn't slow down the system.

Quote

Do you know how to show the data as a line graph? Perhaps make it look like an oscilloscope (sensor voltage vs time)?

I've only seen pictures of how "voltage vs time" looks on an oscilloscope, but I gave it a try anyway. The newest version does "time series graphs", in real time. Again, I used a custom control that I made for this purpose: it's efficient, it's fast and it's ugly. Because of the way I'm drawing graphics, if you hit "disconnect" right after an interesting event shows on screen, all graphics will freeze and you'll be able to move from one to the other.

Wow, I really like it!!! Ok, first I want to thank you for all the effort you've donated. You've earned high priority for any questions you ask in the forum (just email me the link and I'll help you asap)

As always I have more additions to request:

1) Add an 'About' button somewhere. When the user clicks on it, they see software version number, release date, and author name (your user name or whatever). Any other info that comes to mind.

2) Scaling on the charts. On the left hand side, write 0V to 5V. 0V is at the very bottom, and 5V at the very top. A little dash for every .1V, and a large dash for every .5V (almost like a ruler). And if the time axis were labeled, is there a way to automatically modify the horizontal time scale depending on the 'Time interval' selection?

2a) I'm not sure if this will look good, but maybe an underlying grid as in the image below?

3) Add a 'Pause Signal' button for the charts - basically just stop updating the line chart so that the user can have a close look at the signal until he hits 'Resume'.

4) Label it 0 through 15, not 1 through 16 (the ADC channels on the Axon are 0-15).

5) Add a 'Detect Ports' button so the user doesn't need to plug in the Axon before running your program.

6) A new program name/icon. Like 'Axon Oscilloscope' or something like that. Got an icon with oscilloscope zig-zag lines?

7) After these fixes I'd like to publish your program officially. I'll write up a nice tutorial today and get your approval before release.

Feature Creep ideas: Set these to lower priority unless you consider them very easy . . . but I'm dreaming of options such as 'report max value', 'report min value', 'report average value', data smoothing, print waveform (with a printer), save waveform as .png, and even the ability to overlay several channels in different colors on the same chart.

One idea I have is a 'user scaling equation'. So let's say a person has an equation for a Sharp IR that relates voltage to distance; he just types it into a box and the graph modifies itself somehow to display distance and not voltage.

Quote

3) Add a 'Pause Signal' button for the charts - basically just stop updating the line chart so that the user can have a close look at the signal until he hits 'Resume'.

That's a nice idea, and I've already figured out a way to implement it easily. I'll also add some kind of scroll bar to move the "window" so I can see older data, and maybe even save the data so I can review it later. Since this requires me to implement a pausing mechanism, I might as well implement some kind of "auto-pause": if the sensor value gets stuck on "0" or "255" for longer than 1 second, enter auto-pause mode and resume recording only when I start getting other values. The way I'll implement this would also let me place a nice vertical red line where the pause happened (auto-pause, or pause because the user pressed a button) so it's obvious when looking at the graph that there's something missing.

Quote

4) [...] 5) [...] 6) A new program name/icon. Like 'Axon Oscilloscope' or something like that. Got an icon with oscilloscope zig-zag lines?

I'm aiming more towards "SoR Oscilloscope", if you like that. The idea is that I want this to work with anything out there, not only the Axon.

Quote

Feature Creep ideas: Set these to lower priority unless you consider them very easy . . . but I'm dreaming of options such as 'report max value', 'report min value', 'report average value', data smoothing, print waveform (with a printer), save waveform as .png, and even the ability to overlay several channels in different colors on the same chart.

I understand "print waveform" and "save waveform as .png" but I'll need more info for the others:

"report max/min/average value" - you want those on the graph or in a corner somewhere?

"data smoothing" - What are you thinking of? Want me to get rid of the square-ish lines? A Wikipedia link for the kind of algorithm you'd like to see implemented would be great.

Quote

One idea I have is a 'user scaling equation'. So let's say a person has an equation for a Sharp IR that relates voltage to distance; he just types it into a box and the graph modifies itself somehow to display distance and not voltage.

That's interesting. Would you mind showing me some equation variants? What would be the preferred notation?

Here's what I'm thinking of: I want to be able to use this little software for other things too (e.g. for robots that actually run around the house) - so the insanely fast "DAQ" with the very strict format is not very useful. I'd like to add commands for updating an arbitrary number of sensors at one time AND for sending status codes and text comments. I imagine a GUI that can be configured to look like a dashboard with a number of different kinds of sensors plus a little text box (similar to how the "bulk data" box works) that displays comments sent from the MCU.

Next level would be communication in the opposite direction: send commands from the GUI to the MCU/robot. Something like a "command box" would work. I'd also be able to translate keyboard/mouse/joystick input into text that can be sent to the MCU.

Some of those options can be implemented fairly fast, so I'd be able to declare the first version complete and send it out into the world. But I absolutely want it a bit more configurable before I consider even a first version complete. I insist on more flexible input: being able to update just one sensor, maybe two sensors; being able to send some text to be displayed on the computer screen. I also like the idea of status codes that would be translated into actual text on the PC: it saves memory on the MCU because it doesn't need to store the whole text, and it makes communication a lot easier because it only needs to send a 2-byte code, not the whole text.

Quote

If the sensor value gets stuck on "0" or "255" for longer than 1 second, enter auto-pause mode and resume recording only when I start getting other values.

Probably won't work. For some reason some of the ADC ports on the ATmega sometimes won't go lower than 1 or 2, or higher than 253 or 252.

Quote

I'm aiming more towards "SoR Oscilloscope", if you like that. The idea is that I want this to work with anything out there, not only the Axon.

OK that'll also work

Quote

"report max/min/average value" - you want those on the graph or in a corner somewhere?

On a real oscilloscope these are options you turn on and off. They appear typically as numbers at the bottom or right of the screen, so that when you print/save the image, the values get captured. Something intuitive, just see what you can do . . . start easy for now, we can always upgrade later.

The equation to convert 8-bit numbers to voltage: voltage = 8_bit * 5 / 255

Quote

That's interesting. Would you mind showing me some equation variants? What would be the preferred notation?

For example, the Sharp IR GP2D12 sensor uses this equation with the Axon: distance in cm = 1384.4*pow(8_bit_value, -.9988)

So the user just puts this equation in: 1384.4*pow(X, -.9988), where X is the value the Axon returns. If the equation line is blank, no conversion happens.

A certain gyro uses this equation: (15*8_bit_value - 180), so the user just enters (15*X - 180) and the oscilloscope immediately starts returning degrees per second as a graph.

Quote

so the insanely fast "DAQ" with the very strict format is not very useful

If your graph shows pixels per millisecond, I need to make it much faster to view servo PWM (one pixel per 100us). I'm going to try and speed it up. Or maybe make a second DAQ program that only prints one ADC (since USB is the bottleneck).

Quote

Here's what I'm thinking of: I want to be able to use this little software for other things too (e.g. for robots that actually run around the house) - so the insanely fast "DAQ" with the very strict format is not very useful. I'd like to add commands for updating an arbitrary number of sensors at one time AND for sending status codes and text comments. I imagine a GUI that can be configured to look like a dashboard with a number of different kinds of sensors plus a little text box (similar to how the "bulk data" box works) that displays comments sent from the MCU.

Perhaps we should come out with more than one program, one specialized only as an oscilloscope, and another specialized for robot communication? DAQs need to be run ultra fast to work.

For the dashboard idea: what if the bootloader could be activated by a button in the GUI? It just needs to run this command:

C:\MY_ROB~1\Axon\FBOOT17.EXE -b115200 -c2 -paxon_tst.hex -vaxon_tst.hex

(in the user options, the user can modify this command)

And perhaps text could be displayed next to sensor data on the graph, so the user will know what sensor data triggered which action - great for debugging! For example, a photovore decides to turn left, and it prints 'Left' right on the graph at the moment it got the sensor data to make that decision.

Quote

so I'd be able to declare the first version complete and send it out into the world

How about this. Implement just the ideas in my previous post, and consider the DAQ done. Also, make it so it detects the carriage return - that way if the $50 Robot sends only 5 sensor values, it will still work in the DAQ.

We publish it, call it finished.

Then we focus entirely on a robot dashboard, not caring about DAQ features. We'd have to implement some type of communication language. I can see major feature creep in this, and many people requiring an endless array of new features. As such I think the DAQ should be a separate program.

Quote

so the insanely fast "DAQ" with the very strict format is not very useful

If your graph shows pixels per millisecond, I need to make it much faster to view servo PWM (one pixel per 100us). I'm going to try and speed it up. Or maybe make a second DAQ program that only prints one ADC (since USB is the bottleneck).

The "backend" that stores the data shown on the graph has a resolution of 1 millisecond. Happily, that resolution is configurable via a constant. What should the backend's resolution be? Help me do some math here to figure out how much data the GUI should expect.

The ATmega640 datasheet says: "Up to 76.9 kSPS (Up to 15 kSPS at Maximum Resolution)"

Going by the highest number, 76.9 kSPS, that requires a serial transfer speed of 692.1 kbits/sec (76.9 * 9) - is the Axon capable of pumping that much data over the USB line?

About the math: as an extreme-case scenario, let's assume the Axon is only sampling one analog input pin and dumping the result straight into the USART! No conversion; binary data is just fine and makes good use of every single bit.

Quote

The "backend" that stores the data shown on the graph has a resolution of 1 millisecond. Happily, that resolution is configurable via a constant. What should the backend's resolution be? Help me do some math here to figure out how much data the GUI should expect.

The ATmega640 datasheet says: "Up to 76.9 kSPS (Up to 15 kSPS at Maximum Resolution)"

Going by the highest number, 76.9 kSPS, that requires a serial transfer speed of 692.1 kbits/sec (76.9 * 9) - is the Axon capable of pumping that much data over the USB line?

The Axon is limited to 115.2kbps, due to crystal frequency and USB limitations. A servo pulse is between 1ms and 2ms long, so you'd need a resolution of 1/10th of that to get a decent reading. Hence ~100us. But if even that's not possible, faster is still better.

Quote

About the math: as an extreme-case scenario, let's assume the Axon is only sampling one analog input pin and dumping the result straight into the USART! No conversion; binary data is just fine and makes good use of every single bit.

Exactly what I was thinking.

I just wrote a new DAQ program for the Axon (see attached). It reads only ADC pin 9, and is at least 16 times faster. It returns only a single 8-bit number, directly followed by a carriage return.

I chose pin 9 because it's the closest to the AVR on the PCB, meaning less potential noise. I also reduced the ADC accuracy in code to speed it up. It could actually be more than 100 times faster, but I'm too lazy to calculate clock cycles . . . If it's easier for you to just use pin 0, it'll take me a second to recompile.

I can in theory make it go even faster by sending individual bytes, but I'd have to research some syntax to do it:

uart2SendByte(a9);
uart2SendByte(what is a carriage return as a byte?);

First of all: carriage return is 13 and line feed is 10. But you don't need to send a carriage return, just the single byte from the ADC! By sending the ADC result as a raw byte, HyperTerminal support goes down the drain anyway, and the carriage return just halves the sampling rate by sending two bytes where one would suffice.

What's in uart2SendByte()? Is that a buffered function, or will it simply put the char in UDR and be done with it? I can't test and I can't look because, as you might know, I don't have an Axon. I got the mail with the tracking number today, but unfortunately the Axon itself wasn't in an attachment.

Now back to the math. Since the Axon is limited to 115.2kbps, that gives about 115.2/9 = 12.8 kbytes per second. So the bottleneck is not in the ADC but in the USART; the ADC may stay at maximum resolution, it doesn't make a difference (the datasheet quotes 15 kSPS at maximum resolution). Assuming the code can be written to fully saturate the USART, we'd be getting 12,800 samples every second. Time resolution would be 0.000078125 seconds, or 0.078125 ms, or 78.125 us! (yep, lower than 100 us, so I guess it's worth the trouble!)

As a coding idea for how to saturate the USART, here's what I'd do (warning: untested code, copy-pasted from working code but in a different order):

// Init the UART
// Init the ADC
// Start the first ADC conversion
while (1) {
    // Wait until the USART is ready to receive another char
    while ((UCSR0A & Bit5) == 0);
    // Send the result of the last ADC conversion
    UDR0 = ADCH;
    // Start the next ADC conversion
    ADCSRA |= Bit6;
}

The idea behind the code: the first time through the loop the USART doesn't block and we've got the result of the first ADC conversion in ADCH. Next we start the next ADC conversion and loop back to the beginning. The second time through the loop the USART blocks because it can't send the char that fast; it needs about 78.125 us to finish. The ADC needs less than 78.125 us per conversion, so by the time we're cleared by the USART the ADC conversion is done and we have the next result in ADCH. The loop goes on and on, saturating the serial link.
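As a sanity check on the "ADC needs less than 78.125 us" claim: a normal AVR conversion takes 13 ADC clock cycles, and the ADC clock is F_CPU divided by a prescaler. Assuming the Axon runs at 16 MHz:

```c
#include <assert.h>
#include <stdio.h>

/* Conversion time in microseconds for a normal 13-cycle AVR ADC
   conversion, given the CPU clock and the ADC prescaler setting. */
static double conversion_us(double fcpu, int prescaler)
{
    double adc_clock = fcpu / prescaler;
    return 13.0 / adc_clock * 1e6;
}

int main(void)
{
    printf("prescaler 128: %.0f us\n", conversion_us(16e6, 128)); /* 104 us: too slow */
    printf("prescaler  64: %.0f us\n", conversion_us(16e6, 64));  /*  52 us: fits    */
    return 0;
}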

Some more good news: today I added a "function generator" check box to the top of the page. When checked, the program stimulates itself with test data. I also made the program display the rate at which it's receiving data (in kilobytes per second). With the "function generator" checked I've clocked about 160 KBytes/sec with the graph refreshing at about 50Hz, and more than 500 KBytes/sec with no updates to the screen. What I'm saying is that the program can definitely handle the 13 KBytes/sec that the Axon might generate!

The bad news: no updates tonight, I've got a bug and I don't want to hunt it down tonight.

What's new:
-----------------
(1) Includes a "function generator". Check the "function generator" and data starts pouring into the test application from thin air, allowing one to test without actually connecting to a real serial port.
(2) Shows data rate in kilobytes per second: how much data is pouring into the application from the serial port or the "function generator".
(3) Improved resolution for the "time base": there's a nice editor for the time base with about 32 possible settings. The lowest resolution is 5 us/pixel and it grows to about 2.5 seconds/pixel. There's a factor of 1.5 between consecutive settings, so this thing allows one to zoom in very nicely on any signal.
(4) Includes a way to review recorded data: on the graph screen there are some move forward / move backwards buttons that tell the application what data to show. By default it shows data in "real time". If one touches any of the position buttons the image stops scrolling and one can move back and forward over the whole graph. With the support from the "zoom/time base" facility this allows one to analyze any detail at any point in time.
(5) The graph gets horizontal lines for volts (in the background).
(6) The graph gets vertical lines for time in the background. Those lines are linked to the "time base/zoom" feature so they're always meaningful. A little more tweaking is required.

Quote

(2) Shows data rate in kilobytes per second: how much data is pouring into the application from the serial port or the "function generator".

It was going at about 10.3 kbps with axon_DAQ.hex. I noticed it was taking a running average, so it's a bit slow in displaying the correct bit rate. Can you speed that up?

Some minor bugs I've noticed: in the graphed data, on the far left, the curve spasses just before it goes off screen on the left. The green lines also get freaky and jump around at particular Time Base selections. Otherwise all good and I like it.

And this will probably not be easy so don't worry about it . . . it'd be nice to use the mouse to zoom into a selection on the graph. Either by using the scroll wheel on the mouse, or by drawing a selection box, for zooming in. Yea, feature creep

Quote

It was going at about 10.3 kbps with axon_DAQ.hex. I noticed it was taking a running average, so it's a bit slow in displaying the correct bit rate. Can you speed that up?

I can do lots of things

Quote

Some minor bugs I've noticed. In the graphed data, on the far left, the curve spasses just before it goes off screen on the left.

Dumb question: what's "spasses"? Google doesn't know the word and I'm not a native English speaker. Also, I'm not home to ask my wife.

Quote

The green lines also get freaky and jump around at particular Time Base selections.

The green lines get freaky when you're looking at data with too much variance per selected time unit. For every pixel, the program needs to apply an aggregation algorithm to come up with a value for the graph. Each pixel shows the aggregated value as seen through a window into the recorded data; the window is of fixed width (the "time base" width), and the windows are right-aligned to the data, not left-aligned. Since new data is always pouring in, the contents seen through this window constantly change! This is why I'm saying you're looking at data with too much variance for the selected time base: select a 2-second time base and basically *any* signal has too much variance... unless you're logging temperature change in the room, and then a 2s time base makes a lot of sense!

This could be fixed by only updating the graphic once per time base: if you're looking at a graph that has a time base of two seconds it makes little sense updating it 50 times per second, as happens right now. In my opinion this fix is not worth the trouble.

Quote

And this will probably not be easy so don't worry about it . . . it'd be nice to use the mouse to zoom into a selection on the graph. Either by using the scroll wheel on the mouse, or by drawing a selection box, for zooming in. Yea, feature creep

Zooming with the mouse wheel is easy; panning with the mouse wheel is easy (it fits right into the "paradigm" of what I've got right now). Selecting a part of the graph and zooming in is not that easy because it requires changes to both the horizontal (time line) and the vertical (volts/whatever) scale. I'll look into it.

Here's my to-do list:

*) Make a File menu with "Save data" and "Load data" options; this would help with development.
*) Implement an expression evaluator so I can "translate" graph data.
*) Implement auto-scale in the graph window as well as a configurable scale.
*) Make it so I can show more than one graph in the same window.
*) Implement pauses and auto-pauses.

P.S.: You've got a "private message" from me, I'm waiting for some comments.

Quote

Dumb question: what's "spasses"? Google doesn't know the word and I'm not a native English speaker.

It's some 90's term that comes from "spasm". So if it spasms, it 'spasses out' or just 'spasses'. To spasm basically means to shake, freak out, go into convulsions, etc. The line basically jumps around like crazy and doesn't hold its shape as it exits the screen on the left.

Quote

This could be fixed by only updating the graphic once per time base: if you're looking at a graph that has a time base of two seconds it makes little sense updating it 50 times per second, as happens right now. In my opinion this fix is not worth the trouble.

Ok, then let's not bother with it for now.

Quote

Here's my to-do list:

*) Make a File menu with "Save data" and "Load data" options; this would help with development.
*) Implement an expression evaluator so I can "translate" graph data.
*) Implement auto-scale in the graph window as well as a configurable scale.
*) Make it so I can show more than one graph in the same window.
*) Implement pauses and auto-pauses.

Let's leave all these for v2. I like/want these features, but they all fall into feature creep. Just clean out the bugs, tweak what we have, and call it v1.

Quote

*) Make a File menu with "Save data" and "Load data" options; this would help with development.

That's what I was thinking of. Yes, give me a large dump; whatever format you have it in is fine: I'll just open the file and send it to my GUI one byte at a time. Or just wait for tomorrow's version, it will have the save/load options.

New in this version:
*) Improved graphics in the "scope" window. I find the colors more pleasant and easier to read.
*) File -> Save option: allows one to save all recorded sensor data.
*) File -> Load option: allows loading previously saved sensor data.
*) File -> Playback dump file: asks for a file and plays it back byte by byte, pausing at every CR/LF (so it doesn't play back the whole file in 1ms).
*) On the "scope" window there's a "Snapshot" button that takes a snapshot of what's on the scope and puts it in the clipboard.
*) The program accepts from 1 to 18 values on its input, in decimal format, separated by spaces and with a CR or LF char at the end of the line.
*) Fixed a bug in the "scope" window causing the graph to "spass out" on the first value.