The A7 device is just a fast switching rectifier; it breaks down at 100V, and here it's being used to clamp the signal to -5V and +5V.

Also take note that instead of the usual 1M ohm resistor and trimmer capacitor in parallel from the input signal to ground, the input signal goes through a 909K resistor (with the trimmer capacitor in parallel with it) and is then shunted to ground through a 100K resistor in parallel with an SMD capacitor. To the input it looks like a 1M resistor to ground, but the signal is tapped between the 909K and the 100K before going to the first op amp. I don't think a 5V input signal in 1x mode will give a 5V signal at the node between the 909K and 100K; it will be more like 500 mV. That would mean the probe in 1x mode is safe all the way up to 50V, and 500V in 10x.

Also, the outside of my unit has a label between the BNC connectors that says "35Vpk max", though I'm not sure whether that applies to 1x or 10x.

I read this as "firmware is downloaded to the device by the driver during initialization".

If that's true and provided that it hasn't changed for the 60x2BE series, there might be chances for easy firmware hacks...

I'm pretty sure all the simple CY7C68013A models are doing it this way: they don't have any on-board flash, and the I2C EEPROM doesn't contain firmware, just model information for USB device detection.

So the .sys driver has the firmware image embedded inside it, and the folks at openhantek.org have a tool to extract it. When the device is plugged in, the driver loads the firmware into the CY7C68013A's 16KB internal memory, and the 4K FIFO is used to store the ADC output.

I'm pretty sure all the simple CY7C68013A models are doing it this way: they don't have any on-board flash, and the I2C EEPROM doesn't contain firmware, just model information for USB device detection.

Correct.

Quote

So the .sys driver has the firmware image embedded inside it, and the folks at openhantek.org have a tool to extract it. When the device is plugged in, the driver loads the firmware into the CY7C68013A's 16KB internal memory, and the 4K FIFO is used to store the ADC output.

Almost. The 4K FIFO is used to buffer the USB data transfers. The ADC output should be fed to a 16Mbit chip (2 channels, with 1 MByte each), though I didn't see one on the board. If they were funneling the data directly from the ADC through the FIFO to USB, it wouldn't be able to keep up with the 48 MHz sample rate of the device. And it's not limited to 4K of samples, so...


The ADC output traces go directly to the CY7C68013A's multiplexed input pins (specifically the "bidirectional FIFO/GPIF data bus"), which can switch between GPIF and FIFO modes. There is no 1 MByte of RAM anywhere on the board, or inside any of the ICs. The FIFO has a 96 MByte/s burst rate, so it's highly unlikely they are using the GPIF, which would have to go through the address bus just to get to the FIFO.

My guess is they are dumping the 8-bit data from the ADC in real time into the FIFO, and then through firmware they are periodically reading it out over USB, possibly using the computer's RAM to store the raw data.

Both the CY7C68013A and the ADC can clock at 48 MHz, so there is no problem getting the full data rate from the ADC into the FIFO, and the FIFO doesn't need 1MB of RAM to hold a real-time value. If they use 2K of FIFO per channel, they can store 2000 samples per channel. All the firmware inside the CY7C68013A has to do is buffer 2000 samples continuously, wrapping around and overwriting once it reaches 2000 samples.

Since the FIFO is connected directly to the USB engine (circumventing the address bus), the PC has high-speed access to the FIFO buffer. All the PC has to do is wait for the FIFO to fill up, read the entire FIFO buffer into PC RAM, and either display it immediately or wait for the FIFO to fill again, then read it and store it in PC RAM until the desired sample length is reached.

Since the PC doesn't read the FIFO until it's filled, the USB transfer is effectively a 4KB block at a 24 kHz rate (48 MHz / 2000 samples = 24 kHz).

The ADC output traces go directly to the CY7C68013A's multiplexed input pins (specifically the "bidirectional FIFO/GPIF data bus"), which can switch between GPIF and FIFO modes. There is no 1 MByte of RAM anywhere on the board, or inside any of the ICs.

I never saw any either, but assumed it must be there somewhere, the reason being Hantek's claim that, unlike the other models in the 6000 series with only 16K of buffer RAM, the 6022 boasted 1M samples per channel. Thus I assumed it must have 1MB of RAM per channel, on the board. (That's actually one of the reasons I bought that particular model, quite some time ago.) I never opened it up, or even looked that closely at the PCB shots here that Aurora was kind enough to provide.

Quote

The FIFO has a 96 MByte/s burst rate, so it's highly unlikely they are using the GPIF, which would have to go through the address bus just to get to the FIFO.

My guess is they are dumping the 8-bit data from the ADC in real time into the FIFO, and then through firmware they are periodically reading it out over USB, possibly using the computer's RAM to store the raw data.

That would be the only plausible explanation. My concern is the ability to continuously maintain a USB transfer, to transfer 500 FIFO buffers worth of data, without any breaks. More on that below.

Quote

Both the CY7C68013A and the ADC can clock at 48 MHz, so there is no problem getting the full data rate from the ADC into the FIFO, and the FIFO doesn't need 1MB of RAM to hold a real-time value.

Yes, that part is pretty straightforward.

Quote

If they use 2K of FIFO per channel, they can store 2000 samples per channel. All the firmware inside the CY7C68013A has to do is buffer 2000 samples continuously, wrapping around and overwriting once it reaches 2000 samples.

Well, there needs to be some bit of synchronization, between the Producer side you're describing, and the Consumer side to USB, so that neither outruns the other. But yes. Normally this is done with double-buffering, but chase-mode would work as well, if you have dual-ported memory with DMA.

Quote

Since the FIFO is connected directly to the USB Engine (circumventing the Address Bus), this means the PC has high speed access to the FIFO buffer...

I'm with you up to here.

Quote

...and all the PC has to do is wait for the FIFO to fill up, then read the entire FIFO buffer to PC RAM,

Too late! If it waits until the FIFO is full, it can't possibly read it out without an overrun condition, and lose the continuous data stream.

Quote

and either display it immediately or wait for the FIFO to fill up again, then read it and store in PC RAM until desired sample length is reached.

The thing is, the way you're describing it makes it sound like the PC can just grab a 2,000 byte chunk of data over USB instantaneously. And it can't. Even running USB in synch-transfer mode, it takes time.

Quote

Since the PC doesn't read the FIFO until it's filled, that means it's effectively sampling it at 48 MHz / 2000 samples, which equals 24 kHz.

Well, OK, though I'm not sure describing it as "sampling" at 24 kHz is really meaningful. The USB data still gets serialized, and can only be clocked out at something less than 60 MB/sec. Depending on the system, usually much less than 60. Most rarely attempt more than 24 MB/sec, and some even fail at that. (See all the inexpensive USB-logic analyzers.) I'm wondering just how successful the continuous 48 MB/sec you've described would actually be?

Or are you suggesting that Hantek isn't actually performing a 1M-sample acquisition at 48 MHz, but rather just grabbing discontiguous chunks of 2K of data? I agree, that would be very easy to do. But in that case, Hantek would be dangerously close to fraud in their claims, and most certainly deceptive. However, that would help account for their reluctance to explain any of those issues to me when I asked.

I just don't see how it would be practical to sustain a 48 MB/sec USB transfer rate, even for the 21 msec required to acquire 1M samples of data. Sure, they can sample that, but they can't get it to the PC. That's why I assumed they must have a large local buffer. But maybe I'm missing something.

From numerous comments made here by owners, the 6022BE seems to lack any ability to select a trigger point at a specific spot in an acquisition. E.g., Rick has described how a capture has the trigger point occurring essentially randomly within the sample, making it very difficult to acquire the desired part of the waveform. So you might want to look at 90% post-trigger data, but wind up with all of your data to view being pre-trigger. I.e., a crap shoot.

Yes, Rick documented how at high speed (48 MHz), the module was capturing only 1,016 samples of data. It wasn't until you dropped back to 16 MSa/sec that you got anything bigger (it jumped to 128k), and you couldn't get 1M at all until you dropped the sampling to 1 MSa/sec or less. These were his findings:

At 2us (48 MHz) = 1016 samples. Anything faster than 2us is still at 48 MHz, with 1016 samples each channel (2032 total).

I think I'm finally starting to get a picture of what the 6022BE can really do. And why. That explains what RichardK was trying to tell me, and I wasn't quite getting. On one end, it can capture at 48 MSa/sec... for a very short period of time (~20 usec). With 2-channels, of a whopping 1K each. Because, like I noted, it can't get the data to the PC fast enough. But if you sample slowly enough, the "sample buffer" is essentially infinite (PC RAM). This certainly isn't the way they present the product capabilities.

To call this a 20 MHz bandwidth scope with a 1M buffer is really deceptive, because with 1M sampling, the bandwidth is actually about 400 kHz. (That's with the Hantek software. It should be possible to do better than that.) And the 1M buffer is actually in your PC.

The thing is, the way you're describing it makes it sound like the PC can just grab a 2,000 byte chunk of data over USB instantaneously. And it can't. Even running USB in synch-transfer mode, it takes time.

It's probably set up so that once the buffer is full it stops sampling, raises an interrupt to the USB engine, and once the PC grabs the data it starts sampling again.

This would explain the pseudo-randomness of the device at times, because how long it has to wait before sampling new data depends on how long the USB and PC software take to do their part.

If anyone was wondering what the unpopulated components opposite the front end were for, they are an alternate isolated supply for the front end. They are using an LTC3440 buck-boost regulator for the +5V and a classic 7660 charge pump for the -5V supply.

The question remains, why is it there if they already have an isolated DC-DC supply (specifically a Mornsun A0505S-2W)?

Take a quick look at the datasheet for the Mornsun DC-DC under Applications and you'll see why:

Doesn't sound like something you'd use to power an Analog Oscilloscope Front End does it?

Clearly this was a cost-cutting decision, not a performance one, and this Mornsun DC-DC doesn't cut it in the higher-end models, so I wouldn't be surprised if those boards had the DC-DC unpopulated and the charge pump + buck-boost regulator populated instead.

The datasheet for the DC-DC specifies a 75 kHz switching frequency, but it doesn't specify what kind of input or output capacitors are used, whether there are any inductors, or even whether the inside is shielded.

Looking at the Applications notation, specifically the part where they say "Regulated and low ripple noise is not required", makes me think that if there are any input/output capacitors inside, they're the bare minimum; it's highly unlikely there is any sort of inductor in there other than the transformer, and forget about shielding.

I plan on adding copper foil shielding around the DC-DC package, ground strapping the ADC heatsink, and canning both Front Ends as well as adding extra SMD capacitors before and after the DC-DC, at the bypass capacitors near the ADC and USB Micro.

Later on I'll order some parts and populate the alternate DC-DC supply and remove the Mornsun DC-DC and see how it works.

If anyone was wondering what the unpopulated components opposite the front end were for, they are an alternate isolated supply for the front end. They are using an LTC3440 buck-boost regulator for the +5V and a classic 7660 charge pump for the -5V supply.

Good catch.

Quote

I plan on adding copper foil shielding around the DC-DC package, ground strapping the ADC heatsink, and canning both Front Ends as well as adding extra SMD capacitors before and after the DC-DC, at the bypass capacitors near the ADC and USB Micro.

Later on I'll order some parts and populate the alternate DC-DC supply and remove the Mornsun DC-DC and see how it works.

Sounds like a good plan. My guess is your initial efforts will be effective in reducing noise in small doses... perhaps cutting it in half. The big win though will be eliminating the Mornsun, and you may be able to get noise down into the 1-2 mV range. (Or maybe even a bit less, since this isn't a wideband device... it's not quite as susceptible as most scopes would be to high-freq noise.)

It's probably set up so once the buffer is full it stops sampling, calls an interrupt to USB and once the PC grabs the data it starts sampling again.

This would explain the pseudo-randomness of the device at times, because how long it has to wait before sampling new data depends on how long the USB and PC software take to do their part.

I agree about the pseudo-randomness. Especially so if it just sends whatever is in the buffer as soon as it sees a trigger condition in that pass, when sampling at high speed. I.e., it's constantly refilling the 1k buffer, and if the trigger fired any time during that chunk, it sends that block to the PC. If not, it discards and keeps collecting.

[Note that the reports of randomness were at high speed, where the chunk size is small (very small). At slower speeds, where it can collect larger data sets, it can afford to throw some of the head away to align things.]

However, I'm not sure about the first part. Based on Rick Law's report, it can't be stopping and starting like that. The crossover point seems to be at 5us/div (16 MHz), where it returns 127k of data per channel (instead of 1 KB − 8 B, i.e. 1016). If they were doing so in discontiguous chunks, then something as simple as a sine wave would be broken, and have 127 discontinuities as you scrolled through it. I've never heard any reports of that. (Rick?)

So that would have to be the data rate it can maintain continuously. To do so, it must be double buffering, and sending one full block of data over USB, while acquiring the next block. With no pauses to dump while not sampling. If it's doing so on both channels, then it's pumping an aggregate 32 MB/sec over USB, which is very good performance. If it's only managing one channel at that rate, then the 16 MB/sec that represents would be fairly normal and uneventful. But I think Rick indicated both channels were active.

Hantek has an SDK manual for the Display DLL, and one for the acquisition DLL (HTMarch). I concentrated on the latter.

There are a set of support functions, which mostly make sense. Other than the fact that some of them are worthless (redundant), since the data they set gets overridden whenever you actually do a Read call. Here's my summary, and notes:

// just checks to see if a device is present. usually only Dev=0 is true.

HTMARCH_API short WIN_API dsoOpenDevice(unsigned short DeviceIndex);

// cals are 32B of 'proofreading' data. short the inputs to Ground, then run dsoCalibrate and retrieve data.
// presumably, the cal sets (16B/channel) are dependent on timePerDiv and voltsPerDiv, which means that
// dsoCalibrate would need to be run in multiple passes (64 combos). it also suggests that dsoSetCalLevel
// must be run every time either is changed. what dsoGetCalLevel is good for is unknown.
// cal data also needs to be reloaded on every power-on, since there's no NVMEM. [not really. it says it
// sends the Cal data to the device, but it doesn't. The SDK uses it to adjust the data before returning it.]

// [it looks like dsoSetCalLevel/dsoGetCalLevel are also worthless, for the same reasons as SetTime/Volts (below).]

// worthless functions, because there's no way to get any data after setting them, w/o running a dsoReadHardData, which overrides them!
// however, they do serve to document the needed index values. (perhaps there are undocumented Read functions that were dropped.)

What is interesting there is the CalLevels, since they claim you send the data to the device, but I suspect you do NOT. I doubt any correction is ever applied in the module, but rather in the SDK interface routine, which applies a cal correction after a Read and before it passes the data back. I wonder how significant the args are for selecting channel sensitivity and sample rates, because in the worst case there are 8 sensitivities for each of Chan 1 and 2, and 8 sample rates, for a total of 512 combinations! They appear to combine the CalLevels for both channels into one block, of 32B.

The actual data reading is more interesting, since it appears to wrap up everything into one call. I guess you need to spawn this in a thread, so you can kill it later if the trigger condition never occurs, since there's no defined timeout. And you'd hang, otherwise.

There are a number of not-well-defined things here, but two stand out as puzzling. The biggest is what a DisplayLength and a D-value Mode are doing in the acquisition routine; they both relate to Display, which is (and should be) separate. And the second is what a SweepMode of Normal is for.

Auto means just trigger immediately (ignoring the sh!tload of trigger parameters), and Single waits for one trigger hit to capture and return a sample set. But Normal implies there is a continuous sampling process occurring. And since there's no callback function or any other mechanism to allow that, I wonder if it's just that Normal=Single here, or if they partition the one block of data that's allocated into slices, and fill it with multiple samples in Normal mode? (I'd guess not, since that's far too sophisticated for them. So Normal probably adds nothing over Single.)

I'd be interested in thoughts anyone may have as to the purpose of the Display params in the dsoRead, though.

I know the DisplayLength has a meaning in the DrawWave function; it's just used to clip the data and only display a portion of it. What it's being used for in GetRaw I'm not sure... perhaps it's clipping the data there also?

D-Value mode is how the data is interpolated when you are at 2us or lower. The options are as follows:
1. Step - interpolate in right-angle steps
2. Line - interpolate in obtuse angles
3. SinX - interpolate in sine

The reason the Get Raw data function asks for these arguments is because it's doing the interpolation.

As for the Calibration functions, the firmware might be storing the cal data in the I2C IC.

I have been tinkering with the code today and I got the grid displaying and both channels, but so far I have not added any manual controls, just displaying hard coded settings.

I know the DisplayLength has a meaning in the DrawWave function; it's just used to clip the data and only display a portion of it. What it's being used for in GetRaw I'm not sure... perhaps it's clipping the data there also?

Maybe. Not sure why it would though.

Quote

D-Value mode is how the data is interpolated when you are at 2us or lower. The options are as follows:
1. Step - interpolate in right-angle steps
2. Line - interpolate in obtuse angles
3. SinX - interpolate in sine

Right. That part I know.

NB: Whoops! I need to slow down & read more carefully. I missed your "at 2us or lower". Even then, it shouldn't have to interpolate anything until Display time, though.

Quote

The reason the Get Raw data function asks for these arguments is because it's doing the interpolation.

Huh? What interpolation? I ask it to collect N samples at a certain rate. It does so and returns them to me. What is there for it to interpolate? It would only do so if it was sampling fewer points than the # I requested, and needed to create the missing intermediate values. Are you suggesting that's what it's doing? [I suppose it could be. ]

[BTW, I expect if I ask a device to sample at, say, 8M Sa/s, that it actually does so. Not sample at some lower rate, then generate fake points to make it look like it was running properly. If that's what it's actually doing, my interest in it just dropped to 0. The speed is already disappointingly low, but still quite usable for certain things. If it's even slower than that, it's worthless to me. I have no use for a device that provides me with 8M of "data", with 7M of it filler, and 1M actual samples. I hope this is not what is happening.]

Quote

As for the Calibration functions, the firmware might be storing the cal data in the I2C IC.

Well, it could store the data anywhere it wanted to. But I don't see what value that would have? It has to be used in some way to adjust the raw data values provided by the ADC, and I don't see it having either the time or the horsepower to do so on the fly. That should be done on the PC.

Quote

I have been tinkering with the code today and I got the grid displaying and both channels, but so far I have not added any manual controls, just displaying hard coded settings.

Huh? What interpolation? I ask it to collect N samples at a certain rate. It does so and returns them to me. What is here for it to interpolate? It would only do so if it was sampling fewer points than the # I requested. And needed to create the missing intermediate values. Are you suggesting that's what it's doing? [It could be. ]

I don't think the device supports sampling faster than it does at the 2us timebase, so they are hacking in support for faster timebases by interpolating. Why would they do this? Maybe there is too much noise? Not sure. The interpolation doesn't take effect at timebases above 2us, and if you look at the stock software, the interpolation buttons are disabled above 2us, presumably because they don't do anything there.

Quote

Well, it could store the data anywhere it wanted to. But I don't see what value that would have? It has to be used in some way to adjust the raw data values provided by the ADC, and I don't see it having either the time or the horsepower to do so. That should be done on the PC.

It's a USB scope, so you might not always use it with the same PC, and storing the cal data in the I2C EEPROM would be convenient. I don't think the firmware touches the cal data; it's just stored there, and the calibration offsetting happens in the PC software, hence the need for the GetCalLevel function.

If they store it on the PC, you have to recalibrate every time you use a different PC.
If they store it in the USB micro's 16KB RAM, you have to recalibrate every time you plug it in.
If they store it in the I2C EEPROM, you only have to calibrate it when it needs it.

I don't think the device supports sampling faster than it does at the 2us timebase, so they are hacking in support for faster timebases by interpolating.

You are correct. Once it hits 2us (that's per DIV), it's sampling at its max of 48 MHz, and it goes no faster than that. At that rate, they're acquiring 96 samples/div, and as they drop below that (or zoom in), they need a method to "connect the dots".

Quote

Why would they do this? Maybe there is too much noise? Not sure.

Could be. In my mind, it just seems like Capture and Display should be independent. If this only kicks in at 48 MHz, I'm less concerned though.

Quote

The interpolation doesn't take effect at timebases above 2us, and if you look at the stock software the interpolation buttons are disabled above 2us.

That's a relief!

Quote

Quote

Well, it could store the data anywhere it wanted to. But I don't see what value that would have? It has to be used in some way to adjust the raw data values provided by the ADC, and I don't see it having either the time or the horsepower to do so. That should be done on the PC.

It's a USB scope, so you might not be using the same PC to utilize it, thus storing the cal data in the I2C would be convenient.

Oh, you mean, as a temporary cache? Thus needing the dsoGetCal, to pull it into the SDK. Yeah, could be. That would save time and avoid needing a recal just because you switched from your desktop to a laptop PC. If they have enough room for all of it, in EEPROM.

(I'm still not sure how many sets of 32B they need to characterize the entire range for the scope... but the Calibrate function does have sampleRate, and channelSensitivity (for _2_ channels) as input args. I was assuming that a Cal process would not be a single call to that function, but ramp through all the possibilities. But even though they complicate it by wedding Chan1&2 in a Cal set, that doesn't mean they need to redundantly store all those combos. You'd just need one 16B Cal per channel for each Sensitivity & Sample rate pair. So 64 per channel = 2 KB total. Doable.)

That would also explain why they have the "useless" Set functions for sampleRate and chanSensitivity. These values get overridden in the dsoRead, but you can't access a specific set of CalLevels w/o first setting the TB and mvPerDIV. So you'd use the two Set routines to cycle through, and perform a GetCalLevels for each. Then you'd be able to provide them to the SDK on a dsoRead, and not have to do a full Cal to regen the values between PCs. Makes sense. You're knocking down some of the unknowns.
