Information on DDS device tree for FMCOMMS2/3

I'm working on arbitrary waveform generation using the DDS device's DAC DMA buffer. I followed the source in the fmcomms2 plugin to see how it was implemented, but I'm not getting anything, which is probably due to my poor understanding of the DDS device tree. For example, what is the difference between out_altvoltage0_TX1_I_F1_frequency and out_altvoltage1_TX1_I_F2_frequency? Is there any relationship between these and the DMA buffer?

iio:device3/out_altvoltage0_TX1_I_F1_frequency

iio:device3/out_altvoltage1_TX1_I_F2_frequency

iio:device3/out_altvoltage2_TX1_Q_F1_frequency

iio:device3/out_altvoltage3_TX1_Q_F2_frequency

iio:device3/out_altvoltage4_TX2_I_F1_frequency

iio:device3/out_altvoltage5_TX2_I_F2_frequency

iio:device3/out_altvoltage6_TX2_Q_F1_frequency

iio:device3/out_altvoltage7_TX2_Q_F2_frequency

Is *_raw the file that enables the channel? Is there any documentation of what all of the channels and their attributes do? I couldn't find any related posts, and I don't think it's obvious.

What are the sampling frequencies for, and how do they differ from the PHY sampling frequency? E.g.

iio:device3/out_altvoltage_TX1_I_F1_sampling_frequency

As an aside, whenever I reboot my board I see a cluster of spurs on my TX port across a 20 MHz band at 2400 MHz. They do not go away even if I set the DDS to disabled. My suspicion is that my device files were corrupted during testing, because I can't recall ever looking at the TX port and observing this before.

As for my arbitrary waveform generation from file, here's my experiment and observations:

Obviously the -3 MHz tone shouldn't be there. The signal level is low even though I have the TX attenuation set to 0 dB, but I suspect that's partly because the RX data doesn't use the full 16-bit dynamic range, since the RSSI never gets close to 0 dBFS.

You can find a description of all the files in the DAC DDS core Linux driver documentation. For each I and Q channel there is one dual-tone generator implemented in the HDL. F1 and F2 are the two frequencies that are generated by the core.
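As a rough model of what the dual-tone generator produces (the parameter names here are illustrative; the real core runs on hardware phase accumulators), each I or Q channel outputs the sum of two sinusoids, one per F1/F2 pair, each with its own frequency, scale, and phase attribute:

```python
import math

def dds_dual_tone(n_samples, fs, f1, scale1, phase1, f2, scale2, phase2):
    """Model of the dual-tone DDS output for one I or Q channel:
    the sum of two sinusoids at F1 and F2, each with its own scale
    and phase (mirroring the *_frequency/_scale/_phase attributes)."""
    return [
        scale1 * math.sin(2 * math.pi * f1 * n / fs + phase1)
        + scale2 * math.sin(2 * math.pi * f2 * n / fs + phase2)
        for n in range(n_samples)
    ]

# With F2 scaled to zero, the channel degenerates to a single CW tone.
tone = dds_dual_tone(8, 1e6, 100e3, 1.0, 0.0, 200e3, 0.0, 0.0)
```

This is why the "One CW Tone" mode in the scope UI simply zeroes the scale of the second tone.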

If you want to do arbitrary waveform generation, the DMA buffer interface is the way to go.

You are right that you should not see a tone at -3 MHz. Are you sure that the I and Q data are correctly formatted in the buffer that you push?
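One common formatting bug that produces exactly this kind of mirror image is swapping I and Q. For a complex tone z = I + jQ, swapping the components gives Q + jI = j·conj(z), i.e. a tone at -f instead of +f. A quick stdlib check of that identity (illustrative only, with made-up sample rate and frequency):

```python
import cmath

fs, f, n_samples = 1e6, 3e3, 16  # hypothetical sample rate / tone frequency

# Complex baseband tone at +f: z[n] = exp(j*2*pi*f*n/fs)
z = [cmath.exp(2j * cmath.pi * f * n / fs) for n in range(n_samples)]

# Buffer with I and Q swapped: z'[n] = Q[n] + j*I[n]
swapped = [complex(s.imag, s.real) for s in z]

# Q + jI == j * conj(I + jQ): the swapped signal is the mirror image
# at -f, up to a constant 90-degree phase rotation.
mirror = [1j * s.conjugate() for s in z]
```

So if a +3 MHz tone shows up at -3 MHz, checking the I/Q ordering (and sign conventions) in the pushed buffer is a good first step.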


I found the issue causing the image: it was in my code, when casting the floats to unsigned integers for the device buffer.

It looks like the maximum value should be (2^15 - 1) for a 16-bit signed word. I assume the FPGA is taking the 12 MSBs of this word for the DAC?

I was observing very low output power initially, but then I realized I was looking at the wrong TX port, so I was actually seeing TX leakage rather than the desired signal.

By the way, is there any particular reason the DAC DMA is lumped in with the DDS device? It seems like the functionality is entirely separate. Also, when the DAC DMA device is disabled I assume that means the DDS can be enabled. Does the DAC DMA have priority?

Yes, the output word is a signed 12-bit value stored MSB-aligned in a 16-bit word.
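A sketch of packing a sample under that assumption: the 12-bit signed value is shifted into the top of the 16-bit word, and the low 4 bits are ignored by the hardware (the function name and the symmetric ±2047 clamp are my own choices):

```python
import struct

def pack_sample(x):
    """Convert a float in [-1.0, 1.0] to a 12-bit signed value
    stored MSB-aligned in a 16-bit little-endian word."""
    val = max(-2047, min(2047, round(x * 2047)))  # clamp to 12-bit signed range
    return struct.pack('<h', val << 4)            # shift into the 12 MSBs

full_scale = pack_sample(1.0)  # 0x7FF0 as a little-endian int16
```

Writing 0x7FFF instead of 0x7FF0 should behave the same, since the hardware only looks at the 12 MSBs.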

I agree with you that ideally the DDS and DMA functionality should be more clearly separated. We might see this at some point, but for now it is the way it is, and we need to be careful when changing it so as not to break backwards compatibility.

At the driver level the DMA takes precedence, so if the buffer is enabled the DDS is turned off. At the hardware level this is implemented as a simple MUX, and the driver switches between the two inputs of the MUX depending on whether DMA or DDS is active.
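In pseudocode, the selection amounts to something like this (a deliberate simplification of the driver/HDL behavior):

```python
def dac_source_select(buffer_enabled, dma_data, dds_data):
    """Simplified model of the DAC input MUX: an enabled DMA
    buffer takes precedence, otherwise the DDS output is used."""
    return dma_data if buffer_enabled else dds_data
```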

Now that I have DAC DMA working I'm trying to tackle the DDS configuration. The behavior isn't as I expect.

First off, when I boot the Zedboard+FMCOMMS3 and the IIO Oscilloscope launches, it comes up with the default device settings. The first thing I do is set the TX LO frequency. If I query the device files, the frequencies match the UI and all of the *_raw files are set to 1. By default the scope UI is set to "One CW Tone", but I clearly see 2 tones from the DDS, and they match the frequencies in the device files.

It seems that I can't write to individual channel raw files from my own application. If I write 1 to one *_raw file, all of them update to 1, and likewise for 0. Here I am trying to configure the DDS channels slightly differently from one another. While I can write unique values to the frequency/scale/phase attributes, I can't do that for raw. Any ideas, or is this known behavior? Given the design of the scope UI, I would guess it is, so perhaps there is a trick I'm missing.

I updated my device using adi_update_tools.sh. The scope UI has a new field and the DDS appears to work correctly for me. I can set 1 or 2 CW tones now.

It appears that the TX DDS channel/unit enable is controlled via the *_scale value rather than the *_raw value, as the latter just toggles all channels, as you say. In the scope UI the disabled channels/units have the minimum scale factor. It didn't appear to behave this way on the older version I had, which is why I was confused. Anyway, hopefully this helps someone else.

Step 2. In the "scan_elements" folder you can find out_voltage*_en attributes that should be used to enable the DAC channels. The enabled channels will then get their data from the DMA.

Step 3. Create an iio_buffer using the libiio API, push data to the buffer, etc.
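A sketch of step 3. The payload construction below is self-contained; the hardware-facing steps are shown only as comments, assuming the Python libiio bindings (pylibiio), and the context URI and device name there are examples, not something this thread confirms:

```python
import math
import struct

def make_tone_payload(n_samples, fs, f, amplitude=2047):
    """Build an interleaved I0,Q0,I1,Q1,... payload of 16-bit
    little-endian words for a complex tone at +f, with each 12-bit
    sample MSB-aligned in its 16-bit word."""
    words = []
    for n in range(n_samples):
        phase = 2 * math.pi * f * n / fs
        words.append(int(round(amplitude * math.cos(phase))) << 4)  # I
        words.append(int(round(amplitude * math.sin(phase))) << 4)  # Q
    return struct.pack('<%dh' % len(words), *words)

payload = make_tone_payload(1024, 30.72e6, 3e6)

# With pylibiio the remaining steps would look roughly like:
#   ctx = iio.Context('ip:192.168.2.1')            # example URI
#   dev = ctx.find_device('cf-ad9361-dds-core-lpc')  # example device name
#   ... enable the out_voltage* scan elements (step 2) ...
#   buf = iio.Buffer(dev, 1024, cyclic=True)
#   buf.write(bytearray(payload))
#   buf.push()
```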

Regarding this: when I use the scope UI and DMA on one of the TX channels, I observe the scan_elements/out_altvoltage*_en values set to 0, not 1. My understanding from Lars' feedback was that creating the buffer enables the DMA and takes priority over the DDS.