BladeRF Two RX Channel Sampling Issue

ramesrl
Posts: 1
Joined: Fri Nov 19, 2021 6:44 am

BladeRF Two RX Channel Sampling Issue

Post by ramesrl »

Hi!
I am using a bladeRF 2.0 micro xA4 in a project targeting a sampling application. The bladeRF 2.0 micro xA4 is connected over USB 3.0 to a Raspberry Pi 4B with 8 GB of RAM running the kernel "Linux raspberrypi 5.10.63-v7l+ #1459 SMP Wed Oct 6 16:41:57 BST 2021 armv7l GNU/Linux", on which I cloned bladeRF-cli from GitHub, compiled it, and it runs properly.

Basically, what I want to achieve is to record (i.e. sample) the two RX channels, synchronized and in real time with no information loss, for a given amount of time (let's say from seconds to minutes, assuming sufficient RAM and SSD space), once the other settings (e.g. RX frequency, gain, sample rate, and bandwidth) have been applied.

I am using the following script (changing the parameters inside it as needed), which is passed to bladeRF-cli:

Code: Select all

# Begin of the script
set agc off
set frequency rx 310M
set bandwidth rx 550K
set samplerate rx 20M
set gain rx 6
rx config file=/tmp/rx_test_in_ram format=bin n=400M channel=1,2
rx start
rx wait 22s
# End of the script
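For context, my understanding is that the CLI's rx path boils down to something like the following libbladeRF sync-interface flow. This is only a sketch I put together from the API documentation, not my actual code: the buffer, transfer, block-size, and loop-count values are placeholders, and most error handling is trimmed.

Code: Select all

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <stdbool.h>
#include <libbladeRF.h>

#define SAMPLES_PER_CALL 8192 /* placeholder block size (per channel, as I read the format notes) */

int main(void)
{
    struct bladerf *dev = NULL;
    int status = bladerf_open(&dev, NULL);
    if (status != 0) {
        fprintf(stderr, "bladerf_open: %s\n", bladerf_strerror(status));
        return EXIT_FAILURE;
    }

    /* Same settings as in the CLI script, applied to both RX channels */
    for (int ch = 0; ch < 2; ch++) {
        bladerf_set_gain_mode(dev, BLADERF_CHANNEL_RX(ch), BLADERF_GAIN_MGC); /* agc off */
        bladerf_set_frequency(dev, BLADERF_CHANNEL_RX(ch), 310000000);
        bladerf_set_bandwidth(dev, BLADERF_CHANNEL_RX(ch), 550000, NULL);
        bladerf_set_sample_rate(dev, BLADERF_CHANNEL_RX(ch), 20000000, NULL);
        bladerf_set_gain(dev, BLADERF_CHANNEL_RX(ch), 6);
    }

    /* Two synchronized, interleaved RX channels; the stream parameters
     * (buffers, buffer size, transfers, timeout) are placeholder values */
    status = bladerf_sync_config(dev, BLADERF_RX_X2, BLADERF_FORMAT_SC16_Q11,
                                 32,    /* num_buffers */
                                 8192,  /* buffer_size, multiple of 1024 */
                                 16,    /* num_transfers */
                                 3500); /* stream timeout, ms */
    if (status != 0) {
        fprintf(stderr, "bladerf_sync_config: %s\n", bladerf_strerror(status));
        bladerf_close(dev);
        return EXIT_FAILURE;
    }

    bladerf_enable_module(dev, BLADERF_CHANNEL_RX(0), true);
    bladerf_enable_module(dev, BLADERF_CHANNEL_RX(1), true);

    /* Buffer sized for 2 channels x (I,Q) x int16_t per sample */
    int16_t *buf = malloc((size_t)SAMPLES_PER_CALL * 2 * 2 * sizeof(int16_t));
    FILE *f = fopen("/tmp/rx_test_in_ram", "wb");

    for (int i = 0; i < 1000 && status == 0; i++) { /* placeholder capture length */
        status = bladerf_sync_rx(dev, buf, SAMPLES_PER_CALL, NULL, 5000);
        if (status == 0) {
            fwrite(buf, sizeof(int16_t), (size_t)SAMPLES_PER_CALL * 2 * 2, f);
        } else {
            fprintf(stderr, "bladerf_sync_rx: %s\n", bladerf_strerror(status));
        }
    }

    fclose(f);
    free(buf);
    bladerf_enable_module(dev, BLADERF_CHANNEL_RX(0), false);
    bladerf_enable_module(dev, BLADERF_CHANNEL_RX(1), false);
    bladerf_close(dev);
    return status == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
}
As far as I can tell, the samples/buffers/xfers options of rx config map onto the buffer_size/num_buffers/num_transfers arguments of bladerf_sync_config here, which is why I have been trying to tune them.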
Now, on to the issue.

Every time I run the script, a different file size is written to /tmp/rx_test_in_ram. In addition, the actual time taken to acquire the samples also varies from run to run. I ran several tests, noting the exact file size and elapsed time, to make sure of the issue.

I also tried launching bladeRF-cli with an increased verbosity level, and I get the following message:

Code: Select all

[DEBUG @ host/libraries/libbladeRF/src/streaming/sync_worker.c:101] RX overrun @ buffer 15
I attempted to adjust the sample rate and the samples, buffers, and xfers stream parameters (an example of one such rx config variant is shown below), and I also removed the timeout value in the last line (rx wait). Furthermore, I also ran it from different hosts (an ARM A9 and an Intel Core i5).
The result is always the same: whenever I set the sample rate higher than 3 or 4 MSps I run into the problem (the buffer number at which the overrun is reported differs, obviously).
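One of the rx config variants I tried looked roughly like this (the exact values are only illustrative; they changed between tests):

Code: Select all

rx config file=/tmp/rx_test_in_ram format=bin n=400M channel=1,2 samples=32768 buffers=64 xfers=16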

So it seems that the bladeRF 2.0 is not able to work as expected, which is very strange!
Moreover, I looked into the code and saw that this part is apparently still marked as a TODO after many years: when an overrun occurs it is only logged and the buffer is resubmitted, so the loss is never propagated back to the sync_rx() caller.

Code: Select all

    /* Get the index of the buffer that was just filled */
    samples_idx = sync_buf2idx(b, samples);

    if (b->resubmit_count == 0) {
        if (b->status[b->prod_i] == SYNC_BUFFER_EMPTY) {

            /* This buffer is now ready for the consumer */
            b->status[samples_idx] = SYNC_BUFFER_FULL;
            b->actual_lengths[samples_idx] = num_samples;
            pthread_cond_signal(&b->buf_ready);

            /* Update the state of the buffer being submitted next */
            next_idx = b->prod_i;
            b->status[next_idx] = SYNC_BUFFER_IN_FLIGHT;
            next_buf = b->buffers[next_idx];

            /* Advance to the next buffer for the next callback */
            b->prod_i = (next_idx + 1) % b->num_buffers;

            log_verbose("%s worker: buf[%u] = full, buf[%u] = in_flight\n",
                        worker2str(s), samples_idx, next_idx);

        } else {
            /* TODO propagate back the RX Overrun to the sync_rx() caller */
            log_debug("RX overrun @ buffer %u\r\n", samples_idx);

            next_buf = samples;
            b->resubmit_count = s->stream_config.num_xfers - 1;
        }
    } else {
        /* We're still recovering from an overrun at this point. Just
         * turn around and resubmit this buffer */
        next_buf = samples;
        b->resubmit_count--;
        log_verbose("Resubmitting buffer %u (%u resubmissions left)\r\n",
                    samples_idx, b->resubmit_count);
    }
Can anyone help me with this?

Thank you!