Fighting with the sync TX interface - MinGW32 and Windows 7

Discussions related to embedded firmware, driver, and user mode application software development
F4GKR
Posts: 2
Joined: Sat Jun 28, 2014 8:36 am

Fighting with the sync TX interface - MinGW32 and Windows 7

Post by F4GKR »

Hi all,

I am fighting to get transmit working...
Briefly:
- Compiled the host code under MinGW32, OK with some tricks (functions missing in MinGW)
- Implementing a Qt wrapper around the sync interface
- Rx is OK: using an external generator, I am able to see my signal at the correct location on the spectrum, fine & good.
Here I am using the settings given as an example in libbladeRF.h:
#define DEFAULT_STREAM_XFERS 64
#define DEFAULT_STREAM_BUFFERS 5600
#define DEFAULT_STREAM_SAMPLES 2048
#define DEFAULT_STREAM_TIMEOUT 0

rc = bladerf_sync_config(this->bladerf_device,
                         BLADERF_MODULE_RX,
                         BLADERF_FORMAT_SC16_Q12,
                         DEFAULT_STREAM_BUFFERS,
                         DEFAULT_STREAM_SAMPLES,
                         4,
                         DEFAULT_STREAM_TIMEOUT);
Rocks, great.

- Tx: depending on the config settings, I either never get a correct transmission or I get timeouts.
The sampling rate is set to 1 MHz, and I fill a 1024x1024 sample array with a complex sine, expecting to see just a single carrier for 1 second (OK, 1024*1024 samples at 1 MHz is a bit more than 1 s).
Depending on the settings, it sometimes works but the transmission lasts less than 1 second (about 300 ms on the waterfall), or it dies with a timeout in the middle, never at the same number of samples.
Setting verbose mode does not help; there is no message explaining where the problem is.
Googling bladerf_sync_tx just turns up the OpenBTS code (https://github.com/Nuand/OpenBTS/blob/m ... Device.cpp), and using the same parameters does not fix the issue.

Does anybody use this interface successfully? What are the correct settings for bladerf_sync_config?
Last question... does bladerf_sync_tx free the samples passed to it? It looks like it does...

thanks for your help
sylvain - F4GKR
jynik
Posts: 455
Joined: Thu Jun 06, 2013 8:15 pm

Re: Fighting with the sync TX interface - MinGW32 and Window

Post by jynik »

Hi there,

I'll try to address some questions and comments in-line. Apologies this got so long... I meant it to be quick, but that never quite seems to happen. ;)
- Compiled host code under MinGW32, ok with tricks (functions missing in MinGW)
What tricks were needed? Let's file some issues on the tracker about these; if you have to jump through hoops, those are problems we should get fixed.

If you happen to find that something about the MinGW build is causing problems, note that you can use the (free) Visual Studio Express tools to build a Windows lib... at least until whatever issues might exist get fixed.
rc = bladerf_sync_config(this->bladerf_device,
                         BLADERF_MODULE_RX,
                         BLADERF_FORMAT_SC16_Q12,
                         DEFAULT_STREAM_BUFFERS,
                         DEFAULT_STREAM_SAMPLES,
                         4,
                         DEFAULT_STREAM_TIMEOUT);
Rocks, great.

Not relevant to any of the issues, but I recommend that you change that format to BLADERF_FORMAT_SC16_Q11. The Q12 name was a misnomer and has been kept as a #define to the former for backward compatibility. However, this macro is scheduled for deprecation and removal, so it's best to make the change now.
- Tx : depending on config settings, never gets correct transmit or gets timeouts.
Are you able to transmit samples OK via the bladeRF-cli (uses the async interface currently)? If not, perhaps some other underlying issue exists...
setting verbose mode does not help, no message explaining where the problem is.
To keep lots of debug output out of view, verbose output in the sync datapath is not enabled by default. You can enable it at build time via

Code: Select all

-DENABLE_LIBBLADERF_SYNC_LOG_VERBOSE=ON
After configuring with that and rebuilding, you should see a lot of verbose output about the state of the API-facing side of the sync interface, as well as the background worker's state while handling buffers.
Does anybody use this interface with success ?
I guess I don't really count as an unbiased user, being one of the authors. :P

But I can say that one readily available program is the libbladeRF_test_sync program that gets built with libbladeRF. I'm using it right now to transmit a sinusoid to my spectrum analyzer, and everything looks good in the FFT, polar IQ, and RF envelope views. If you want a sanity check or find yourself in disbelief, I'll happily grab screenshots from the VSA later.

The --help option provides a quick overview of some of the knobs you can turn.

I've attached a binary file (SC16 Q11 data) containing 1 period of a sinusoid, consisting of 4096 points. Repeating this file 1000 times will give a bit over 4 seconds of transmission:

Code: Select all

./output/libbladeRF_test_sync -i sinusoid_sc16q11.bin -s 1M -r 1000
What should be correct settings for bladerf_sync_config ?
If you haven't already seen them, the function descriptions should point you in the right direction toward understanding what settings you need.

As noted in the bladerf_sync_config description, the bladerf_init_stream description provides some information about these parameters (because the sync interface is built atop the async one). In particular, note the formula shown there, which relates the sample rate to the parameters. (In this formula, the sample rate is in Hz and the timeout is in seconds, to keep the units consistent.) Violating this relationship will likely result in timeouts and dropped samples -- it defines how fast you need to produce/consume samples to keep up with the flow of data.

I know it's a bit complicated, but perhaps the following will help better clarify things:
  • Unless you're really trying to achieve low latency, or are only sending very small amounts of data, using bigger values (e.g., 4096, 8192, 16384) for samples_per_buffer will generally help you maintain decent throughput.
  • num_transfers defines how many buffers may be "in flight" in the USB stack at any given time; to my knowledge, the maximum value here depends on the USB stack/driver.
  • num_buffers, on the other hand, defines the total number of sample buffers in the underlying circular queue.
  • Unless you're really striving to minimize latency, you'll probably want 2x more buffers than transfers. This helps ensure you have some wiggle room if you either momentarily get caught not processing samples, or already have a lot of samples available up front that you can prepare for transmission.
Last question... does bladerf_sync_tx free the passed samples ? looks like yes ..
bladerf_sync_tx internally allocates and uses its own buffers under the hood; these are distinct from the samples pointer you provide. Those internal buffers are freed when the underlying sync interface is deinitialized, which occurs when:
  • You call bladerf_enable_module(dev, module, false);
  • You close the device handle
However, it does not free() the samples pointer you pass to it; it merely copies the sample data to the internal circular buffer ring.

Cheers,
Jon
F4GKR
Posts: 2
Joined: Sat Jun 28, 2014 8:36 am

Re: Fighting with the sync TX interface - MinGW32 and Window

Post by F4GKR »

Good evening,

Thanks for the time spent replying. There are lots of things here; it is maybe not that easy to keep track of them all in a single thread.

The first topic is compiling under MinGW (I was using GCC 4.6.1). After running CMake, I had (as far as I can quickly diff the files):
- a problem with the generated host_config: the functions inserted to manage endianness do not work (they are Microsoft-specific), and the #ifdefs are not correct.
For example, #include <xmmintrin.h> does not exist in MinGW; it has an equivalent but with different functions, etc. It was too difficult to fix everything, so I finally removed all the tests and calls and replaced them with:

#define HOST_TO_LE16(val) (val)
#define LE16_TO_HOST(val) (val)
#define HOST_TO_BE16(val) bswap_16(val)
#define BE16_TO_HOST(val) bswap_16(val)
#define HOST_TO_LE32(val) (val)
#define LE32_TO_HOST(val) (val)
#define HOST_TO_BE32(val) bswap_32(val)
#define BE32_TO_HOST(val) bswap_32(val)
#define HOST_TO_LE64(val) (val)
#define LE64_TO_HOST(val) (val)
#define HOST_TO_BE64(val) bswap_64(val)
#define BE64_TO_HOST(val) bswap_64(val)

- improper generation of the Makefiles: specifically, the functions defined in clock_gettime.h and ptw32_timespec.h were not found at link time (using CMake 3.0, no error reported)
-> this means the function clock_gettime(), for example, is not found; you get a long list of errors since it is used in many places.
- the macro PTW32_TIMESPEC_TO_FILETIME_OFFSET in ptw32_timespec does not compile as written with GCC... it complains about the constant 3577643000 (too big)! I had to fight with this one and finally rewrote the macro to please GCC:

#define PTW32_TIMESPEC_TO_FILETIME_OFFSET \
( ((int64_t) 27111902 << 32) + (int64_t) 3577643*1000 + 8 )
This is really stupid... but 3577643*1000 was the only way to get it to compile without overflow.

- file_ops.c: the function get_home_dir() was rewritten to just return "./"; the conditional sections (#ifdef) were not processed correctly.

I will give the options you suggested a try. My feedback will follow in the coming days (too much work at the moment).
best regards, and thanks again for the support you provide.

sylvain F4GKR (amateur radio callsign).
jquirke
Posts: 12
Joined: Sat Oct 25, 2014 12:51 am

Re: Fighting with the sync TX interface - MinGW32 and Window

Post by jquirke »

I've found that a number of transfers of 64 causes problems in Cygwin (errors coming back from libusb_handle_events). Maybe this is relevant to MinGW32 too.

Try reducing the number of transfers.