Sending Audio Frames

To submit audio frames, start by building an NDIlib_compressed_packet_t structure that contains the compressed AAC audio data.

The following example builds a compressed audio packet when you have compressed audio data of size audio_data_size at pointer p_audio_data, and extra data of size audio_extra_data_size at pointer p_audio_extra_data:

// See notes above
uint8_t* p_audio_data; 
uint32_t audio_data_size;

// See notes above
uint8_t* p_audio_extra_data;
uint32_t audio_extra_data_size;

// Compute the total size of the structure
uint32_t packet_size = sizeof(NDIlib_compressed_packet_t) + audio_data_size +
                                                            audio_extra_data_size;

// Allocate the structure
NDIlib_compressed_packet_t* p_packet = (NDIlib_compressed_packet_t*)malloc(packet_size);

// Fill in the settings
p_packet->version = NDIlib_compressed_packet_t::version_0;
p_packet->fourCC = NDIlib_FourCC_type_AAC;
p_packet->pts = 0; // These should be filled in correctly if possible.
p_packet->dts = 0;
p_packet->flags = NDIlib_compressed_packet_t::flags_keyframe; // Every AAC frame is a keyframe
p_packet->data_size = audio_data_size;
p_packet->extra_data_size = audio_extra_data_size;

// The compressed audio data immediately follows the structure in memory; copy it into place.
uint8_t* p_dst_audio_data = (uint8_t*)(1 + p_packet);
memcpy(p_dst_audio_data, p_audio_data, audio_data_size);

// The extra data (the AAC AudioSpecificConfig) follows the compressed audio data.
uint8_t* p_dst_extra_audio_data = p_dst_audio_data + audio_data_size;
memcpy(p_dst_extra_audio_data, p_audio_extra_data, audio_extra_data_size);

As noted in the AAC support section of this document, the extra data (the AAC AudioSpecificConfig header) will almost always be two bytes.

Once you have the compressed packet structure that describes the frame, you simply need to create a regular NDIlib_audio_frame_v3_t to pass to the NDI SDK send function, as shown in the following example:
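The following is a minimal sketch of that frame setup. It assumes a 48 kHz stereo stream with 1024 samples per AAC frame, an existing sender instance named pSender, and the Advanced SDK identifiers NDIlib_FourCC_audio_type_ex_AAC and data_size_in_bytes; verify these names against your SDK headers.

// A sketch only; constant and field names should be checked against your SDK headers.
NDIlib_audio_frame_v3_t audio_frame;
audio_frame.sample_rate = 48000;    // Assumed sample rate of the AAC stream
audio_frame.no_channels = 2;        // Assumed channel count
audio_frame.no_samples = 1024;      // Samples per AAC frame for this stream
audio_frame.timecode = NDIlib_send_timecode_synthesize;
audio_frame.FourCC = (NDIlib_FourCC_audio_type_e)NDIlib_FourCC_audio_type_ex_AAC; // Assumed constant
audio_frame.p_data = (uint8_t*)p_packet;      // The compressed packet built above
audio_frame.data_size_in_bytes = packet_size; // Total size of the compressed packet
audio_frame.p_metadata = NULL;

// Submit the frame; pSender is an assumed, previously created NDIlib_send_instance_t.
NDIlib_send_send_audio_v3(pSender, &audio_frame);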

Once this is sent, the audio data will be transmitted and you may free or reuse any audio data pointers that you allocated to represent it. It is, of course, possible to use a pool of memory to build audio packets without per-packet memory allocations, as sketched below.
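For instance, a simple approach is to keep one buffer sized for the largest packet you expect and rebuild each packet in place. The helper below is purely illustrative and not part of the SDK; max_packet_size is an assumed upper bound on the total packet size.

// Hypothetical pooled allocation: reserve one buffer up front and reuse it for every packet,
// avoiding a malloc/free pair per audio frame. Not thread-safe as written.
static uint8_t* p_packet_buffer = NULL;

NDIlib_compressed_packet_t* get_pooled_packet(uint32_t max_packet_size)
{
    if (!p_packet_buffer)
        p_packet_buffer = (uint8_t*)malloc(max_packet_size);
    return (NDIlib_compressed_packet_t*)p_packet_buffer;
}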

The audio send function is thread-safe and may be called on a thread separate from the one used for video compression and transmission.
