29/5/11

Dattorro's Reverb, Part 2

In my last entry I wrote about Dattorro's reverberation network and posted an audio example of it. The DSP implementation I used at that time produced an annoying background noise, so I couldn't close the issue. Furthermore, I promised sources!
I finally decided to discard all that code and use a powerful API like FMOD to handle loading and playing back the audio, so I could focus on the implementation of the effect.

Looking at Dattorro's structure, we can see that it only uses three components: delays, low pass filters and all pass filters. So I decided to implement the logic of these three elements and then combine the blocks to achieve the desired result.

Delay

Let's start with the simplest of them, the delay.


We simply need a delay of N samples in the output branch. To achieve this we will use the following class.

Delay header (located inside SignalProcesing.h)
// Audio delay class
class Delay
{

public:

// Constructor
Delay(
int size          // Size of delay (expressed in samples)
 );

//Destructor
~Delay();

//Process one sample
void Process(
  float* sample // Address to store processed sample value
  );

//Get sample delayed
float GetDelayedSample(
int delay // Delay expressed in samples
);

//Reads a sample
float ReadSample();

//Writes a sample
void  WriteSample(float sample);

private:

int m_size;  // Size of delay in samples
int m_read;  // Read position
int m_write;  // Write position, (read + size - 1) % size
float* m_buffer; // Samples buffer

};

Delay implementation (located inside SignalProcesing.cpp)

// Constructor
Delay::Delay(
int size // Size of delay (expressed in samples)
)
{
//Initialize values
m_size = size;
m_read = 0;
m_write = size - 1;

//Ensure that initial sample values at buffer are 0.0f
m_buffer = new float[m_size];
for(int i=0; i < size; ++i)
{
m_buffer[i] = 0.0f;
}
}

//Destructor
Delay::~Delay()
{
delete[] m_buffer;
}

//Writes a sample into buffer
void Delay::WriteSample(
float sample // Sample Value
)
{
//write sample
m_buffer[m_write] = sample;

//update write position
m_write = (m_write + 1) % m_size;
}

//Reads a sample from buffer
float Delay::ReadSample()
{
float retVal;
//read sample
retVal = m_buffer[m_read];

//update read position
m_read = (m_read + 1) % m_size;
return retVal;
}

//Process a given sample
void Delay::Process(
float* sample  // Address to store processed sample value
)
{
//Write sample into delay's buffer
WriteSample(*sample);

//Update current value of sample with delayed value
*sample = ReadSample();
}

//Reads a delayed sample from buffer
float Delay::GetDelayedSample(
int delay   // Delay expressed in samples
)
{
int sampleIndex = (m_read - delay) % m_size;
return sampleIndex >= 0 ? m_buffer[sampleIndex]
                         : m_buffer[m_size + sampleIndex];
}



The main logic resides in the Process method: we write the input sample at the write position and read the delayed value at the read position (the write position always stays at (read + size - 1) % size).
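As a quick sanity check, here is a minimal test program (not part of the reverb project itself) that pushes an impulse through a short delay line and prints the value returned by each Process call.

#include <cstdio>
#include "SignalProcesing.h"

int main()
{
//Hypothetical test: feed an impulse into a short delay line
Delay delay(4);

float input[8] = { 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f };

for(int i=0; i < 8; ++i)
{
float sample = input[i];

//After Process, sample holds the delayed value
delay.Process(&sample);
printf("out[%d] = %f\n", i, sample);
}

return 0;
}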

Low Pass Filter

The next component is the low pass filter. Its structure is shown in the picture.

And its implementation is the following class:

LowPassFilter header (located inside SignalProcesing.h)
// Implements a low pass filter
class LowPassFilter
{

public:
// Constructor
LowPassFilter();

// Destructor
~LowPassFilter();

// Set gain coefficient
void SetGain(
  float gain  // New gain coefficient
 )
{
m_gain = gain;
};


// Process one sample
void Process(
  float* sample // Address to store processed sample value
  );

private:

float  m_gain;  // Gain coefficient
Delay* m_delay;  // Audio delay

};
LowPassFilter implementation (located inside SignalProcesing.cpp)
// Constructor
LowPassFilter::LowPassFilter()
{
// Create a delay of one sample
m_delay = new Delay(1);
}

// Destructor
LowPassFilter::~LowPassFilter()
{
delete m_delay;
}

//Process one sample
void LowPassFilter::Process(
float* sample  // Address to store processed sample value
)
{
*sample = *sample * m_gain
        + m_delay->ReadSample() * (1.0f - m_gain);
m_delay->WriteSample(*sample);            
}
As with Delay, the main logic resides in the Process method. We compute the value of the output sample from the current input and the delayed sample, following the operations shown in the image.
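In difference-equation form, the Process method above computes y[n] = gain * x[n] + (1 - gain) * y[n-1], where y[n-1] is the previous output stored in the one-sample delay.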

All Pass Filter

Finally we have the all pass filter, with the following structure and declaration.


AllPassFilter header (located inside SignalProcesing.h)
//Implements an all pass filter
class AllPassFilter
{

public:

// Constructor
AllPassFilter(
int delaySize,   // Size of delay in samples
bool changeSign   // Sign used in sums
  );

// Destructor
~AllPassFilter();

// Set gain coefficient
void SetGain(
  float gain   // New gain coefficient
 )
{
m_gain = gain;
};


// Process one sample
void Process(float* sample);

// Get sample delayed
float GetDelayedSample(
int delay // Delay expressed in samples
    );

private:

float  m_gain;  // Gain coefficient
bool   m_changeSign; // Change sign in the summations
Delay* m_delay;  // Audio delay
float  m_predelayNode; // Pre delay node value
float  m_postDelayNode; // Post delay node value
};
If we look at the Process method, the key point in this case is the pre and post delay nodes, from which the filter output value is calculated.
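The body of Process is not reproduced here; a minimal sketch consistent with the header above and the all pass structure is the following. The downloadable project contains the full version, and the exact signs selected by m_changeSign may differ from this sketch.

//Sketch of AllPassFilter::Process; the sign convention controlled by
//m_changeSign is an assumption, see the downloadable project for the
//reference implementation
void AllPassFilter::Process(
float* sample  // Address to store processed sample value
)
{
//Value at the output of the delay line
m_postDelayNode = m_delay->ReadSample();

//Pre delay node: input combined with the feedback from the delay output
//Output: delayed value combined with the feedforward from the pre delay node
if(m_changeSign)
{
m_predelayNode = *sample - m_gain * m_postDelayNode;
*sample = m_postDelayNode + m_gain * m_predelayNode;
}
else
{
m_predelayNode = *sample + m_gain * m_postDelayNode;
*sample = m_postDelayNode - m_gain * m_predelayNode;
}

//Store the pre delay node back into the delay line
m_delay->WriteSample(m_predelayNode);
}

//Reads a delayed sample from the internal delay line
float AllPassFilter::GetDelayedSample(
int delay // Delay expressed in samples
)
{
return m_delay->GetDelayedSample(delay);
}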

Building the structure

Once we have the necessary components, we only need to follow the structure and build the output signal using the scheme. The main point here is that, when defining the FMOD DSP, we provide the address of a ReverbDsp object as userdata, as follows:

//Initializes audio system
void AudioManager::Initialize()
{
...

//Create the DSP reverb effect.
FMOD_DSP_DESCRIPTION  m_dspdesc;

memset(&m_dspdesc, 0, sizeof(FMOD_DSP_DESCRIPTION));

m_dspdesc.channels     = 0;
m_dspdesc.read         = &DspCallback;

// Set as user data our ReverbDsp object
m_dspdesc.userdata     = m_dspEffect;

m_result = m_system->createDSP(&m_dspdesc, &m_reverbDsp);
ErrCheck();

//Inactive by default.
m_reverbDsp->setBypass(true);
m_dspActive = false;

m_result = m_system->addDSP(m_reverbDsp, 0);

  ...
}

Here m_dspEffect is a pointer to a ReverbDsp object, and DspCallback is the callback function called by FMOD to do the signal processing. Inside that callback we must cast the provided userdata properly, as follows:

// DSP Callback method, called from FMOD
FMOD_RESULT F_CALLBACK DspCallback(
FMOD_DSP_STATE *dsp_state,  // Dsp state
float *inbuffer,   // Address of input data
float *outbuffer,   // Address to store output data
unsigned int length,   // Size of data
int inchannels,    // Number of input channels
int outchannels    // Number of output channels
)
{
unsigned int count;
FMOD::DSP *thisdsp = (FMOD::DSP *)dsp_state->instance;

 // Get ReverbDsp
    ReverbDsp* dsp;
    thisdsp->getUserData((void **)&dsp);

// Process samples
for (count = 0; count < length; count++)
{
  dsp->ProcessFrame(&outbuffer[(count * outchannels)],
                       &outbuffer[(count * outchannels) + 1],
                       inbuffer[(count * inchannels)],
                       inbuffer[(count * inchannels)+1]);
}
return FMOD_OK;
}

And finally, Dattorro's network is implemented inside ReverbDsp::ProcessFrame, where the input samples are processed and the output samples are generated.

// Process one audio frame (stereo)
void ReverbDsp::ProcessFrame(
float* outL, // Output left channel sample
float* outR, // Output right channel sample
float inL,  // Input left channel sample
float inR  // Input right channel sample
)
{
float accumulator, x1, x2, x3;

//Implements Dattorro's reverberation network

x1 =(inL + inR)/2.0f;

m_predelay->Process(&x1);
m_lowPass1->Process(&x1);
m_tank1->Process(&x1);
m_tank2->Process(&x1);
m_tank3->Process(&x1);
m_tank4->Process(&x1);

x2 = x1 + m_delay4->ReadSample() * m_decayFactor;
x3 = x1 + m_delay2->ReadSample() * m_decayFactor;

m_tank5->Process(&x2);
m_delay1->Process(&x2);
m_lowPass2->Process(&x2);
x2 *= m_decayFactor;
m_tank7->Process(&x2);
m_delay2->WriteSample(x2);

m_tank6->Process(&x3);
m_delay3->Process(&x3);
m_lowPass3->Process(&x3);
x3 *= m_decayFactor;
m_tank8->Process(&x3);
m_delay4->WriteSample(x3);

//Compute output values
accumulator = m_scaleFactor * m_delay3->GetDelayedSample(266);
accumulator += m_scaleFactor * m_delay3->GetDelayedSample(2974);
accumulator -= m_scaleFactor * m_tank8->GetDelayedSample(1913);
accumulator += m_scaleFactor * m_delay4->GetDelayedSample(1996);
accumulator += m_scaleFactor * m_delay1->GetDelayedSample(1990);
accumulator -= m_scaleFactor * m_tank7->GetDelayedSample(187);
*outL = accumulator
    - m_scaleFactor * m_delay2->GetDelayedSample(1066);

accumulator = m_scaleFactor * m_delay1->GetDelayedSample(353);
accumulator += m_scaleFactor * m_delay1->GetDelayedSample(3627);
accumulator -= m_scaleFactor * m_tank7->GetDelayedSample(1228);
accumulator += m_scaleFactor * m_delay2->GetDelayedSample(2673);
accumulator -= m_scaleFactor * m_delay3->GetDelayedSample(2111);
accumulator += m_scaleFactor * m_tank8->GetDelayedSample(353);
*outR = accumulator
    - m_scaleFactor * m_delay4->GetDelayedSample(121);
}
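For reference, a class declaration consistent with the members used in ProcessFrame looks roughly like this; the actual delay sizes and gain values are set up in the downloadable project, and the comments are only an indication of each member's role in Dattorro's scheme.

//Sketch of the ReverbDsp declaration, reconstructed from the members used
//in ProcessFrame; see the downloadable project for the actual delay sizes
//and gain values
class ReverbDsp
{

public:

// Constructor
ReverbDsp();

// Destructor
~ReverbDsp();

// Process one audio frame (stereo)
void ProcessFrame(
float* outL, // Output left channel sample
float* outR, // Output right channel sample
float inL,  // Input left channel sample
float inR  // Input right channel sample
);

private:

Delay*         m_predelay;    // Input pre delay
LowPassFilter* m_lowPass1;    // Input bandwidth filter
LowPassFilter* m_lowPass2;    // Damping filter, left half of the tank
LowPassFilter* m_lowPass3;    // Damping filter, right half of the tank
AllPassFilter* m_tank1;       // Input diffusion all pass filters
AllPassFilter* m_tank2;
AllPassFilter* m_tank3;
AllPassFilter* m_tank4;
AllPassFilter* m_tank5;       // Tank all pass filters
AllPassFilter* m_tank6;
AllPassFilter* m_tank7;
AllPassFilter* m_tank8;
Delay*         m_delay1;      // Tank delay lines
Delay*         m_delay2;
Delay*         m_delay3;
Delay*         m_delay4;
float          m_decayFactor; // Tank decay (feedback) factor
float          m_scaleFactor; // Output scale factor

};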
This way we get a clean sound without background noise ;)

A sample Visual Studio 2010 project can be downloaded here. In order to make it compile, you must add the path to your FMOD SDK include folder to the VC++ Include Directories, and the path to the FMOD SDK lib folder to the VC++ Library Directories.

An executable can be downloaded here (it needs fmodex.dll to be in the same directory).

30/6/10

Reverb Effect

My first contact with digital audio: a project done during my academic studies. Basically, the task was to implement the audio processing network illustrated below.



If we study this scheme, we can see that the stereo input signal is converted to mono and that there isn't any output branch. That's because the output is a linear combination of different nodes of the scheme, each of them delayed by a number of samples.

Left channel output
float outputL = 0.0f;
outputL = scale_factor * NODE1[266];
outputL += scale_factor * NODE1[2974];
outputL -= scale_factor * NODE2[1913];
outputL += scale_factor * NODE3[1996];
outputL -= scale_factor * NODE4[1990];
outputL -= scale_factor * NODE4[187];
outputL -= scale_factor * NODE5[1066];

Right channel output
float outputR = 0.0f;
outputR = scale_factor * NODE6[353];
outputR += scale_factor * NODE6[3627];
outputR -= scale_factor * NODE4[1228];
outputR += scale_factor * NODE5[2673];
outputR -= scale_factor * NODE1[2111];
outputR -= scale_factor * NODE2[335];
outputR -= scale_factor * NODE3[121];


Listening to the code


All this stuff makes no sense without an actual implementation.

Here you can listen to a piece of an original composition, without the effect applied.


And finally, here is the same piece with the reverb applied. At some points you can hear a very annoying background noise, but being my first effect (from around 2003) I am satisfied.


You can download the source code and read more about it here.





27/6/10

XNA Tracker Player

My current personal project. Under development. Stay tuned!

26/6/10

Starting whatever

For some time the idea of having a blog where I can write about everything I find interesting in the world of audio programming has been going around in my head. Now, finally, I'm diving into audio programming and would like to share it with all of you.

Enjoy!