Using this library was pretty straightforward. I was able to do what I wanted, but the performance left a lot to be desired.
Here's what I was trying to do. Maybe there's a faster way to implement it? If not, I think I have a proposed solution.
I am using an FPGA and a high-speed DAC to synthesize some sinusoidal tones. We are aiming for an SFDR of 70+ dB. The DAC update frequency is 30 MHz. Right now, I'm using a linear interpolation scheme to reduce the size of my sine look-up tables. I have a 28-bit phase accumulator, and I have two LUTs containing 18-bit, signed, fixed-point slopes and offsets. I use the upper bits of the phase accumulator to address the LUTs, and I use 17 of the lower bits as an unsigned "distance". I then multiply the distance by the slope and add the offset. I send the resulting sample stream to the DAC.
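For clarity, here is a minimal sketch of the phase-word splitting described above. The constants come from the numbers in the description; the function name is just for illustration, not anything from my HDL or from this library:

```python
PHASE_BITS = 28                      # phase accumulator width
FRAC_BITS = 17                       # lower bits used as the unsigned "distance"
ADDR_BITS = PHASE_BITS - FRAC_BITS   # leaves an 11-bit LUT address

def split_phase(phase):
    """Split one phase-accumulator word into a LUT address and a distance."""
    addr = phase >> FRAC_BITS              # upper bits index the slope/offset LUTs
    dist = phase & ((1 << FRAC_BITS) - 1)  # lower 17 bits, treated as unsigned
    return addr, dist

# e.g. address 5 with distance 123:
# split_phase((5 << 17) | 123) -> (5, 123)
```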
We're seeing some spurious peaks in the FFT of the DAC output. I wanted to do some analysis to see what level of spurious peaks I should expect from my linear interpolation scheme. That's where the fixedpoint library comes in. I dumped my slope and offset LUTs from my HDL simulator and loaded them into Python as lists of FixedPoint values. Then, I calculated my fixed point distances and performed the same multiplication and addition.
The problem is that I want my simulated FFT to match the real FFT. The real FFT uses 10 ms of data, so I need to generate 300,000 FixedPoint results.
Right now, this is the best I've come up with. dist and addr are numpy arrays of integers. I couldn't find a way to initialize the raw fractional bits of dist_fp directly from the integers in dist, so I had to convert from int to str and back. However, creating dist_fp "only" takes 5-6 seconds. Computing the samples takes something like 30+ seconds.
dist_fp = [FixedPoint(hex(d), True, 1, FRAC_BITS) for d in dist]
samples = []
for a, d in zip(addr, dist_fp):
    offset = offsets[a]
    slope = slopes[a]
    samples.append(offset + slope * d)
Is there a better way to do this? My gut tells me I might be able to improve it slightly, but I think the better solution would be to offer something like a FixedPointArray that is backed by numpy arrays underneath. Arrays of fixed-point numbers with identical formats seem common, so I think it would be broadly useful. You would probably have to limit the total number of bits to 64 and use dtype="u8" under the hood, but that doesn't seem like a major limitation to me.
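To make the idea concrete, here is a hypothetical sketch of the kind of vectorized computation a numpy-backed FixedPointArray could do internally: all arithmetic on raw integer bit patterns, with binary points aligned by a shift. The example data and the assumption that slopes and offsets share the same fractional format are mine, not from the library:

```python
import numpy as np

FRAC_BITS = 17  # width of the unsigned "distance" fraction

# Hypothetical stand-ins for the dumped LUTs: raw integer bit
# patterns of the 18-bit signed slopes and offsets.
offsets_raw = np.array([1000, 2000, 3000], dtype=np.int64)
slopes_raw  = np.array([10, 20, 30], dtype=np.int64)

addr = np.array([0, 1, 2])                                    # LUT addresses
dist = np.array([0, 1 << 16, (1 << 17) - 1], dtype=np.int64)  # raw distances

# offset + slope * dist on raw integers: the product carries
# FRAC_BITS extra fractional bits, so shift the offsets up to
# align the binary points before adding.
samples_raw = (offsets_raw[addr] << FRAC_BITS) + slopes_raw[addr] * dist
```

The whole 300,000-sample run would then be a few array operations instead of a Python loop, which is the speedup I'm hoping a FixedPointArray could provide.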
Thoughts?