compensate delay option of headphone effect filter allocates 16 GB
Posted: 28 Apr 2018 04:35
My computer has 16 GB of memory, but a lot of it was already in use by other programs, so the huge allocation sent the machine into heavy swapping and the hard drive started thrashing badly. It took me quite a while to kill VLC because even the mouse pointer was unresponsive for a long time.
Anyway, I took a look at the source code and I think I know what the problem is.
The delays computed by ComputeChannelOperations() are all supposed to be nonnegative, which is why the i_delay field of struct atomic_operations_t is an unsigned int. For some channels (LFE, I believe), however, the computed delay is actually slightly negative, so when that negative value is assigned to i_delay it wraps around to a very large number. Then, at the end of Init(), an overflow buffer is allocated whose size is proportional to the largest delay, which is how the allocation ends up enormous.
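To make the wraparound concrete, here is a small standalone C program (not VLC code; the struct and field names follow the description above, and the numbers are invented) showing how a slightly negative delay stored in an unsigned field becomes a value near UINT_MAX, and how a buffer sized from it balloons to roughly 16 GiB:

#include <stdio.h>

struct atomic_operations_t {
    unsigned int i_delay;   /* delay in samples, assumed to be nonnegative */
};

int main(void)
{
    double d_computed_delay = -6.5;   /* "slightly negative", e.g. for the LFE channel */
    struct atomic_operations_t op;
    op.i_delay = (unsigned int)(int)d_computed_delay;   /* wraps around to ~4.29e9 */

    /* Per-channel factor omitted for simplicity. */
    size_t i_overflow_buffer_size = (size_t)op.i_delay * sizeof(float);

    printf("i_delay = %u samples\n", op.i_delay);
    printf("overflow buffer = %zu bytes (~%.1f GiB)\n",
           i_overflow_buffer_size,
           i_overflow_buffer_size / (1024.0 * 1024.0 * 1024.0));
    return 0;
}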
The reason the delay goes negative is that Init() computes d_min incorrectly (it comes out too large) and passes it to ComputeChannelOperations() as the parameter d_compensation_length. From that, ComputeChannelOperations() derives d_compensation_delay, which is subtracted from each channel's delay; when the compensation is larger than the actual delay of the nearest channel, the result dips below zero.
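For illustration, this is the rough shape of the arithmetic (simplified and with made-up numbers; not the exact expressions in the VLC source): if the compensation term derived from d_min overshoots the true shortest source-to-ear distance, the nearest channel's delay comes out negative.

#include <stdio.h>

int main(void)
{
    double d_c = 340.0;                  /* speed of sound in m/s */
    unsigned int i_rate = 44100;         /* sample rate */

    /* d_min was overestimated, so the compensation length is too long. */
    double d_compensation_length = 2.05;
    double d_compensation_delay = d_compensation_length / d_c * i_rate;

    /* Actual shortest source-to-ear path for some channel. */
    double d_distance = 2.0;
    double d_delay = d_distance / d_c * i_rate - d_compensation_delay;

    printf("computed delay = %f samples\n", d_delay);   /* slightly negative */
    return 0;
}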
I think the best way to fix this is to get rid of d_min, d_compensation_length, and d_compensation_delay altogether. Instead of trying to compute them up front, wait until all the delays have been computed. Then, at the end of Init(), just before computing how big the overflow buffer needs to be, run through all the delays and find the smallest one, the same way Init() currently finds the largest. Next, update all the previously computed delays by subtracting the smallest delay from each of them, so the smallest becomes zero and none can go negative. Finally, use the updated largest delay to size the overflow buffer.
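Here is a minimal sketch of what that normalization pass could look like, using the names from the description above (the function name and its parameters are hypothetical, and the surrounding Init() context is omitted):

#include <limits.h>
#include <stdio.h>

struct atomic_operations_t {
    unsigned int i_delay;   /* in samples; nonnegative by construction after the fix */
};

size_t NormalizeDelaysAndSizeBuffer(struct atomic_operations_t *p_atomic_operations,
                                    unsigned int i_nb_atomic_operations,
                                    unsigned int i_nb_channels)
{
    unsigned int i_min_delay = UINT_MAX;
    unsigned int i_max_delay = 0;

    /* First pass: find the smallest raw delay. */
    for (unsigned int i = 0; i < i_nb_atomic_operations; i++)
        if (p_atomic_operations[i].i_delay < i_min_delay)
            i_min_delay = p_atomic_operations[i].i_delay;

    /* Second pass: shift every delay so the smallest becomes zero,
       and remember the new largest delay. */
    for (unsigned int i = 0; i < i_nb_atomic_operations; i++) {
        p_atomic_operations[i].i_delay -= i_min_delay;
        if (p_atomic_operations[i].i_delay > i_max_delay)
            i_max_delay = p_atomic_operations[i].i_delay;
    }

    /* The overflow buffer only needs to cover the largest normalized delay. */
    return (size_t)i_max_delay * i_nb_channels * sizeof(float);
}

int main(void)
{
    /* Example: raw delays of 130, 7 and 42 samples, two output channels. */
    struct atomic_operations_t ops[] = { {130}, {7}, {42} };
    size_t size = NormalizeDelaysAndSizeBuffer(ops, 3, 2);
    printf("delays: %u %u %u, buffer: %zu bytes\n",
           ops[0].i_delay, ops[1].i_delay, ops[2].i_delay, size);
    return 0;
}

Since the compensation subtraction is gone, every raw delay is nonnegative, so subtracting the minimum keeps them all nonnegative and the unsigned i_delay field is safe again.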