Hello, here is my question:
I've opened an MPEG-2 video stream from an Ethernet input, and the program shows the content correctly only after one or two minutes. Before that, the image is pixelated. Does the program behave this way because it is trying all the codecs it implements and, once it finds the best one, switches to it?
If the program works like this, I'd like to know how it decides to discard a codec, and which function (in which .c file) it calls in this case to verify whether a codec works correctly.
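To illustrate what I mean, here is a minimal sketch, assuming (I'm not sure) that the program is built on FFmpeg's libavformat; the multicast URL is just a made-up example:

#include <libavformat/avformat.h>

int main(void)
{
    AVFormatContext *fmt_ctx = NULL;
    const char *url = "udp://239.0.0.1:1234"; /* hypothetical stream address */

    /* Opening the input is where libavformat reads an initial chunk of
     * the stream and scores the registered demuxers against it
     * (the "probing" I am asking about). */
    if (avformat_open_input(&fmt_ctx, url, NULL, NULL) < 0)
        return 1;

    /* This reads some packets to identify the codec parameters of each
     * stream; until enough data has arrived (for MPEG-2, a sequence
     * header and an intra frame), a clean picture cannot be decoded. */
    if (avformat_find_stream_info(fmt_ctx, NULL) < 0) {
        avformat_close_input(&fmt_ctx);
        return 1;
    }

    av_dump_format(fmt_ctx, 0, url, 0); /* print the detected streams */
    avformat_close_input(&fmt_ctx);
    return 0;
}

If probing works like this, I suppose the decision would happen inside avformat_open_input() and avformat_find_stream_info() rather than in a single codec-verification file, but please correct me if I misunderstand.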
Thank you very much.