I'm writing to ask how the resolution downscaling algorithm works. In short, when converting a 1280x1024 video to 640x512 with a scale factor of 0.5, does the resulting video just throw away 75% of the data (keeping one pixel from each 2x2 block), or is each output pixel some average of the input pixels?
I'm using these videos to train a neural network, and less noisy video is useful. If the scaling just throws away data, then it's probably easier to record the video at the smaller resolution to begin with. If it averages 4 input pixels to produce each pixel in the smaller video, then I expect the smaller video to be higher quality (less noisy), as in the sketch below.
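To make the two cases I'm asking about concrete, here is a minimal single-frame sketch in NumPy (my own illustration, not any particular tool's actual implementation): `decimate` keeps 1 of every 4 pixels, while `box_average` takes the mean of each 2x2 block. With independent per-pixel noise, averaging 4 samples should cut the noise standard deviation roughly in half, which is why I'd expect the averaged version to look cleaner.

```python
import numpy as np

def decimate(frame: np.ndarray) -> np.ndarray:
    """Nearest-neighbor-style 2x downscale: keep one pixel per 2x2 block,
    discarding the other three (75% of the data)."""
    return frame[::2, ::2]

def box_average(frame: np.ndarray) -> np.ndarray:
    """Box-filter 2x downscale: each output pixel is the mean of a 2x2
    input block, which averages out uncorrelated per-pixel noise."""
    h, w = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Hypothetical example frame: flat gray plus Gaussian sensor noise.
rng = np.random.default_rng(0)
clean = np.full((1024, 1280), 128.0)
noisy = clean + rng.normal(0.0, 10.0, clean.shape)

for name, fn in [("decimate", decimate), ("box_average", box_average)]:
    out = fn(noisy)
    # Residual noise after downscaling: averaging 4 independent samples
    # should reduce the standard deviation by about sqrt(4) = 2.
    print(name, out.shape, round(float(out.std()), 2))
```

Running this, the decimated frame keeps the full input noise (std around 10), while the box-averaged frame comes out at roughly half that, which is the quality difference I'm asking about.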
Thanks!