Why does it use swscale when it doesn't need to resize or convert chroma?

Posted: 07 Nov 2016 07:57
by oviano
For example, I get this output:

swscale debug: 1920x1080 (1920x1090) chroma: I420 -> 1920x1090 (1920x1090) chroma: I420 with scaling using Bicubic (good quality)
core debug: using video filter module "swscale"
core debug: Filter 'Swscale' (0x7f4888001838) appended to chain
core debug: original format sz 1920x1090, of (0,0), vsz 1920x1080, 4cc I420, sar 1:1, msk r0x0 g0x0 b0x0

Just curious whether the above wastes any CPU. Is it related to having to rescale the height from 1080 to 1090 to suit some alignment requirement?

Or maybe it does nothing and doesn't matter?
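
For reference, here's roughly how my vmem format callback is wired up (a minimal sketch: only the callback signature and the registration call are fixed by the libvlc API; the pitch/line arithmetic and buffer count are my own choices):

#include <string.h>
#include <vlc/vlc.h>

/* Registered with: libvlc_video_set_format_callbacks(mp, format_cb, NULL); */
static unsigned format_cb(void **opaque, char *chroma,
                          unsigned *width, unsigned *height,
                          unsigned *pitches, unsigned *lines)
{
    /* For the stream above, vmem hands in 1920 x 1090 here: the aligned
       buffer height, not the 1080 visible lines. */
    memcpy(chroma, "I420", 4);
    pitches[0] = *width;                  /* Y plane   */
    pitches[1] = pitches[2] = *width / 2; /* U/V planes */
    lines[0] = *height;
    lines[1] = lines[2] = *height / 2;
    return 1; /* one picture buffer for this sketch */
}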

Thanks!

-Oliver

Re: Why does it use swscale when it doesn't need to resize or convert chroma?

Posted: 07 Nov 2016 17:23
by Rémi Denis-Courmont
Well, it does need to. The video output requests 1090 lines, and the source has 1080 lines.

It could be a bug in the video output, but meanwhile, scaling is necessary.

Re: Why does it use swscale when it doesn't need to resize or convert chroma?

Posted: 07 Nov 2016 20:18
by oviano
Well, the problem seems to be in vmem, and the code doesn't really make a lot of sense.

It passes "width" and "height" down to the setup function, but what I'm finding is that in my sources width x height is 1920 x 1090 while visible width x visible height is 1920 x 1080.

The problem is that after calling setup it sets "visible width" to "width" and "visible height" to "height", which swscale later detects as a reason to scale. If I remove those lines, it doesn't load swscale when the chroma is the same, which is what I would have expected.
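
Here is the relevant logic as I read it (paraphrased from modules/video_output/vmem.c, not a verbatim copy):

/* The full buffer size, not the visible size, goes to the callback: */
sys->count = sys->setup(&sys->opaque, chroma,
                        &fmt.i_width, &fmt.i_height,   /* 1920 x 1090 */
                        sys->pitches, sys->lines);
/* ...and afterwards the visible area and offsets are overwritten: */
fmt.i_x_offset = fmt.i_y_offset = 0;
fmt.i_visible_width  = fmt.i_width;    /* 1080 visible lines become 1090 */
fmt.i_visible_height = fmt.i_height;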

I would have thought the logic needs to be something more like this:

- pass visible width and height down to setup_cb
- if these come back unchanged then don't change anything
- if a different visible width and height are returned, then set width and height to visible width and height.

Either that, or pass down all of width, height, visible width and visible height (and maybe x_offset and y_offset too, which are also being explicitly set to zero), which might be cleaner. A sketch of the first option is below.
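
An untested sketch of the first option (the fmt/sys names just mirror what's in vmem.c; the rest is mine):

/* Hand the visible size to the callback and only touch the format
   if the user actually changes it. */
unsigned w = fmt.i_visible_width;   /* 1920 */
unsigned h = fmt.i_visible_height;  /* 1080 */

sys->count = sys->setup(&sys->opaque, chroma, &w, &h,
                        sys->pitches, sys->lines);

if (w != fmt.i_visible_width || h != fmt.i_visible_height) {
    /* The user asked for a different size; make the whole frame match. */
    fmt.i_width  = fmt.i_visible_width  = w;
    fmt.i_height = fmt.i_visible_height = h;
    fmt.i_x_offset = fmt.i_y_offset = 0;
}
/* Otherwise leave the format alone, so swscale isn't loaded when the
   chroma already matches. */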

Re: Why does it use swscale when it doesn't need to resize or convert chroma?

Posted: 07 Nov 2016 20:24
by Rémi Denis-Courmont
As documented, vmem is inefficient. That is pretty much inherent in its API design.

You must use window handles or write your own video output plugin if you want efficiency.
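
For example (a minimal sketch of the window-handle route; the media path and the native window handle are placeholders, and error handling is omitted):

#include <stdint.h>
#include <vlc/vlc.h>

static void play_in_window(void *native_handle, const char *path)
{
    libvlc_instance_t *vlc = libvlc_new(0, NULL);
    libvlc_media_t *media = libvlc_media_new_path(vlc, path);
    libvlc_media_player_t *mp = libvlc_media_player_new_from_media(media);
    libvlc_media_release(media);

#if defined(_WIN32)
    libvlc_media_player_set_hwnd(mp, native_handle);      /* HWND */
#elif defined(__APPLE__)
    libvlc_media_player_set_nsobject(mp, native_handle);  /* NSView * */
#else
    /* X11 drawable ID */
    libvlc_media_player_set_xwindow(mp, (uint32_t)(uintptr_t)native_handle);
#endif

    libvlc_media_player_play(mp);
}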

Re: Why does it use swscale when it doesn't need to resize or convert chroma?

Posted: 07 Nov 2016 20:32
by oviano
Rémi

I get this, and maybe longer term I'll have a go at writing some kind of OpenGL plugin that I can hook into my SDL code, but as mentioned elsewhere I'm having success with this method for my usage: I have no problem decoding 1080p HEVC at 60 frames per second on moderate hardware.

The stuff above is, as much as anything else, about making sure everything is as good as it can be for *this* method, which is presumably a good thing for people who for various reasons can't use window handles or don't have the knowledge to write a custom video output plugin?

Or to put it another way: if there is a bug in vmem causing swscale to be invoked unnecessarily, you'd want that fixed, correct?

Regards

Oliver

Re: Why does it use swscale when it doesn't need to resize or convert chroma?

Posted: 07 Nov 2016 20:40
by Rémi Denis-Courmont
vmem cannot distinguish between the visible resolution, the video resolution and the buffer resolution. In the case above, for instance, the buffer is 1920x1090 while the visible area is 1920x1080. In most cases, this requires memory copying or scaling.

Re: Why does it use swscale when it doesn't need to resize or convert chroma?

Posted: 07 Nov 2016 20:53
by oviano
Yes, and my suggestion is to enhance vmem so that it can differentiate, by passing these parameters down to the format_cb so that the API user can adjust them if necessary. If they don't, vmem won't go making changes that end up invoking swscale unnecessarily.
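
Concretely, something along these lines (purely hypothetical; no such callback exists in the current API):

/* Hypothetical extended setup callback. This does NOT exist in libvlc
   today; it's just the shape of what I'm suggesting. */
typedef unsigned (*vmem_setup_ex_cb)(void **opaque, char *chroma,
        unsigned *width, unsigned *height,                  /* buffer: 1920 x 1090 */
        unsigned *visible_width, unsigned *visible_height,  /* 1920 x 1080 */
        unsigned *x_offset, unsigned *y_offset,
        unsigned *pitches, unsigned *lines);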