I am working on a video-intensive application and have chosen to use libvlc for playback. Technically I'm interacting with libvlc through the Vlc.DotNet wrapper, but I don't think my issues are wrapper-related. The app can have as many as eight 1080p videos on screen at once, and the content being played is 30 fps. A user can either play the videos or use a slider or mouse wheel to scrub through them. The videos are relatively small, so I am able to load them completely into memory before passing them to VLC. Everything is synced up, so all eight play or seek at once.
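To make the setup concrete, here is a minimal sketch of that kind of multi-player arrangement using Vlc.DotNet's `VlcMediaPlayer` — simplified, not my actual code; the libvlc directory and file paths are placeholders, and the real app feeds media from memory rather than from files on disk:

```csharp
using System.IO;
using Vlc.DotNet.Core;

class MultiPlayerSketch
{
    static void Main()
    {
        // Placeholder path to the native libvlc binaries.
        var libDirectory = new DirectoryInfo(@"C:\Program Files\VideoLAN\VLC");
        var players = new VlcMediaPlayer[8];

        for (var i = 0; i < players.Length; i++)
        {
            players[i] = new VlcMediaPlayer(libDirectory);
            // Placeholder files; the real app loads each clip fully into memory first.
            players[i].SetMedia(new FileInfo($@"C:\videos\clip{i}.mp4"));
        }

        // Play and seek commands are issued to all players together to keep them in sync.
        foreach (var p in players) p.Play();
        foreach (var p in players) p.Time = 5000; // jump every player to the 5 s mark
    }
}
```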
This all works surprisingly smoothly on my laptop when running on the integrated Intel HD Graphics 530. Playback is not perfectly smooth but completely acceptable for my use, and seeking is basically real time. With default settings, playback generally shows around 50% GPU load, and rapid seeking can push it to 80% or so. I don't fully understand the occasional skips during playback at only 50% GPU load, but as I said, it definitely performs well enough.
Things are massively different when testing on a discrete GPU (Quadros in both of my test machines). On my laptop, I force the app to use the Quadro and performance is at least an order of magnitude worse: GPU usage jumps to 100% immediately for any video operation, playback is terrible, and seeking is basically unusable. I was hoping it was just an issue with my laptop, but testing on a workstation with a Xeon E5, 40 GB of RAM, and a Quadro M2000 yielded basically identical results to the laptop.
I've experimented with several of libvlc's performance-related command-line arguments and have actually seen noticeable improvements in the GPU usage numbers, but the videos themselves still behave much the same as when I pass no arguments at all.
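For reference, this is how I'm passing options through the wrapper — the specific flags below are examples of the kind of thing I've tried (hardware-decode and output-module toggles), not a recommended set:

```csharp
using System.IO;
using Vlc.DotNet.Core;

class OptionsSketch
{
    static void Main()
    {
        // Example libvlc flags tried while testing; values are illustrative only.
        var options = new[]
        {
            "--avcodec-hw=none",   // force software decoding, to rule out broken HW accel
            "--vout=direct3d11",   // pin a specific video output module
            "--file-caching=300"   // vary caching (ms) while testing seek behavior
        };

        // Placeholder path to the native libvlc binaries.
        var libDirectory = new DirectoryInfo(@"C:\Program Files\VideoLAN\VLC");
        var player = new VlcMediaPlayer(libDirectory, options);
    }
}
```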
I'm on the latest version from the official NuGet package (3.0.4), with video drivers, Windows updates, etc. all current.
Is there some obvious setting I'm missing that would cause this?