Hehe, thanks for the quick reply!
That won't be possible given the way VLC video filters work (they work on the decoded frame, the video output is responsible for the scaling if you have a decent graphics card ... which you do)
Argh! Oh no!
What a bummer. That means that pixelated movies (i.e., low-res, you know, "700MB type" movies) will only look a little better. (The codec pixelation/smoothing gets broken up, but the noise will become apparent, since each frame pixel covers about 2.5x2.5 physical pixels on a normal PC screen.)
So basically, always change the grain for every new frame? (That's already what it's doing.)
Perfect! Don't change a thing
So noise amplitude should depend on the base luminosity of a pixel? Or is that something different? (I guess the grain was due to irregularities in the film used when shooting a movie, and in the reproduction process. That must mean the grain isn't really dependent on the frame's content ... except maybe the amount of light it was exposed to.)
I read up a bit, and apparently grain is the difference in density (manufacturing processes aren't perfect) of the silver halide layer. The grain becomes more apparent when more photons hit that layer, such as when you overexpose a poorly lit scene. Filmmakers do this for effect (I know this from my 3D course at uni): they stop down the aperture and/or shorten the shutter time so that fewer photons make it onto the film, and then push the exposure back up so you can see something.
Since only a few photons actually affected the layer, here and there, you get a more uneven (exaggerated) result.
This density difference becomes grain when a projector shines a constant light level through the film and the layer: since the layer has uneven density, more or less light gets blocked from place to place.
What this means is that when an .avi has become pixelated and codec-smoothed, the grain isn't there. To reproduce it, you must emulate it by giving dark areas more grain (visually, maybe not mathematically) and light areas less (sketch below). You can see that constant grain (as in your picture of the guy with the katana) gives no visible grain on his body (dark) but grain in the sky.
(If someone is a film expert, please correct me. But I think I got it roughly correct.)
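To make the "more grain in dark areas" idea concrete, here is a minimal sketch of one way to modulate noise amplitude by luminance, assuming an 8-bit planar luma buffer. The function name and the linear falloff curve are made up for illustration, not anything from the actual filter:

    #include <stdint.h>
    #include <stdlib.h>

    /* Sketch of luminance-dependent grain: darker pixels get a larger
     * noise amplitude, brighter pixels a smaller one. The names and the
     * scaling curve are illustrative assumptions. */
    void add_luma_dependent_grain(uint8_t *luma, int n_pixels, int max_amp)
    {
        for (int i = 0; i < n_pixels; i++) {
            /* Amplitude falls off linearly from max_amp at black (0)
             * to roughly max_amp/4 at white (255). */
            int amp = max_amp - (3 * max_amp * luma[i]) / (4 * 255);
            int noise = (amp > 0) ? rand() % (2 * amp + 1) - amp : 0;
            int v = luma[i] + noise;
            luma[i] = v < 0 ? 0 : v > 255 ? 255 : v;
        }
    }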
Currently it's a fixed size (for speed reasons), but changing the grain size would be really easy, so I'll probably make it configurable (at least choosing from a list of presets).
Framerate is currently set to the original movie's framerate and I don't intend to change that (it would imply a completely different type of plugin, which I don't like).
As for % of grain effect, I guess you mean something like the standard deviation of the grain's randomness compared to the allowed luminosity range? (Movies allow 256 levels of luminance, which is basically luminosity. I'm currently using random numbers in the [-15;15] range for the noise, followed by a convolution with a 5x5 Gaussian, which is what 'sets' the grain size mentioned earlier, before applying that noise to the original frame. The two chroma planes are left untouched.)
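For reference, here's roughly what that description looks like in C. This is just a sketch reconstructing the steps above; the buffer layout, function name, and the integer kernel weights are my assumptions, not the plugin's actual source:

    #include <stdint.h>
    #include <stdlib.h>

    /* 5x5 Gaussian kernel (integer approximation); weights sum to 256. */
    static const int kernel[5][5] = {
        { 1,  4,  6,  4, 1 },
        { 4, 16, 24, 16, 4 },
        { 6, 24, 36, 24, 6 },
        { 4, 16, 24, 16, 4 },
        { 1,  4,  6,  4, 1 },
    };

    void add_grain(uint8_t *luma, int width, int height)
    {
        /* 1. Fresh random noise in [-15, 15] for every frame. */
        int16_t *noise = malloc(width * height * sizeof *noise);
        for (int i = 0; i < width * height; i++)
            noise[i] = rand() % 31 - 15;

        /* 2. Convolve the noise with the 5x5 Gaussian: correlating
         * neighbouring samples is what sets the apparent grain size.
         * (Borders are skipped here for simplicity.) */
        for (int y = 2; y < height - 2; y++) {
            for (int x = 2; x < width - 2; x++) {
                int acc = 0;
                for (int ky = -2; ky <= 2; ky++)
                    for (int kx = -2; kx <= 2; kx++)
                        acc += kernel[ky + 2][kx + 2]
                             * noise[(y + ky) * width + (x + kx)];

                /* 3. Add the smoothed noise to the luma plane, clamped
                 * to the valid 8-bit range; chroma is left untouched. */
                int v = luma[y * width + x] + acc / 256;
                luma[y * width + x] = v < 0 ? 0 : v > 255 ? 255 : v;
            }
        }
        free(noise);
    }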
More comments are always welcome
If you have knowledge of other algorithms that can be used to add grain to a movie (or precise descriptions of the physical phenomena involved in the grain's creation in old movies), I'd be glad to read about them to improve the current algorithm.
To me, the main point of grain size is to have it mask the frame's pixelation. The usual movies are around 600x350, which makes each film pixel about 2x2 to 3x3 physical pixels on a 1280x1024 screen (1280/600 ≈ 2.1 horizontally, 1024/350 ≈ 2.9 vertically). With the usual filtering that's OK for these movies, but any lower resolution and you can see the pixelation clearly. If it's impossible to do grain smaller than the .avi's resolution, I don't see any need for a grain size setting; I mainly meant some way of adjusting it if the grain gets too big, such as when the .avi is low-res.
% is simply "amount of graininess". I bet there are as many methods as there are programmers. I'm not sure luminosity covers it all; the main thing, I think, is that grain looks visually equal in both dark and light areas.
I don't intend these ideas as telling you what to do; I hate it when someone does that to me.
They're just ideas. Here's what I'd try if I knew how to code stuff for VLC.
I always think of the most luxurious, highest-quality solution first; if it can't be coded (the required PC specs are too high), or it's too much work, or there's a niftier solution, then that alternative deserves more consideration. I just think this filter could make the average user, with his 700MB-type .avis, look at VLC and be blown away by how much better his small movie looks there than in other players!
I feel it definitely needs to be done at the screen-pixel level. If VLC renders with an engine like DirectDraw or OpenGL or similar, couldn't it easily be applied to the screen buffer? I don't know.
1) Make one or more pixel masks in the form of an [optionally] repeated huge texture, say 2048x2048. I'd bring a frame into Photoshop and try the built-in filters first until the frame looks good and less pixelated with the texture overlaid at a certain scale. This is just to build antipixelation into the grain.
2) I'd then use this texture as a layer mask for a noise-filled texture at screen/window pixel size. This texture is applied with a subtractive blend to the zoomed frame. Black pixels can't go below rgb(0,0,0), so only lighter areas are affected.
3) I'd use the same texture (the antipixel pattern) inverted, with an additive blend, to make sure there's life in the black areas too. White pixels can't get any whiter.
So basically: make your noise pattern as usual, multiply it by the antipixel pattern so the noise breaks up the pixelation, and subtract that from the frame; then invert the pattern, multiply by the same noise, and add that (see the sketch below).
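Per pixel, that subtract-then-add blending might look roughly like this in C. It's a sketch under my own assumptions: frame, noise, and mask are made-up names for the zoomed frame, the grain texture, and the antipixel pattern, all 8-bit and sampled at screen resolution:

    #include <stdint.h>

    static inline uint8_t clamp_u8(int v)
    {
        return v < 0 ? 0 : v > 255 ? 255 : v;
    }

    void apply_masked_grain(uint8_t *frame, const uint8_t *noise,
                            const uint8_t *mask, int n_pixels)
    {
        for (int i = 0; i < n_pixels; i++) {
            /* Step 2: noise masked by the pattern, subtractive blend.
             * Clamping at 0 means black areas stay untouched. */
            int v = frame[i] - (noise[i] * mask[i]) / 255;
            v = clamp_u8(v);

            /* Step 3: same noise masked by the inverted pattern,
             * additive blend. Clamping at 255 protects highlights,
             * and dark areas get some life too. */
            v = v + (noise[i] * (255 - mask[i])) / 255;
            frame[i] = clamp_u8(v);
        }
    }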
I write too much, as usual.
Tell me if this is impossible. If it's possible, I'd be happy to do some Photoshop experiments.
And tell me if it's an old idea, but I think this might be new?