When square pixels aren’t square
15 points by eBPF
Makes me want to bust out the old "pixels are never squares" drum.
I think this PDF would do a much better job if it had actual examples of what the output of various reconstruction filters looks like; there isn't really an illustration of what makes one reconstruction filter better than another.
(The bit about displays also feels extremely outdated.)
I wonder if it’s being introduced by a processing step somewhere? I don’t understand why, but I don’t have to – I’m only displaying videos, not producing them.
I'm curious now. Why bother with this? Why not encode and store the frames in the resolution they were meant to be viewed at? I understand that it enables some backwards compatibility, but why would new videos make use of this?
I can see how you might get these ratios when filming with anamorphic lenses, but I'm not sure what would lead to this sort of thing in a normal processing pipeline. Do cinema productions set the aspect ratio through the lens these days?
I don't know much about modern video cameras, but I assume you can't always find (or afford) a camera that has all the features you want and also has a sensor with the same aspect ratio you want to film in. Say your camera has a sensor with a 4:3 aspect ratio, but you want to film in 16:9. You can either use a normal lens and then crop the resulting video to 16:9, discarding lots of data, or you can use an anamorphic lens and keep all the pixels in exchange for having to deal with SAR vs. DAR.
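The arithmetic is just DAR = SAR × (width / height). A minimal Python sketch of the example above (the 1440x1080 storage size is an assumption; it's the common HDV case of exactly this 4:3-sensor-shooting-16:9 setup):

    from fractions import Fraction

    def required_sar(width, height, dar):
        # DAR = SAR * (width / height)  =>  SAR = DAR * height / width
        return dar * Fraction(height, width)

    # 1440x1080 stored pixels squeezed from a 16:9 scene need a SAR
    # of 4:3 so the player stretches them back out horizontally.
    print(required_sar(1440, 1080, Fraction(16, 9)))  # -> 4/3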
Some video codecs require that the height and width of the encoded frames be multiples of some specific number. So sometimes you have to stretch (or squash) the original frames to encode them. Then you set the display aspect ratio so the player knows to un-stretch/squash the video to play at the original, intended aspect ratio. The alternative would be to crop the video.
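A rough sketch of that bookkeeping, assuming the encoder stretches up to the required multiple rather than padding, as described above (the multiple of 16 matches H.264's 16x16 macroblocks; the 1998x1080 source size is a made-up odd dimension):

    import math
    from fractions import Fraction

    def encode_geometry(src_w, src_h, multiple=16):
        # Round each dimension up to the codec's required multiple,
        # then compute the SAR the container must carry so players
        # un-stretch back to the original aspect ratio.
        enc_w = math.ceil(src_w / multiple) * multiple
        enc_h = math.ceil(src_h / multiple) * multiple
        sar = Fraction(src_w, src_h) / Fraction(enc_w, enc_h)
        return enc_w, enc_h, sar

    print(encode_geometry(1998, 1080))
    # -> (2000, 1088, Fraction(629, 625)): pixels slightly narrower
    #    than square, so the stretched frame plays back at 1998:1080.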