A video records the light intensity emitted and/or reflected by the objects in a scene, as observed by a camera. Only those wavelengths to which the camera is sensitive will be captured.
A video is a sequence of images shown in rapid succession. These images are called frames, and the number of images shown per second is the frame rate, measured in frames per second (fps). The higher the frame rate, the smoother and more realistic the motion appears. For example, a 10-second clip at 30 fps contains 300 frames.
The color value at any point in a video frame records the emitted or reflected light at a particular 3-D point in the observed scene. In a tristimulus color representation, a color video is specified by three functions, each describing one color component. A video in this format is known as component video.
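To make the component representation concrete, here is a minimal sketch assuming an 8-bit RGB video stored as a NumPy array; the names and dimensions are illustrative, not taken from any particular system.

```python
import numpy as np

# Illustrative dimensions: 30 frames of 640x480 RGB video.
frames, height, width = 30, 480, 640
video = np.zeros((frames, height, width, 3), dtype=np.uint8)

# Each color component is one scalar function of (frame, row, column),
# which is exactly the "three functions" of a tristimulus representation.
red = video[..., 0]
green = video[..., 1]
blue = video[..., 2]
```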
To conduct video processing, it is necessary to define an objective criterion that measures the difference between the original and processed signals. This is especially important, for example, in video coding applications, where one must measure the distortion caused by compression. Ideally, such a measure should correlate well with the perceived difference between two video sequences. Most video processing systems are designed to minimize the mean square error (MSE) between the two sequences: for a component with N samples, the MSE is the average of the squared differences, MSE = (1/N) * sum((original - processed)^2). For video, the MSE is computed separately for each color component.
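As a concrete illustration, here is a minimal sketch of the per-component MSE, assuming both sequences are NumPy arrays of shape (frames, height, width, 3); the function name and array layout are assumptions made for this example.

```python
import numpy as np

def mse_per_component(original: np.ndarray, processed: np.ndarray) -> np.ndarray:
    """Return the MSE of each color component, averaged over all
    frames and pixels. Both inputs are assumed to have shape
    (frames, height, width, 3)."""
    diff = original.astype(np.float64) - processed.astype(np.float64)
    # Average the squared error over frames, rows, and columns,
    # keeping the last (color-component) axis separate.
    return np.mean(diff ** 2, axis=(0, 1, 2))
```

A lower MSE means the processed sequence is numerically closer to the original, though, as noted above, this does not always match the perceived difference.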
In all video processing applications, from production and storage to transmission and reception, analog video technology is quickly being replaced by its digital counterpart. A typical studio video processing chain accepts uncompressed video, manipulates it, and either stores it on a server or compresses it for transmission. Typical processing functions include scaling, de-interlacing, chroma resampling, color space conversion, and mixing.
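As one example of the processing functions listed above, the sketch below performs color space conversion from RGB to YCbCr using the standard BT.601 full-range equations; the input layout (an 8-bit height-by-width-by-3 NumPy array) is an assumption made for this example.

```python
import numpy as np

def rgb_to_ycbcr(frame: np.ndarray) -> np.ndarray:
    """Convert an 8-bit RGB frame (H, W, 3) to full-range YCbCr
    using the BT.601 coefficients."""
    r = frame[..., 0].astype(np.float64)
    g = frame[..., 1].astype(np.float64)
    b = frame[..., 2].astype(np.float64)
    # Luma carries brightness; Cb and Cr carry color differences,
    # offset by 128 so they fit in an unsigned 8-bit range.
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.clip(np.stack([y, cb, cr], axis=-1), 0, 255).astype(np.uint8)
```

Separating luma from chroma in this way is what makes chroma resampling possible: the eye is less sensitive to color detail than to brightness, so the Cb and Cr components can be stored at reduced resolution.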