You’ve probably stumbled upon some of these and we’re going to use them in our guides and tutorials as well. Just to make sure you’re always able to follow, here are some of the most important photography terms explained:
- Aperture – the opening in your lens which lets light pass from the outside world to the camera’s sensor. The aperture is defined by an “f-number” like f2.8 or f16. The smaller the number, the larger the opening, and the larger the number, the smaller the opening (counterintuitive at first). The larger the opening, the more light gets through to the sensor. Besides giving you more light, a larger opening also gives you a shallower depth of field (see below). Many action and drone cameras have a fixed aperture and control exposure only through shutter speed and ISO (see below).
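To make the f-number arithmetic a bit more concrete, here is a small illustrative Python sketch (the function name and the reference f-number are our own choices, not part of any camera API). It relies on the fact that the aperture diameter is the focal length divided by the f-number, so the light-gathering area scales with 1 / f-number²:

```python
def relative_light(f_number, reference=2.8):
    """Light gathered at f_number, relative to a reference f-number.

    The aperture diameter is focal_length / f_number, so the
    light-gathering area (and thus the light reaching the sensor)
    scales with 1 / f_number ** 2.
    """
    return (reference / f_number) ** 2

print(relative_light(2.8))  # 1.0 (the reference)
print(relative_light(4.0))  # ~0.49, roughly half the light (about one stop less)
print(relative_light(16))   # ~0.03, far less light, but a much deeper depth of field
```

This is why stopping down from f2.8 to f16 in the same scene forces a much longer shutter speed or a higher ISO.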
- Blur – a blurry or “non-sharp” image has two main causes: either wrong focus (see below) or motion (of the camera, the subject, or both).
- Depth of field – also called focus range, is the distance between the nearest and farthest objects in a scene that appear acceptably sharp in an image.
- ISO sensitivity – how sensitive your sensor is to light, normally given in numbers like ISO100, ISO200, … With higher ISO, your image gets brighter, but you also get more image noise (see below).
- Exposure – the amount of light reaching the camera’s sensor, as determined by shutter speed, aperture and scene luminance. The higher the exposure, the brighter the image.
- Focus – the distance between the camera and the objects in the scene which appear sharpest. Many action and drone cameras have a fixed focus and a very large depth of field, meaning that – except for very close objects – everything you capture is in focus.
- Flickering – can occur in video, when adjacent frames are exposed differently. Usually happens with timelapses and in certain lighting conditions.
- FPS (frames per second) – the number of individual frames in one second of video material. For playback, the most common FPS values are 24, 25, 29.97 and 30. At around 24 frames per second, motion starts to appear fluent to the human eye: a video with significantly fewer FPS will look like a slideshow, whereas a video with 24 FPS or more will look, well, like an actual video where you can’t make out the individual frames. Many cameras allow you to record video at a higher FPS (like 60 or 120), which gives you the freedom to slow the video down later on, creating a slow-motion effect while still having the 24 or 25 frames per second necessary for fluent motion. Example: if you record at 120 FPS, you can slow down your video by a factor of 4.8 and still get a fluent 25 FPS clip.
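The slow-motion arithmetic from the example above can be sketched in a few lines of Python (the function name and FPS pairs are just illustrations):

```python
def max_slowdown(recording_fps, playback_fps):
    """Maximum slow-motion factor that still delivers playback_fps
    distinct frames per second of the slowed-down clip."""
    return recording_fps / playback_fps

print(max_slowdown(120, 25))  # 4.8, the example from the text
print(max_slowdown(60, 30))   # 2.0, i.e. half-speed playback
```

Slowing down further than this factor means the player has to duplicate (or interpolate) frames, and the footage starts to look choppy.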
- Noise (image noise, that is) – also known as grain, image noise consists of small dots of varying brightness and color scattered through your image. Noise is caused by high ISO settings, which are mainly used in low-light conditions. If you normally shoot in broad daylight, you probably haven’t noticed noise in your images – but if you sometimes shoot at night or indoors, noise can become quite obvious.
- Quality – this is a very subjective and broad term. When it comes to photos and videos, it normally refers to the amount of compression which is applied when a file is stored to disk. Higher quality means less compression and results in a larger file; in the same way, lower quality means more compression. Compression (in photo, video and audio) is normally “lossy” – meaning that some of the original image data is discarded in order to achieve a smaller file size. Low compression settings are often near-lossless: although some of the original image data is lost, the perceived decrease in image quality is non-existent. With increasing compression, the decrease in quality becomes more and more noticeable, eventually creating visible artifacts like a lack of detail, pixelated edges or square blocks of solid color instead of smooth gradients.
It’s important to note that it’s impossible to bring back any lost detail: once you’ve saved a file at a low quality setting, opening and re-saving it at a higher quality setting won’t bring any improvement (though it can lead to a bigger file size). If your image/video editing workflow involves several steps, we recommend sticking with a high quality setting for all intermediate steps (you can delete these intermediate files later on).
- Resolution – the number of pixels in your photo or video. For photos it is normally given as a megapixel count (e.g. 18MP), for video usually as one of the standard video resolutions like VGA (640x480), Full HD (1920x1080) or 4K (3840x2160).
Resolution, quality and FPS are completely independent – this means that, for example, a Full HD video with little compression can look a lot better and show more detail than a 4K video with heavy compression.
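If you want to convert between pixel dimensions and megapixel counts yourself, the math is just width times height, as this small Python sketch shows (using the standard resolutions mentioned above):

```python
def megapixels(width, height):
    """Megapixel count for a given frame size in pixels."""
    return width * height / 1_000_000

print(megapixels(640, 480))    # ~0.3 MP (VGA)
print(megapixels(1920, 1080))  # ~2.1 MP (Full HD)
print(megapixels(3840, 2160))  # ~8.3 MP (4K)
```

Note how even a 4K video frame carries far fewer pixels than a typical 18MP photo.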
- Saturation – the intensity of colors in an image. A black and white image has a saturation of zero.
- Shutter speed – the time span for which light is allowed to reach the sensor until the shutter closes and a frame (either a video frame or a single photo) is saved. It is given in fractions of a second like 1/4, 1/100, 1/400, 1/2000 or in full seconds (1, 4, 10, …). The slower the shutter speed (= longer time span), the more light reaches the sensor, but the higher the risk of getting motion blur.
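Photographers usually compare shutter speeds in “stops”, where each doubling of the exposure time doubles the light reaching the sensor. A small Python sketch (the function name and example times are ours) makes this relationship explicit:

```python
import math

def stops_between(t1, t2):
    """Number of stops between two exposure times in seconds.

    A positive result means t2 lets in more light than t1;
    each stop corresponds to a doubling of the light.
    """
    return math.log2(t2 / t1)

# 1/100 s lets in 4x the light of 1/400 s, i.e. 2 stops more:
print(stops_between(1/400, 1/100))
```

So going from 1/400 to 1/100 brightens the image as much as opening the aperture by two full stops would.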
- Stabilization – the process of avoiding or reducing camera shake and vibration. The most effective method is simply using a tripod – which is obviously not always an option. Other methods are stabilized lenses and camera sensors, external gimbals (devices with typically 2 or 3 motorized axes and inertial sensors which detect and counteract motion) and software-based stabilization (performed either in post-production or during recording).
- White balance – the global adjustment of the intensities of colors (typically red, green, and blue). An important goal of this adjustment is to render specific colors – particularly neutral colors like white, grey and black – correctly.