Wednesday, April 29, 2009
Having Problems with Your HP 5500PS?
Just a quick note: if you're experiencing printing problems with your HP 5500PS and you're getting the error code 0d0004 036e013a, try clearing out your print server's hard drive. I couldn't find any information about this error code on the HP site, in the HP service manual, or anywhere else on the web, so hopefully this will help someone out!
Saturday, September 20, 2008
The Tamron AF 17-50mm F/2.8 XR Di-II LD SP ZL vs. Nikon 18-70mm f/3.5-4.5 G-AFS ED-IF DX
There have been a lot of tests comparing these lenses with other lenses, but I was hard-pressed to find any that compared them directly to each other. I know one is a constant f/2.8 zoom and the other is an f/3.5-4.5 variable-aperture zoom, but they are similar in range and similar in price. I can see someone wondering whether their trusty old 18-70mm is as good as the 'off-brand' Tamron.
I got hold of a demo Tamron 17-50 and spent a day putting the two head to head. Physically, these lenses are very similar in size. The Tamron feels weightier and very solidly put together. It is made of plastic, but then again, so is the Nikon 18-70. I have no complaints about either lens's zoom rigidity or smoothness. The Tamron has a longer lens hood, and its AF/MF switch feels a little flimsier than the Nikon's, but I wouldn't worry about it. The Tamron also has a zoom lock, which I assume is for transport, even though there is no lens creep that I can see. (I have a Sigma lens that I'd like this on, though.)
Now, what everyone cares about: the images. All of these were shot on a rock-solid tripod with a remote shutter release. I didn't correct lens distortion, and these are JPEGs straight out of the camera, shown as 100% crops. The focal point was the A.K. Dewdney book.
To be fair, the Nikon lens has been through the wringer in terms of usage, while, as mentioned, the Tamron is a demo model. And if you decide you want to buy one of these, please use my link to Amazon or Adorama. It costs you nothing, and it makes it possible for me to do more of these tests.
Up first, I decided to pit the Tamron at 17mm and f/2.8 against the Nikon at 18mm and f/3.5: both lenses wide open at their wide ends.
Image #1 is the Tamron at 17mm and f/2.8.
The second image is the Nikon at 18mm and f/3.5.
After I saw both of these on the back of the camera, I thought there must have been some mistake. Some camera twitch. Some weird thing I wasn't even thinking about. So, later that day, I ran the Nikon shot again, with an identical result.
All I can say is, holy crap. At the point of focus, the Tamron is light years sharper than the Nikon, and the Tamron was half a stop wider open too! I wonder what the corner shots are going to look like. Let's see.
This is the bottom left of the photo. The focus is still on the Dewdney book, as above.
The Tamron at 17mm and f/2.8
The Nikon at 18mm and f/3.5
Once again, you shouldn't even need the full-size image to see the results. The Tamron, even at f/2.8, is hands down better than the Nikon 18-70mm at f/3.5.
Do we even need to see the f/8 photos? I am a bit curious to see how the Nikon recovers. I did shoot the Tamron at f/3.5 just to compare apples to apples as much as possible, but really, the test above shows that it doesn't matter much.
f/8
The Tamron at 17mm and f/8
The Nikon at 18mm and f/8
The Nikon recovered quite well at f/8. In fact, there's not a whole lot between them. Maybe the Tamron is a bit more contrasty, maybe not.
How about the corners, you ask? Well, here you go!
The corner crop of the Tamron at f/8
The corner crop of the Nikon at f/8
They look similar to me. The Tamron is slightly better, but it's barely noticeable.
The f/16 and f/22 shots were nearly identical in quality, and all they really confirmed was that I needed to clean my sensor!
Friday, May 23, 2008
Balancing White Balance in our Heads
Once you understand the inner workings of your camera, you notice yet another knob/setting/button and wonder what it does. It's white balance. White balance, like many other things, has its roots in film photography. And as we will see, white balance illustrates yet another way our eyes are superior to a camera.
One way the human brain keeps you on an even keel visually is by adjusting your eyes' own white balance. If it didn't, you would see things the way a camera with no white-balancing capability does.
The big secret is... light (and white) isn't all the same color. The lights in your home, the sun, a cloudy sky, a scientific lab, a sporting arena: each produces a very different kind of light. The normal incandescent lights in your home are called tungsten lights, and they make a very warm light. Warm, in the photography world, means a color that is closer to orange than to blue. A midday outdoor scene makes a much cooler (bluer) light because of the blueness of the sky reflecting on everything. This is why heavily overcast days are warmer than cloudless days: there is no blue sky (or very little) to bathe the world in blue light.
Sunsets are typically very warm. The oranges, yellows, and salmon colors in a sunset cast a very warm light on the earth and can give everything a golden sheen.
The scientific lab normally has overhead fluorescent lights, which give off a decidedly green cast. Sodium vapor, mercury vapor, and metal halide lights (and other HID lamps) all put out different colors of light as well. It is best to evaluate each of them on its own when that time comes.
So now you know that all light isn't the same. Of course, your next logical step is to wonder how scientists classified these different colors of light. Right? ... Right?!? (crickets chirping) OK, so maybe you didn't even think about that, but it's worth taking a second to understand it. It might come in handy, and if nothing else, you can really impress the babes.
As scientists are prone to do, they wanted to put numbers to things and figure out how to calculate the world. Someone decided to put numbers to the different colors of light, and built a contraption that heated an idealized material and recorded what color it glowed as the temperature rose. Just like in a campfire (or a blowtorch), the lowest-temperature flame is yellow and orange, and the hottest flame is blue or indigo. The same is true of light in general: the lowest-temperature light is orange, and the highest-temperature light is blue.
One of the paradoxes of photography (and there are a few, if you haven't noticed already) is that light with a low color temperature is referred to as warm, while light with a high color temperature is thought of as cool. Just remember: warm = orange (like a campfire) and cool = blue (like those cool new Coors Light bottles).
The numbers on that scale represent the temperature (in kelvin) of the light source that produces each color. For example, a typical incandescent bulb is around 2870 K, which you can see from the chart is a very orange light. Daylight (and flashes) are estimated to be around 5600-5700 K, a much cooler color than the lights in your home.
A short list of other light sources:
Matches - 1,700 K
Sodium vapor - 2,100 K
Midday sunlight - 5,600 K
Xenon short arc - 6,400 K
Typical summer day - 6,500 K
Blue sky - 12,000-20,000 K
Having learned all this awesome information, you can now begin to make some educated choices about the white balance on your camera. Normally you have these settings to choose from:
Tungsten - usually represented by a light bulb
Daylight - represented by a sun
Cloudy - represented by a cloud
Shade - represented by a house with a weird triangle shade thing
Fluorescent - represented by a blinking bar looking thing.
Flash - represented by a lightning bolt
Of course, your mileage may vary and your camera might use different icons, but these are supposed to be intuitive, so in theory I shouldn't even have to point them out!
Some advanced cameras (like the Nikon D200) will let you choose your own Kelvin number for your white balance. This is useful if you know what you're doing. If you don't, stick to the picture presets until you have a firm grasp of it, or until you have some specific reason for using a custom Kelvin number.
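If you're curious how those presets relate to the Kelvin scale, here's a minimal sketch in Python (my own illustration, not anything built into a camera). The preset temperatures are rough assumptions loosely based on the chart in this post; every camera maker uses slightly different values. The function just picks whichever preset sits closest to a measured color temperature.

```python
# Rough preset temperatures, in kelvin. These are assumptions for illustration;
# your camera's actual preset values will differ.
WB_PRESETS = {
    "Tungsten": 2870,      # typical incandescent bulb (from the chart above)
    "Fluorescent": 4200,   # varies wildly by tube type
    "Daylight": 5600,      # midday sun
    "Flash": 5700,
    "Cloudy": 6500,        # typical overcast/summer day
    "Shade": 8000,         # open shade lit mostly by blue sky
}

def closest_preset(measured_kelvin):
    """Return the preset whose temperature is nearest the measured value."""
    return min(WB_PRESETS, key=lambda name: abs(WB_PRESETS[name] - measured_kelvin))

print(closest_preset(3000))   # warm household lighting -> Tungsten
print(closest_preset(12000))  # deep blue sky -> Shade
```

Nothing magical, but it shows why the presets exist: they're just named points along the same temperature scale.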
Another useful option is the white balance value you can set yourself. If you are in a room with a mixed bag of lighting (fluorescent lights overhead, sunlight streaming in a window, tungsten bulbs around a beauty/makeup mirror), you might be faced with a whole cornucopia of light colors. None of your presets is going to be 100% correct. Your only real choice, besides picking one and hoping for the best, is to measure it yourself. (If pressed, I would choose the fluorescent preset in this situation, because fluorescent light looks terrible under other white balances, and I would hope that setting is the lesser of all evils.) You could buy a color temperature (Kelvin) meter for a few thousand dollars (hey, get it from my sponsor Adorama! link on the upper right side), or you could grab a white sheet of paper and make your own white balance setting.
To take a custom white balance reading you will have to consult your manual, as each camera is different. On my Nikon D50, with the camera active, you hold down the white balance button until PRE begins to flash. You then take a photo of a white card (a white piece of paper), and the camera uses that as its white balance until you change it. You can also take a photo and then go through the camera's menu system to tell it to use that photo for white balance purposes. My Nikon D200 is a bit different in that you have to move the release-mode dial to PRE while holding the WB button down. So, as you can see, each camera differs in the details, but all will produce the same result. This ought to at least get you in the ballpark and keep those skin tones from looking like the next love interest of Captain Kirk.
Thursday, May 22, 2008
What's a grey card? And how does my camera's meter work?
One of the most misunderstood items in photography is the grey card and how to use it. Some even debate whether it's useful in the digital age. Quietly, there's also been a big debate about what grey percentage the card should be.
One person who suggests that the standard grey card percentage is off is the well-respected Thom Hogan.
His contention is that the notion of 18% grey being 'halfway between white and black' goes back to the printing industry, which determined that, visually, a block printed with 18% black coverage looked halfway between white and black. That's fine. We can assume that's true.
He also contends that Ansel Adams was the major mouthpiece many years ago convincing Kodak to use 18% for their grey cards instead of the hotly debated 15% and 12% grey.
What does all this mean to you? Well, it depends. If you go with the flow of the world, it doesn't mean a lot: 18% will get you close to a correct exposure regardless. You'll be slightly underexposed, but that's not the worst thing in the world. Of course, 18% could be the right number and all the other talk could be wrong.
I mean, can you even tell the difference between the three?
So, with that bit of uncertainty out of the way, why do you need a grey card? Your camera's meter sees in luminance. Luminance is defined as:
Optics. the quantitative measure of brightness of a light source or an illuminated surface, equal to luminous flux per unit solid angle emitted per unit projected area of the source or surface.
What does that mean? Well, nearly everything reflects light: light from the sun, light from a flash, light from a street lamp. Before the advent of color film, black-and-white film recorded only the luminance of objects; red and green could have the same luminance and would look identical in a black-and-white photo. This is similar to what your camera's meter does when it determines the luminance of a scene. It takes all the light in the scene (assuming you're using matrix metering) and figures out how to make the exposure fit the typical scene programmed into its memory. According to generally accepted thought, most scenes in nature average out to a luminance of about 18% grey. (Thom contends that the Japanese engineers who design and build Nikons disagree, but that's beyond the scope of this entry.)
Now, will every scene you photograph average out to 18% grey? NO! Will the camera try to tell you that you should expose every scene to 18% grey? YES! The camera is a 'dumb device'. It can't think. It can't evaluate a scene. It doesn't even know what the subject of your photo is. Therefore, left to its own devices, it does what it is told to do: unless you intervene, it will decide that the scene should average 18% grey and will suggest an exposure to match.
This metering behavior is a problem in a number of scenarios. One of them is snow. A scene with a large amount of snow (something that is nearly white) makes the camera think that, in general, the scene is very bright. It will suggest that you raise the shutter speed, stop down the aperture, or lower the ISO (on digital cameras). Should you? Maybe. Doing what the camera thinks you should do will cause the parts of your scene that aren't snow to be underexposed. They will be darker than they should be, and the detail in those areas could be lost. This goes back to our discussion of dynamic range. You, the living, thinking human, have to decide what is important and what is not. If you want white snow, use the exposure that makes your snow white. This will undoubtedly blow out the highlights in the snow, but that's the trade-off you have to make. If you want to show the detail in the snow, you will have to underexpose it so it renders more grey. The trade-off, as we mentioned, is an unnatural darkening of the non-snow items in the scene.
The same sort of rules apply to night exposures too. Let's say you wanted to shoot a photo of the night sky. If you pointed the camera up at the sky, it would see mostly black. It would determine that, for a 'proper' exposure, you would need to open the aperture more, use a longer exposure, or raise the ISO. If you want the black of the night sky to stay black, the camera's thinking is incorrect. If you figure that 10% of the scene is brightness from stars and 90% of the scene is black, then you have to compensate for that in your exposure, despite what the camera tells you is 'right'.
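To put a rough number on that compensation, here's a minimal sketch of the arithmetic (my own illustration, not anything built into a camera). It assumes the meter aims for an 18%-grey average and asks how many stops you'd have to dial in for a scene that should render brighter or darker than that; the example brightness values are assumptions, not measurements.

```python
import math

METER_ASSUMPTION = 0.18  # the meter tries to render the scene average as 18% grey

def compensation_stops(scene_average):
    """scene_average: roughly how bright the scene should render on average,
    where 0.18 is middle grey, 1.0 is pure white, and 0.0 is pure black.
    Returns stops of compensation relative to the meter's suggestion:
    positive = add exposure (+EV), negative = subtract exposure (-EV)."""
    return math.log2(scene_average / METER_ASSUMPTION)

# A snowy scene that should render around 90% brightness: the meter would
# underexpose it, so you add roughly +2.3 EV to keep the snow white.
print(round(compensation_stops(0.90), 1))   # 2.3

# A night sky that should stay ~90% black (call it 2% average brightness):
# the meter would brighten it, so you dial in roughly -3 EV.
print(round(compensation_stops(0.02), 1))   # -3.2
```

Those numbers are in the same ballpark as the usual "add a couple of stops for snow" rule of thumb, but your own scene dictates the real value.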
So how can you tell the camera to expose a scene correctly? One tool that's been used for many, many years is the grey card. If you point the camera at the grey card (filling the viewfinder with it), your camera's brain will think, 'OK, how do I make this scene 18% grey?' That's a good thing, because the card IS 18% grey. Once you have that exposure, you can use those settings to accurately render the real scene behind the grey card.
Is it really that easy? Almost. The only caveat is that the grey card has to be receiving the same light as whatever is illuminating your subject. If it's a landscape photo, the sun is your light source, so the grey card needs to be in the sun, receiving the same light as the scene. If your subject is in the shade, the grey card needs to be in the shade.
Do you point the card directly at the camera? Not quite. The general consensus is to split the difference between the camera and the light source. If the sun is 90 degrees to the left of your scene, and you are looking at the scene straight on, you would angle the grey card at 45 degrees to take your reading.
Hopefully this clears up a little about how the camera's meter works, and now you can see how the camera is sometimes wrong and how you, the human operator, need to make decisions about the scene that the camera simply can't. This is yet another reason to get the camera out of automatic mode and take control! Your photos will thank you!
Wednesday, May 7, 2008
Hissy The Histogram
Now that we are in the digital age, how can you tell if your picture is exposed properly? In years gone by, you had to rely on your light meter and experience, and hope for the best in the darkroom. There were various things you could do there to help yourself out in a pinch when mistakes were made. Today, we have different methods for reviewing and confirming that our photos came out the way we intended.
Introducing Hissy the Histogram. Hopefully your camera has something similar. It's a quick-and-dirty guide to the exposure of the image you just took. You might have to read the manual to learn how to activate this screen on your LCD first. Once you've got that down, you can begin learning a little about what it means. Think of the histogram as a graph comparing two different concepts, one on the horizontal axis and one on the vertical axis.
The horizontal axis is the relative brightness of the image, dark on the left side of the graph and light on the right. If any part of the image is too light, you will see it stacked up against the right edge of the graph; the same concept in reverse applies to underexposed parts of the photo. The vertical axis represents the relative number of pixels found at each brightness. Taking the blue graph as an example, you can see two blue humps; this means more pixels fall at those two brightness levels than at any other. Overexposure piles up at the very last value on the right. You will remember that once you get past your camera's dynamic range, everything appears pure white or pure black.
Higher-end cameras will show four separate histograms: one for the overall exposure, and one each for the red, green, and blue channels. Lesser cameras will show only the overall exposure. The overall histogram is less useful than the per-channel ones and should be treated as a rough estimate only.
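If you're curious what the camera is actually counting, here's a minimal sketch in Python/NumPy (my own illustration, not any camera's firmware). It builds a 256-bin histogram for each channel of an 8-bit image and reports what fraction of pixels are piled up at the extreme ends, which is what those spikes against the edges of the camera's graph represent.

```python
import numpy as np

def channel_histograms(image):
    """image: H x W x 3 uint8 array (values 0-255).
    Returns, per channel, a 256-bin histogram plus the fraction of pixels
    clipped to pure black or pure white."""
    results = {}
    for i, name in enumerate(("red", "green", "blue")):
        channel = image[..., i]
        counts = np.bincount(channel.ravel(), minlength=256)  # pixels at each brightness level
        results[name] = {
            "histogram": counts,
            "blocked_shadows": counts[0] / channel.size,     # stuck at pure black
            "blown_highlights": counts[255] / channel.size,  # stuck at pure white
        }
    return results

# Example with random data standing in for a real photo.
fake_photo = np.random.randint(0, 256, size=(400, 600, 3), dtype=np.uint8)
stats = channel_histograms(fake_photo)
print(f"Blown highlights in red channel: {stats['red']['blown_highlights']:.2%}")
```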
Photos can be exposed any way you want them to be. Just because a majority of pixels are underexposed so far that they fall outside the dynamic range of the camera (or so many are overexposed that they fall outside it) doesn't mean the photo is bad. High-key photos, for example, will show a lot of overexposure. As long as the photo came out the way you wanted, that is fine. The histogram is just a tool in your bag. It isn't gospel and shouldn't be treated as such. That said, it can help you take better-exposed photos if you're just starting out.
Thursday, May 1, 2008
But Mom, it didn't look like that to my eyes.
How many times have you said, "if only my photo looked as good as the scene did through the viewfinder"? Sadly, the film and camera sensor industries haven't been able to match what Mother Nature invented when it comes to light-sensing equipment. One reason photos don't come out as they appear in the viewfinder is a limitation of the film or the sensor. The eye is capable of seeing both very bright things and very dark things in the same scene and pulling detail from both. This ability is described by a concept called dynamic range.
Webster's defines dynamic range as:
dynamic range (noun): the ratio of a specified maximum possible level of a parameter to the minimum detectable or acceptable value of that parameter.
And expressed another way:
dynamic range (noun, Audio): the ratio of the loudest to the faintest sounds reproduced without significant distortion, usually expressed in decibels.
This audio definition works for photography as well, with a few terms swapped: instead of the loudest to the faintest sound, it's the brightest to the dimmest light. It is commonly accepted that the eye has a usable dynamic range of around 20 stops. Remember that each stop is 100% brighter (twice as bright) than the stop below it, which means the eye has an extraordinary ability to see all the different intensities of light in a scene. Film can capture around 7 stops of light in a scene, and camera sensors around 5 stops.
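Because each stop doubles the brightness, the gaps between those figures are enormous. Here's a tiny sketch using the round numbers quoted above (which are generalizations, not measurements of any specific eye, film, or sensor) to turn stops into contrast ratios:

```python
def contrast_ratio(stops):
    """Each stop doubles the light, so n stops spans a 2**n : 1 brightness range."""
    return 2 ** stops

for device, stops in [("human eye", 20), ("film", 7), ("digital sensor", 5)]:
    print(f"{device}: ~{contrast_ratio(stops):,}:1")
# human eye: ~1,048,576:1
# film: ~128:1
# digital sensor: ~32:1
```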
While this will no doubt improve over time (for sensors, anyway; there probably won't be many research dollars going into film technology anymore), at the moment it forces you to make decisions about the scene you are looking at. You must decide which part of the scene is important and make sure that range of stops is captured properly in your photo.
Now that you know this, how can you combat it? Knowledge, my friend. As mentioned before, you must decide what is important and what is not, and make sure your subject falls within your camera's dynamic range so it renders properly. Beyond that, you can "compress the dynamic range" by making brighter things darker with neutral density filters, or by making darker things lighter with strobes. We'll explore those concepts in depth in a later entry, as long as you have the basic idea down for now.
Wednesday, April 30, 2008
How does my camera sensor work?
Before we can continue our exploration of what exposure is and how to get more out of your camera, we have to hash out a few issues. They revolve around what the eye can discern when looking at a scene versus what the camera can discern when looking at the same scene. This entry covers the difference between the two and some of the why behind it.
The eye is truly a wonderful instrument. It has a dynamic aperture and can recognize millions of colors. Through evolution, the eye has become attuned to certain colors better than others: the human eye is more sensitive to green than it is to red. This makes sense, because our world is more green than red, and a heightened sensitivity to green would have helped early humans hunt. It's no wonder the Nikon D200's sensor reports "CFA Pattern: GREEN RED BLUE GREEN"; 50% of your camera's photosites read only green!
This is what a cross section of your sensor would look like:
This is what it would look like if you could see the light coming through your lens and striking your sensor. The light passes through your lens, past your shutter, and through a colored filter sitting on top of each photosite. Each filter passes only one color of light (absorbing the rest), and that color's intensity is what gets recorded at that pixel.
So, if this is true, why isn't your picture nothing but red, blue, and green pixels when you zoom in far enough? Because the camera's brain reads the intensity recorded at each photosite and compares it with its neighbors.
For example, if you shot a picture of a nice green lawn, the green photosites would receive a lot of light, the red ones next to nothing, and the blue ones only a small amount. The camera compares these values, says "this is mostly green," and blends the color across the neighboring photosites. This process is called interpolation (or demosaicing) and is common in all sorts of image manipulation. The downside of interpolation is that it softens your image. This is why people sharpen their images in Photoshop before printing: to offset some of the softening the camera introduces when it reconstructs full-color pixels.
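To make the interpolation idea concrete, here's a deliberately simplified sketch in Python/NumPy of just one piece of the job: filling in a green value at the photosites that only recorded red or blue, by averaging the surrounding green neighbors. It assumes the GREEN RED BLUE GREEN layout mentioned above (green at even row/even column and odd row/odd column); real cameras use far more sophisticated demosaicing, so treat this purely as an illustration.

```python
import numpy as np

def fill_green(bayer):
    """bayer: 2-D array of raw photosite values in a GRBG layout, where green
    sits at (even row, even col) and (odd row, odd col).
    Returns a full-resolution green channel: green photosites keep their raw
    value, and red/blue photosites get the average of their green neighbors."""
    h, w = bayer.shape
    green_mask = np.zeros((h, w), dtype=bool)
    green_mask[0::2, 0::2] = True   # green photosites on even rows
    green_mask[1::2, 1::2] = True   # green photosites on odd rows

    green = np.where(green_mask, bayer, 0.0)
    for r in range(h):
        for c in range(w):
            if not green_mask[r, c]:
                neighbors = [bayer[nr, nc]
                             for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                             if 0 <= nr < h and 0 <= nc < w and green_mask[nr, nc]]
                green[r, c] = sum(neighbors) / len(neighbors)
    return green

# A uniform green lawn as the mosaic would record it: green photosites read
# high (200), red and blue photosites read low. The filled-in green channel
# comes out flat at 200 everywhere.
raw = np.array([[200,  10, 200,  10],
                [  5, 200,   5, 200],
                [200,  10, 200,  10],
                [  5, 200,   5, 200]], dtype=float)
print(fill_green(raw))
```

A full demosaic does the same kind of neighbor averaging for the red and blue channels too, and that smearing of values across neighbors is exactly the softening described above.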
The next lesson will explore why your eye can see more than your film can record.