I have the same camera. The video is pathetic. Moiré, rolling shutter, a very soft image, etc. The video from my Samsung Galaxy S3 phone blows it away.
The very soft video compared to the still image is normal for the T2i. If you want good video get a better camera and quit wasting your time trying to fix garbage.
You can add the Fast Color Corrector and set the Input Level to 16, or maybe even 22 for a slightly crushed look. This will increase the apparent contrast and sharpness. But you won't get anywhere near as sharp as the JPG, which simply has a much higher resolution.
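For anyone curious what that Input Level adjustment actually does to the pixel values, here's a rough numpy sketch of the same remap. The function name and figures are mine, not Premiere's; the Fast Color Corrector does this internally:

```python
import numpy as np

def lift_black_point(pixels, input_black=16, input_white=255):
    """Remap the input range [input_black, input_white] to [0, 255],
    clipping anything below the new black point (the 'crushed' look)."""
    p = pixels.astype(np.float64)
    out = (p - input_black) / (input_white - input_black) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)

frame = np.array([10, 16, 60, 128, 255], dtype=np.uint8)
print(lift_black_point(frame))                  # values at or below 16 go to 0
print(lift_black_point(frame, input_black=22))  # slightly more crushed
```

Everything that was murky near-black becomes true black, and the remaining values are stretched over the full range, which is where the apparent contrast boost comes from.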
So I guess I have a camera that shoots "1080p" but doesn't really.
The JPG is the same exact resolution as the video - 1920x1080. There's just more detail in the JPG. The video is just a blur of pixels.
Can anyone recommend a DSLR that *actually* shoots and resolves 1920x1080? T3i? T4i?
I'm getting some really excellent looking shots with my GH2. In fact, it's so sharp that the newest (best?) hack actually softens the image up a bit, as folks were complaining it looks too 'videoish'.
But the reality is that very few (if any) cameras will actually record the full resolution they're supposed to be capable of. Even the RED cameras can't fully resolve 4K. That's just par for the course.
There is a whole lotta 1080 that looks like dung. What you want to know is if the codec being used is any good, and is the bit rate high enough (less compression) to support decent looking video.
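To put rough numbers on the bit-rate point: divide the codec's budget by the pixels it has to describe. The 45 Mbps figure below is an approximation of the T2i's H.264 stream, not a spec-sheet value:

```python
def bits_per_pixel(bitrate_mbps, width=1920, height=1080, fps=30):
    """Rough compression budget: bits the codec can spend per pixel per frame."""
    return bitrate_mbps * 1_000_000 / (width * height * fps)

# Approximate T2i H.264 stream:
print(round(bits_per_pixel(45), 3))   # roughly 0.72 bits per pixel
# An uncompressed 8-bit 4:2:0 frame needs 12 bits per pixel, so the codec
# is discarding the vast majority of the data before you ever see it.
```

That's why two "1080p" files can look wildly different: the frame size is the same, but the bits available to describe each frame are not.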
So many people are getting hung up on "The Cinema Look" and "Sharpness v Fuzziness" etc and maybe the actual content of the film is being neglected. I have seen some pitiful clips that were razor sharp (if that is what you want) but had no cinematographic redemption whatsoever.
With your $800 camera you are not going to compete with a $15,000 model if you are trying to examine scientific measurements. I have spent more than fifty years in the darkroom and have heard it all about how this developer/film combination gives sharper resolution than that one, etc. Perhaps in scientific measuring they can prove the acutance, but you know what? The human eye has its own resolution limitations. If it looks sharp enough for me or the next chap, then it is sharp enough.
I am more hopeful of viewing great content with good technical skills than comparing whose clip is sharper than whose.
While I am not a fan of the content of this clip please check it out and see if you think it is acceptable enough. I think the contributor shooting with his T2i/60D did a good job but then maybe I am not being critical enough.
Best wishes to all.
Maybe you took my comment out of context - the rest of the sentence, i.e. scientific measurements, is the point. I could not actually find a reference on the link you sent showing where technical comparisons are equal. Maybe I missed that. Anyway, Jim, my tired old gray matter can't always put my thoughts accurately. Maybe I should just read posts instead of responding :-)
The statement below from your link I think reinforces the point I was trying to make. That the content is important and that we all "see" the results differently.
<Part Two proves that beauty truly is in the eye of the beholder. Does talent, experience and creativity trump technology? You decide!
In Part One of Revenge of the Great Camera Shootout 2012 you were presented with unlabeled footage of a complex party scene designed by Bruce Logan ASC, the test administrator. We heard from legendary DPs in the industry talking about what being a “cinematographer” really means to them; in many cases challenging us to rethink our understanding of camera technology and how it relates to filmmaking–and how all of that relates to talent, creativity, collaboration and experience.
In Part Two you’ll see in the audience reactions and discussions, the results are subjective and everyone has their own personal opinion on which camera looks best. While some are delightfully surprised at the image quality of one camera over another, others were upset.
Each camera DP specialist interpreted the creative shot depending on their own taste, personal style and experience, not necessarily showing the best dynamic range of their camera but to make the scene look pleasing to them. You will see actual dynamic range tests with no variables changed in Part Three coming August 15th.
The cameras you chose may surprise you, but don’t think of this as an end-all indication of which camera you should shoot your next project with. Think of it as an education in what options are available. As many have said not every camera is right for every job.>
Shooting with sharpness=0 in the camera is very misunderstood. It's done to achieve very specific things, and by definition (pun intended) if you need 'sharpness' to see detail, you don't have the data to begin with. Two pixels next to each other, one black and one white, will look perfect no matter what the sharpness setting is - however, a DSLR never records (or even samples) every pixel; it takes averages. You end up with two grey values, one a little lighter than the other, and adding sharpness expands the difference between the greys. It will never get back to the black and white pixels, as pushing the contrast that far will wipe out everything else on screen.
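The grey-averaging point can be sketched numerically. The 60/40 sample weights below are made up purely for illustration of a photosite straddling the edge:

```python
import numpy as np

# The real scene: a hard edge, one black pixel next to one white one.
edge = np.array([0.0, 255.0])

# The sensor averages across the edge, so both samples end up grey --
# one a little lighter than the other (weights are illustrative).
recorded = np.array([0.6 * 0 + 0.4 * 255,   # mostly-black sample -> 102
                     0.4 * 0 + 0.6 * 255])  # mostly-white sample -> 153

def sharpen(pair, amount):
    """'Sharpness' just pushes the two values apart around their mean."""
    mean = pair.mean()
    return np.clip(mean + (pair - mean) * amount, 0, 255)

print(sharpen(recorded, 2.0))  # [76.5, 178.5] -- further apart, still grey
print(sharpen(recorded, 6.0))  # finally hits [0, 255], but at this gain every
                               # other mid-tone in a real frame is wiped out
```

No amount of gain re-creates the original black/white pair without destroying the rest of the tonal range, which is exactly the trade-off described above.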
Sharpness in the camera is better at preserving details because it happens before the H.264 compression, but it's not all that clever and can lead to artifacts, such as haloes. Turning it off and sharpening in post (using an unsharp mask) allows you to use better algorithms that don't create these artifacts, but now you're working on the compressed footage. The original detail is literally no longer there as your camera has averaged to gray then compressed using a codec which merges similar pixels. USM algorithms can detect where an 'edge' is by comparing the pixel values across areas of the image, but have no idea that a fine texture is supposed to be noisier. It's a gamble, you throw data away in the hope your software can invent it again later. Sometimes it can, but not all the time.
Taking a common example, suppose you shoot a video of a winter tree against a blue sky.
- Sharpness in the camera will keep more of the shape and detail of each twig (bark textures, etc.) but you're likely to get halo effects around the edges.
- Sharpness in post will avoid the halo and improve the contrast of the edges of each twig more than you could get in-camera, but will never put back the texture of the bark as it's been compressed out by the H.264 codec.
Trying to apply both tends to get the worst bits of both, but which method is 'best' depends on what you want from the shot. Do haloes matter as much as textures? If the shot is moving they're unlikely to be visible to a typical viewer.
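For the curious, here is a minimal unsharp mask along the lines described, with a box blur standing in for the Gaussian most tools use:

```python
import numpy as np

def unsharp_mask(signal, radius=1, amount=1.0):
    """Classic unsharp mask on a 1-D signal: subtract a blurred copy to
    isolate edges, then add the difference back, scaled by `amount`."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)  # simple box blur
    blurred = np.convolve(signal, kernel, mode='same')
    return np.clip(signal + amount * (signal - blurred), 0, 255)

# A soft edge, like the compressed twig-against-sky example above:
soft_edge = np.array([50.0, 50.0, 90.0, 160.0, 200.0, 200.0])
print(unsharp_mask(soft_edge, amount=1.5))
```

Notice it only exaggerates the edge contrast that survived compression; it cannot invent the fine texture the codec averaged away, and the endpoints show the usual boundary artifacts of a naive blur.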
As to shooting with Cinestyle, again you need to ask why. Technicolor devised it specifically to allow DSLR footage to cut with film stock, not as an improvement for all DSLR footage. Yes, it can pull more detail out of the ends of your luma curve, but only a bit - and the way it treats colors in general is not ideal if all you want to do in post is 'get back to reality'. CineStyle is designed to 'get back to film', but that does not mean the results look like cinema. It means they look like a digital camera trying to look like Technicolor stock in the middle of the grading process. In contrast, a true digital cinema camera (Red, BMC, etc.) samples and stores the raw data from the sensor. The differences between each pixel's bit values are just as subtle, but all the data is preserved, so with a simple dash of contrast you can get back the pattern of light falling on the sensor.
If you want a bit more latitude at the ends of the curve and aren't cutting with film, use something like Prolost Flat (which is literally just a change of settings on the camera; no color LUT is applied). You can pull that back to 'reality' with a basic S luma curve, but unless you record to an external device (and your DSLR actually outputs 4:2:2 via the HDMI port) you'll never get the same quality as a cinema camera. As Jim's links show, it can be made to look the same in particular situations where detail isn't important, but if you film a football stadium with a Scarlet you can pick your mother out of the crowd. DSLR footage can be recorded with a 'log' picture style and sharpened to heck and back - it will show the shapes of heads, but you won't recognize anyone. That's why cinema hardware has two more zeros on the price tag.
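A basic S luma curve of the sort mentioned can be sketched like this; the sine-based shape is just one convenient choice, not what any particular grading tool uses:

```python
import numpy as np

def s_curve(x, strength=0.15):
    """Gentle S-shaped luma curve on normalised [0, 1] values:
    pushes shadows down and highlights up around a 0.5 pivot,
    leaving black, mid-grey, and white untouched."""
    return np.clip(x + strength * np.sin(2 * np.pi * (x - 0.5)), 0, 1)

flat = np.linspace(0, 1, 5)   # [0, 0.25, 0.5, 0.75, 1]
print(s_curve(flat))          # shadows darkened, highlights lifted
```

Applied to a flat picture style, this restores normal-looking contrast while keeping the extremes from clipping further.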
Ultimately, people use a flat or log picture style when shooting DSLR so they have a bit more leverage over their 8-bit footage in post, but by doing so they have to sacrifice a bunch of the data. Even the most ardent defenders of the flat workflow will tell you it's better to get it right in the camera, and if you really need all the detail and some leverage, shoot in raw on proper hardware.
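The data sacrifice is easy to demonstrate: squeeze a tonal ramp into the middle of the 8-bit code values the way a flat profile does, then stretch it back in post. The 64-192 range below is illustrative, not any real profile's numbers:

```python
import numpy as np

# A smooth scene luma ramp, normalised 0..1
scene = np.linspace(0, 1, 1024)

# A flat profile squeezes the range into the middle of the 8-bit codes...
flat_8bit = np.round(64 + scene * 128).astype(np.uint8)   # only codes 64..192

# ...so when you stretch it back to full range in post, values skip steps.
restored = (flat_8bit.astype(float) - 64) / 128 * 255
print(len(np.unique(flat_8bit)))               # 129 codes captured, not 256
print(len(np.unique(np.round(restored))))      # still 129 levels -> banding risk
```

The stretch can't put back tonal steps that were never recorded, which is the leverage-versus-data trade-off the flat workflow accepts.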
The final thing to look at doing, if you have CS6, is to include a reference shot of a Macbeth chart (X-Rite ColorChecker, as it's now called) and use SpeedGrade to automatically pull everything back to calibration. That step won't create a cinematic look, but it means you're starting from a level playing field when applying your artistic grades (of which SpeedGrade has a bunch to pick from). Will it ever pass for film stock? Probably not. Does it matter? Probably not.
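SpeedGrade's chart matching solves against the whole chart; a single grey patch is enough to show the core idea. The sampled values below are hypothetical:

```python
import numpy as np

def neutral_patch_gains(patch_rgb, target=128.0):
    """Per-channel gains that pull a sampled grey-card patch back to neutral.
    A rough stand-in for what chart-based auto-calibration does."""
    patch = np.asarray(patch_rgb, dtype=float)
    return target / patch

# Hypothetical sample from the chart's mid-grey square, slightly warm:
patch = np.array([140.0, 128.0, 110.0])
gains = neutral_patch_gains(patch)
corrected = patch * gains
print(corrected)   # back to neutral [128, 128, 128]
```

Apply the same gains to the whole clip and every shot that includes the chart starts from the same neutral baseline before any creative grade goes on.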
Maybe you took my comment out of context
No, I was just being goofy. Hence the wink.