> We have our settings in PP so there is no additional compression and the footage is deinterlaced.
Why have you thrown away half your vertical resolution? That might be the cause of your problem. Did you play the AVI on a TV?
Thanks for replying to my question.... I'm sorry but you've lost me with your first comment, 'Why have you thrown away half your vertical resolution?' What do you mean by this?
I have watched the AVI files through various sources and it's as bad in all circumstances, including once it's been authored to DVD.
You deinterlaced, thus throwing away half your vertical resolution. AFAIK the XL1S is interlaced, so there are only disadvantages to deinterlacing.
> I have watched the AVI files through various sources and it's as bad in all circumstances, including once it's been authored to DVD.
But where was the destination? Monitor or TV?
>What do you mean by this?
What Harm means is that the method Premiere used to deinterlace is not the best possible method. A good deinterlacer will interpolate new lines, thus effectively doubling the resolution. Whereas Premiere simply throws out one field, effectively halving the resolution.
> A good deinterlacer will interpolate new lines, thus effectively doubling the resolution
You start out with 480 NTSC vertical resolution. You deinterlace in PP and end up with 240 vertical resolution. Better algorithms may improve that to effectively 360 resolution but never to 480, unless you use 'mind-reading' algorithms that do not exist yet.
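The loss Harm describes can be illustrated with a toy NumPy sketch (a simplified illustration of field-discard deinterlacing, not Premiere's actual implementation): throw away one field, duplicate the remaining lines to restore the frame height, and only half the distinct scan lines survive.

```python
import numpy as np

# Toy 8-line "frame": each scan line carries a distinct value.
frame = np.arange(8).repeat(4).reshape(8, 4)

# Field-discard deinterlace: keep the even-numbered lines,
# then duplicate each one to get back to the original height.
field = frame[0::2]                         # 4 lines survive
deinterlaced = np.repeat(field, 2, axis=0)  # 8 lines again, but...

# ...only 4 distinct scan lines remain: half the vertical detail.
print(len(np.unique(deinterlaced[:, 0])))   # 4, versus 8 in the source
```

The frame is the right size again after duplication, but no amount of resizing brings back the detail that was in the discarded field.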
The one GOLDEN rule in editing is NEVER convert from one format to another unless it is absolutely necessary.
If you start with interlaced, do NOT deinterlace. (unless you need it for web purposes, but remember the loss you take.)
"Mind reading" algorithms actually have existed for a number of years. Faroudja was particularly good at creating them.
Long before HDTV and even DVD, when people used to spend $80,000/ea on two correctly aligned CRT front projectors with 9" guns, using Laserdisc as the best possible source, adding a line doubler (and later even quadrupler), was a fairly common practice in high end Home Theaters.
These units would actually create the missing scan lines in interlaced source, effectively guessing what those lines should look like based on the lines above and below them. The output of these devices was double that of the normal NTSC signal, showing not just the 240 interlaced lines, but the original 240 plus the guessed-at 240 for a total of 480 scan lines, sixty times a second.
Now, some devices were not so good at guessing what those missing lines should look like, some were OK. Faroudja devices were particularly adept at such "mind reading" tricks.
Unfortunately, Premiere does not have any such mind reading abilities.
Even Faroudja will have a difficult time making an apple out of apple juice. These 'mind-reading' algorithms can't be right all the time. If they could, grocery stores would only buy apple juice, convert it to apples in the store and save enormously on logistics costs.
Faroudja may be right half of the time, but then the end result will not be better than 360 lines. If it were such a fantastic prophet, all your arguments against HDV are null and void, since a good algorithm in your opinion can reconstruct with 100% accuracy. Hurray for HDV.
You know how a good deinterlacer is supposed to work. The algorithms that tools like FieldsKit use are not "mind reading" at all. They are educated guesses based on real information surrounding the field being interpolated. Information that comes not only from adjacent scan lines, but from adjacent fields as well.
Apple juice has nothing to do with it, not even metaphorically. A closer analogy would be cutting an apple into 8 slices, then removing 4 of the slices. Faroudja would be able to do a very good job re-creating the whole apple from the remaining 4 slices by creating new slices based on how the remaining 4 were cut and colored and textured. Premiere would just duplicate the remaining 4 slices and slap the new duplicates in between the originals. I'd rather eat a Faroudja apple.
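The difference between the two approaches can be sketched in NumPy (a deliberately simplified illustration of line-doubler-style interpolation, not Faroudja's actual algorithm): instead of duplicating surviving lines, synthesize each missing line from the lines above and below it.

```python
import numpy as np

def interpolating_deinterlace(frame):
    """Keep one field, then estimate each missing line as the
    average of its vertical neighbors (simplified line doubling)."""
    field = frame[0::2].astype(float)              # surviving field
    out = np.empty(frame.shape, dtype=float)
    out[0::2] = field                              # original lines stay put
    out[1::2][:-1] = (field[:-1] + field[1:]) / 2  # average above/below
    out[-1] = field[-1]                            # bottom line has no neighbor below
    return out

# A smooth vertical ramp: interpolation recovers the missing lines
# almost exactly, where duplication would produce a stair-step.
frame = np.linspace(0, 70, 8)[:, None] * np.ones((1, 4))
print(interpolating_deinterlace(frame)[:, 0])
```

On smooth gradients this guesses the missing lines nearly perfectly; on fine detail that changes line-to-line (the very detail interlacing was carrying), no purely spatial method can fully recover what was thrown away, which is where the better motion-adaptive devices earned their money.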
>Faroudja may be right half of the time, but then the end result will not be better than 360 lines.
The result was a doubling of the normal field, 480 scan lines sixty times per second. With a quadrupler, it was twice that even.
So starting with 480i material and a dual quadrupler I end up with at least 1080p, right? Why did they ever invent HDV or P2 or XDCAM? This is so much easier...
One wonders how to make something from nothing, but according to you Faroudja is capable of that mysterious feat.
Can you teach me that trick for my wallet? If it is empty, how can I get it filled to the brim? Faroudja?
>This is so much easier...
Actually, it's not. It was just the only option at the time.
Unless you got to see a Faroudja line doubler playing back a THX-certified LaserDisc on a properly-calibrated Vidikron projector at a time when VHS (at 240 lines per frame) was the mainstream video format, you really can't appreciate what a theater-like and film-like experience it was. It was unbelievable.
HD blows that configuration away, even on today's consumer-priced gear. In the early 90's, however, it was worthy of Skywalker Ranch.
>You start with 1080i, use a Faroudja quadrupler and end up with 4K progressive material without any quality loss, that is super. Goodbye Red.... I can still use HDV for 4K features.
You know, Harm, sometimes you make it very difficult to engage in any kind of reasonable discussion.
This is one of those times.
Would it make you feel better if I said that you win? OK, you win.
I won't say that. Not on this one.
> I won't say that. Not on this one.
Anyone experienced with top-end deinterlacers, line doublers or line quadruplers knows that Harm's comments are myopic and out of phase with the practical reality of the great results that you can obtain from these devices.
I suspect Harm knows that, too.
I was simply letting him win the argument in this thread, since that seems to be the ultimate end game of his comments. Letting him win doesn't change reality, but it may make him feel better. ;)
I see your point.
But I'd also rather he see ours.
"Letting him win doesn't change reality, but it may make him feel better."
Was that "the ultimate end game" of this thread? :)
If it makes him shut the fek up ...YES
I'm glad you boys have enjoyed your argument regarding my problem with these jagged lines, but I was kind of hoping that one of you might be able to give me a straight answer in plain English. I'm getting the impression that part of our problem lies with this deinterlacing issue. Can one of you explain to me what 'deinterlacing' actually does, then?
Well, it depends on who's doing the deinterlacing. That's part of what we were 'arguing' about.
In addition to Faroudja making "mind reading" algorithms for early high end Home Theaters, this type of thing is now pretty common.
>Toshiba seems to bring the advanced upscaling capabilities of its HD DVD players to its upcoming DVD players. The new players will reportedly feature an integrated circuit that will convert the standard definition DVD video to high-definition, in real time. The technology makes it possible to reproduce high-quality images comparable to Blu-ray video from current standard DVDs, according to the paper.
Harm's earlier comments do touch on a valid concern in the Blu-ray camp. If the "mind reading" capabilities of these players are so good, why would anyone want to spend extra money on a Blu-ray player and movies?
>why would anyone want to spend extra money on a Blu-ray player and movies?
I haven't, yet. I also haven't done A/B comparisons on a good HDTV between a BR player and a good upscaling DVD player (like my Denon).
I've talked to exactly one person who has done such a comparison, and he was not overly impressed with the difference between a full-res BR picture and the uprezzed SD DVD picture. He said there was a difference and that BR was better, but certainly not "better" enough to justify the additional hardware expense and the increased cost of movies.
A solution, though an unlikely one, would be for the studios at some point to just stop making DVDs and release only on Blu-ray.
But I digress...
My effort to get back OT:
A quality deinterlacer will take interlaced video and make it:
a ) more film-like for display on an HDTV (which is not an interlaced display device)
b ) suitable for watching on a computer monitor (which is also not an interlaced display device)
OTOH, improper or poor deinterlacing will cause artifacts on *any* display device, interlaced or not.
If your source video is interlaced, and your target audience will be viewing your production on an interlaced display device, then you should keep the video interlaced throughout the production pipeline. That means don't deinterlace the interlaced video. The exceptions to that rule would be for time remapping or image stabilization.
>Can one of you explain to me what 'deinterlacing' actually does then?
Deinterlacing throws away half of the lines. Why? Because interlaced footage shows all of the odd lines, then all of the even lines. This is because of the way televisions were originally designed. This is not necessary for LCD screens, just for CRTs.
Some deinterlacers do a better job than others, but the question people want answered is "Why deinterlace if the footage is destined for a television?"
By the way, DV really doesn't like near-horizontal straight lines. There just aren't enough lines of resolution to draw them without visible stair-stepping.
Naive question, but is this how the Adobe deinterlacing works (i.e. simply throwing away the odd or even rows)? I wondered if it might do some fancy interpolation of the missing rows...
I guess it's even more fanciful to think it uses the info from both rows in some correlation-based matching way.
I only care because I'm extracting data on bumblebees from interlaced video. I have been using each half-frame (i.e. odd/even rows) independently, but thought maybe Adobe's deinterlacer could do something better.
>I guess its even more fanciful to think it uses the info from both rows in some correlation based matching way.
That's what the FieldsKit Premiere Pro plug-in and the SmartDeinterlacer plug-in for VirtualDub do. They can also use info from adjacent frames and other very cool (and render time-consuming) tricks.
Comparatively, the SmartDeinterlacer is much faster than FieldsKit.
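The adaptive idea behind these plug-ins can be sketched roughly like this (a toy illustration of motion-adaptive deinterlacing, not the actual FieldsKit or SmartDeinterlacer algorithm): where the two fields agree, weave them together and keep full vertical resolution; where they disagree (something moved between the fields), fall back to spatial interpolation.

```python
import numpy as np

def adaptive_deinterlace(frame, threshold=10.0):
    """Toy motion-adaptive deinterlace: keep both fields where the
    image is static (full vertical detail), interpolate where the
    fields disagree, i.e. where something moved between them."""
    top = frame[0::2].astype(float)
    bot = frame[1::2].astype(float)
    # Spatial estimate of the bottom-field lines from the top field.
    est = np.empty_like(bot)
    est[:-1] = (top[:-1] + top[1:]) / 2
    est[-1] = top[-1]
    moving = np.abs(bot - est) > threshold  # crude per-pixel motion mask
    out = frame.astype(float).copy()
    out[1::2] = np.where(moving, est, bot)  # weave if static, interpolate if moving
    return out
```

The real plug-ins go further by also comparing adjacent frames before deciding a pixel is "moving", which is exactly where the extra render time goes.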
That's really useful. My Uni has access to VirtualDub so I'll try that.