Does the picture look good or does it not look good? Well, if you applied that test with USDA grades, you would need rooms full of taste testers for each side of beef from each cow. The USDA doesn't use an ITU-R-style subjective testing approach. Why should visual quality have to be graded that way?

Let's step back and understand first what grading labels do. Uniform grade identification provides a standardized way of communicating value between buyers and sellers, and of signaling consumer preferences back through marketing channels to the producer. It has to be practical to apply, or it is useless to the market it is designed to strengthen.

So first off, such grading labels for food do not guarantee perceived pleasure. USDA Prime does not mean the standard is promising that you will find a Prime ribeye as juicy as a Prime flank steak. Just because the USDA allows a producer to call something chili doesn't mean you are going to like it. A cut of meat either does or doesn't have a specified amount of fat marbling. A video either bursts up to 19 Mbps in an action scene or it doesn't. In such a scene, it either does use 16x16-pixel blocks for motion estimation or it doesn't.

Secondly, I challenge the hidden premise of some posters that a standard is meaningless if there are possibilities for distortions. "SAT scores are meaningless!" "Passing the board certification test is meaningless!" Sure, there are flaws. But is a measure meaningful or isn't it? To some, it is not "meaningful" unless it has the highest accuracy possible using an impractical, ultimate measurement technique. Unquestionably, the ultimate would be a room full of testers judging the picture quality of each and every encoding of each and every movie. But we aren't talking about ultimates. The goal is to provide a few labels that are simply more meaningful than the current system, which is nothing. The question is: is it possible to create a grading system that is more meaningful than today's, where there is no standardized way for consumers to understand the differences in quality between HD content?

Analogous to the USDA guidelines, we can get into disagreements about whether it is legal to call a patty 100% beef if it contains any partially defatted beef fatty tissue (which all of us informed consumers know by the acronym PDBFT). But at the end of the day, a grading system that more often than not tracks quality is better than no grading system.

Do you think that a classification system for HD video is not possible? If so, I have to admit I would be skeptical of such a claim. We are all familiar with the endless debates on AVSforum.com about codecs and PQ, and it is true that everyone seems to have very strongly felt opinions. But while we may not agree on the details of the following trial balloon, I think all that needs to be shown is that some grading system is possible that is more meaningful than no grading system.

Now for one trial balloon proposal. I would start simple and move to complex as driven by industry input. Meaning: at first, only measure the rough differences and surface those to the user (e.g., "100% beef patty" means no mule meat mixed in). In response to further games, you go further, prohibiting PDBFT.

The factors are:

1. The volume of data delivered
2. The density of the data delivered
3. The complexity of the content being displayed

Factor #1 corresponds to the bitrate. Factor #2 captures the technical strength of the encoding job at concentrating the visual information.
It's not just the encoder name, but let's start there. For example, the vendor may be restricted to MPEG-2 Main Profile @ High Level. What this factor should be will be highly debated. Is H.264 (MPEG-4 Part 10) three times as dense as MPEG-2 Main Profile @ High Level, or is it more like four? What's the number? Is chili still chili if it has less than 20% meat, or does it have to be 30%? Well, there is an explosion of compression techniques and options to play with on each one. A best estimate is made, and it may be revised as industry proof is submitted that a given technique and hand coding deserve a stronger density factor. Maybe a vendor's encoder bursts up to very high datarates accurately as needed and has a really great motion estimator. Say Amazon Unbox can show, using an industry-accepted metric of frame-difference complexity, that its algorithms plus its third-pass hand tunings produce fewer distortions as measured by an industry-accepted measure of artifacts per second. What are those metrics? Let industry produce them. If there is sufficient consensus and science behind a metric, then the determination of the density factor may be based on it. Conversely, a competitor can come in and argue that FiOS's density factor for on-demand videos unfairly favors FiOS, because FiOS has been lazy with its encodings, using a single-pass hardware encoder that tests have shown does a much poorer job than its density factor indicates.

Factor #3 considers how complex the picture to be compressed is. Obviously, if nothing is happening in the picture, then you could pretty much transmit a still photo and not transmit anything more until something changed. That is, a hi-def surveillance video of a room with no camera movement and no one in the room could realistically be transmitted in a very small file and still be the "best" HD picture possible, even compared to the same picture stored at the bitrates possible on a Blu-ray disc. I would defer indefinitely on estimating this factor and instead drop value-laden labels altogether. For example, there might be a yellow HD logo, a blue HD logo, and a purple HD logo. The consumer learns from experience that they like the purple HD logo for sports, but the yellow HD logo is just fine for Oprah or talking heads.

So factor one (Volume) times factor two (Density) equals an HD picture information number, and that number falls into a non-value label that corresponds roughly to the highest practical quality possible (on Blu-ray or HD DVD) down to the lowest possible while still being perceptibly better than an upscaled SD video (say, 6 Mbps for MPEG-2). A rough sketch of this computation follows below.

We can quibble about aspects of this. The important question to consider is: what evidence is there that it is impossible to practically apply picture data standards that are intelligible enough to help consumers better discern between HD content products?
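To show how trivial the bookkeeping would be, here is a minimal sketch of the Volume x Density computation in Python. Every number in it, the density factors, the tier cutoffs, and the codec names, is an illustrative placeholder standing in for values the industry process described above would actually set:

```python
# Trial-balloon sketch of the Volume x Density grading computation.
# Every factor and cutoff below is an illustrative placeholder; in the
# proposal they would be set, and revised, through industry consensus.

# Density factor: how well an encoding concentrates visual information,
# normalized so that MPEG-2 Main Profile @ High Level = 1.0.
DENSITY_FACTORS = {
    "mpeg2-mp@hl": 1.0,
    "h264-high": 3.0,  # three times MPEG-2? more like four? exactly the debate above
    "vc1-advanced": 2.5,
}

# Non-value tier labels, keyed by minimum "HD picture information number".
# The 6.0 floor is the ~6 Mbps MPEG-2 point: barely better than upscaled SD.
# The top tier corresponds roughly to Blu-ray / HD DVD class encodings.
TIERS = [
    (30.0, "purple HD"),
    (15.0, "blue HD"),
    (6.0, "yellow HD"),
]


def hd_information_number(bitrate_mbps: float, codec: str) -> float:
    """Factor one (Volume) times factor two (Density)."""
    return bitrate_mbps * DENSITY_FACTORS[codec]


def hd_label(bitrate_mbps: float, codec: str) -> str:
    """Map the information number onto a non-value color label."""
    info = hd_information_number(bitrate_mbps, codec)
    for cutoff, label in TIERS:
        if info >= cutoff:
            return label
    return "no HD label"  # not perceptibly better than upscaled SD


# An 8 Mbps H.264 stream out-grades a 12 Mbps MPEG-2 stream:
print(hd_label(8.0, "h264-high"))     # 8 x 3.0 = 24 -> "blue HD"
print(hd_label(12.0, "mpeg2-mp@hl"))  # 12 x 1.0 = 12 -> "yellow HD"
```

The point of the sketch is that the mechanics are easy. All the hard work, and all the arguing, is in setting the density factors and cutoffs, exactly as with the USDA's marbling thresholds.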