No Sh*t, Sherlock
Wines & Vines - Wine Industry News Headlines - How Consistent Are Wine Judges?
Wine judges are rather unsteady, study finds - Los Angeles Times
There has been a spate of articles and blog posts about a study showing, statistically, that results from large judgings are, for all practical purposes, worthless. They had to do research to figure that out? The wine trade figured this out long ago, and the only place a gold medal from some judging does a winery any good is in its own tasting room. Tell a wine buyer at a top restaurant or retailer that you've just won a double gold and best of show and they'll look at you like you're some rube from the sticks. They couldn't care less.
Everyone who has participated in such events knows that their results are skewed. It's just not possible to taste and accurately rank large numbers of wines. For example, if you took just ten wines and had the judges rank them, then took the same wines, changed the order and retasted them an hour later, the results would change. If you did it ten times, in different orders over the course of a day, you'd get different results each time. You would also get different results if you took those same ten wines and had the judges taste one wine a day with dinner over the next ten days. In such a test even a tasting machine like Robert Parker Jr. would give different scores to the same wine, because he, like all of us, is not a machine, but a human whose palate is affected by too many variables. This is not to say that professional wine criticism is not useful, but it is an opinion, not a science. To be scientific, results have to be repeatable.
The explosion of wine blogs and sites where consumers post their notes, such as CellarTracker and Adegga, offers an antidote to relying on notes from a few critics and competitions. After all, can you think of a worse way to appreciate and understand a wine than sitting down and tasting it buried in a lineup of dozens (if not hundreds) of others? The notes from bloggers and consumers come from tasting conditions more in line with how wines are actually meant to be consumed - leisurely, thoughtfully and with meals. It is also a wonderful thing to have so many different opinions of the same wine, tasted in different circumstances by different people. Of course, you always have to be aware that some of these new media reviews may come from tasters with little experience, but that inexperience is easy to spot in their notes. And the risk of inaccurate information is no greater than it is with professional judges who base their opinions on the results of mass tastings.
Over the last few decades, wine sales have been driven by points and medals awarded by tasters plowing through masses of bottles at a single tasting. As a result, wine producers started making wines that tasted great alongside other wines, but not so great with food. Fortunately, the tide seems to be turning back to wines with balance and elegance.
Perhaps with the price of gold these days, wineries should be sending in their old medals to Cash 4 Gold.