
Two Tales of One Wine (19 February)

 

Late last year I was contacted by Shane Redmond of Lewis Fine Wines. Shane wanted to know if I had tried the Leasingham 2005 Bin 61 Shiraz. I answered in the negative.

 

Now Shane is a pretty level-headed chap, so it was surprising when he launched into a fully-fledged rant about this wine. I got the impression he was not a happy chappie, but then even someone comatose would have been left with that impression; such was Shane's mood. Apparently the wine had won four gold medals as well as three trophies. Based on the gongs, and Halliday's score of 95 points, Shane had purchased cases of the stuff for his own consumption. Shane told me that he had tried the Bin 61 a few times over the past few weeks and could not understand how it had won those awards. He wanted my opinion and arranged to drop off a sample.

 

Tale One

 

The screwcap-sealed bottle duly arrived and, to be completely fair, I tried it blind in a line-up of six Shiraz wines from 2005 and 2006. As I had not tried four of the wines previously, it was a pretty unbiased method of judging the wine. I wrote the six tasting notes and still had no idea which wine was the Bin 61. Certainly none of the six wines struck me as serious, consistent, gold medal material, let alone capable of winning three trophies. There will always be disagreement between critics, so in some ways this is to be expected; however, Shane and I looked to be in a small minority. Or was there more to this situation? I was seriously beginning to wonder.

 

I rang Shane with the results. During the conversation he mentioned that he had now tried a couple of bottles of the same wine that had been sealed with cork, and whilst the cork-sealed wines were better, in his opinion they were still not up to the quality expected of a triple-trophy wine. As he had bought heaps of the stuff, he offered to drop off a cork-sealed bottle for comparison purposes the next time he was in my area.

 

Last week four bottles arrived, two under each seal. I opened all four bottles and looked at them over 36 hours. Here are the tasting notes and the comparison.

 

Cork Sealed Bottles

The two cork-sealed bottles showed significant variation from each other. The last bottle opened, the second cork-sealed bottle, which I will call bottle four, was more advanced. The wine had travelled the length of the cork on one side and had partially soaked the cork in other places. The note below refers to the more advanced bottle. It showed by far the most character and was the best of the four bottles.

 

Initially, noticeably cedary, vanillin oak over intense blackberry was in evidence. Abundant powdery tannins and crisp acid overshadowed the deeply seated fruit. The palate was savoury with sour plum, blackberry, plenty of chocolate and mint, and little noticeable sweetness; it finished dry, clean and long, with good persistence. The advanced bottle (four) was slightly sweeter than the other cork-sealed bottle (three). The third bottle was less developed and did not show as much intensity or persistence. Bottle four was rated as Recommended now, but the rating should improve when it enters its peak drinking window in a couple of years. Drink by 2017. After thirty-six hours the wine had opened up completely, softened and was showing its true form. It was a very attractive wine and easy to drink, but the level of complexity was disappointing considering the accolades it had received in shows.

 

Screwcap Sealed Bottles

Both of the screwcap bottles were identical all the way through the comparison. On the palate, these bottles almost seemed like a different wine to the cork alternative. The flavour profile was similar, with blackberry, sour plum, vanilla, chocolate, mint and menthol, but they finished a little sour and, whilst they had good fruit intensity, lacked length. The fruit seemed to lack the generosity of the cork-sealed wine and was well and truly buried by the tannins. The package seemed lighter than the cork-sealed wine, and the more it opened up over the first twelve hours, the more apparent the difference became. After thirty-six hours, the screwcap-sealed wine was better than when it was first opened. It retained its fresh acidity and the fruit had started to surface. The flavour profile was dominated by plum, with noticeable chocolate and mint, and the finish had better persistence, but it still lacked length. In addition, it seemed like a significantly lighter wine than the cork-sealed version and, not surprisingly, looked like it would have a longer drinking window.

 

My conclusion was that now, and at any time in the next five years, the cork-sealed wine is likely to look better than the screwcap-sealed wine. After that, it is possible the screwcap-sealed wine will catch up, but only time will tell.

 

The difference was so marked that it really got me thinking, and wondering. The batch numbers on the cork-sealed bottles were different to those on the screwcap bottles, so I wondered if two batches of wine had been made, or if they had been prepared differently. Surely this was not exactly the same wine!

 

To find out, I rang the very helpful and friendly Margaret Francis at Constellation. I have known Margaret since she first took on her current management role, which includes answering stupid technical questions about wine from people like me. Later that afternoon, I received a phone call from Margaret with the answers to my questions. Bugger! It was certainly not what I expected.

 

There was only one batch of this wine made. The screwcap bottles were bottled on one day and the cork-sealed bottles were bottled the next day. And then the other shoe dropped. There was absolutely no difference in the preparation (or addition of chemicals) between the two bottling runs. They were exactly the same.

 

Interesting! And food for thought too. The differences between the wines were far more pronounced than expected; so much so that, until I knew otherwise, I was absolutely sure it was either a case of two batches or of different preparation methods.

 

For my money, the cork-sealed wine was the clear winner and is likely to remain so for some time, possibly until it starts going downhill. I can't see the screwcap version eclipsing the cork-sealed version. So what does this prove? Many people would say not much, but that is not correct. It actually proves an incredibly significant point. We all know that cork failure is a big problem, hence the move by many to screwcap; however, some wines are better suited to being sealed with cork (putting cork failure aside) and show better under cork than screwcap.

 

Now herein lies the problem. How does the winemaker know now which seal they should use on each and every wine for it to show its best down the track? In some cases it's easy, but in many cases I bet the winemakers have no definitive idea, which once again just proves there is no such thing as a perfect seal for every wine.

 

Tale Two

 

Whilst I was scratching my head over the possible reasons for the differences between the ways the wines showed under the two seals, I was still wondering how this wine had managed to win four gold medals and three trophies. Logic dictates there are only a limited number of explanations.

 

1.     Bottle variation.

2.     The judges look for different things to us mere mortals.

3.     The judges were all smoking dope.

4.     The possibility of tricked-up show samples.

5.     Those who thought the wine was only worth a silver medal are smoking dope and don't have good palates.

 

To fully understand these options we need more information on the awards, so here it is.

The three trophies were all awarded at the 2008 Royal Perth Show: Best Wine in Show, Best Red Table Wine and Best Shiraz of the Show. It also won a gold medal at the same show (a prerequisite for trophy consideration).

The other three gold medals were awarded at the Royal Sydney, Clare Valley and Queensland wine shows. So the majority of the gongs came from the Perth Show, and nothing came from the most prestigious show, the National Wine Show.

 

 

Let's take bottle variation first. Yes, it is always a possibility, but in this case it's extremely unlikely. I have tried five bottles, under both cork and screwcap, and I would not give any of them a gold medal, let alone a trophy. A silver, sure, but no gold; no way!

 

In terms of the second option, the judges looking at the wine in a different light, it's possible, so let's look at this a bit further. Harvey Steiman of Wine Spectator magazine rated the wine at 91 points, which is equal to a silver medal. The tasting note reads, "Lithe, focused and juicy, with blackberry, plum and sweet spice flavors that persist with some refinement into the lively finish, where there's also a touch of malt as this lingers well. Drink now through 2015."

 

Jeremy Oliver saw it much the same way with a score of 90, also a silver medal – just.  “A fine result from a tough vintage, with layers of deep, minty cassis, blackberries, mulberries and dark plums harmoniously knit with well-mannered oak and a firm, drying astringency that should settle down with time. Scented with violets, musk and menthol, it finishes earthy and savoury, with length, richness and brightness. (17.0/90, drink 2013-2017)”

 

The esteemed James Halliday looked like he agreed with the show judges, as he rated the wine 95 points, which equates to a gold medal score. His note reads: "Vivid, deep purple red; powerful, focused and perfectly balanced, with dark berry fruits and fine tannins; a great example of a classic label."

 

So, we have two completely different camps here: those who rate the wine as silver medal material (me, Shane, Wine Spectator and Oliver), and Halliday and a number of show judges who rated it gold and trophy material.

 

I must admit that when I saw Halliday's score, given my experience with the screwcap and cork-sealed bottles, I thought his sample must have been under cork and that the others must have been under screwcap. That would have gone some way towards explaining the situation. Unfortunately, I have to let the facts get in the way here; Halliday's sample was under screwcap! So we are back to two divided sets of opinions without any real rational answer – yet.

 

The next possible reason, "the judges were all smoking dope," whilst said tongue in cheek, may not be as stupid as it sounds.

 

On the positive side, when I had a look at the complete list of Perth red wine medal winners, it was apparent that there were not very many top-quality wines in the show. The number of gold medals awarded was representative of the quality of the entrants; the gold medals were few and far between. This is a fairly good indication that the judges were not frivolous with their medal awards. And if very few gold medals were awarded, there were very few contenders for the trophies. This could easily explain the number of trophies this wine won in Perth. There was simply not a huge amount of competition. The trophies are great for the winery's reputation (and marketing), but in this case they don't do a lot to enhance the reputation of the show system.

 

This is given further credence when you consider that in 2008, almost straight after the Perth Show, at the National Wine Show in Canberra, Australia's most prestigious wine show, the Bin 61 was awarded a bronze medal in a strong class, with the NWS trophy going to the 2005 Langi Shiraz. At the same competition in 2007, it also won a bronze. So the big question here is: assuming there was no jiggery-pokery going on, how does a wine go from winning a gold medal and three trophies at one show to taking only a bronze, twice, at another?

 

Is the judging consistent, and do the judges really know what they are doing? In the case of the National Wine Show, as well as some others, possibly/probably. I would like to think that our wine judges are well trained, and in many cases that is true. However, Richard Gawel and Peter Godden (AWRI) examined the results of "expert wine tasters" over a 15-year period. They concluded that consistency varied greatly among individuals, but that when the scores of a small team of tasters were combined (as in a show judging situation), consistency improved. Let's face it, that's not a great recommendation. It shows just how inexact a science this show judging malarkey is, along with the consequent awarding of medals and trophies, especially at smaller shows, where the judges may not be as experienced or as good.
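As an aside, the statistical point about pooling scores is easy to demonstrate. The little Python sketch below is purely my own illustration (it is not the AWRI analysis, and all the numbers are hypothetical): each judge's score is modelled as the wine's underlying quality plus random personal noise, and averaging even a small panel visibly shrinks the spread of results for the same wine.

# A toy illustration (not the AWRI analysis) of why pooling a small
# panel's scores is more repeatable than relying on a single judge.
# All numbers here are hypothetical.
import random

random.seed(1)
TRUE_QUALITY = 90.0   # assumed underlying score for the wine
JUDGE_NOISE = 3.0     # assumed standard deviation of one judge's error

def one_judge() -> float:
    """One judge's score: true quality plus personal noise."""
    return random.gauss(TRUE_QUALITY, JUDGE_NOISE)

def panel_of(n: int) -> float:
    """A panel's score: the average of n independent judges."""
    return sum(one_judge() for _ in range(n)) / n

def spread(scorer, trials: int = 1000) -> float:
    """Standard deviation of repeated scorings of the same wine."""
    scores = [scorer() for _ in range(trials)]
    mean = sum(scores) / trials
    return (sum((s - mean) ** 2 for s in scores) / trials) ** 0.5

print(f"single judge spread: {spread(one_judge):.2f} points")
print(f"panel of 5 spread:   {spread(lambda: panel_of(5)):.2f} points")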

 

Now if you think that’s a critical indictment of the show system in Australia, and it is, then the following information will really shock you.

 

Our wine events are called shows, not competitions as they are in the US. The Australian wine show system started out as part of the local, annual agricultural show, and to a great extent it is still tied to it. Not so the US wine competitions.

 

Writing in the current issue of the Journal of Wine Economics, Dr. Robert Hodgson documents the significant variability in decisions by judges at the California State Fair Commercial Wine Competition.

 

In 2003, Hodgson contacted the chief judge of this competition and proposed an independent analysis of the reliability of its judges. Hodgson wanted to find out why a particular wine wins a gold medal at one competition and fails to win any award at another. Is this caused by bottle-to-bottle variability of the wine? To what extent is the variability caused by differing opinions within a panel of judges? Finally, could the variability be caused by the inability of individual judges to reproduce their own scores?

 

The tests were based on four triplicate samples served to sixteen panels of judges. (It wound up being run across sixty-five panels of judges between 2005 and 2008.) A typical flight consisted of thirty wines. Triplicate samples of all four wines were served in the second flight, randomly interspersed among the thirty wines. A typical day involved four to six flights, about 150 wines in total. Each triplicate was poured from the same bottle and served in the same flight. The test was designed to maximise the probability of the judges being able to replicate their scores.

 

In reality, in one flight of thirty wines there were three samples of the same wine, three samples of a second wine, three samples of a third wine and three samples of a fourth wine. All the judges had to do was mark each sample of the same wine with the same or similar scores. No involved tasting notes were required.
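To make the replicate test concrete, here is a minimal sketch (my own, with made-up scores, not Hodgson's data or method) of how triplicate consistency can be checked: each judge scores the same wine three times, blind, and we look at the spread of points and medals they hand out.

# A minimal sketch of checking triplicate consistency. The scores are
# invented purely for illustration; the medal bands are an assumption
# based on the common 100-point scale.
def medal(score: int) -> str:
    if score >= 95:
        return "gold"
    if score >= 90:
        return "silver"
    if score >= 85:
        return "bronze"
    return "no award"

# Hypothetical triplicate scores: three judges, one wine, three blind pours each.
triplicates = {
    "judge A": [94, 90, 86],   # swings from near-gold down to bronze
    "judge B": [91, 90, 92],   # a consistent silver
    "judge C": [84, 96, 88],   # no award to gold for the identical wine
}

for judge, scores in triplicates.items():
    spread = max(scores) - min(scores)
    medals = sorted({medal(s) for s in scores})
    print(f"{judge}: scores={scores}, spread={spread} pts, medals={medals}")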

 

So how did these judges do?

 

Not at all well! Only thirty panels achieved anything close to similar results on the test wines. The results show "judge inconsistency, lack of concordance--or both" as reasons for the variation. The phenomenon was pronounced, to say the least. In one classic case, a panel of judges rejected two samples of an identical wine, only to award the same wine a double gold in a third tasting. Bloody hell! OK, that is just one example, but generally the research showed that most of the judges at this show, over a four-year period, were not up to standard. Whilst we train our show judges in Oz reasonably well, and I would like to think they are better than the judges used in this experiment, there is no reason to assume the judges at many Australian wine shows are exempt from this phenomenon.

 

The next option, "tricked-up samples", is always a possibility. I must admit I have no reason to think that Hardys/Constellation would be so unethical as to enter tricked-up samples in a show, but it has been done by other producers before. McGuigan Wines got caught doing exactly that earlier in the decade. And then, to make matters worse, they were caught out when they told porky pies. A major amount of scrambled egg on face resulted. It is also surprising how many entries have "suddenly been withdrawn" at the last moment at the National Wine Show. This is one of the rare shows where random spot checks are carried out to authenticate selected entries and to try to eliminate tricked-up samples. Tricked-up samples are unlikely, but they cannot be written off as a possible reason.

 

The fifth reason, that those who saw the wine as being worth a (low) silver medal have been smoking unfiltered green cigarettes that have addled their palates, is even more unlikely than tricked-up samples. Steiman and Oliver are every bit as good as Tony Jordan and his team of judges at the Perth Show, and probably better than many show judges, especially at the small regional shows. To repeat: at two National Wine Shows, this wine could only win a bronze medal on each occasion.

 

In summary, it is my belief that in this case the reasons for the incredible run of gold medals and trophies awarded to the 2005 Bin 61 are firmly rooted in the shortcomings of the show system. In essence, the wine got lucky at the Perth Show, where there was not a huge amount of competition. In addition, the standards of some of the smaller regional shows are not as high as those of the larger, more competitive shows. And then there is the ability of the judging panels to consider.

 

The moral of the story is: don't trust those shiny stickers on bottles; taste the wine yourself before committing to large quantities, because some pigs are born more equal than others, and some just get lucky.

 

 

Feel free to submit your comments!

From: Michael McMahon - Thursday 19 February

The only thing that surprises me is that this is the first article of any substance on highly pointed (JH in particular), highly lauded wines that are nothing like what's described by the hype. There are plenty of them out there. They generally fall into the readily available, budget-conscious (i.e. $15-$35) category. I don't think anyone should be blamed for this, as the reasons are probably complex, varied and different in each particular circumstance, as your article seems to indicate. I have a golden rule of never buying until I've tried, and I think it's served me well.

 

From: Andrew Ash - Thursday 19 February

Great article. I went to the Sydney wine show the other week (where the public can try all the wines), and I was amazed at the inconsistency of the scores. There were wines that someone like a Halliday had consistently rated 95 or 96, and in this show they didn't even get a bronze medal. Much to my disappointment, I noticed that the classic Barossa full-bodied Shiraz style was marked down heavily. A wine like the 06 Trevor Jones Wild Witch Shiraz was given a score that was nowhere near a bronze!?!?

 

From: Graham Butcher - Thursday 19 February
Having been a steward in Brisbane for the past 6 years, I would have thought that the system of a head judge, 4 senior judges, 8 judges and 4 associate judges would eliminate the possibility of a bronze-medal wine being judged at class level by the head judge, a senior judge, two judges and an associate and awarded top gold, then going forward to be judged by the head judge, four senior judges and eight judges and being awarded a trophy.

I also doubt that the overall quality of NWS judges would be significantly higher than the other major shows.

It would therefore be interesting to compare the Judging panel members at all the major shows in each year and then over a 5 - 10 year period.

It has been a worthwhile article and should prompt some thought.

 

From: Davo (David Pearson) - Thursday 19 February

Hi Ric, an interesting read; however, one assumption is incorrect. I was at the awards breakfast for the Perth Wine Show, as one of the groups I am a member of donates one of the trophies (best other red wine). After the brekky we were admitted to the tasting hall and had free rein to taste all of the wines that had been entered. The Bin 61 05 showed pretty well on the day and probably would have scored a mid-level gold in my opinion; however, there were plenty of wines which showed much better, both within its class and across other classes. Unfortunately I took no notes on the day, as it was a bit of a chore trying to taste as many as possible of the thousands of reds and fortifieds and to find space to ruminate and write.

I emailed Brian after the show to let him and his subscribers in on the trophy winner before it was in the public arena, and made similar comments to him on the day.

Anyway, the long and short of what I am trying to say is: the Bin 61 2005 showed pretty well and probably was of gold medal standard, but, and this is a big but, there were plenty of other reds which were better wines and did not medal at all, let alone get gold, and they were far more deserving of the accolades.

This was not just my opinion but also that of others I discussed the issue with on the day, and there was plenty of discussion regards the standards of the judging.

 

From: Rory Shannon - Friday 20 February

 

Interesting article, especially the comments on show judging. I am an Associate Judge at the International Cool Climate Show in Red Hill (coming up next month!).


It's always interesting from a judging perspective to see how the general public vote for "Best in Show" when they are invited in after the last gong has been awarded at the end of the last day. A big crowd regularly turns up. They taste their way through, fill in their own judging sheets and hand 'em in at the end.

The general public and the judges have yet to agree on the gongs, certainly over the past two years that I have been involved. The general public inevitably give the nod to wines not even in gold class! Which begs the question: are judges out of touch with what the general public looks for?


The judging panels are, by and large, made up of winemakers or winemaking consultants. So inevitably those judges are awarding wines with good winemaking/fruit inputs. Fair enough. But is that what the show circuit should be awarding? Quite a bit of discussion has already evolved on this topic.
 

As a wine educator, I am too often surprised at the shock my groups show when I discuss the perils of wine shows and medals awarded. Most of them (wine consumers) still believe medals are the best way of assessing quality.

From: Matt Pedersen - Tuesday 24 February

Interesting and very thought-provoking article on the show judging issue. I tried to do some limited research on the net last year on the shows around Australia, looking for a standard on the classes. It was with a view to being able to at least assess a wine label carrying a trophy or medal in a known class, as well as to rate the show. Who cares that it won some obscure class in an obscure region? I could not find a standard, nor a ranking of shows. You mention that the NWS is prestigious. What would you say is the national ranking of shows? And does every show have its own class system, or is there actually a national standard?

 

TORB Responds:

There is no relationship between classes across shows. For example, Class 35 in Wollongong may be for the baddest red fruit bomb, whilst the same class number in Sydney may be for Rieslings from 2008-2009.

 

As this list shows, there are roughly fifty local Australian wine shows this year. The majority of these are agricultural shows. As to ranking them in order of importance, that would depend on who you ask. As a rule of thumb, the NWS is at the top of the list, followed by Sydney and Melbourne vying for next place. The likes of the Queensland Royal, Adelaide and Perth would be next, and then the big regionals, with the smaller regionals bringing up the rear. Some of the small shows, like the Winewise Small Vigneron Award and the Great Australian Shiraz Challenge, are also well respected.

 

From Erl Happ: Thursday 26 February

Ric, thanks for your newsletter. I enjoy your spirit of fierce independence. I don’t read wine journalism as a rule but I do read your stuff.
 

As a producer I have had good success at wine shows but I am regularly disappointed in what happens there. The opportunity to ‘improve the breed’ is simply not part of the design criteria and this aspect ends up being very badly compromised. The current setup caters for big egos and an in-crowd. Having winemakers doing the judging is just incestuous.

Every business that is really serious about producing good product should be interested in how the consumer sees the product. I have long held the view that it should be possible to run a consumer judged wine show and have a lot of fun doing it. It doesn’t have to be a judging of three thousand wines in the space of three or four days. It can be designed to be judge friendly. Some of the things that I would build into such a process are:

• Never more than 6 wines in a group and 12 or 18 wines (if you are a hardy soul) at a session.
• The consumer simply has to identify his top two or three wines out of the six. Perhaps just one. Perhaps he could write a few words on the reason for his choice of the best and worst wine.
• Each wine is presented for judging in a different group of five others each time it is judged, to avoid contextual effects.
• Each wine gets tasted by at least half a dozen consumers, preferably more, so as to get a decent sample of the diverse palates of the population at large. The design is set up by a statistician so as to be sure that a fair sample of the public palate is included and results are repeatable.
• The show is designed to result in good feedback to producers … and many would probably be prepared to pay for this. So, the information that is generated must be compiled and it must get back to producers.
• The consumers pay for the privilege of being involved and having fun.
• The consumers who manage to do a good job of reflecting the public palate are rewarded.
• The successful wines are made available via the local trade as a dozen for sale to consumers at large, as a post show activity.
• With a bit of consultation it might be possible to get a few bums on seats in local restaurants as part of the process.
• I think that there could be a good business in running such a judging if some bright spark were to take it up. There are several possible revenue streams.
    

 


Copyright © Ric Einstein 2009

 
