Jeb Dunnuck really loves 2018 Napa

I think there are a lot of 100s under $250. Off the top of my head, there’s Myriad Dr. Crane Elysian and Maybach Materium. I’ve only seen the ones he posts on Instagram, but there are a good number of 100- and 99-point wines under $250.

I agree with this approach and went deeper with Napa Valley bottlings from Caterwaul, Pott, Myriad, DI CO, Teeter Totter, etc. My big (for me) purchases were RM’s Herb Lamb and William & Mary.

Given the surprising number of folks here who apparently aren’t batting an eye at 4.3% of 1,600 reviews being scored 99 or 100, I decided to take a precise look at my own scores in CT. About two-thirds of the time I don’t score the wine at all, so the following data covers only the 1,093 scores I have in CT:

Points | # of ratings | % of all ratings
100    |  1           | 0.09%
 99    |  1           | 0.09%
 98    |  3           | 0.27%
 97    |  4           | 0.37%
 96    | 11           | 1.01%


I would be interested in seeing the same data set from folks who don’t think Jeb’s 2018 Napa data set is absurdly skewed towards the top end.

My numbers benefit from a healthy dose of selection bias; Jeb’s, presumably, do not. Nonetheless, of the 1,600 wines Jeb scored for his 2018 Napa Valley report, he gave 1.875% of them perfect scores; that’s higher than the share of my scores at 100, 99, 98, 97, and 96 combined (20 of 1,093, or 1.83%). And, again, Jeb tasted wines from one appellation in one vintage, and (presumably) without the benefit of tasting only wines selected on the belief he would like them.
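For anyone who wants to check the math, here is a quick sketch verifying the percentages quoted above. (Python is just my choice of calculator here; the 30-wine count for Jeb’s 100s follows from 1.875% of 1,600, and the per-score CT breakdown is copied from the table earlier in the thread.)

```python
# CT breakdown of my own top scores, out of 1,093 total scored wines
ct_total = 1093
ct_top_scores = {100: 1, 99: 1, 98: 3, 97: 4, 96: 11}

ct_top_share = sum(ct_top_scores.values()) / ct_total * 100

# Jeb's 2018 Napa report: 1,600 wines reviewed, 30 perfect scores
jeb_total = 1600
jeb_hundreds = 30
jeb_hundred_share = jeb_hundreds / jeb_total * 100

print(f"My 96-100 share: {ct_top_share:.2f}%")        # prints 1.83%
print(f"Jeb's 100-pt share: {jeb_hundred_share:.3f}%")  # prints 1.875%
```

So Jeb’s rate of perfect scores alone (1.875%) does indeed edge out the combined 96-and-up share of the CT data (1.83%), which is the comparison being made above.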

And, no, I don’t buy the whole “winemaking is better now” argument that is frequently used to justify ridiculous score inflation; the wines I drink and score benefit from that same “better winemaking.”

And I’m still waiting for any evidence that proves, or even suggests, a critic’s livelihood is affected by the number, or percentage, of 100s or 99s they throw out, or by their average score given. I understand the theory that a critic will make more money by giving out higher scores, but is that theory actually supported by any evidence? I’ve yet to see it, and I bet we can all think of some critics who don’t engage in (this much, if any) score inflation and are doing just fine.

Stoked on his review of Spottswoode. RP also gave a 100pts. Looking forward to AG’s review. Could it go triple hundo?? [popcorn.gif]

I haven’t given a 100 pt score, but I feel like I’d have much more confidence to do so if I tasted 12,000 or so wines a year like Jeb.


I’m not sure that’s entirely true. I think plenty of serious critics revisit wines multiple times. Robert Parker used to taste Bordeaux from barrel twice back in the day. And I taste wines more than once from barrel and then revisit from bottle in Burgundy reasonably often. It’s just due diligence, and very instructive.

Hmm. I’d be curious whether William has ever stroked 100 points on a wine.

I’m like Brian, I’ve only done it for a very select few wines: 1989 Petrus (twice), 1982 and 1986 Mouton and 1989 Haut Brion. Maybe the 1982 La Mish that I had a couple months ago. Now mind you, I don’t taste anywhere near the number of wines that the pros taste. I’d be wiped out. And I don’t spit. Bad combo.

I think he has for Madeira, but I would let him verify.

I wonder if he is higher than other critics if you looked at it on a bell curve. It’d be interesting to see the % distribution of the major critics’ scores, wine by wine. I always keep in mind that he isn’t reviewing grocery-store wines, so for the scores to skew high-ish makes sense. 2-ish percent at 100 points, considering the vintage, doesn’t seem too far off.

Hope they don’t jack the price up this year. It’s been steady the past couple years after rocketing in price before that.

When he was reviewing California for Decanter, William rated '13 Togni and '13 Monte Bello 100 points.

I just looked and I see I’ve given 31 since I joined TWA in late 2017! About one third of those have been mature wines. I also gave a few when I was at Decanter, including to the 2015 G-Max and the 1969 Chappellet. Even though that might seem a bit more conservative than Jeb, I think we would both absolutely agree that the scale stops at 100 and that it makes sense to use the entire range. (It seems to me this has been complicated by people referring to 100-point wines as “perfect”, which opens a philosophical can of worms and raises questions such as whether one wine can be more perfect than another; having spent too much time engaged with such matters during my academic career, these are questions I am anxious to avoid.)

In my case, my reason for being (I think) quite parsimonious is mostly to do with differentiating at the top end of the scale: I want to be able to communicate the difference between, say, d’Auvenay’s Criots-Bâtard-Montrachet and Chevalier-Montrachet and Bâtard-Montrachet, because even if all three could, quite frankly, score 100 on most scales, if there is a meaningful difference in quality (rather than just style) I want to be able to indicate that. Given that such shadings are so fine, I have found that writing about Burgundy practically obliges me to score quite conservatively.

What I do try to do is not score by appellation, because that is the other big challenge in Burgundy: the AOC hierarchy is so powerful and pervasive, it’s hard not to just metabolize it. One of the big achievements of the 100-point system as employed by Robert Parker at the beginning of his career was to really disrupt the hierarchies of the 1855 classification in Bordeaux; and I think something similar is possible in Burgundy, whereby we can break some of the glass ceilings that have historically been imposed on so-called “lesser appellations”, and which have arguably proven even harder to penetrate than the glass ceilings once imposed on, e.g., a Médoc fifth growth. To that end, I haven’t been afraid to give, e.g., 100 points to a wine from the Mâconnais. So, even if we often hear a lot about the negatives of scoring, I think it does have this positive side of validating hard work and potentially elevating how whole regions or sub-regions (or grape varieties, or whatever) are perceived in a very material manner.

To tie this all up (and apologies to anyone who didn’t want to read another wine critic talking mainly about another wine region in this thread), I think what’s clear from Jeb’s reviews, beyond his passion for the wines in question and the region as a whole, is that he really believes in the 2018 vintage in Napa Valley; so perhaps rather than debating that, or the number of high scores, in the abstract, we ought to actually taste some of the wines first!


I’m not sure that’s entirely true. I think plenty of serious critics revisit wines multiple times. Robert Parker used to taste Bordeaux from barrel twice back in the day. And I taste wines more than once from barrel and then revisit from bottle in Burgundy reasonably often. It’s just due diligence, and very instructive.

I know of barrel tastings. If you’re revisiting these wines blind multiple times, I would promote that. I’d rather subscribe to someone putting in the extra work than someone just going down the line. Good to know! [cheers.gif]

William,

I’d be curious to know whether you believe the 100-point scale is an absolute scale or a relative scale. If absolute, what does that mean, exactly? And if relative, relative to what? Stated differently, if you give a Niellon Chevy 95 points, am I to take that rating as one that applies within the context of only Chevys? Only Puligny or Chassagne? All Burgundy whites? All French whites? All whites? Moreover, is the rating only good for where you might be now, i.e., only good for a certain time period? Critics regularly admit that grade inflation naturally occurs over time, and it isn’t all, or even mostly, due to so-called improved winemaking. So a 95-point wine from ten years ago might be today’s 99-point wine. And so you see the problem. When one of your associated colleagues assigns, again by example, a Newton Unfiltered Chardonnay 95 points, how am I to interpret that? That the Niellon and Newton are largely equal in quality? That each rating only applies within its own context, which is what? All to say, I think this business of assigning numbers to wines by multiple reviewers has become so fraught with problems that it has, at least for me, become an increasingly diminished rubric. Others have pointed out additional pitfalls in this thread.

This is the sort of question which I normally try to evade by saying (borrowing from Clive Coates) that, being British, my methodology consists in scoring out of 20 and then multiplying by five.

Happy to discuss further in DMs if you like, but I am loath to derail this thread by getting into a lengthy tangent right here!

It’s fine, no need to take it private. But I don’t think that publicly addressing the questions I raise would necessarily derail this thread. The fact that Dunnuck is awarding such high scores to so many 2018 Napa wines may speak to the quality of the vintage, as you have noted. But it also obviously raises other questions of the type I outlined, and to which others in the thread have alluded. It wouldn’t hurt for any critic who uses this scale to answer them publicly, or even to convert the answers into a preface that accompanies reviews. And, to my way of thinking, the only difference between 1-20 and 1-100 is that the latter offers the opportunity for a wider range of delineation, which critics have largely declined to use.


You know you really have to stop posting so much common sense and reasoning that is so clear. You will be disrupting the normal flow of conversation here at Berserkers.


Well, at TWA for example we have a rubric that says very explicitly that wines are scored in the context of their peer group…

And I think all scales have a similar problem, if that’s what it is: one doesn’t see many 75/100 scores, but nor does one see many 10/20 scores, either.

I see where you and others are coming from. You can’t get over the high number of 99- and 100-point scores when you compare it with your own scoring methodology.

There are different interpretations of the scoring/rating scale. It basically all comes down to whether you see 100 points as a plateau a wine has to reach, with room for many wines (and hence a lot of 100-point ratings), or whether you see a 100-point score as the absolute pinnacle that only one wine can ever reach (the single best wine ever).

Think of Burgundy Al on CT, with his 31k notes and two 99s as his best scores ever, as one extreme, with Jeb and his 30 Napa 100s being on the other side of the spectrum (though probably not the extreme either). You are obviously much closer to Burgundy Al’s definition of the rating scale, but it’s a misperception that Jeb’s (and many others’) interpretation of the scoring/rating scale is wrong just because it’s different from yours.