Piers Corbyn has been a vocal critic of the forecasts produced by the Met Office, both in terms of accuracy and underlying theory.
One difficult problem in evaluating Piers’ claims is that he has only talked in very broad terms about his methods – they haven’t been subject to independent detailed scrutiny.
After some discussion on Twitter with Richard Betts and others, it seems to me that even without access to Piers’ method, if we can fairly evaluate the forecasts he produces and they show significant skill, that’s worth further attention; commercial confidentiality may become an obstacle further down the line, but we aren’t there yet.
This post is by way of setting the scene – I’m only concerned with the overall experience of using WeatherAction, and the overall structure of the report.
Also, before going further I want to stipulate:
- I accept that both WeatherAction and the Met Office are honestly pursuing their goals; that is, both parties believe the skill they demonstrate is significant, and both believe the theories that underlie their predictions are grounded in reality and not superstition.
- Accepting that the parties are acting honestly is not the same as accepting that they are correct in their belief, however sincerely held.
So as a first step, I bought a one-off advance report for the UK for May. Fortunately Piers has made a series of what appear to be very firm predictions that look easy to evaluate, and that stand in some contrast to Met Office predictions made at the same time. Although this is effectively only one datapoint, it at least has the potential to be an interesting one.
The reports are secured through WeatherAction’s website, and this particular report cost £15.00 for a “30 day UK and Ireland Single Forecast”.
Buying a report gives you access to a members area, with a number of historical reports to browse and download. These seemed to be a mixture: advance reports, accompanied by expansions and clarifications added as the period in question arrived.
The naming convention for historical reports is slightly inconsistent, but they are presented in chronological order.
I wasn’t keeping a close watch, but the filename indicates the May report was placed online on 29th April, which means it could have been tweaked right up to that point.
The report itself is a 6 page PDF, and the overall structure seems to be similar looking back through historical reports.
This is the first time I’ve examined this (or indeed any) detailed long-term forecast, so I have only a poor sense of how readable I should expect it to be on a first pass.
What I can say is that the format is typical of Piers’ other output – on Twitter or his forum – apparently dense with information and occasionally lacking some polish in the design.
The document is broadly arranged as follows:
Update: I’ve added in representative portions of one of Piers’ publicly available archive reports.
Page 1 – “Headline summary & essential weather type development”
The left-hand pane is in the typical newspaper inverted-pyramid style, with punchier information at the top for the overall period, followed by a slightly more detailed drill-down into Temperature, Rain and Sun, and then a summary of the important points for 8 time periods of a few days each.
Confidence is expressed in the order Temperature, Rainfall and Sun, with Temperature being the measure in which the report claims the most confidence.
The right-hand pane is more editorial in nature, with references to third parties and some opinion on the relative accuracy of other forecasters. This is repeated elsewhere in the report and, to me, felt unnecessary and something of a distraction.
Pages 2 – 4 – “Most likely detailed weather”
A similar structure for each page.
A total of 8 panels arranged across 3 pages, each panel representing a different period of a few days throughout the forecast.
Each panel includes:
- a brief description of the main conditions
- a confidence level for that period
- a small map of the UK showing likely distribution of rainfall, temperature and sea conditions
- separate consideration of Winds, Temps and Solar Factors
- a “Likely possible weather map scenario” – a discussion of various depressions, fronts etc.
In each of the sections above there are one or more very specific predictions, as well as more general guidance.
The way that confidence in each of these periods is expressed makes it difficult (for me at least) to evaluate how impressed I should be with the accuracy claimed. This isn’t a dig at the report, and is likely at least in part a reflection of my lack of familiarity.
The report claims that the periods involved are accurate to “+/- one day” and that “6 of the 8” should be “basically correct”.
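To get a feel for how strong the “6 of 8” claim is, it helps to ask how often it would come out true by chance under some assumed per-period success rate. A minimal sketch in Python (the 75% rate below is an assumption for illustration, not a figure taken from the report):

```python
from math import comb

def prob_at_least(n, k, p):
    """Binomial tail: probability of at least k successes in n
    independent trials, each with success probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# If each of the 8 periods independently had a 75% chance of being
# "basically correct", how often would 6 or more verify?
print(round(prob_at_least(8, 6, 0.75), 3))  # 0.679
```

In other words, under that (assumed) per-period rate, hitting “6 of 8” happens roughly two times in three, which is why the claim alone is hard to interpret without a longer track record.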
Each panel has an accompanying confidence measure for its period. This is expressed as a percentage, where the quoted figure is the confidence that the prediction is “essentially right”, with the remainder being the chance that the prediction will be “unhelpful”.
So an expressed confidence level of 75% would mean a 75% chance that the prediction is “essentially right” and a 25% chance that it is “unhelpful”.
Taking these statements of confidence together, I find it difficult to see how the skill could be accurately gauged in an objective way, but again this is unfamiliar territory.
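For what it’s worth, probabilistic forecasts of this shape can in principle be scored objectively with something like the Brier score, which measures the mean squared gap between stated probabilities and binary outcomes. A minimal sketch, with entirely hypothetical numbers (8 panels at 75% confidence, 6 of which verified):

```python
def brier_score(probs, outcomes):
    """Mean squared difference between forecast probabilities and
    binary outcomes (1 = verified, 0 = not). Lower is better;
    0 is perfect, 0.25 is what always saying 50% would score."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

probs = [0.75] * 8                      # hypothetical stated confidences
outcomes = [1, 1, 1, 1, 1, 1, 0, 0]     # hypothetical verification results
print(round(brier_score(probs, outcomes), 4))  # 0.1875
```

The harder part, of course, is deciding what counts as “essentially right” for each panel in the first place; the scoring is only as objective as the verification rule behind it.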
Page 4 – Easy Look Forecast
This is more or less in the form of a time-series graph, where various deviations from an average are shown for Rain, Temperature and Sun.
One or more lines running across the chart track the changes in these values throughout May, with thin lines representing specific regions (which may differ depending on the measure being graphed) and a thicker line effectively representing everywhere else in the UK.
As well as these graphs, short descriptive words or phrases are used to emphasise notable weather features for periods as short as 2 days.
At first blush this looks as if it should be easy to evaluate in terms of skill, but again I found it difficult to work out how I’d go about doing it.
The graph header says it is “Normally accurate to 1 day.”, but then qualifies this by saying they are not precise predictions and are just likely values “around” the dates shown. This seems to introduce a fair bit of leeway that again makes objective evaluation difficult.
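Mechanically, the ±1 day leeway is easy enough to encode; the practical difficulty is that a feature forecast for even a 2-day period then verifies over a 4-day window. A hypothetical sketch (the dates are made up for illustration):

```python
from datetime import date

def matches(forecast_day, observed_day, tolerance_days=1):
    """True if the observed event falls within +/- tolerance_days
    of the forecast date."""
    return abs((observed_day - forecast_day).days) <= tolerance_days

# A feature forecast for 14 May with +/- 1 day leeway verifies
# against anything observed from 13 to 15 May.
print(matches(date(2012, 5, 14), date(2012, 5, 15)))  # True
print(matches(date(2012, 5, 14), date(2012, 5, 17)))  # False
```

The wider the window, the more likely a match by chance, so any objective evaluation would need to account for how often the forecast feature occurs anyway.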
Page 5 – Deviations
This is a summary showing deviations from a historically calculated normal for Precipitation, Temperature and Sunshine, presented on a small map and split by very broad region.
No confidence values are stipulated here, except for the report’s relative confidence in Precipitation, Temperature and Sunshine (not shown in this older example below).
There’s a lot of information to assimilate in the report, and I found it difficult to parse in places, though that’s on only a few readings. Overall, I struggle to see how the skill could be easily evaluated.