How accurate are Canadian polls?

Philippe J. Fournier: A 338Canada analysis shows the hits and misses of the country’s top polling firms—and the importance of polling aggregates

Justin Trudeau, Thomas Mulcair and Stephen Harper speak during a leaders’ debate in Calgary on Sept. 17, 2015. (Ben Nelms/Bloomberg via Getty Images)

A grand total of 142 federal polls have been published since Jan. 1, 2017. For the general public, this much data can be a source of great confusion and, without proper and unbiased analysis, can muddle the narrative.

We are all prone to confirmation bias to varying degrees, after all.

Regular readers of 338 know that the model takes into consideration every scientific poll published by professional firms. Each of these polls is then weighted according to its sample size and field date, and by the past performance of said firms.

Another key aspect of the 338 model is its detection of outliers: when a poll contains numbers that stray too far from the current averages, its weight will be automatically reduced. However, if said poll is later confirmed by other sources (meaning the poll was perhaps a precursor of a new trend and not an outlier per se), the poll will be given its normal weight back.
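The outlier-handling step described above can be sketched roughly as follows. This is a minimal illustration only: the distance threshold and weight penalty are assumptions for the example, not the 338 model's actual (unpublished) parameters.

```python
def poll_weight(poll_value, current_average, base_weight,
                outlier_threshold=3.0, penalty=0.5):
    """Down-weight a poll whose result strays too far from the running average.

    poll_value and current_average are a party's vote share in percentage
    points. The threshold (3 points) and penalty (half weight) are
    illustrative values, not the 338 model's.
    """
    if abs(poll_value - current_average) > outlier_threshold:
        # Possible outlier: reduce its weight until other polls confirm it.
        return base_weight * penalty
    # Within the expected range (or later confirmed): full weight.
    return base_weight
```

If later polls move the average toward the flagged poll, re-running this check against the new average restores the poll's normal weight, which is the "precursor of a new trend" case described above.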

With this in mind, here’s an interesting exercise: how far (or how close) do polling firms stand from the average of recent polling in Canada?

Obviously, there are many ways to calculate such things, and the devil, as usual, lies in the details. For the purpose of comparing polls to the overall average, each firm is set on an equal footing in the weighting, and the “outlier detector” has been turned off. We compare each poll with the average from the day of its middle field date. Results are then tabulated, and the average “skew” is calculated for each firm.

These calculations will include the following polls:

  • 29 polls from Nanos Research (one every four weeks, so as not to use duplicate data from its rolling polls)
  • 21 polls from Forum Research
  • 19 polls from Campaign Research
  • 18 polls from Abacus Data
  • 17 polls from Ipsos
  • 9 polls from Mainstreet Research
  • 9 polls from Angus Reid
  • 9 polls from Léger
  • 8 polls from Innovative Research
  • 3 polls from EKOS
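
The skew calculation described above can be sketched like this. The data below are made up purely for illustration; the real exercise uses the 142 polls listed here and the 338Canada daily average.

```python
from statistics import mean

def firm_skews(polls, average_on):
    """Average signed deviation ('skew') per firm, for one party.

    polls: list of (firm, middle_field_date, party_share) tuples,
           shares in percentage points.
    average_on: function mapping a date to the aggregate average that day.
    """
    deviations = {}
    for firm, date, share in polls:
        # Compare each poll to the average on its middle field date.
        deviations.setdefault(firm, []).append(share - average_on(date))
    # A positive skew means the firm runs higher than the average.
    return {firm: mean(devs) for firm, devs in deviations.items()}

# Illustrative data only -- not real poll numbers.
polls = [("Firm A", "2018-03-01", 37.0),
         ("Firm A", "2018-06-01", 39.0),
         ("Firm B", "2018-03-01", 33.0)]
skews = firm_skews(polls, lambda d: 36.0)  # flat average, for the example
```

Running this per party gives each firm a coordinate (Liberal skew, Conservative skew), which is how the scatter plots below place firms around the origin.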

Here are the results for the Liberals (x-axis) and the Conservatives (y-axis) within a three-point radius of the 338Canada poll average (set at the origin):

On the Liberal axis (horizontal), we notice many of these polling firms generally agree: Mainstreet Research, Léger, Ipsos, Campaign Research, Nanos, Abacus Data and Innovative Research all stand within about a point of the 338 Liberal average. Angus Reid, just outside the two-point radius, underestimates the Liberals by about two and a half points compared to its competitors.

On the Conservative axis (vertical), none of these polling firms show a skew higher or lower than two points for the CPC on average since 2017. Innovative Research has the Conservatives 1.8 points lower than the poll average; Angus Reid has the CPC 1.3 points higher than the poll average.

However, when we zoom out from the centre of this graph to a six-point radius, here is what we see:

EKOS lies just outside of the three-point radius. Even though EKOS’ Conservative numbers are generally quite close to the average (within less than one point), its Liberal score is more than three points lower than the average. (However, to be fair: EKOS has only published three federal polls since January 2017.)

However, the real outsider among these firms remains Forum Research, whose average skew stands just outside of the five-point radius. Forum has had, over the past two years, the Conservatives more than five points higher than the 338 average. In fact, during all of 2018, when most polls showed the Liberals either ahead or tied for the better part of the year, Forum regularly had the Conservatives on top by margins of nine points or more.

The graph above depicts all the federal polls published in 2018. The eight Forum polls are indicated by dark arrows.

You may notice a pattern.

* * *

Election results make or break pollsters’ reputations. Let’s run a similar exercise, but this time the centre of the graph will represent the 2015 federal election results. How did these same pollsters fare in 2015? We use the very last poll published by these firms in the waning days of the 2015 campaign.

Here are the firms within a three-point radius:


The best final polls were conducted by Forum, Mainstreet, Nanos and Ipsos, all within a two-point radius (Liberal and Conservative numbers only). Both Léger and Innovative also did very well with similar numbers just outside of the two-point radius.

Once again, if we zoom out from the centre, we notice EKOS and Angus Reid, which both had the Conservative score within less than a point, but underestimated the Liberals significantly (by four and four and a half points, respectively).



These two polling firms underestimated the Liberals four years ago, and have measured the LPC lower than others since 2017.

But the enigma remains Forum Research. Its recent federal numbers are completely off the chart compared to other firms, yet in recent elections the very last Forum poll often hit the mark accurately.

There are many ways we could try to explain this, but last year’s Ontario election was very telling of Forum’s overall tendencies: on May 9, 2018, Forum measured Doug Ford’s Progressive Conservatives ahead by seven points over the NDP; on May 23, Forum suddenly had a 14-point lead (!!) for the NDP. A mere six days later, on May 29, Forum put the PCs back in the lead by four points.

No other firm showed this much swing from poll to poll.

* * *

The fact that polling firms have a “House Effect” that appears to favour one party or another by a few points does not mean there is intentional bias from those firms. Polling firms use different methods and samples. Their questionnaire wording may differ slightly from one to another. Above all, they are all chasing a moving target: the mood of an electorate in which many individuals choose not to vote (or do not bother to). The 2015 federal election turnout was 68.2 per cent, meaning that roughly one-third of eligible voters did not vote.

This makes the pollsters’ task that much more difficult. And, above all, this is precisely why we need independent analysis and polling aggregates.