Perspectives on Polling – Part 4 of a 4-part series: Where do we go from here?

Polling: Where do we go from here?

Based on articles published in the Globe and Mail and on the Market Research and Intelligence Association (MRIA) blog, this extended 4-part series looks at what’s wrong with political polling in Canada (and elsewhere) and asserts that it can and must be fixed. Drawing on his own experience in both the political and market research arenas, and from his interviews with thought leaders and pollsters from across Canada and the US, Brian F. Singh critiques conventional polling methods that are perpetuated by pollsters and passed on to the public by the media, and concludes with a 5-point call to action for the market research industry.

*  *  *

Part 4 – Where do we go from here?

Just throwing money at this problem is not a solution. The money, frankly, is not there. Costs matter. We are in a new world – low voter turnouts, multiple communication technologies, social media platforms, and the use by parties of geo-demographic targeting and sophisticated voter identification methods to find supporters. These have dramatically affected the political polling business, and pollsters have been slow to adjust and to evolve their skills.

This is a cultural problem – one involving our industry, the media, and the public engagement that is connected to, and part of, our political ecosystem. Grenier is pointed in his summation:

“I think the next time around there will be a lot more reticence to give more than a passing kind of mention of what the poll is showing, rather than using it as a basis for entire articles. I think once an election comes around where the polls do well, and there will be, some of that trust will be regained but it won’t be the same for at least a couple of years I would say.”

Polling is evolving, and has to continue to evolve. The horse-race dimension is damaging our reputation and we are losing the public’s trust. Corporate leaders are asking questions about the accuracy and quality of our work. Thoughtful, more transparent polling is what is being asked of the industry.

As outlined above, we face numerous new and emerging realities. And it is only going to get more challenging. Emerging trends will be amplified: new players will always enter the market seeking to build their reputation by giving away their findings, and aggregators (serving as third-party evaluators) will likely become our spokespersons. However, I believe that this is a good time to reflect and set a firm course of action. This can also be MRIA’s time to shine and provide leadership on a very public issue.

My apologies for the Nate Silver love fest. While he is a free-rider in his use of polls, he has made them sexy in the minds of the public. And he has the platform of the NY Times, his book and speaking tour to do it.

Based on my review of practices in other jurisdictions, feedback from interviews with thought leaders, and from my own observations, I propose the following points for our association, and for the polling ecosystem, to consider.

1. Focus on quality control: I believe that we need to focus on diligence, transparency and disclosure. While many are already diligent, we need to pay greater attention to stratified and structured samples. And we need to be transparent about how data collection and analysis were undertaken. Further, a full disclosure of the data collection – including sample sources and field protocols, weighting schemes, and whether the poll was part of an omnibus survey or was commissioned – needs to be posted. Datasets should also be available for review and, ideally, subject to ongoing academic review. Integral to this is more nuanced polling – moving beyond the horse race and building stronger data integrity.

2. Media disclosure: MRIA needs to immediately establish more stringent reporting standards and work with Canada’s print, electronic and digital media outlets to adopt and enforce them. There are many examples out there – grab them, take the best ideas, and make sure the media adopt them, too. And don’t let up – keep posting polling articles and noting what was disclosed.

3. Oversight: In times of crisis, other jurisdictions have undertaken inquiries into their polling industries. While MRIA may lack the clout to do this, we can provide leadership. While it would be great to set up a Canadian Association for Public Opinion Research (CAPOR?), it might be more realistic to become a national chapter of AAPOR. Adoption of their protocols, and dissemination of their education resources within a Canadian context, would be a positive step.

4. Establish an online information database to inform polling: I believe the onus is on us to collect, review and triangulate as much data as we can before we design a poll. MRIA could consider establishing a “pay to play” central database that pulls in and organizes political data from social media platforms – e.g., hashtags (such as #cdnpoli, #bcpoli, #ableg, #onvote), blogs, articles (e.g., a media clipping service) and other types of commentary – that can be analyzed to surface emerging issues and trends and inform polling questions (a toy sketch of the kind of tallying involved appears after this list). Establishing a forum to discuss how these data are used in questionnaires and methodologies could serve as a valuable complement. A potential partnership with The Hill Times? Could this be an opportunity for app development? Canadian political geeks will be all over it.

5. A real experiment: While this may be a call to return to first principles, I think an in-depth project with real-time research on research is required. I propose that our industry look at the 2015 federal election and work with media and academics to collaborate on and co-create an introspective, future-oriented national polling project. It can be multi-modal but, more importantly, we can build an inventory of insight and dialogue (using YouTube, Google Hangouts and podcasts) on preparing for and polling during an election. Aspects such as A/B testing, broad populations versus voting populations, and analysis of swing ridings only could be tested. Aggregators, who benefit from our work, could be brought in to provide another critical perspective. MRIA should coordinate this.
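To make point 4 a little more concrete, here is a toy sketch of the kind of hashtag tallying such a database might perform once posts have been collected. The posts, tags and counts below are purely illustrative, and the collection step itself (pulling content from the platforms) is out of scope here:

```python
# Toy sketch: tally tracked political hashtags across an already-collected
# corpus of posts. The posts below are invented for illustration only.
import re
from collections import Counter

TRACKED = {"#cdnpoli", "#bcpoli", "#ableg", "#onvote"}

posts = [
    "Interesting new poll on the federal race #cdnpoli",
    "Debate night reaction thread #bcpoli #cdnpoli",
    "Question period highlights #ableg",
]

counts = Counter(
    tag.lower()
    for post in posts
    for tag in re.findall(r"#\w+", post)  # pull out every hashtag in the post
    if tag.lower() in TRACKED
)

print(counts.most_common())  # e.g. [('#cdnpoli', 2), ('#bcpoli', 1), ('#ableg', 1)]
```

A real system would of course run continuously over far larger volumes and feed dashboards rather than a print statement, but the core operation – normalizing and counting mentions over time – is this simple.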

While the emergent media/pollster business model requires careful examination, the current business model of the media overrides any quick resolution of the “fast and cheap” polling problem. I have stopped short of advocating for only publishing polls that are paid for, or for labeling free poll results as “advertorial.” While this is going to take a lot of money (I am not delusional about this), it is our reputation and trust in our industry that is at stake.

MRIA should approach all levels of government, foundations and think tanks to seek out the funding to pursue these recommendations (including possible use of SR&ED and IRAP grants). The rest of the funds can come from industry, with substantial sweat equity, and finally from the media. Ultimately, the project will be transformative and will serve as a LEAN review of our industry and ecosystem.

Poor polling is a symptom, not a cause, of weak voter turnout. While voter turnout will continue to plague our elections, at least we can begin to put to rest any problems and beliefs associated with suspect polling and its subsequent reporting.

* * *

Helpful resources:

American Association for Public Opinion Research: http://www.aapor.org/Home.htm

Associated Press Stylebook on polls and surveys: http://ralphehanson.com/blog/ap_poll.html

The New York Times Polling Standards: http://www.nytimes.com/ref/us/politics/10_polling_standards.html

BBC Opinion Polls, Surveys, Questionnaires, Votes and Straw Polls – Guidance in Full: http://www.bbc.co.uk/editorialguidelines/page/guidance-polls-surveys-full

Nate Silver – Which Polls Fared Best (and Worst) in the 2012 Presidential Race: http://fivethirtyeight.blogs.nytimes.com/2012/11/10/which-polls-fared-best-and-worst-in-the-2012-presidential-race/

CBC, The Current – The Power of Polls (May 16): http://www.cbc.ca/thecurrent/episode/2013/05/16/the-power-of-polls/

Perspectives on Polling: Part 3 of a 4-part series – Calgary Centre: An Odd Case of Public Engagement

Calgary, Alberta


*  *  *

Part 3 – Calgary Centre: An Odd Case of Public Engagement

I reside in Calgary, where there was a by-election last November in the Calgary Centre riding. Given the restricted geography, the only polls that were being done were telephone-based – via either live operators or IVR, with an emphasis on the latter.

The initial IVR polls indicated a strong position for the Conservative Party of Canada candidate. My sense was that, given the nature of the candidate and of the riding, the polls might be overstating this candidate’s strength; and given that this was a by-election, turnout was likely to be low, reducing the predictive power of the polls. After the writ was dropped and we got closer to election day, more IVR polls were conducted. With this being an isolated riding, as the IVR polls got underway the parties started to send out messages via social networks asking their supporters to answer their phones.

The results were intriguing: as more people became aware that polling was being undertaken, they were more inclined to respond. What is fascinating is that the IVR polls started to perform really well; in fact, the last one released by Forum Research practically called the result, including the slight lead for the Conservative candidate on election day.

Thus, awareness of the polls and of their importance within a particular jurisdiction gave the public some vested interest in responding. While the stated voter intention rate was high, the actual distribution was more realistic than anything else I have seen in IVR polling within Calgary.

This was a one-off event, but an interesting one. The underlying consideration here is: if voters are aware of when polling is being undertaken, regardless of their inclination, could it improve quality and response rates?

The Matter of Calibration

In the future, IVR, online and mobile methodologies will likely predominate, given their lower cost. However, these do not address the issue of calibration: how do we establish methods that better represent the population and generate higher response rates? Part of the challenge is to calibrate against the population over the course of the election cycle, in order to better grasp the reality of voting behaviour, identify who is voting, and then zero in on that population to better grasp their intentions.

Yes, this is a modeling consideration, but we need to calibrate with known numbers – such as mobile phone ownership, vehicle ownership and home ownership – firm benchmarks that we already use to assess the quality of the data. This will put us in a better position for the future as we evolve methodologies and do more research on research in future polls.
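As a rough illustration of what calibrating to known numbers can look like, the sketch below weights a toy sample so that a single benchmark (a hypothetical mobile phone ownership rate, chosen only for illustration) matches the population; a real calibration would use several benchmarks and a raking procedure:

```python
# Minimal sketch of calibration weighting against one known benchmark.
# The benchmark figure, field names and tiny sample are illustrative only.
from collections import Counter

respondents = [
    {"id": 1, "owns_mobile": True,  "vote": "A"},
    {"id": 2, "owns_mobile": True,  "vote": "B"},
    {"id": 3, "owns_mobile": False, "vote": "B"},
    # ... more respondents in a real poll
]

population_share = {True: 0.80, False: 0.20}  # assumed known ownership benchmark

# Shares of each category actually observed in the sample
counts = Counter(r["owns_mobile"] for r in respondents)
sample_share = {k: v / len(respondents) for k, v in counts.items()}

# Weight each respondent by population share / sample share for their category
for r in respondents:
    r["weight"] = population_share[r["owns_mobile"]] / sample_share[r["owns_mobile"]]

# Weighted vote share for party "A"
weighted_a = sum(r["weight"] for r in respondents if r["vote"] == "A")
total_weight = sum(r["weight"] for r in respondents)
print(round(weighted_a / total_weight, 3))
```

The point is not the arithmetic, which is trivial, but the discipline: anchoring the sample to quantities we actually know before we draw conclusions from quantities we do not.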

The Rising Role of Aggregators

For this article I reviewed Nate Silver’s analysis of polls, and I also interviewed Éric Grenier. These two individuals are aggregators of surveys. They rely upon the data, and the quality of that data, to do their jobs. The question arises: are they garnering more attention than the pollsters doing the polls themselves? This is another proverbial ‘train that has left the station.’ The reason they have gained more media clout is that they are agnostic across polling methods and have factored that into their analyses – especially in their assessment of the quality of the polls themselves. They have become our de facto third-party polling evaluators.

This also presents a problem for them: to keep doing their jobs they are dependent upon our work; moreover, they are not paying for that work, and we gain little reputationally other than through the validation they provide for the polls our industry releases. Henning reflects on this:

“Nate Silver, Huffington Post, Talking Points Memo, and Real Clear Politics. There are four aggregators right there that probably get more attention combined than traditional polls and for good reasons I think. It does create an interesting Tragedy of the Commons where why should I do the hard work and do the increasingly expensive work to do it right, in terms of cell phone sampling and everything, if I am just going to end up subsumed into somebody else’s model.”

Is this a major problem for the industry? My belief is that it isn’t, but I do feel that we need to collaborate with such individuals – to bring them to the table and get feedback based on their use of the data we generate.

The Importance of Disclosure and Transparency

Darrell Bricker is blunt about the state of our trade:

“This is one of the problems I also have with Canadian pollsters – we kind of get obsessed about what is happening here and we don’t necessarily learn from what is happening in other countries. I mean these aren’t new phenomena or new issues.”

Based on the experiences of other jurisdictions, there is much to be learned, especially from a publishing standpoint. There are the Associated Press Stylebook guidance on polls and surveys, the New York Times polling standards, and the BBC’s guidance on opinion polls, surveys, questionnaires, votes and straw polls. These are excellent resources, but they are rarely used. What has generally prevailed is the notion of a margin of error, and we have seen margins of error quoted on all forms of surveys. While some consider this questionable, the bigger question is: what really is margin of error?

Margin of error describes the sampling error expected in the data if the survey were reproduced using the exact same methodology. What has happened is that horse-race questions have been fielded across the various methods, all quoting margins of error within the same timeframe, and yielding dramatically different results. Thus, we have a disclosure issue that we have to address so that there can be more external assessment and evaluation of a poll. The onus should be upon the pollster to publish a more complete snapshot of the methodology, the questionnaire and the dataset, in order to allow individuals to assess the quality of the poll. This will always remain problematic, as there are commissioned polls, and polls tacked onto an omnibus survey, that lack clear accountability.
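For readers who want the mechanics, here is a minimal sketch of how the commonly quoted margin of error is computed. It assumes a simple random (probability) sample, which, as argued throughout this series, rarely holds for the methods actually in the field:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate margin of error for a proportion p from a simple random
    sample of size n, at the confidence level implied by z (1.96 ~ 95%).
    Only meaningful for true probability samples."""
    return z * math.sqrt(p * (1 - p) / n)

# Example: a poll of 800 respondents reporting 40% support
# works out to roughly +/- 3.4 percentage points at 95% confidence.
print(round(100 * margin_of_error(0.40, 800), 1))
```

Quoting that figure for an opt-in online panel or an IVR poll with a 1% response rate borrows the credibility of probability sampling without earning it, which is exactly the disclosure problem described above.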

I am not advocating that people can’t conduct polls independently; we should never lose that right. However, the more we disclose about what we are doing and our motivations for doing it, the better we can focus on the quality of the polling, spot more quickly any areas where things may be slipping, and address them more effectively over time – specifically, and especially, within an election cycle.

Bricker, likely one of the loudest voices in Canada on this issue, reinforces this perspective on disclosure and its motivation for self-reflection:

“This is again about disclosure – research on research, being mature about our responsibilities as pollsters, as social scientists, to explain and to open ourselves up to critique and criticism by people who are actually having an informed discussion about this stuff.

“…When you go to AAPOR and see the absolute disclosure that they demand, all of the information that the pollsters give to them, how they dismiss people who play these kind of games and don’t even include them in the averages, that is the way we should be.”

Éric Grenier pointed out Nate Silver’s revelatory analysis prior to the 2012 national election: Silver had found that the more transparent a polling firm was, the better its results were. Food for thought.

* * *

Next up – Part 4: Where do we go from here?

Perspectives on Polling – Part 2 of a 4-part series: The horse race ignores context and nuance

Horse-race polling


Part 2: The Horse Race Ignores Context and Nuance

What to ask in polls? Polling, in its cheapest form, focuses on the horse race. But elections are more than that. They are tests of political parties’ brands, the public’s confidence in the economy and their governments’ stewardship, the alignment of voters’ values with parties, and societal trust. Quality polling captures these elements, and how they wax and wane during the writ period. Quality polling also entails more in-depth statistical analysis that addresses aspects such as tests of correlation and voter segmentation – aspects that Nate Silver and his more methodical contemporaries embrace.

Horse-race polling, which has become the predominant form reported in the media, neglects the nuanced motivations that influence which candidate or party voters will support on Election Day. One nuance that arose during my discussions with Barb Justason and Frank Graves was the economy. What we have seen consistently is that if people are generally satisfied, or if there is even slight doubt about changing course on the economy, they tend to prefer the incumbent. Frank elaborates:

“…there’s a fragile economy out there, let’s not risk the adventure of a new government at this time. I believe that that factor was the same factor we saw in Ontario, the same factor we saw in Quebec and probably the same factor that was going on in Alberta.”

And Barb Justason shines a light on this question of the economy and how it was leveraged by the B.C. Liberals.

“Looking back the BC Liberals, their campaign came together and that message from Christy Clark throughout the campaign – the economic message that was packaged into this nugget of not leaving debt to our children, I think that resonated and I think that really took hold, especially following the debate.”

We saw this emerging in Alberta, barring some gaffes by the Wildrose Alliance, and in BC when, after the debate, Christy Clark fine-tuned her messaging to emphasize her party’s stewardship of the economy and how it had influenced the current and future quality of life for BC residents. Clearly, anger and complacency can coexist, and work to the benefit of a well-organized incumbent. This is also one of the reasons why the Conservative government continues to hammer home the point that it is a good steward of the economy (relative to other parties) through various talking points and its Economic Action Plan campaign – an ongoing campaign designed to appeal to its support base, and a core tactic in its preparation for the 2015 federal election. Being perceived as strong on the economy resonates with committed voters.

Given the variation we have witnessed across polls within an election cycle, it is the relative position of the players, rather than the horse-race numbers themselves, that has garnered the attention of some interviewees. This is becoming an interesting problem, as it reinforces the need for better context within surveys, and for the sharing of such questions, so that there is comparative data to understand the dynamics of an election. Michael Marzolini, president of Pollara, intimated in a CBC interview that polling has become lazy – it is missing the careful nuance that campaign teams pursue to do their jobs properly, and it is short-changing the public on the true dynamics of an election, as opposed to the horse race itself.

Consideration should also be given to adding new types of questions that capitalize on our social instincts to herd and take social cues from our peers and the crowd. Voting, in many regards, follows social norms and herding effects. Some may say that the NDP’s Quebec “Orange Crush,” driven by social cues, was a result of the herding effect. John Kearon, Founder & Chief Juicer at BrainJuicer, indicated that the question “How do you think your neighbours will vote?” was a better predictor of the outcome of the recent Italian national election than traditional horse-race questions.

Party brand is also at play. How this nuance manifests itself in populations is largely neglected in horse-race-centric polling. How ingrained is this? I collaborate with Dr. Paul Zak, at the Centre for Neuroeconomic Studies at Claremont Graduate University in Southern California, and we have noted that no matter what scandal or demonstrated weakness arises, the Republican/Conservative vote in the U.S. has remained relatively consistent over the last decade. There is a body of research emerging from neuroscience suggesting that some people are firmly conservative voters and do not consider any other party as a viable option. The data in Canada tend to support this, and Conservative parties have been remarkably effective in polarizing the electorate, and in finding and mobilizing this vote. While putting EEGs on voters may pose a challenge, we as pollsters have to be able to dig into linguistic methods to explore findings from other disciplines to improve the design of our polls.

The Pollster/War Room Dichotomy – Voter Identification & Data Triangulation

There is a perspective that has become pervasive, especially amongst campaign strategists and teams; they state that they don’t listen to the polls. This is utter nonsense. They all have pollsters on the team, and if they don’t it’s because they can’t afford them. The true context of that statement is that their internal polls are being conducted differently.

Political war rooms use a variety of tools. There is an inherent misalignment between pollsters and party war rooms. Pollsters have polls. War rooms have polls, plus social media monitoring platforms, feedback from their ground network, content analysis of media coverage, and text analysis of editorials and public comments. They parse these data in a much more strategic way to suit their needs. Those who do it well have established a track record among senior members of the team of being very strategic and very collaborative. Further, data triangulation – finding the best insights across multiple sources – has always been a skill amongst the best war room teams. It is no surprise that data scientists – those with triangulation, interpretation and communication skills – are much sought after by political parties. Their talents are becoming more useful than those of the traditional party pollster. Thus, teams are focused on ensuring the usefulness of their polling to assist activities such as messaging and, more importantly, voter identification and getting out the vote. As stated above, the Conservatives have demonstrated remarkable guile on this front over the last three to five years.

Pollsters who published polling results during the course of the Alberta and BC elections appear to be unanimous in saying that there was last-minute movement by the electorate. This too is nonsense, and an attempt to blame the electorate for being fickle. If we delve into other data associated with party loyalty, comfort with the economy, and motivating factors to vote, it appears that most voters may have been initially disenchanted with the incumbent but returned to them come election day. Much of this has to do with the notion of change, with feeling comfortable with the state of the economy or not wanting to shake it up too much, as well as with voter identification.

I continue to harp on this issue of loyalty, as this is a card that the best-organized parties always have in their back pocket. They are much better at identifying the committed voter. In fact, conservative-leaning parties across the country invest substantial resources in this, with the Conservative Party of Canada’s CIMS database readily shared across the country with other conservative-leaning parties, including the BC Liberals. We are seeing tools such as NationBuilder (Barack Obama’s platform of choice) and Track and Field evolving exclusively for voter identification. These platforms are meant to address a question rarely considered in the media: what is a party’s secure and confirmed vote? Polls are not designed to capture this data, but voter identification is playing a larger role in election outcomes. Some parties are clearly better at getting their vote mobilized and to the polls on Election Day. We as pollsters continually ignore this – not because we must account for the tools themselves, but because of what they imply for the very notion of modeling.

This said, Grenier reinforces the case for quality independent polling in the media:

“If you don’t have accurate media polls, you can have the narrative driven by the internal polls and you don’t have a way to fact check them… there is no way to do an independent poll to figure out what was actually going on. That is one of the reasons why media polls need to be around, but also they need to be done right otherwise it’s worse than having no information at all.”

Modeling: A Preferred Method?

Éric Grenier, on the problems in the polls prior to the B.C. Election:

“On the one hand there were some problems with getting a representative sample for some reason in BC, especially with the online polls. For one reason or another, maybe the panel was not as representative as it could have been. There might have been some sort of … the weighting issues, when you look at how you are going to weight the poll – some places were weighting it according to how voting was happening in the last election, some were applying less important weighting.”

There is a consistent misalignment of voter intentions and voter turnout. In most cases, answering a poll is not akin to actually voting. Polling exposes social desirability bias – I say I vote because it is the right thing to say, even if I don’t actually vote. Saying you want change and voting for change are independent events. This was evident in all of these “surprise” results. In my opinion, the real metrics that matter relate to the committed/intending voter: poll respondents who have a history of voting (themselves, and in their family tradition) and who intend to vote on Election Day. In my analysis of polls from these “surprise” results, while this yields a smaller respondent base with a higher margin of error, this number was a better predictor of voter turnout. Observing this metric within the context of the BC and Alberta elections, there were warning signs that things had turned toward the eventual winner earlier than most pollsters believed.
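A minimal sketch of this committed/intending-voter metric might look like the following; the field names and the tiny sample are hypothetical, and a real analysis would also carry the weighting discussed earlier:

```python
# Sketch: restrict a poll to "committed/intending" voters and recompute
# vote shares plus the (larger) margin of error on that smaller base.
import math

def likely_voter_shares(respondents, z=1.96):
    # Keep only respondents with a voting history who also intend to vote
    likely = [r for r in respondents
              if r.get("voted_before") and r.get("intends_to_vote")]
    n = len(likely)
    if n == 0:
        return {}, None
    shares = {}
    for r in likely:
        shares[r["vote"]] = shares.get(r["vote"], 0) + 1
    shares = {party: count / n for party, count in shares.items()}
    # Margin of error at 50% support, the most conservative case
    moe = z * math.sqrt(0.25 / n)
    return shares, moe

sample = [
    {"vote": "Incumbent",  "voted_before": True,  "intends_to_vote": True},
    {"vote": "Challenger", "voted_before": False, "intends_to_vote": True},
    {"vote": "Incumbent",  "voted_before": True,  "intends_to_vote": True},
    {"vote": "Challenger", "voted_before": True,  "intends_to_vote": False},
]
print(likely_voter_shares(sample))
```

The base shrinks and the stated error grows, which looks worse on paper – but, as argued above, it is a more honest picture of the people who will actually show up.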

All interviewees, like me, are firm believers that the ultimate challenge, as noted above, is to accurately model the voting population. This is the area that pollsters within a party’s or candidate’s campaign team focus on; they are not going to look at what they have sewn up or what is unattainable, but rather where the swing is and where they can potentially capture some of the committed vote. They don’t waste time on potential voters who are not committed or intending to vote. In fact, Darrell Bricker pointed out that one IVR poll in the US during the last national election started off with the statement, ‘If you do not intend to vote in the election, please hang up.’ He indicated that this was a successful method for addressing this modeling consideration.

With the challenge of identifying who is going to vote, there is increasing attention on modeling this population. There is an appetite for more predictive and scenario-type analyses to help determine appropriate weighting procedures for pollsters. Long considered the secret sauce of various pollsters, such weighting simply does not cut it any longer. We are seeing entities such as Google bring Bayesian-type approaches to identifying polling participants to the table, and performing reasonably well in election polling. Their method is fairly transparent, and it indicates that more tools like this will be developed – tools that can generate more consistent tracking, as well as higher response rates, especially around survey stitching: using online environments to continuously poll individuals without returning to the same individual twice. Thus the imputation of who is an interested and committed voter becomes an interesting consideration for the industry. Further, given the reams of data that such an undertaking can generate, this opens up creative ways to develop communities that can provide an assessment of data quality relative to the general population. This will further challenge the media. However, campaign teams will love these tools and adapt to them quickly.
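To illustrate the flavour of such approaches – this is a generic Beta-Binomial sketch, not Google's or any pollster's actual method, and the poll figures are invented – successive polls can be folded into a single, continuously updated estimate of a party's support:

```python
# Generic Bayesian-style sketch: update a Beta prior on a party's support
# as successive (invented) polls arrive.

def update_beta(alpha: float, beta: float, supporters: int, n: int):
    """Treat each poll as a binomial observation and update the Beta prior."""
    return alpha + supporters, beta + (n - supporters)

# Weakly informative prior, roughly centred on 50% support
alpha, beta = 2.0, 2.0

polls = [
    (320, 800),   # 320 of 800 respondents support the party
    (275, 650),
    (410, 900),
]

for supporters, n in polls:
    alpha, beta = update_beta(alpha, beta, supporters, n)
    mean = alpha / (alpha + beta)
    print(f"posterior mean support after poll: {mean:.3f}")
```

A production model would also discount older polls, adjust for house effects and mode, and model turnout – but the basic idea of carrying information forward rather than treating each poll as a fresh start is the same.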

The challenge is that this imposes an abstraction of reality on the data – but a much-needed one, if pollsters are to do their job better. With voter turnout in BC at 52%, and in Alberta at 41% (2008) and 57% (2012), we are practically at the point where it is a 50/50 shot whether an adult citizen is going to vote. This makes our job exceptionally hard. It also makes the ability to predict what is going to happen even harder if we are still bound by the illusion that we are working with a probability sample. As Nate Silver pointed out in his analysis of various polls, there are inherent biases with different methods, as well as regional discrepancies within those methods. One term that is becoming popular in polling is “horses for courses” – certain methods work better in certain regions – and understanding the mix of modes becomes the imperative of a pollster, even if it is also more challenging to explain.

Justason reflects on her recent experience within this context of better modeling.

“We need to adjust the data to make sure we have done it and we need to learn from our mistakes. We also need to acknowledge that we have made mistakes and get off this high horse that somehow the electorate – almost implying the electorate made a mistake – that kind of thinking, we rely on the general public to talk to us and to help us with this kind of thing and to go back on them and say somehow they are the reason that our industry goofed on this twice now is disingenuous.”

* * *

Next up: Part 3: Calgary Centre: An odd case of public engagement

Perspectives on Polling – Part 1 of a 4-part series: The problem with polling (and pollsters)

The problem with polling


*  *  *

Foreword

The market research and intelligence industry has a problem with political polling. A problem that needs clear thought and action.

This article has been three years in the making. While I did political polling before, it was a new experience doing it for a political campaign. Amidst a discussion of a “surprise result” in Calgary’s 2010 municipal election, I realized that political observers and the media were ignoring some basic fundamentals in understanding and analyzing polls. I also realized that it was useless to consider an entire electorate when turnout was projected to be low. This latter phenomenon has driven my interest in the state of polling in Canada.

The facts cited here – from provincial elections in Alberta and British Columbia – are clear, and I encourage readers to review the results discussed.

This is primarily an opinion piece. It is also a different article than originally envisaged. I had planned to write an assessment of polling practices and their reporting. However, the BC election of May 14, 2013, led me to follow up with North American thought leaders to explore the dynamics of this and other recent elections, and to delve into the problems of the polling ecosystem itself. While it is easy to criticize, I have taken the opportunity to develop an agenda for action.

In preparing this I reviewed numerous reports of polls in the media, industry and association reporting and compliance protocols, interviewed thought leaders on the topic and delved into my own analysis and commentary at my blog, in The Globe and Mail and on CBC Radio and Television. Thought leaders interviewed included:

  • Dr. Keith Brownsey, Professor, Policy Studies, Mount Royal University, Calgary;
  • Barbara Justason, Principal, Justason Marketing Intelligence, Vancouver (@barbjustason);
  • Éric Grenier, Founder, ThreeHundredEight.com (@308dotcom);
  • Dr. Darrell Bricker, Global CEO, Ipsos Public Affairs, Toronto (@darrellbricker);
  • Jeffrey Henning, CEO, Researchscape, Norwell, MA (@jhenning);
  • Frank Graves, President and Founder, EKOS Research Associates, Ottawa (@VoiceofFranky); and,
  • Michael Mokrzycki, President, Mokrzycki Survey Research Services, NE Massachusetts (@mikemokr).

Much of the dialogue on polling – its problems, quality protocols and future – is happening on Twitter. I encourage all readers, if you have not done so already, to follow these individuals.

Part One: The problem with polling – and pollsters

We have a problem with political polling in Canada. Poor polling is affecting our reputation and the public is losing trust in our trade.

The signs have been around for a long time. Polling is at a crossroads. But the reality is that it will be at an eternal crossroads. And we need to strike a balance of tradition and innovation to grasp how to do more accurate and relevant political polling.

With the focus now on horse-race polling, we do ourselves a disservice by trying to simplify complex phenomena and skipping the nuance of finding the truth within the data.

Let’s rewind to May of this year…

After trailing in the polls right up to election day, the BC Liberals won another mandate – in another provincial election whose result the media and public are calling “surprising.” The question arises: was this really surprising? According to the polls, it was. We witnessed yet another round of pollster mea culpas, and of pollsters boasting that they got it “less wrong than the others.” Herein lies a new problem for the media, politicians and the public: their faith in the accuracy of political polling.

As an industry and/or service, polling has historically had a decent track record. From the time George Gallup did a modest poll to the glory days of telephone polling in the 1980s and 1990s, with 60% to 70% response rates, polling has been constantly evolving. Now, that evolution is faster than ever before. And the biggest challenge, with declining turnout rates, is how to accurately model the voting population. Darrell Bricker sums it up this way:

“B.C. … really was a reflection of a problem emerging in Canada, and we have seen it over a couple of elections now, in which a phenomenon which is usually more of a force in other countries became a real force here. That was being able to predict, not necessarily how people are going to vote, but who was going to vote.”

Canada has now been infected by this international problem. The American Association for Public Opinion Research (AAPOR) has taken this to task. There have been inquiries elsewhere – for example, after John Major won a majority in the early 1990s amidst polls indicating a different result – that have resulted in improvements to the local polling ecosystem. Frank Graves considers another dimension of this problem:

“[What] I think is most important is that we more clearly understand the difference between the job of a pollster or market researcher to model their population, and the job of providing a forecast about a political event, or any event, consumer decision I suppose. I think that those two have been conflated in an unhealthy fashion and that we are seeing the media and others use the standard ‘did you get the election call correct’ as a measure of polling quality and I think that that’s rooted in a historical context which no longer applies.”

A True Probability Sample is a Thing of the Past

While a quality probability sample is still regarded as the gold standard, the reality is that this is likely no longer feasible. With a majority of polling being done online, and with the growth in IVR (Interactive Voice Response) surveys, many have accepted that non-probability samples are the norm. Jeff Henning weighs in on two points:

 “… People are saying it’s not true probability sampling anymore because of the response [rates], because of the expensive modeling of weight that is necessary to make it [polling] work.

…The battle that many firms are having is that their clients are asking them for more information, cheaper information and so it has led to the use of non-probability methods.”

Henning elaborated on the prevalence of online polls:

“We have looked at 250 press releases from March until the first week of May of reporting survey results, 87% were reporting online surveys. I don’t think a single one of those was done with an online probability panel or a company like [GfK] Knowledge Networks.”

With the rise of software-based polling methods – notably online polling using internet panels of self-selected respondents, as well as IVR systems (typically referred to as “robodials” by the public) – the cost of entry for new methods and firms has never been lower. These methods are driven by the law of large numbers and shockingly low response rates. Gone are the days of excellent response rates to telephone (landline) polls; a 1% response rate on an IVR poll is considered “acceptable.” Gone also are the days of predictably engaging the public to garner their political inclinations.

After the 2012 US national election, Nate Silver, most likely the most famous political statistician at this moment, published an eye-opening analysis of all the polling data collection methodologies and pollster accuracy. The findings were revelatory – fast and cheap methods had larger respondent biases (by supporters of specific political parties) and were less accurate.

Surprisingly, the best-performing poll was the Columbus Dispatch’s old-school mail survey. Henning notes that this was address-based sampling (“a true probability sample”). Overall, live telephone operator and internet panel polls performed significantly better than robodials. These methods were better at establishing a more population-representative sample that captured the diversity of opinion and voting behaviour. However, they are also significantly more expensive than the cheap-to-operate, large sample, conducted-overnight robodials. Clearly there is a trade-off here.

The consensus is that, while a true probability sample remains feasible, it is cost-prohibitive at this point. While there is some dialogue about mixed-mode methods, including cell phone samples, to approximate a contactable live-caller option, we are challenged to work with what we have. The evolution of more methods – and in this case more digital methods, including online and now smartphone-based – means that response rates will remain low. So the challenge of attaining a true probability sample is never going to go away.

Structured, Targeted and Regional Samples

One thing that was evident in the recent BC and Alberta elections was the issue of the balance and size of samples.

Keith Brownsey remains critical on the question of sample size. Alberta and BC are regionally diverse, and in his opinion pollsters have been using sample sizes that are not adequate. He notes:

“A regionally diverse place between Vancouver Island, the Lower Mainland, the Okanagan and the North, even the North Coast, are very very diverse, and with a small sample from those regions you can’t really get a sense of voting intentions.”

I agree with Keith; my observation is that online samples do tend to perform better within urban environments. Thus, when we are seeing great diversity within our cities, and more homogeneity by region, the onus is upon pollsters to ensure that we get samples large enough to provide a better regional perspective – and, more importantly, coverage of critical swing ridings.

In all elections there is always an inherent skew in voter turnout patterns as a result of regionality. We have seen parties leverage this to their advantage – for example, the Conservative Party of Canada targeting rural and suburban Ontario. With regionality a dimension that needs to be grasped, why do we focus strictly on the horse race? Especially when it is about understanding which seats are in play. Nate Silver states that he is the luckiest analyst in the world: almost 90% of the vote in the US is already decided, so he just has to focus on the remaining 10%. His interest is highly focused; he is seeking to understand the dynamics shifting that 10% of known voters.

Media & The Business of Polling

Political polling is traditionally a test of the accuracy of a public opinion research firm. It is considered a loss leader (typically below cost), with the expectation that an accurate poll result would build a firm’s reputation and attract new and more profitable business. Over the last decade, this premise has changed dramatically. Mike Mokrzycki reflects upon three salient points: Gallup’s poor performance in the 2012 national election, emerging firms, and oversight.

“The Gallup organization for decades “gave away” its polling and I think I read in “Business Week” the other day about how in essence the public polling is a loss leader for an organization like Gallup. We see this for others as well, where they get a tremendous amount of publicity for the polling and ideally its good publicity, and that helps bring them business in other areas. Gallup, for instance, has a huge management consulting practice. Just because a company pays for polling on its own and gives it away for free, doesn’t automatically make it bad. However, I do see cases where companies that we haven’t even heard of before will all of a sudden appear on the scene and do a bunch of political polling and it gets reported and often it will be reported without anybody taking very much of a look at the underlying methodology. There has been at least one case about 4 or 5 years ago where AAPOR issued a public sanction standards complaint against a company that just refused to release any details about its methodology and the company subsequently – it was a PR firm that started doing polling and they don’t do polling any more.”

There is that old saying – “Fast, cheap and good. Pick any two” – which is truly applicable here. While corporations typically choose a combination of fast or cheap with good, media outlets have opted for fast and cheap. The business model of polls and the media has evolved. Media are currently either cash-strapped or losing money, and thus, in most cases, either do not pay for political polling or pay for access to polls already conducted. Jeff Henning sheds light on the current state of journalism and polling:

“I know reporters who are freelancers who are paid on page views. They see a word count, on price per word – which as it was hasn’t gone up over the last 10 years – now they are getting paid on page count, so they have got to write multiple articles – write the article and get on to the next one. They want to do as good a job as they can, but they are in a hurry and some content is better than none and an online panel survey is better than no survey, so they will run with it.”

I have colleagues who say they don’t conduct polls for the media unless they get paid. Well, there are a lot fewer of their polls in the papers now. On the diminished presence of his firm’s political polling, Frank Graves adds:

“We had long standing relationships that went on over a decade with larger players like the CBC and La Presse and Radio Canada, Toronto Star – they have all evaporated in this climate and I don’t really think any meaningful relationships … and they were properly resourced, we didn’t get a lot of money though, but enough to do a good job.”

During the last Alberta provincial election, a regional newspaper approached my firm to conduct a poll. We provided a quote, to which they responded by asking if we could do it for free, as “it would be good for your reputation.” We thanked them for the offer and declined.

Pundits play favourites. There is the additional dimension of politicos and the media’s obsession with the “horse race.” Many column inches are taken up with the analysis of poll results and insights from pollsters (some of you may include this article in that category as well). While these stories do capture the pulse of an election, they do not take into account the overall election ecosystem and the body politic.

It is our sense that educational work really needs to be undertaken with the media so that they question the polls more vigorously. Graves states:

“The statistical fluency, methodological literacy, of the media today is appallingly low. That may be typical of a lot of areas where newsrooms have been cut back and we don’t have the same substance of experts.”

Instead of holding to their addiction to publishing poll results and moving on to the next poll, media could pay more attention to working with pollsters, to understand more about the method and what is going on with the data. This could yield deeper insights into where the voter population is at, and into how a poll is framed to gather its input. Too many times on Twitter we see people taking swipes at polling methodologies based on a weak understanding of how to develop a sample frame. We need to do a better job of educating these people, as they are highly influential within this sphere. This is something we have to work on collaboratively with the media, to ensure that there is better understanding among these advocates and influencers within the population, especially during political campaigns.

* * *

Next up: Part Two: The horse-race ignores context and nuance

Honeymoons, marriages of convenience and vote-splitting: Trudeau & Mulcair’s Challenge Ahead

Photo: Toronto Sun, Dec. 28, 2012


This article is based on an op-ed published in The Globe and Mail.

Justin Trudeau predictably is now the Leader of the Liberal Party of Canada. With 80 per cent support based on an 82 per cent voter turnout, the win validates that he is the Party’s consensus choice to lead them to the next election. While his proposal and level of engagement appear strong, there is much he and his party have to earn to substantiate a future marriage with the voters of Canada.

Trudeau’s election this weekend came amidst the confluence of four distinct but interconnected initiatives and activities: the NDP’s Policy Convention in Montreal; the upcoming May 13th Labrador federal by-election for the seat vacated by Peter Penashue; Joyce Murray’s call for collaboration among progressive parties; and thought leaders calling for a one-time collaboration with the NDP. All, in various ways, point to challenges that Mr. Trudeau will face over the next two years. And the unifying theme here is marriages of convenience.

By definition, a marriage of convenience is one contracted for reasons other than relationship or love. It is done for a strategic purpose and personal gain. The gain framed here, and called for by many, is one whereby Liberals and NDP form a one-time pact to win the 2015 election with a goal for electoral reform.

Let’s take a closer look at these events.

On the one-time collaboration, much has been written by thought leaders such as Andrew Coyne and Jamey Heath. They have all called for a coalition of the Liberals and NDP to defeat Stephen Harper’s Conservatives, and to change the First Past the Post system in favour of proportional representation. Within the parties, such collaboration has been favoured by Nathan Cullen and Joyce Murray, who finished third and second, respectively, in their leadership races. Like the thought leaders, they too crunched the numbers and surmised that 60 per cent of voters presents a real opportunity for a unified progressive party. Joyce earned just over 10 per cent of the points, but also attracted support from those seeking party cooperation, including Green Party and post-partisan voters (i.e., a growing segment with no party affiliation). This notion of collaboration speaks to the quiet majority of Canada’s progressives – a group that heavily favours electoral reform, desires its vote to matter, and seeks representation.

This leads to the forthcoming May 13th by-election. Many are calling for the original race to play out as it did in 2011 between the Conservatives (Penashue) and Liberals (now Yvonne Jones). In the spirit of cooperation, the Green Party immediately stated that they would not be fielding a candidate. However, the reality is that they never had a chance, and Elizabeth May – who has the most to gain – was not going to miss an opportunity to play politics with this event. Meanwhile, Thomas Mulcair is adamant that the NDP are in the mix and stated “we have every intention of getting Harry Borlase elected.” While this may be a worthwhile endeavour, some may feel that it is a waste of resources on a riding that could instead have been a low-risk test of collaboration. This seat will likely return to the Liberals – solely because of the events that led to the need for this by-election, and not the strength of the Liberal brand.

This leads to the last key event: the NDP convention on the weekend. Over two thousand delegates came to Montreal, in the party’s 59-seat Quebec beachhead, their attention squarely focused on the 2015 election. Present were stalwarts associated with the U.S. Democratic Party, such as economist Joseph Stiglitz. But the most dramatic event of the weekend was the dropping of “socialism” from the party’s constitution. This is a clear play to broaden its appeal, move to the centre and improve its electability.

This brings us back to marriages of convenience.

Trudeau, a fresh face at 41, buoyed by recent polls, was always the clear choice for the Liberals. However, the most cynical would say that this was never a race. It was a marriage of convenience. Some may argue that, from Day One, Trudeau may have benefitted from the “halo effect” because of his name (i.e., judgments of his character can be influenced by impressions of him and his name) – an effect that the party is hoping to leverage. No doubt pollsters will begin to investigate the substance and effect of the Trudeau brand, and his famous last name, among the electorate amidst this dead zone between elections.

But who exactly is Justin Trudeau? A likeable character with politics in his DNA, he appears to be open to defining himself and his leadership of the Liberal Party. His performance over the last four months has generally been faultless, and he has established a track record as a revenue generator. In his acceptance speech he distanced himself from his father’s past, and decried the “negative, divisive politics of Mr. Harper’s Conservatives.” However, Justin is no fan of party cooperation; he is “unimpressed that the NDP, under Mr. Mulcair, have decided that if you can’t beat them, you might as well join them.” On the prospects of a party marriage, Trudeau is speaking to the partisan core of Liberals in Canada.

The reality remains that a marriage between the NDP and Liberals is hard, Trudeau or not, no matter how you cut it, even with their similar policies and positions. In my own polling over the last five years, there are notable differences between their supporters. NDP supporters are a mix of highly engaged, educated, socially active innovators and less-connected, older, blue-collar, union-oriented types. The Liberals, by comparison, look a lot more like the Conservatives in terms of age, education and social and political engagement. A recent poll by Abacus Data (March 19-21, 2013) indicated that the Conservative party would be the second choice for over one-quarter (28%) of Liberals. For Conservatives, a Liberal vote would be the second choice for over half (57%). This is understandable. Liberals and Conservatives are the only parties ever to have led the country. Even while they hold one-third as many seats as the NDP, most partisan Liberals still view their party as the natural party to lead Canada.

Deja vu all over again

Much of this new round of dialogue about marriages of convenience emanated from the three by-elections last November – specifically, the one in Calgary Centre, where progressives managed to get 63 per cent of the vote and yet lost to the Conservative. There were a number of critical events, connected to this by-election, beyond low voter turnout, that both Trudeau and Mulcair should consider:

  • Of the three centre/left parties, the Liberals were the most unwilling to cooperate – even though their candidate (Harvey Locke) would have been the beneficiary of cross-party support and would easily have won with unified progressive support.
  • The Green candidate picked up support from disaffected and former Liberals.
  • The NDP fielded a full campaign, with a late-entry candidate, even though they never stood a chance.
  • The three centre/left parties each fielded a fantastic candidate – each a quality individual who voters would have been happy to get behind, but instead ended up splitting the vote.

Sound familiar? Painfully familiar to many voters across Canada.

With the Liberals’ re-emergence in the polls under Trudeau’s leadership and Mulcair moving the NDP to the centre, voters can expect more of the same for the future. This lack of consideration does not bode well for progressive voters. Recent research by Samara (December 2013) found that 55% of respondents were very/somewhat satisfied with the way democracy works in Canada – down from 75% in 2004. Samara also found that the public were dissatisfied with their MPs – a dissatisfaction driven by the thought that MPs do a better job of representing the views of their parties than of representing their ridings and constituents.

Either way, Mulcair and Trudeau will continue to lead their parties for a share of about 60% of the electorate. Harper likes this math and continues to quietly cement his support with the balance, with the hope that vote-splitting will lead to another Conservative majority. The reality of electoral math is that, beyond the most partisan faithful, the centre-left is running the risk of frustrating its base.

In the absence of cooperation, Mulcair and Trudeau have to demonstrate that they and their party brands are relevant for Canada – relevant and in tune with the desire of the electorate for quality representation – and that they do reflect the values and aspirations of Canadians. Not just 60%. But to make inroads into Conservative support and grow the progressive pie, they need to demonstrate that this is not simply about defeating the Conservatives; it is about inspiring Canadians to consider their vision for the country while, at the same time, rebuilding the guts of their parties and building meaningful relationships with Canadians.

In the eyes, minds and hearts of Canada’s progressive electorate, there are still two (and three) sets of party infrastructure squarely aimed at getting their vote. And both have their challenges. Trudeau needs to convince the two-thirds of Canadian voters who lie within the centre/left spectrum that the Liberal Party is the one of the future, and to move the party out of the doldrums of 35 seats to Canada’s second party (at worst). For Mulcair, the challenge is to show that the NDP is a party that can lead the country. Either way, Harper and the Conservatives are moving along with their existing strategy and will continue to stoke the fire that divides progressives.

Another key event of the weekend was the presence of Jeremy Bird, co-founder of 270 Strategies and Obama’s national field director for his successful 2012 campaign. He told the NDP convention delegates to use “Moneyball-style” analytics, establish meaningful relationships with voters, and build a strong ground game to win elections.

Even if this advice is executed, the problem remains that the Liberals and NDP will continue to approach the same pool, courting similar voters. Thus, the immediate challenge for both Trudeau and Mulcair is for each to annex the largest share of the pool, and then woo others in the remaining smaller share – and, most importantly, those who have not turned up to the pool at all.

In the absence of any resolution on cooperation, Trudeau as Liberal leader and Mulcair with a new constitution both appear to have committed their parties to another individual battle in 2015. Regardless of the state of, or interest in, partnership, like any marriage they have to earn the trust of the electorate. And they need to remember that a strong marriage is based on an alignment of values – the values of Canadian progressive voters, which continue to be taxed and tugged two and three ways.