Trump, lies and why measurement matters

The polls got Trump wrong, says Jeremy Suisted. But there's a way they could have gotten things right - and there are plenty of lessons organisations in Aotearoa can learn from their failure.

I was sitting on a stationary bicycle at Anytime Fitness when I heard the news that Trump had an unassailable lead and Clinton was about to concede the presidency to him.

This was an undeniably surprising event, for two key reasons.

First was the fact that, well, Donald Trump was now the President of the United States of America. A figure associated with “You’re Fired!”, gaudy hotels, shady business dealings and the WWE was now in one of the most powerful and serious political positions in the world.

This was cognitive dissonance at the highest level.

Second was the realisation that the standard polls and predictions had been astronomically wrong. From his first showing in the Republican primaries through to the general election, polling experts told us that Trump didn't stand a chance.

Voters in the street said they were put off by Trump's behaviour. Surveyed voters said they were undecided, or would not vote for him.

After the election, reviews of the numbers showed that the polls under-estimated his support by about 2% - a significant figure in politics, and an even larger error given the huge number of people surveyed. What went wrong?

Data scientist Seth Stephens-Davidowitz's fantastic book Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are explores the world of data, measurement and metrics.

As Seth explores the Trump case, he makes two salient points.


1. The way we measure matters. After Trump's numerous gaffes, offences and dubious history came to light, publicly admitting you would vote for him was akin to social suicide. So when a reporter asks who you would vote for, the safest option is to lie. Some people would be too embarrassed to say Trump. Others would report themselves as 'undecided' when they knew they would vote for the Don.


Any time we use a measurement tool, we are framing our focus. Our selection of the tool, be it a metric, survey, or focus group, determines what we will include as our data and, more importantly, what we will exclude.


2. Great measurements go wider. Seth's focus is on using a range of data from Google searches to discover unique insights. Leaning on his analysis, he concludes that Trump's win could have been more accurately predicted by two data points.


First, when people Googled one candidate, they would often include the other in the same search (e.g., "Trump Clinton policies"). 27% of searches containing either candidate's name also included the other.


And the order in which they put the names was a key indicator of who the searcher was more likely to vote for.


Whoever they put first, they were highly likely to support.


Someone who searched "Clinton Trump booth" would likely vote for Hillary. And vice versa.


This pattern had held for the previous three elections, and it repeated in the 2016 presidential election.


Secondly, Seth's research has also uncovered a shockingly high degree of covert racism among Americans. Drawing on Google searches for racist jokes, poll expert Nate Silver found that the single factor that best correlated with an area's support for Trump was its rate of racist searches.


Seth concludes, “Areas that supported Trump in the largest numbers were those that made the most Google searches for the n-word.”

The most valuable insights come from searching widely, comparing data and utilising a range of tools to engage in our discovery.


Last week I read a report on the New Zealand innovation landscape. The researchers had sent out a survey to a range of CEOs, and reported back:


  • 84.3% of business leaders said that innovation was crucial to the business’ long-term success;

  • 30% said they cannot manage innovation well.


These numbers are not surprising. Of course CEOs would state that innovation is crucial to their business’ success; that is a truism not worth denying.


Incidentally, when it comes to self-reporting ability, most high achievers over-report. Over 90% of professors say they do above-average work. 25% of top students think they are in the top 1%.

I would estimate that a much higher percentage than 30% cannot manage innovation well, but leaders do not want to report this.


However, the final reported insight was the most telling of them all.


When asked what the most important condition for improving innovation within their business was, the most popular response from CEOs, at 37%, was creating an organisational culture that supports innovation.

Now, I agree with the significance of an innovation culture, but the vast majority of organisations (well over 90%) have no metrics or measurements to benchmark and report on how their innovation culture is performing.


So, when asked how their innovation culture is going — they can say, “Good, I guess!” and pat themselves warmly on the back — without any justification of why.


What’s a better way forward?


1. Remember, the way we measure matters. A standard innovation measurement (ROI or ROI2; R&D spend; % revenue from new products/services) tells only part of the story. It focuses us on financials, and on lagging financial indicators (the revenue our innovations have generated) rather than leading indicators (which mark how our innovation capabilities are developing).


Additionally, these indicators fail to provide meaningful insight into the capabilities of the business, such as its people, culture, business models, use of technology, leadership styles and processes. Innovation does not happen between notes and coins, but between people.


And finally — these metrics aren’t benchmarked. Numbers make sense when we have global standards to compare them with. Without a benchmark — the metrics can merely self-perpetuate the current reality.


2. Great innovation measurements go wider. The true competitive advantage of an organisation is not its innovation culture, but its innovation capability.

A capability is the sum of its culture, people, strategy, measurements, models and processes, and how each of these links together.


So a reliable measurement will engage with each aspect of the organisation’s capability, benchmarking its results against global best practice in the field.


Additionally, in my consultancy work, the greatest insights have always come from the intersection between quantitative and qualitative analysis.

The metrics allow us to identify the capability areas to focus on. The interviews, conversations, ethnographies and focus groups then surface stories, language and moments that awaken breakthroughs.


With one company, it was the discussion of their “Nerdy Time”, a hackathon event from their early days. Another organisation shared stories of working with the founders.


When these stories intersect with metrics, powerful and valuable insights emerge. Mark Fuller, CEO of Monitor Deloitte and former Harvard Business School professor, stated that innovation is impossible to sustain without rigorous and relentless efforts to measure and improve performance along all relevant dimensions. What we measure, we can manage.


This is not a cry to abandon hope of innovation measurement. It’s a call to sharpen up.


Let’s avoid simplistic metrics that provide no real meaning — and pursue measurement and understanding of our innovation capabilities that are deep, benchmarked and transformative.

Jeremy Suisted is the director of Creativate (www.creativate.co.nz), a New Zealand innovation + design agency specialising in innovation capability measurement and development.