Right information ‘key’ to big data success

When it comes to analysing large data sets and converting the information into insight a business can use to improve its performance, it may seem like common sense to rely on the results you get to inform your decision-making.

But enterprises often run into problems because the data behind their calculations is either wrong at the outset or misinterpreted at the end of the process. As Drew Turney noted in a piece for the Sydney Morning Herald's IT Pro section, this can have severe repercussions.

For instance, he observed that when Standard & Poor's downgraded the US credit rating a couple of years ago, the decision was attributed not to any significant worsening of the economy, but to an error in the agency's original calculations that produced a discrepancy of $2.1 trillion.

Statistician and forecasting expert Nate Silver also recently told a conference that as much of the blame for the financial crisis could be laid on poor modelling as on greed. Rating agencies, he said, based their assumptions on the number of past mortgages rather than on the number of borrowers who were actually likely to default, which left much of the industry relying on faulty results for its key decision-making.

For many firms, poor results from data analytics come back to the old adage of 'garbage in, garbage out'. Companies that do not know which data is most relevant to them are unlikely to get usable results, and they also need to understand the relationships and context of the data they are using.

Mr Turney noted, for instance, that it would be a mistake for a company to assume its customers' incomes are related to their ages when trying to attract them with more targeted offers – an assumption worth testing before it is built into anything, as sketched below.
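To make that concrete, here is a minimal sketch of the kind of sanity check this implies, written in Python with pandas. The file name and the 'age' and 'income' columns are illustrative assumptions, not details from any real system Mr Turney described:

    import pandas as pd

    # Hypothetical customer table; file and column names are assumptions.
    customers = pd.read_csv("customers.csv")

    # Measure the actual relationship before building offers on it:
    # a correlation near zero means age is a poor proxy for income.
    corr = customers["age"].corr(customers["income"])
    print(f"Pearson correlation between age and income: {corr:.2f}")

    # Segmenting on observed income bands avoids baking the
    # age-income assumption into the targeting logic.
    customers["income_band"] = pd.qcut(
        customers["income"], q=4, labels=["low", "mid", "upper", "high"]
    )
    print(customers.groupby("income_band", observed=True)["age"].describe())

If the correlation turns out to be weak, any targeted offers are better built on what the data actually shows than on the assumed link between the two variables.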

In another case, NBC lost out by misunderstanding the views of its test audiences when conducting TV analytics. Mr Turney noted these groups often ranked highly successful shows such as Seinfeld poorly, while rating cheaper, lower-quality copies more positively. Marketers eventually determined that the audiences were responding to familiarity rather than quality – something that changed the meaning of the results they were getting.

Joelle Kaufman, head of marketing at BloomReach, told IT Pro that good data interpretation comes down to ensuring the data is closely monitored and has human input. She said: “Machine-generated data can be useful, but without the human element of intuition and control it can be downright offensive. We've all been the victim of poorly targeted data-driven marketing – I recently purchased a dryer and have been stalked by dryers on the web ever since. I only need one.”

Omer Trajman, vice-president of field operations at big data applications company WibiData, agreed, adding that companies are quick to blame poor-quality data for the failure of projects when the issues often lie elsewhere. "There's no such thing as bad data, just bad models. Data analysts just need to extract the right data at the right time, then act on the insight in a timely manner," he said.