Royal Mail embraces big data to boost performance

20 Apr 2016
Categories :#AnalyticsNews

As the UK's largest mail carrier, dealing with billions of items every year, Royal Mail is a company well-used to managing huge volumes of information. But when it comes to improving how it handles its own digital data, the business is still in the rollout stage.

Speaking at a recent Hadoop Summit in Dublin, Thomas Lee-Warren, director of the firm's Technology Data Group, explained that the company has turned to Hadoop as the basis of a drive to gain more value from its internal data.

He told ComputerworldUK that as every item Royal Mail delivers is tracked, it has a huge amount of data at its disposal. 

"We are about to go up to running in the region of a hundred terabytes, across nine nodes," he said. 

One of the key challenges in managing this was to reduce the time spent moving information around the business. Previously, Mr Lee-Warren estimated, the company's data insights team could spend up to 90 per cent of their time simply moving data backwards and forwards between its data warehousing solution and its analytical solution.

However, the organisation's Hadoop platform, which uses a Hortonworks deployment of the open-source software, eliminates much of this and helps Royal Mail get closer to its goal of data analysts spending 90 per cent of their time exploiting data and making it available to the rest of the business.

"We're accelerating that whole process, we're not having to spin up projects just to get data," Mr Lee-Warren said. "We are able to accomplish a huge amount of work with single individuals."

The company is still building out its big data analytics solution, and is taking a measured approach to the technology. As Royal Mail has relatively few resources it can devote to the area, it has to keep a tight focus on projects that can deliver a specific return on investment.

For example, one solution the data insights team is working on is churn modelling in order to help reduce customer attrition. By studying the data, Royal Mail can help its business units identify customers in particular industries who are most at risk of churn, so the sales and marketing teams can take proactive steps to avoid this.
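The article does not describe Royal Mail's actual model, but as a rough illustration, a churn score of this kind can be produced with a simple classifier running on a Hadoop cluster via Spark. The input path, column names and label below are all hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("churn-sketch").getOrCreate()

# Hypothetical customer activity table with an existing 'churned' label column
df = spark.read.parquet("/data/customer_activity")

# Combine a few illustrative behavioural features into a single vector
assembler = VectorAssembler(
    inputCols=["parcels_sent_90d", "spend_change_pct", "complaints_90d"],
    outputCol="features")
prepared = assembler.transform(df)

train, test = prepared.randomSplit([0.8, 0.2], seed=42)
model = LogisticRegression(featuresCol="features", labelCol="churned").fit(train)

# Score held-out customers so sales and marketing can prioritise outreach
model.transform(test).select("customer_id", "probability").show(5)
```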

A key advantage of deploying Hadoop for such tasks is the speed the software can provide. This enables the company to experiment more and find new ways of integrating the technology with its more conventional tools.

Mr Lee-Warren also noted that Royal Mail has not so far experienced difficulty in attracting talented big data professionals to the company, even though a lack of skills in the industry was one of the top topics for discussion at the Hadoop Summit.

He said: "It may be because we have a very attractive brand, but we're not finding it difficult to attract strong talent. A lot of the time I think data scientists get locked into a way of working that they find difficult and they like new challenges all the time, and we can provide that." 

Address your big data challenges, the Kognitio Analytical Platform explained

15 Apr 2016
Categories :#AnalyticsNews, Blog

Watch how the Kognitio Analytical Platform provides highly scalable, in-memory analytical software that delivers ultra-fast, high-concurrency SQL access to large and varied data using low-cost commodity hardware or Hadoop. When your growing user community wants ever faster query responses for complex workloads, what they really want is raw compute power: lots of CPUs efficiently harnessed to do lots of concurrent work, never waiting on slow disk. Enjoy the video – we had fun putting it together – and leave us a comment telling us what you think of it…

ADDITIONAL RESOURCES

Learn more by visiting the Kognitio Analytical Platform page


Retail banks turn to big data to regain customer trust

13 Apr 2016
Categories :#AnalyticsNews

For many retail banks, the task of regaining consumer trust in the wake of the financial crisis of 2008-09 will be a difficult and ongoing challenge. With the sector still viewed with suspicion by many people, presenting a more personal face and improving customer service levels will be a high priority.

FusionExperience chief executive Steve Edkins noted in an article for ITProPortal that this has become even more important in today's connected era, where the internet and social media mean dissatisfied customers can quickly voice any complaints to a wide audience.

In order to improve their customer service and avoid such issues, many retail banks are therefore turning to big data to offer services tailored to individual customers.

According to a study from the Centre for Economics and Business Research (Cebr), more than four-fifths of retail banks (81 per cent) will have adopted big data analytics by 2020. As well as helping track key industry trends and allowing banks to proactively adapt their strategy, this will also have a key role to play in building profiles of individual customers.

This can be useful at every stage of the customer journey. Mr Edkins noted that initially, big data analytics can be used to more effectively evaluate risk and creditworthiness. Then, when it comes to retaining customers, offering specific deals and tailoring their services accordingly will go a long way towards making consumers feel valued.

However, financial institutions will face two key challenges when it comes to adding big data to their customer service activities. The first will be how they extract relevant information from the huge amount of data they collect – separating the signal from the noise in order to make informed decisions.

The second will be how they collate this data and turn it into a useable format in time to make a difference. Today's fast-paced world demands the ability to extract, analyse and act on insights gained from data quickly if a company wants to maintain a competitive advantage.

"It is no small feat for retail banks to ingratiate big data into their processes as it often requires a daunting technological overhaul," Mr Edkins said, adding that one of the biggest challenges for these firms is getting complex legacy systems in line with today's big data capabilities. These often result in key data being placed in silos, and make it difficult for businesses to get the information they need quickly.

"To rectify this, banks will need to make better use of growing data sets such as correspondence, loan facility letters, contracts and the diversity of customer interactions if they want to offer bespoke consumer products that will allow them to fend off their more agile competitors," he stated.

However, if retail banks can get this right and build a strong customer service culture centred around big data, the rewards on offer are significant. Cebr's data forecasts that effective analysis of data could add £240 billion to the UK's economy through improved efficiency and better understanding of the market and customer demands.

Banks ‘not making the most’ of big data

08 Apr 2016
Categories :#AnalyticsNews

Many banks should be doing more to turn the wealth of information they have available on their customers into actionable insights, it has been stated.

Speaking to Network World, Deanne Yamato-Tucker, head of banking and financial services at IT consultancy Xavient Information Systems, noted that these institutions now have access to a wide variety of data from consumer-facing products such as apps. However, few are effectively analysing this information.

As a result, they are failing to take advantage of new opportunities to re-invent their offerings, deliver higher levels of customer service and develop innovative new products.

By careful use of their customers' data, banks should be able to offer more specific, tailored services to consumers, with rates that are "based on a consumer's banking patterns, levels of deposits, spending patterns, web browsing history, social media information [and] geolocation data", Ms Yamato-Tucker stated.

She added that offerings such as biometric identification, loyalty programmes, savings schemes and interactive money management programmes can all be part of a personalised user experience.

Crucially, much of the data needed to make these innovations a reality is already being collected anyway, so banks would not even have to put in place extensive new information gathering processes in order to learn more about their customers. The key to success will be how they can harness this existing data.

In particular, financial services firms need to improve how they handle metadata in order to make the organisation and analysis of information easier.

"With the growing variety and increasing velocity of data, banks need to develop comprehensive metadata management and data governance processes," Ms Yamato-Tucker said. "One cannot share and understand data effectively, and in a meaningful way, without managing the metadata."

Almost every bank has now set up services such as online and mobile portals that allow users to make payments, transfer funds and check their statements wherever they are. This was described by Ms Yamato-Tucker as the "first round" of banking innovation.

The second, she continued, will be "a ubiquitous customer experience, where the customer, and their devices, as a representation of the customer, is the centre of the mobile ecosystem."

‘Cognitive storage’ aims to cut the cost of big data

06 Apr 2016
Categories :#AnalyticsNews

One of the key challenges for any organisation embarking on a big data project will be ensuring that costs are kept under control – something that is not always easy to do when firms are collecting and storing huge amounts of information.

Therefore, in order to tackle this issue, IBM has revealed it is working on a new method for automatically classifying information in order to ensure the most relevant data is always on hand.

Known as 'cognitive storage', the solution involves assigning a value to incoming data, then determining which type of media that data should reside on, what level of data protection should apply and what retention and lifecycle policies should be set for different classes of data, Computer Weekly reports.

IBM researcher Giovanni Cherubini explained the most obvious answer to the challenge of handling large amounts of data while keeping costs low is to have tiers of storage – such as flash and tape solutions – with the most important data held on the fastest media.

The machine learning tool aims to assess the value of data and direct it to the most appropriate solution, by studying metadata and analysing access patterns, as well as learning from the changing context of data use to help it assign value. 

IBM researcher Vinodh Venkatesan added: "Administrators would help train the learning system by providing sample files and labelling types of data as having different value."
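As a minimal sketch of how such labelled samples could train a tier-assignment model – with invented metadata features and figures, not IBM's actual system – a standard classifier is enough to show the idea:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Invented sample of file metadata that an administrator has labelled with a tier
samples = pd.DataFrame({
    "size_mb":           [0.2, 450, 12, 2048, 0.05],
    "reads_last_30d":    [120,   2, 15,    0,   60],
    "days_since_access": [  1, 200, 30,  700,    3],
    "tier":              ["flash", "tape", "disk", "tape", "flash"],
})

features = ["size_mb", "reads_last_30d", "days_since_access"]
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(samples[features], samples["tier"])

# Newly ingested files are scored and routed to the predicted storage tier
new_files = pd.DataFrame({"size_mb": [3.5],
                          "reads_last_30d": [45],
                          "days_since_access": [2]})
print(clf.predict(new_files))  # e.g. ['flash']
```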

The challenge for business users is that they will have a large variety of data – from business-critical transactional data to emails, machine sensor data and more – so it will be essential that any cognitive storage system is able to categorise this correctly.

Mr Venkatesan said: "For an enterprise, there are ‘must keep’ classes of data and these could be set to be of permanently high value. But that is a small proportion in an enterprise. 

"The rest, the majority, which cannot necessarily be manually set, can be handled by cognitive storage – such as big data-type information and sensor information that might have value if analysed."

How big data is helping transform the logistics industry

30 Mar 2016
Image credit: iStockphoto/Maxiphoto
Categories :#AnalyticsNews

By some estimates, e-commerce now makes up more than half of all retail sales, indicating that it's clearly something no business can afford to be without if it wants to be successful.

But despite the huge range of innovations that have helped improve the sector over recent years, the final part of the process – physically getting goods from a business into the hands of the consumer – has remained largely unchanged. Although offerings like click-and-collect give consumers more choice, for many people it's still a matter of sitting around waiting for a courier to show up.

And for the logistics industry itself, it's this part of the process – the so-called 'last mile' between local distribution centres and the customer's home – where the challenges lie, as this is typically the slowest and least cost-effective part of its operations.

However, this is starting to change as more organisations adopt big data analytics and the Internet of Things to give them more insight into where they can make improvements.

Speaking at a supply chain conference recently, Matthias Winkenbach, director of the Massachusetts Institute of Technology’s (MIT's) Megacity Logistics Lab, said these innovations can be a powerful resource for the sector, if businesses are able to effectively harness them.

The Wall Street Journal reports that one of the big challenges is that, while companies have large amounts of data available to them, they often don't know what to do with it, or even understand what it is telling them. But the team at the MIT Megacity Logistics Lab is looking to change this, working with companies such as Anheuser-Busch and Brazilian e-commerce firm B2W to examine what lessons can be learned from last-mile analytics.

For instance, Dr Winkenbach noted that while data-collecting tools can be used to track the progress of delivery vehicles and identify patterns in delivery times in order to better inform route planning, they can also provide "transactional data" that gives a clearer picture of what happens between a delivery truck and a customer's doorstep.

He explained many shippers want to know why some drop-offs take longer than others, a question that was hard to answer in the past as there was very little data available other than that provided from the truck itself.

But advanced geospatial information reveals that longer drop-offs tend to occur in the most densely populated parts of a city, where many people live in high-rise apartments. Dr Winkenbach said this indicates delivery workers are struggling to park, walking farther after parking, and climbing stairs when they arrive.
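A toy example of the kind of analysis this enables, assuming hypothetical per-stop records already joined with a population-density layer (all figures invented):

```python
import pandas as pd

# Invented per-stop delivery records joined with a population-density band
stops = pd.DataFrame({
    "stop_id":       [1, 2, 3, 4, 5, 6],
    "density_band":  ["low", "low", "medium", "high", "high", "high"],
    "dwell_minutes": [2.1, 2.5, 3.8, 7.2, 6.5, 8.0],
    "walk_metres":   [15, 20, 40, 120, 95, 150],
})

# Average time at the kerb and walking distance by how built-up the area is
print(stops.groupby("density_band")[["dwell_minutes", "walk_metres"]].mean())
```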

Such data can also help logistics providers get a better picture of consumer behaviour, such as pinpointing customers who are typically not at home during delivery hours. This is not usually taken into account in route planning, but it can greatly impact the efficiency of operations.

The WSJ notes this is all information that can be used to help create more efficient routes, inform training programmes and determine the most suitable delivery vehicles. For example, the data may show that multiple short-route deliveries on smaller vehicles, including bicycles, make more sense than bulk deliveries in large trucks.

The result should be more efficient delivery routes that not only save money for logistics firms, but also allow consumers to get their goods faster, improving satisfaction for buyers and retailers alike.

Streaming analytics to be key big data priority for 2016

25 Mar 2016
Categories :#AnalyticsNews

Improved streaming analytics capabilities, self-service solutions and advanced in-memory tools will be among the key big data analytics technologies businesses will be investigating throughout 2016, a new report has found.

A study by Forrester Research notes that the development of big data technology is reaching a new stage this year, as organisations aim to embed the solutions into the applications that power their operations.

InformationWeek reports that the main priorities for 2016 indicate a change in businesses' attitudes and approach to big data. In previous years, companies were still struggling to get to grips with the technology and how they could extend their existing solutions to incorporate big data.

Now, however, there is a much higher level of confidence in the tools available. Brian Hopkins, vice-president and principal analyst at Forrester, said: "Forrester has seen an explosion in client adoption of big data since we first wrote about it in 2011. For example, the number of firms implementing streaming analytics, a key leading-edge big data technology, more than doubled between 2012 and 2015."

The research firm defines streaming analytics software as tools that are capable of filtering, aggregating, enriching and analysing a high throughput of data from multiple disparate live data sources and in any data format. By identifying both simple and complex patterns, such tools enable businesses to get a real-time picture of their operations, detect urgent situations and automate immediate actions.

Increased demand for tools such as Apache Spark illustrates the challenges businesses are facing when it comes to processing and analysing data in real time, the report continued.
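As a rough sketch of what such a streaming pipeline looks like in practice – the event schema, threshold and paths below are assumptions, not taken from the report – Spark Structured Streaming can filter and aggregate a live feed over short windows:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

# Hypothetical JSON event feed landing in a directory (a Kafka source works similarly)
events = (spark.readStream
          .schema("sensor_id STRING, reading DOUBLE, ts TIMESTAMP")
          .json("/data/incoming_events"))

# Filter for out-of-range readings and aggregate them over one-minute windows
alerts = (events
          .filter(F.col("reading") > 100)
          .groupBy(F.window("ts", "1 minute"), "sensor_id")
          .agg(F.count("*").alias("breaches")))

# Continuously emit the updated counts, giving a near-real-time picture of operations
query = alerts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```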

Elsewhere, newer, more business-focused goals for big data analytics will see higher demand for advanced analytics solutions that make use of in-memory computing and data preparation tools.

"To keep pushing revenue growth and digital customer experience transformation, big data technology is expanding its scope," wrote Hopkins. "It must also address the scale, speed, and integration requirements necessary to embed insights into the very fabric of next-generation, insights-driven businesses."

Overall, the report found that many organisations are set to make significant investments into big data this year, with more than six out of ten firms in Europe and North America planning to have systems in place by the end of 2016.

However, many organisations are said to be at a crossroads when it comes to making decisions on their big data deployments. Although many of Forrester's clients are looking to invest in technology for real-time data processing and user and customer self-service, they are often unsure which to choose from a crowded field of open source and commercial offerings.

Converged approaches to data ‘among key big data trends’ for 2016

22 Mar 2016
Image credit: iStockphoto/cifotart
Categories :#AnalyticsNews

A move away from centralised data storage approaches, converged analytics platforms and a greater focus on value and quality will be among the key trends facing the big data industry in 2016.

This is according to co-founder and chief executive of MapR John Schroeder, who wrote in an article for IT Pro Portal that as big data analytics has moved beyond a buzzword to become an essential part of many organisations' strategy, it is transforming the enterprise computing environment.

However, this is an area that's constantly evolving. "With many new innovative technologies on the horizon, not to mention a particularly noisy marketplace, differentiating between what is hype and what is just around the corner can be challenging," Mr Schroeder noted.

Therefore, he highlighted several key trends that all businesses looking to improve their big data analytics capabilities will have to consider in 2016.

One of the key areas of focus will be an effort to develop more converged analytics environments. Mr Schroeder said that in the past, it has been accepted best practice to keep operational and analytic systems in separate business applications, in order to prevent analytic workloads from disrupting operational processing.

But this attitude is changing as new tools emerge that can use in-memory data solutions to perform both online transaction processing (OLTP) and online analytical processing (OLAP) without the requirement for data duplication.

"In 2016, converged approaches will become more mainstream as leading organisations reap the benefits of combining production workloads with analytics in response to changing customer preferences, competitive pressures, and business conditions," the MapR chief executive stated. This convergence will also speed up the 'data to action' cycle and removes much of the latency between analytical processes its impact on business performance.

Mr Schroeder also forecast that 2016 will see a shift away from centralised workload and processing models towards more distributed solutions, partly to better deal with the challenge of managing multiple devices, data centres and global use cases across multiple locations.

Changes to overseas data security and protection rules brought about by the nullification of the EU-US Safe Harbor agreement will also dictate how companies store, share and process large quantities of data. With Safe Harbor 2.0 on the horizon and set to bring in new restrictions, global companies will need to re-evaluate their approach to cross-border data storage that will affect their analytics activities.

Elsewhere, it was predicted that 2016 will see the market focusing far less on the "bells and whistles" of the latest products, and more on established solutions that have proven business value.

"This year, organisations will recognise the attraction of a product that results in a tangible business impact, rather than on raw big data technologies – which, while promising an exciting new way of working, really just cloud the issues at hand," Mr Schroeder said.

Ultimately, vendors that are able to demonstrate quality will win out in 2016 as businesses demand proven, stable solutions to meet their requirements for better operational efficiency. 

"Now more than ever, an organisation's competitive stance relies on its ability to leverage data to drive business results. That's easier said than done when it’s pouring in from every origin imaginable," Mr Schroeder said.

4 in 10 businesses to adopt IoT solutions this year

17 Mar 2016
Image credit: iStockphoto/emyerson
Categories :#AnalyticsNews

The number of companies adopting Internet of Things (IoT) technology is set to see significant growth in the coming years, which is likely to place additional pressure on businesses to develop solutions to make the best use of the data they gather.

IoT is a technology that often goes hand in hand with big data analytics, due to the large volume and variety of real-time information connected devices are capable of collecting. And for a growing number of organisations, these tools are set to play a pivotal role in how they operate.

Research from Gartner revealed that less than a third of companies (29 per cent) are currently using IoT. However, it also found that an additional 14 per cent are planning to implement the technology within the next 12 months.

Overall, almost two-thirds of businesses (64 per cent) agreed that they will deploy IoT eventually, while just nine per cent of respondents stated the solutions will have no relevance to their sector.

However, adoption rates varied widely across sectors, with Gartner noting that heavy industries such as utilities, oil and gas, and manufacturing are leading the way. By contrast, more service-oriented businesses are still lagging.

Chet Geschickter, research director at Gartner, stated there are two key reasons why only a third of businesses have currently implemented IoT. He said: "The first set of hurdles are business-related. Many organisations have yet to establish a clear picture of what benefits the IoT can deliver, or have not yet invested the time to develop ideas for how to apply IoT to their business."

The second problem lies within organisations themselves, as many companies lack the skills and leadership needed to make their initiatives a success.

"2016 will be a very big year for IoT adoption. We are starting to see a wide range of IoT use cases across virtually all industries. But the big challenge now is demonstrating return on investment," Mr Geschickter said.

In many cases, turning details gathered from IoT devices into actionable results will demand a strong big data solution that is able to quickly and effectively process this information and translate it into a useable format.

Hadoop and Spark ‘key big data platforms’ in the UK

14 Mar 2016
Categories :#AnalyticsNews

Hadoop continues to lead the way as the preferred big data analytics platform for organisations in the UK, but Spark is starting to make inroads into its dominance.

This is according to recent research by Computing magazine, which found almost six out of ten respondents (59 per cent) believed their company will be using Hadoop as its primary analytical tool in 18 months' time.

This compares to 17 per cent who named Spark as the way forward for their business, while Kinesis (seven per cent), Storm (four per cent) and Flink (two per cent) received lower levels of interest. One in four IT professionals stated their business would be using another solution for big data processing.

However, the research found that more advanced organisations – described as those businesses that are leading the way when it comes to adopting and using technology to drive change – were more likely to favour Spark over Hadoop, suggesting that it is catching up.

Computing did offer a note of caution, observing that many businesses use Spark and Hadoop in conjunction with one another, so even as interest in Spark grows, Hadoop is unlikely to be replaced any time soon. However, for the purposes of the survey, respondents were asked to choose only one processing platform, in order to see which is having the most impact on professionals' thinking.

Interviews conducted by Computing also saw Spark come up frequently, with the speed of the solution a commonly cited benefit. One chief technology officer noted that although it is much easier to find people with experience and understanding of Hadoop, tools such as Spark and Storm are "much more attractive and faster".

As the capabilities of Spark have grown, it has also become more attractive to companies with needs for both batch and real-time processing. One data scientist Computing spoke to noted that if users are looking to deploy new solutions, they will increasingly turn straight to Spark, rather than use tools such as MapReduce.
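The classic word count shows part of the appeal: a job that needs separate mapper and reducer classes in MapReduce collapses to a few lines of PySpark (the paths below are placeholders).

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount-sketch").getOrCreate()

# Placeholder input path; in MapReduce this same job needs mapper and reducer classes
lines = spark.sparkContext.textFile("/data/sample_text")
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
counts.saveAsTextFile("/data/word_counts")  # placeholder output path
```

The same code runs unchanged from a laptop test to a full cluster, which goes some way to explaining why new deployments tend to start with Spark.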
