IoT ‘to change focus’ of big data plans

19 Aug 2016

The emergence of the Internet of Things (IoT) as a key technology for many businesses will lead to a significant change in how organisations approach their big data analytics operations.

This is according to a new report from Machina Research, which stated there will be an increasing focus on predictive and prescriptive analytics in order to assist with business decision-making as firms try to make the most of the available data.

The company said that as millions of connected devices come online and provide real-time details about what is going on in the physical world, businesses will look to shift the goals of their analytics activities from examining what has happened to asking what is likely to happen.

Although more traditional activities such as historical and descriptive analytics will still have their place, the real value in the coming years will come from being able to accurately foresee opportunities and threats before they become readily apparent to competitors.

Author of the report Emil Berthelsen, principal analyst at Machina Research, observed: "One of the more significant developments as part of, and in parallel to, developments in IoT, is the approach of two different 'waves' in data management – big data and fast data."

He explained both of these are characterised by high speed and large scale, and the combination of the two has led to significant changes in how businesses interact with data, resulting in new requirements for data management.

"The landscape of IoT data and analytics is certainly evolving and will include a new age of machine learning, augmented insights and managed autonomy, as well as a new set of enabling technologies and data governance tools," Machina Research stated.

By 2020, it is estimated that IoT-equipped gadgets and sensors will make up around half of all connected devices. According to a study by Cisco, this will equate to some 12.2 billion items.

Meanwhile, Machina Research has estimated that revenues from IoT are set to exceed $3 trillion in 2025, compared with just $750 billion last year.

Many firms still struggling to secure cloud-based data

09 Aug 2016

For many businesses, cloud computing presents a great opportunity to make the most of big data, as the technology allows them to access storage and processing resources that may otherwise be beyond their reach.

But if they are going down this route, they must take steps to ensure any sensitive data they transfer to the cloud is secure – and this is something that is proving to be a challenge for a large number of firms.

A recent study conducted by Gemalto and the Ponemon Institute found that although 73 per cent of enterprises regard cloud-based platforms as important to their current operations, fewer than half of IT security professionals are confident in the security of their solutions.

Some 54 per cent of respondents did not agree their companies have a proactive approach to managing security, or ensuring that their cloud solutions comply with privacy and data protection regulations.

Additionally, 56 per cent did not agree their organisation is careful about sharing sensitive information in the cloud with third parties such as business partners, contractors and vendors.

This is despite the fact that a growing amount of data, ranging from customer information to payment records, is stored or processed in the cloud. In 2014, 53 per cent of businesses held customer data in the cloud, but this has increased to 62 per cent today. The majority of respondents (53 per cent) also consider this information to be most at risk.

Dr Larry Ponemon, chairman and founder of the Ponemon Institute, said that cloud security "continues to be a challenge for companies, especially in dealing with the complexity of privacy and data protection regulations".

He added: "To ensure compliance, it is important for companies to consider deploying such technologies as encryption, tokenisation or other cryptographic solutions to secure sensitive data transferred and stored in the cloud."

This is something many firms are currently failing to do. The survey showed only a third of businesses (34 per cent) using Software-as-a-Service solutions currently encrypt or tokenise sensitive data that is being transferred to the cloud.
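As a rough illustration of the client-side protection Dr Ponemon describes, the sketch below encrypts a sensitive field before a record is handed to a cloud service. It assumes Python's cryptography package and an invented record layout; in a real deployment the key would come from a key management service rather than being generated inline.

```python
# Minimal sketch: encrypt a sensitive field client-side before it leaves for the cloud.
# Assumes the 'cryptography' package; key management (KMS/HSM) is out of scope here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, fetch this from a key management service
cipher = Fernet(key)

record = {"customer_id": "C-1029", "card_number": "4111111111111111"}  # invented example

# Encrypt the sensitive value; only the ciphertext is sent to the cloud provider.
record["card_number"] = cipher.encrypt(record["card_number"].encode()).decode()

# ... upload `record` to the cloud service here ...

# The plaintext is only recoverable by holders of the key.
plaintext = cipher.decrypt(record["card_number"].encode()).decode()
```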

One common concern was that conventional security practices do not apply when dealing with the cloud, which means firms may have to adapt their approach to activities such as big data analytics. Seven out of ten firms (70 per cent) cited this as a challenge, while 69 per cent stated the fact they cannot directly inspect cloud providers for security compliance is a problem.

Jason Hart, vice-president and chief technology officer for data protection at Gemalto, commented that although organisations have embraced the cost and flexibility benefits of the cloud, it is clear that many businesses are still struggling to maintain control of their data in this environment.

"It's quite obvious security measures are not keeping pace because the cloud challenges traditional approaches of protecting data when it was just stored on the network," he continued. "It is an issue that can only be solved with a data-centric approach in which IT organisations can uniformly protect customer and corporate information across the dozens of cloud-based services their employees and internal departments rely on every day."

How big data could transform the roads of the future

08 Aug 2016

While big data is fast becoming an integral tool in the business world, its potential impact is far wider-reaching, with the capacity to affect the day-to-day lives of people all over the world.

For example, big data is playing a key role in the development of so-called smart cities, where almost every aspect of life, from waste disposal to law enforcement, will be managed by integrated technology solutions that are dependent on big data to function. 

Key to the development of true smart cities will be the evolution of more efficient transport systems, which make getting from A to B quicker and easier. Big data will be vital to making this future a reality and great strides have already been made.

Intelligent traffic

Traffic congestion is one of the biggest problems in cities across the world. Time-consuming and pollution-generating, it has a negative impact on both the environment and city residents' quality of life. There is an economic effect too, with a study from INRIX and the Centre for Economics and Business Research forecasting that congestion will cost the UK economy £307 billion between 2013 and 2030.

The growth of the internet of things and machine-to-machine technology presents solutions to these problems. According to Statista, more than one in ten cars on the road (12 per cent) now possess connected technology, and this figure is expected to reach 22 per cent by 2020. With such technology in place, it is possible for vehicles to share and receive real-time data on road and traffic conditions. This can then be processed to provide information that can be used to manage traffic more effectively.

Theoretically, the exchange of data will make it possible to monitor the number of cars in a certain area and divert approaching motorists towards a different route when capacity is reached. Trials to this effect have already taken place in the US and China.
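As a toy illustration of that idea, the sketch below aggregates position reports from connected vehicles by road segment and flags any segment that has exceeded its capacity; the segment names, capacities and reports are all invented.

```python
# Toy sketch: count connected-vehicle reports per road segment and flag segments
# that exceed capacity, so approaching traffic could be diverted. All values illustrative.
from collections import Counter

CAPACITY = {"A40-west": 120, "A40-east": 120, "ring-road-n": 200}

def over_capacity(position_reports):
    """position_reports: iterable of (vehicle_id, segment_id) tuples from car telemetry."""
    counts = Counter(segment for _, segment in position_reports)
    return {seg: n for seg, n in counts.items() if n > CAPACITY.get(seg, float("inf"))}

reports = [("v1", "A40-west"), ("v2", "A40-west"), ("v3", "ring-road-n")]
print(over_capacity(reports))  # {} here; in a real feed, congested segments would appear
```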

However, progress is still required before this vision of truly smart traffic can be achieved. Hussein Dia, associate professor at Swinburne University of Technology, Melbourne, recently discussed the issue with Raconteur.

"While decision-makers and city leaders are recognising the role of data analytics in ‘sweating of assets’ and providing innovative solutions to meet demand, deployment at a global scale is still in its infancy," he stated.

"As the amount of data about current and future travel demands increases in the connected world, so the possibility of better analytics increases. In order to have real benefit, though, predictive analytics for transport as a whole is required," Prof Dia added.

Self-driving cars

Should data-driven traffic management become a reality, the next step in the development of truly smart cities will be autonomous vehicles. Progress has already been made in this area, with Google having covered close to 2.5 million miles in test drives through its self-driving car project. Tesla, meanwhile, has adopted more of a piecemeal approach and has already released the 'autopilot' software update, which allows its vehicles to drive, change lanes and adjust speed autonomously. However, the car must already be moving at a consistent speed and have map data of the surrounding area before these features can be engaged.

If genuinely autonomous cars are to become a reality, big data will be crucial, providing the real-time information these vehicles will require to navigate the roads in the safest and most efficient manner possible. 

4 in 10 manufacturing firms experimenting with big data architecture

04 Aug 2016

Around four in ten businesses in the high-value manufacturing sector are currently experimenting with big data architecture.

This is according to a new study carried out by the Alan Turing Institute and Warwick Analytics, the results of which have been shared with The Manufacturer. It found that 41 per cent of businesses in the sector are currently at the experimentation stage with big data architecture, looking into how they can utilise this technology and the value it can provide. However, adoption is expected to accelerate in the near future, with the number of businesses still experimenting forecast to be only 11 per cent come 2019.

The study revealed there is a lack of clarity about some aspects of big data among manufacturers, with half of the companies surveyed unable to clearly understand the difference between business intelligence, big data analytics, and predictive analytics.

When it comes to technical barriers to the adoption of big data analytics, having data spread across a number of systems that are difficult to combine was rated as the top factor, while concerns about the quality of data and the difficulty of cleaning it were ranked as the second biggest challenge. Other barriers mentioned ranged from a lack of data to having too much information, while some respondents believe data analytics is simply too difficult to understand.

A number of business challenges were also highlighted as obstacles to adoption, such as a lack of internal sponsorship, a shortage of specific data analysis skills and not having an effective business case for the technology. Despite these concerns, the majority of respondents see the value in big data analytics, with 92 per cent saying they believe it can drive a business improvement of more than ten per cent.

"The ability to extract meaningful insights about products; processes; production; yield; maintenance, and other manufacturing functions, as well as the ability to make decisions and take proactive action – when it matters – can deliver tremendous growth and profitability result," the report stated.

"Manufacturers have tremendous potential to generate value from the use of large datasets, integrating data across the extended enterprise and applying advanced analytical techniques to raise their productivity both by increasing efficiency and improving the quality of their products. However, the reality is that very few of today’s manufacturers are close to this vision yet," it added.

The study highlighted a number of key benefits that data analytics can deliver for manufacturing businesses. These include improving quality by providing a firm foundation from which the root cause of problems can be identified and making production more effective. Other advantages are faster time-to-launch, forecasting maintenance needs and improving supply chain operations.

Big data tech makes inroads into UK public sector

03 Aug 2016

Big data technology is beginning to make a mark within the UK public sector.

Both HM Revenue and Customs (HMRC) and the Home Office are now using Hadoop to help manage their data needs, according to a report from Computer Weekly. Both organisations are using commercial distributors of the software, with HMRC utilising Cloudera and the Home Office working with Hortonworks.

The former spent some £7.4 million with Cloudera at the beginning of last year before investing almost another £1 million earlier in 2016.

An HMRC spokesperson told Computer Weekly: "HMRC has built an enterprise data hub – a powerful central repository for all of its data, which will help it to personalise services to customers and strengthen its compliance work. HMRC will be able to store and analyse data using a mix of open source and closed source tools, and commodity hardware, representing better value for money for taxpayers."

The spokesperson added that use of Hadoop will allow for greater operational efficiency and a level of analytical capacity that has not been available to it in the past.

Records show the Home Office spent £53,000 with Hortonworks in August 2014 and a further £61,000 two months later. It is reported that the organisation is using Hadoop to connect the various databases it currently relies on.

Regarding the use of big data technology in the public sector, Cloudera's vice-president for northern Europe Stephen Line said full-scale adoption will take time.

"Government, like a lot of old industry, has to go through that digital transformation, modernising its digital architecture, breaking down those silos. The UK is not necessarily behind or ahead particularly," he commented.

Other UK public sector bodies now utilising big data technology include the National Crime Agency and Office for National Statistics. 

Earlier this year, an independent report commissioned by the UK government called for the nation's public sector to improve its use of data. Professor Sir Charlie Bean, a former deputy governor of the Bank of England, compiled the document, which said improvements need to be made to ensure that accurate economic statistics on the digital economy can be captured.

Prof Bean said: "We need to be candid about the limitations of UK economic statistics. The UK was one of the original pioneers of national accounting. We need to take economic statistics back to the future or we risk missing out an important part of the modern economy from official figures."

Among his recommendations were for the establishment of two new centres to better measure the UK economy and unlock the "treasure trove of big data available – especially in the public sector".

Prof Bean called on the Office for National Statistics to become innovative enough to provide the kind of data the country needs.

How can you keep your data lake as clean as possible?

27 Jul 2016

One of the key trends in big data analytics for the last couple of years has been the concept of the 'data lake'. The idea behind this is to place all a business' incoming data in a single location, from which it can be studied at will.

But while this may seem like a simple idea in principle, the reality is often far different. If organisations are not careful about how they manage it, the lake can quickly become clogged with poor-quality information, irrelevant details and inaccuracies, ending up looking more like a swamp.

So how can this be avoided? In a new report, Constellation Research explained that many businesses fail to appreciate that a data lake should not be viewed as a replacement for a traditional data warehouse, which is able to support predictable production queries and reports against well-structured data. 

Instead, it noted: "The value in the data lake is in exploring and blending data and using the power of data at scale to find correlations, model behaviors, predict outcomes, make recommendations, and trigger smarter decisions and actions."

Where many poor implementations fail is if a business does not put in place a clear structure to order the data within their lake. There may be an assumption that simply deploying a Hadoop framework is enough to create an effective data lake, but in reality, this is not the case.

Constellation Research vice-president and principal analyst Doug Henschen, who authored the report, noted that despite its name, it would be a mistake to consider a data lake as a single, monolithic repository into which data can be dumped without thought or planning.

Instead, businesses should look to split their data lake into 'zones' based on the profile of each piece of information.

"If Hadoop-based data lakes are to succeed, you'll need to ingest and retain raw data in a landing zone with enough metadata tagging to know what it is and where it's from," Mr Henschen wrote.

For instance, businesses should set up zones for refined data that has been cleansed and is ready for broad use across the business. There should also be zones for application-specific data that can be developed by aggregating, transforming and enriching data from multiple sources. A zone for data experimentation was also recommended.
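As a loose sketch of that zoning approach, the following Python snippet tags raw files with a metadata sidecar as they land and promotes cleansed datasets into the refined or application zones. The paths, zone names and metadata fields are illustrative assumptions rather than any standard layout.

```python
# Loose sketch of zone-based data lake ingestion: raw files land with a metadata
# sidecar, and cleansed datasets are promoted to refined/application zones.
# Paths, zone names and metadata fields are illustrative assumptions, not a standard.
import json, shutil, time
from pathlib import Path

LAKE = Path("data_lake")
ZONES = {name: LAKE / name for name in ("landing", "refined", "application", "sandbox")}
for zone in ZONES.values():
    zone.mkdir(parents=True, exist_ok=True)

def ingest_raw(src: str, source_system: str, schema: str) -> Path:
    """Copy a raw file into the landing zone, tagged with where it came from."""
    dest = ZONES["landing"] / Path(src).name
    shutil.copy(src, dest)
    meta = {"source": source_system, "schema": schema, "ingested_at": time.time()}
    Path(str(dest) + ".meta.json").write_text(json.dumps(meta))
    return dest

def promote(path: Path, zone: str) -> Path:
    """Move a cleansed or aggregated dataset into the refined or application zone."""
    dest = ZONES[zone] / path.name
    shutil.move(str(path), dest)
    return dest
```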

This will not be an easy goal to achieve, Mr Henschen stated, as it will require businesses to pay much closer attention to data as it enters the company, as opposed to simply ingesting everything and then looking to categorise it later.

Although the Hadoop community has been working on a range of tools to help with the ingestion, transformation and cataloguing of data, many IT professionals are still not hugely familiar with these. However, Mr Henschen said there is good news on this front, as a broader ecosystem has emerged around Hadoop, aiming to tackle the problems associated with managing data lakes.

How are retailers making the most of big data?

26 Jul 2016

One part of the economy that's been particularly quick to embrace the potential of big data is the retail sector. Given the large amounts of customer information these firms collect as a matter of course, being able to feed this into an advanced analytics platform in order to gain insight is a natural fit.

Therefore, forward-thinking retailers were some of the first adopters of big data analytics technology, and have developed their innovations into mature solutions that can give them a leg-up over competitors. But what does this look like in the real world?

Datanami recently highlighted several key use cases for big data that retailers are employing. While some are straightforward, there are also some more complex solutions in place that companies are using to understand the market and offer the best products and service.

For starters, product recommendation is a key area for big data. This is particularly popular among ecommerce retailers, as it is a relatively simple use of the technology, but one that can have a big impact. By using machine learning techniques and historical data, smart retailers can generate accurate recommendations before the customer leaves their site.
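A minimal sketch of this kind of recommendation, using invented order data and simple co-purchase counts rather than a full machine learning pipeline, might look something like this:

```python
# Minimal "customers who bought X also bought Y" sketch using co-purchase counts
# from historical orders. Order data is invented for illustration.
from collections import defaultdict, Counter
from itertools import permutations

orders = [
    {"kettle", "toaster"},
    {"kettle", "mug", "coffee"},
    {"toaster", "bread"},
]

co_bought = defaultdict(Counter)
for basket in orders:
    for a, b in permutations(basket, 2):
        co_bought[a][b] += 1

def recommend(item, n=3):
    """Return the items most often bought alongside `item`."""
    return [other for other, _ in co_bought[item].most_common(n)]

print(recommend("kettle"))  # e.g. ['toaster', 'mug', 'coffee']
```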

Eric Thorston, Hortonworks' general manager for consumer products, told Datanami: "When you think about recommendations, everybody wants to beat Amazon. Love them or hate them – most retailers hate them – Amazon makes from 35 per cent to 60 per cent revenue uplift on recommendations, and everybody is saying, 'How can we get a piece of that?'"

But this is just the tip of the iceberg when it comes to what big data can offer retailers. For instance, another common use case for the technology is market basket analysis. Looking at which groups of products are commonly purchased together is an activity that has been carried out manually for decades, but with the advent of tools such as Hadoop, retailers can automate the process and delve much deeper into their data.

In the past, such activities may only use a small sample of customers, with receipts going back one or two years. But big data can greatly expand this, offering companies much more accurate results they can use to inform future strategy.
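At its core, market basket analysis comes down to counting how often items appear together compared with how often they appear on their own. The sketch below computes support and lift for item pairs over a handful of invented receipts; at retail scale the same counts would typically be produced by a distributed framework such as Hadoop or Spark.

```python
# Rough sketch of market basket analysis: support and lift for item pairs.
# Receipts are invented; real analyses would run over years of transaction data.
from collections import Counter
from itertools import combinations

receipts = [
    {"nappies", "beer", "crisps"},
    {"nappies", "beer"},
    {"beer", "crisps"},
    {"nappies", "wipes"},
]

n = len(receipts)
item_count = Counter(i for r in receipts for i in r)
pair_count = Counter(frozenset(p) for r in receipts for p in combinations(sorted(r), 2))

for pair, c in pair_count.most_common(3):
    a, b = tuple(pair)
    support = c / n                                              # how often the pair occurs
    lift = support / ((item_count[a] / n) * (item_count[b] / n))  # vs independent purchases
    print(f"{a} + {b}: support={support:.2f}, lift={lift:.2f}")
```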

Big data is also a major benefit when it comes to analysing unstructured data, such as social media posts. In today's environment, any company that does not listen to its customers on platforms such as Twitter and Instagram will be missing out on a huge amount of potentially valuable information, and retailers are no exception.

Tools such as Hadoop use natural language processing to extract information from these channels and play a critical role in helping firms understand their audience. However, Mr Thorston warned this is an activity that must be conducted carefully.

"The minute you make a wrong move, you lose," he said. "The obligation is to use it judiciously. That prevents the misuse and that also preserves and supports and aligns to the ultimate goal, which is customer intimacy, customer loyalty, increased revenue, and increased margin."

These are just a few examples of how big data can help retailers. In addition to this, processes such as price optimisation, inventory management and fraud detection all stand to benefit from the technology.

How the aviation sector is embracing big data

22 Jul 2016

The aviation sector has always been a leader when it comes to technology, so it should be no surprise that the industry has been quick to embrace the potential of big data.

There are many opportunities for data to have an impact here. With millions of people taking to the skies every day, airlines have a huge pool of resources they can use to improve services and identify trends.

Meanwhile, the advent of Internet of Things sensors offers airlines greater ability to conduct activities such as predictive maintenance, as well as giving manufacturers more insight into what is happening on the factory floor.

Among the companies looking to take advantage of this is Boeing, which has just announced a new agreement with Microsoft that will see it use the technology firm's Azure cloud computing platform to run a range of analytical operations.

Big data analytics operations that will benefit from this include real-time information on the purchasing and leasing of aeroplanes and engines, as well as helping customers with route planning, inventory management and fleet maintenance.

For instance, Boeing's advanced airplane health solutions are currently used on more than 3,800 airplanes operating around the globe and allow customers to use real-time data to optimise operational performance, fuel use, maintenance, and supply chain performance. Meanwhile, nearly 13,000 aircraft a day benefit from digital navigational tools. 

The manufacturer also claims that the use of big data-based crew scheduling applications can reduce the costs of these operations by as much as seven per cent.

Kevin Crowley, Boeing vice-president of Digital Aviation commented: "Boeing's expertise and extensive aviation data resources coupled with Microsoft's cloud technology will accelerate innovation in areas such as predictive maintenance and flight optimisation, allowing airlines to drive down costs and improve operational efficiency."

Elsewhere, budget airline Ryanair has also announced a new partnership with visual analytics firm Qlik this week, which it says will consolidate data from across the company and give employees instant insight into what is going on within the business.

It hopes that in the future, the use of this data will allow the airline to make better business decisions in time-sensitive departments such as flight and ground operations, as well as improving the services they offer to passengers.

For instance, Ryanair aims to boost its in-flight retail offering, as well as helping to optimise the supply chain by understanding the anticipated passenger mix on a given flight and matching this with an appropriate range of products and sufficient stock for the flight.

"We're building a complete overview of what’s going on across the business and it is playing a major role in the way we are evolving the services we offer to customers," said Shane Finnegan, senior BI developer at Ryanair. "Ultimately, we want to find the best ways to make our customers happy on-board, while being able to offer them the lowest fares on the market."

Insurance sector ‘must be careful’ in how it uses big data

21 Jul 2016

The increased use of big data in the insurance sector to conduct more personalised risk analysis and tailor quotes accordingly could lead to the creation of a new 'underclass' of consumers who struggle to secure coverage, unless the industry treats the technology with care.

This is the warning of a new report from the Chartered Insurance Institute (CII), which said the use of the technology could result in some people being refused insurance altogether if they are deemed to be too risky.

The Financial Times reports that big data analytics is increasingly being viewed as a key part of the future of insurance due to its ability to give providers a more complete picture of their customers, thereby leading to a more personalised service.

Much of the discussion surrounding this so far has focused on the ability to offer discounts on premiums for activities such as careful driving and healthy lifestyles, but at the other end of the scale, this personalised approach could leave people priced out of the market.

"While in some cases this may be to do with modifiable behaviour, like driving style, it could easily be due to factors that people can't control, such as where they live, age, genetic conditions or health problems," the report stated.

Therefore, insurers need to be very careful about how they approach the use of big data analytics. While there are undeniable benefits to the technology, insurers must be wary about the extent to which they rely on this.

"Data is a double-edged sword," said David Thomson, director of policy and public affairs at the CII. "The insurance sector needs to be careful about moving away from pooled risk into individual pricing. They need to think about the broader public interest."

He added that if the industry cannot ensure that coverage is available to everyone – particularly in areas such as health insurance – intervention from the government may be required.

This has already been seen in some areas, such as home insurance. The Financial Times noted that improved mapping and data analysis has allowed insurers to much more accurately identify homes and businesses that are at highest risk of flooding.

This led to complaints from many people that cover became unaffordable for these areas, so the government created the Flood Re organisation, which aims to lower the cost of insurance for people living in high-risk areas.

"Regulators are trying to catch up on this issue," said Mr Thomson. "So there is a huge emphasis on insurers to guard their own reputations and business models. As in banking, algorithms can be good and bad."

At the moment, there are some restrictions on the data insurers can take into account when calculating premiums. Health and life insurers, for example, cannot use predictive genetic test results under an agreement between the government and the Association of British Insurers. This is currently set to expire in 2019, although a review is due next year that could see it extended.

How can you ensure the quality of IoT data?

14 Jul 2016

One of the biggest trends affecting the analytics sector at the moment is the emergence of the Internet of Things (IoT) as a key source of data.

Over the next few years, the number of IoT devices is set to explode. By the end of 2020, Juniper Research forecasts there will be some 38 billion such items in use, a threefold increase since 2015.

But while this will present huge new opportunities for businesses to apply big data analytics to the information generated in order to gain valuable insight, it also poses a range of risks.

Although questions such as privacy are well documented, one issue that is frequently overlooked is the quality of the data itself. Businesses may assume that because the incoming data will be taken directly from sensors, there will be little that can go wrong with it, but in fact, this is not necessarily the case.

It was noted by Mary Shacklett, president of Transworld Data, that one issue that may frequently affect the quality of IoT data lies in fundamental flaws in the way the embedded software used in the devices is developed.

She explained in an article for Tech Republic that historically, developers of this software – which runs machines, produces machine automation, and enables machines to talk to one another – did not always employ the same methods as they would for more traditional apps.

"This meant that detailed quality assurance (QA) testing on the programs, or ensuring that program upgrades were administered to all machines or products out in the field, didn't always occur," Ms Shacklett stated. 

The result of this could be significant for big data operations. If an undetected flaw in an IoT device's embedded software results in inaccurate data being generated, this could lead to an erroneous analytics conclusion that has a major impact on the business.

Although this is changing as more manufacturers mandate strict compliance and QA testing from their embedded software developers, with sectors such as automotive, aerospace and medical equipment leading the way due to their high quality standards, for now this remains a risk that must be considered when using IoT data.

To counter this, Ms Shacklett highlighted two key steps to ensure the quality of this information. Firstly, she noted that users must monitor their generated data closely, and immediately investigate any unusual readings, which also need to be reported to the appropriate teams.

For instance, "if the team charged with end responsibility for machines/devices sees anything unusual with the data, immediate action should be taken on the floor, and they must report back to the analytics team that a potential problem could affect data".

Organisations also need to ensure that vendors are kept in the loop, on both the analytics and machine side. It was noted that as hardware and software are never perfect, there may be some instances where data might be skewed by a known issue that a machine manufacturer or IoT provider is experiencing.
