IoT and cloud ‘the future of Hadoop’
The creator of Hadoop, Doug Cutting, has said that cloud computing and Internet of Things (IoT) applications will be the basis for the next phase of growth for the platform.
So far, most deployments of the big data analytics tool have been in large organisations in sectors such as finance, telecommunications and the internet, but this is changing as more use cases emerge for the technology.
Much of this is down to the growing use of digitally connected sensors across almost all industries, which are generating huge amounts of data that businesses will need to interpret quickly if they are to make the most of the information available to them.
Mr Cutting highlighted several major companies that have already adopted Hadoop to help them handle this huge influx of sensor data.
“Caterpillar collects data from all of its machines,” he said. “Tesla is able to gather more information than anyone else in the self-driving business, they’re collecting information on actual road conditions, because they have cars sending all the data back. And Airbus is loading all their sensor data from planes into Hadoop, to understand and optimise their processes.”
One sector that is on the verge of a revolution in how it manages information is the automotive industry, as a growing number of cars are being equipped with IoT sensors and networking capabilities.
Mr Cutting noted that almost every new car now sold has a cellular modem installed, while almost half of new cellular devices are not phones, but other connected items.
Until now, Hadoop has often been deployed as a key component of a ‘data lake’, where businesses pool all their incoming data into a single, centralised resource they can dip into to perform analytics. However, IoT use cases typically require data to be exchanged rapidly between end devices and the central repository.
Therefore, there has been a focus recently on the development of new tools to facilitate this faster exchange of information, such as Flume and Kafka.
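To make the pattern concrete, the sketch below shows the kind of compact, self-describing sensor message a device-side producer would publish to a pipeline such as Kafka. It uses only the Python standard library; the field names (`vehicle_id`, `sensor`, `value`, `ts`) and the topic name are hypothetical, as real deployments define their own schemas.

```python
import json
import time


def make_sensor_message(vehicle_id: str, sensor: str, value: float) -> bytes:
    """Serialise one sensor reading as a UTF-8 JSON payload.

    The payload shape here is illustrative only; production systems
    typically use an agreed schema (e.g. Avro with a schema registry).
    """
    return json.dumps({
        "vehicle_id": vehicle_id,   # hypothetical device identifier
        "sensor": sensor,           # which sensor produced the reading
        "value": value,             # the reading itself
        "ts": time.time(),          # producer-side timestamp (epoch seconds)
    }).encode("utf-8")


# A producer would emit many such messages per second per device.
msg = make_sensor_message("car-042", "road_condition", 0.87)
```

With a Kafka client library, a message built this way would then be published to a topic (for example `producer.send("telemetry", msg)` in the `kafka-python` package), from which downstream Hadoop jobs or stream processors consume it.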
Mr Cutting particularly highlighted Apache Kudu as having a key role to play in this. He said: “What Kudu lets you do is update things in real-time. It’s possible to do these things using HDFS but it’s much more convenient to use Kudu if you’re trying to model the current state of the world.”
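The distinction Mr Cutting draws can be sketched in a few lines: an HDFS-style store is append-only, so the "current state" must be derived from the full history, whereas a Kudu-style keyed table lets you upsert the latest value in place. This is a stdlib-only illustration of the two storage semantics, not Kudu's actual API.

```python
# Append-only log (HDFS-style): every reading is retained; current state
# must be reconstructed by scanning or aggregating the history.
event_log: list[tuple[str, str]] = []

# Keyed, updatable table (Kudu-style): the latest reading per key is
# held directly and overwritten in place ("upserted").
current_state: dict[str, str] = {}


def record(key: str, value: str) -> None:
    """Record one reading under both storage models."""
    event_log.append((key, value))  # history grows with every event
    current_state[key] = value      # state reflects only the newest value


record("car-042", "moving")
record("car-042", "parked")
```

After the two calls, the log holds both events while the keyed table holds only `"parked"` for `car-042`; that in-place update is what makes a Kudu-style store convenient for "modelling the current state of the world".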
He also noted that while the majority of Hadoop applications currently run on-premises, cloud deployments are growing twice as fast, so it will be vital for providers to support this shift in their offerings.
“We are spending a lot of time on making our offerings work well in the cloud,” Mr Cutting continued. “We’re trying to provide really powerful high-level tools to make the lives of those delivering this tech a lot easier.”