Why multi-node on Kognitio Cloud will get a huge lift with Amazon Web Services
So far it has been great fun building multi-node Kognitio environments on Amazon Web Services (AWS), but it has not been without its issues. The key one is the large number of nodes required to bring together a RAM footprint capable of supporting complex analytics on billion-row-plus data volumes.
This is because the Cluster Compute instances that Amazon offers have been limited to a maximum of 60GB of RAM per node, so a terabyte of RAM requires 17 of them. From a Kognitio perspective, this is fine; our technology doesn't care how many nodes are involved. From a financial perspective, your wallet may feel the pinch after a few hours of processing, especially if you are using on-demand instances.
Well, your wallet can rest a little easier now: AWS introduced a new instance type at their re:Invent developer conference in Las Vegas last week. This new class of node has been designed with Massively Parallel Processing, in-memory databases and HPC workloads in mind. It provides 240GB of RAM per node and 2 x Intel Xeon E5-2670 processors (16 cores in total, though Amazon are likely to enable hyperthreading, so we should see 32). The nodes also come with 2 x 120GB SSDs and the 10 GbE networking that is at the heart of the Cluster Compute environment.
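To put the difference in node counts into perspective, here is a quick back-of-the-envelope sketch (taking a terabyte as 1000GB, as in the 17-node figure above; the function name is just for illustration):

```python
import math

def nodes_for_ram(target_gb, ram_per_node_gb):
    """Smallest node count whose combined RAM meets the target."""
    return math.ceil(target_gb / ram_per_node_gb)

# One terabyte (1000 GB) of aggregate RAM:
print(nodes_for_ram(1000, 60))   # existing 60GB Cluster Compute nodes -> 17
print(nodes_for_ram(1000, 240))  # new 240GB nodes -> 5
```

So the same in-memory footprint that previously needed 17 instances fits on 5 of the new ones.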
I for one can’t wait to get my hands on the new infrastructure and begin building out some new ready-to-use platforms on it. Watch this space for our certified release on that architecture.