Kognitio Analytical Platform: Tech Profile

3. What is an analytical platform?

Although Kognitio can be classed as a massively parallel processing (MPP) analytical “database”, it operates very differently to the majority of other MPP databases on the market.

The term “database” tends to suggest data storage. Kognitio does have its own optional internal disk subsystem, but it is primarily used as a high-performance analytical layer on top of existing storage and data-processing systems, such as Hadoop clusters, existing Enterprise Data Warehouses, or cloud storage. Hence we use the term “platform” as opposed to “database”.

To the outside world, the Kognitio Analytical Platform can look like a traditional Relational Database Management System (RDBMS) in the same way that Oracle™, IBM DB2™ and Microsoft SQL Server™ are databases. However, unlike these databases, Kognitio has been architected specifically for an analytical query workload as opposed to the more traditional on-line transaction processing (OLTP) workload. The optimal architecture for an effective transactional system is very different to that required for successful analytical performance.

Analytical workloads are best served by an MPP architecture that splits the data and the queries across many individual computer elements, or compute nodes. Each node holds a portion of the total data; each query is sent to all nodes, and every node works on its own portion of the data in parallel, as sketched below.
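As a rough illustration of this scatter/gather pattern, the following Python sketch partitions rows across a handful of simulated nodes, has every “node” answer the same aggregate query over its own partition, and merges the partial results at a coordinator. The function names, the round-robin distribution and the node count are our own illustrative assumptions, not Kognitio internals:

```python
# Sketch of the MPP pattern described above: data is partitioned across
# nodes, every node runs the same query on its own partition, and a
# coordinator merges the partial results.

from typing import List, Tuple

def partition(rows: List[int], node_count: int) -> List[List[int]]:
    """Round-robin the rows across the nodes (hash distribution is also common)."""
    parts: List[List[int]] = [[] for _ in range(node_count)]
    for i, row in enumerate(rows):
        parts[i % node_count].append(row)
    return parts

def node_query(local_rows: List[int]) -> Tuple[int, int]:
    """Each node answers SELECT SUM(x), COUNT(x) over its local data only."""
    return sum(local_rows), len(local_rows)

def coordinator(rows: List[int], node_count: int = 4) -> float:
    # Send the same query to every node, then merge the partial results.
    partials = [node_query(p) for p in partition(rows, node_count)]
    total = sum(s for s, _ in partials)
    count = sum(c for _, c in partials)
    return total / count  # final AVG(x)

print(coordinator(list(range(1_000_000))))  # 499999.5
```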

High-performance OLTP processing, on the other hand, requires an architecture where each node is able to see all of the data. Even when an OLTP database is run on a cluster of nodes, each node needs access to a complete copy of the data: the data is usually distributed, but each node still needs to see the data held on physically separate nodes. This creates huge amounts of inter-node network traffic, limiting OLTP database clusters to a small number of physical nodes. In practice, eight nodes is a large OLTP cluster, and Oracle Real Application Clusters (RAC), for instance, is widely regarded as best limited to two nodes. Even when several nodes are used, an individual query is generally satisfied by only one node in the cluster.

These different architectural requirements mean that OLTP databases perform poorly when asked to do analytics on large data volumes. Conversely, analytical systems have relatively poor transaction processing performance. Some analytical databases are actually unable to do any transaction processing. Kognitio supports full transaction processing and is ACID compliant, but its transaction processing performance is moderate when compared to a high-performance OLTP database.

Kognitio has been delivering scale-out, in-memory analytics for more than 20 years and as such can rightly claim to be a pioneer in the field. MPP has become the preferred technique for providing high-performance analytics on big data. Many platforms now have a shared-nothing MPP architecture, although the degree to which they parallelize their operations varies: some, like Kognitio, parallelize all aspects of their operations, whilst others only parallelize their data scanning.

Changing the analytical model

The very high performance levels achieved by Kognitio are about far more than making things go faster. In-memory analytical platforms fundamentally change the way organizations go about building future analytical infrastructures. The traditional analytical infrastructure, with its onerous data latency, lack of flexibility, poor scalability and high maintenance overhead, is giving way to a much more dynamic model, based on the power of low-cost commodity hardware rather than on expensive system administration skills. The new model allows database and data model life cycles to become shorter and more agile. The figure below contrasts the two approaches.

In-memory platforms

The majority of database systems, whether OLTP or analytical, store data on mechanical, spinning disks. These are relatively slow devices, with a maximum read speed of around 100MB per second, and disk I/O is therefore the primary performance bottleneck for disk-based databases. Writing data to disk is even slower than reading it, so analytical query processing that generates intermediate temporary result sets is further impacted by the need to perform many disk write operations.

Kognitio, on the other hand, is an in-memory platform. The data of interest is held directly in RAM, and modern industry-standard servers allow increasingly large amounts of RAM to be fitted at very low cost. And, of course, RAM is dramatically faster than disk.

A typical industry-standard server will have RAM with access speeds of at least 6400MB per second: 64× faster than a disk drive read and more than 100× faster than a disk drive write. It is also important to note that DRAM, as its name implies, is a random access device, where data can be read or written in very small chunks, from anywhere in the memory space, with virtually no overhead. Disk drives, on the other hand, are sequential block access devices, which means that data is read in sets of sequential blocks. During a read operation, these blocks must be found, read from the drive and copied into RAM before the data of interest can be worked on. This three-stage operation slows access to the data of interest even further.
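The quoted figures are per-device numbers, but even a back-of-envelope calculation shows the scale of the gap. The short Python sketch below works out how long a single device would take to scan 1TB at each of the speeds quoted above (real systems scan many disks or memory channels in parallel, so absolute times will differ):

```python
# Back-of-envelope arithmetic for the figures quoted above: the time
# for a single device to scan 1 TB at disk speed (~100 MB/s) versus
# RAM speed (~6,400 MB/s).

DATA_MB = 1_000_000      # 1 TB expressed in MB
DISK_MB_S = 100          # quoted mechanical-disk read speed
RAM_MB_S = 6_400         # quoted RAM access speed

disk_seconds = DATA_MB / DISK_MB_S   # 10,000 s, roughly 2.8 hours
ram_seconds = DATA_MB / RAM_MB_S     # ~156 s, roughly 2.6 minutes

print(f"disk: {disk_seconds / 3600:.1f} h, ram: {ram_seconds / 60:.1f} min")
```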

Moving between blocks on the disk usually involves “seek time”: the physical repositioning of the mechanical read head over the required track, which is a very slow operation. Analytical databases are generally “seek time immune”, as data is normally scanned sequentially in volume. However, when an analytical query involves the generation of an intermediate result set, seek time becomes hugely significant, since the disk must now read data from one track whilst also writing the intermediate result set back to a completely different track on the disk.

Kognitio does not write intermediate result sets back to disk. In fact, when all the data of interest is held in memory, Kognitio performs no disk access at all, even when executing the most complex of queries. Instead, intermediate result sets are created in memory, using Kognitio’s sophisticated query streaming mechanism, which allows queries to run even if the available free memory is too small to hold the intermediate result set.

Holding the data in memory means that Kognitio is able to scan data at extremely high speeds, eliminating the need for indices. By splitting the available memory into small chunks and dividing them among the available CPU cores, Kognitio is able to scan all memory chunks in parallel, as sketched below.
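To make the chunked, core-parallel scan concrete, here is a minimal Python sketch of the general technique: the data is cut into fixed-size chunks, a pool of worker processes (one per core) scans its chunks with no index at all, and the partial counts are combined. The chunk size, the predicate and all names are illustrative assumptions, not Kognitio’s internal layout:

```python
# Minimal sketch of a chunked parallel scan: split the in-memory data
# into chunks, scan every chunk on its own CPU core, combine results.

from concurrent.futures import ProcessPoolExecutor
import os

def scan_chunk(chunk):
    """Full scan of one chunk: no index, just count matching rows."""
    return sum(1 for value in chunk if value % 97 == 0)

def parallel_scan(data, chunk_size=100_000):
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        return sum(pool.map(scan_chunk, chunks))

if __name__ == "__main__":
    print(parallel_scan(list(range(1_000_000))))  # 10310 multiples of 97
```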

The use of algorithms that exploit the random access nature of RAM, together with techniques such as Dynamic Machine Code Generation, means that these scans are incredibly fast and efficient.

While indices work well for transactional systems that need to access a small number of records in a large data set, they make analytical systems, which look for patterns across large portions of the data, very inflexible. Analytical systems should allow users ad-hoc access to the data, unconstrained by the available indices. Couple this with the overhead of building and maintaining indices, and the ability to get speed without indexing is a major advantage.

Is in-memory the same as caching?

At first glance, in-memory simply sounds like a large cache, but it is in fact very different.

A cache is a buffer of the most frequently used disk blocks, held in RAM for opportunistic re-use. Only the caching layer knows which data is resident in RAM, so when a query is physically executed, the CPUs must continually run code that asks, “is the data I need cached or not?” for every block or row. This code is not trivial and significantly increases the number of instructions the CPU has to execute as it runs the user query. Caches are also highly dynamic: depending on which operations are accessing data, the contents of a cache can vary widely over time, and CPU cycles are wasted merely determining which data blocks are best retained in the cache at any point in time.

When data is loaded (pinned) into memory by Kognitio, it is explicitly formatted and placed in structures that guarantee immediate, ultra-low latency, on-going random access; every aspect of the system knows exactly what data is held in RAM. When the Kognitio compiler and optimizer produce a query plan, they can take into account the different costs of RAM-based access versus disk-based data fetch, and produce an appropriate, efficient plan depending on whether or not all the data resides in memory. Most importantly, the executing code does not need to keep asking the “is the data cached or not?” question, which reduces the executing code path length by a factor of 10. The sketch below contrasts the two code paths.
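The following simplified Python sketch contrasts the two code paths described above: the cache-based scan must test every block for residency and handle misses and eviction, whilst the pinned in-memory scan just reads. The block layout and all names are illustrative, not an implementation of either a real buffer cache or Kognitio:

```python
# Simplified contrast of the two code paths: a per-block residency
# check versus a direct scan over data known to be pinned in RAM.

def cached_scan(block_ids, cache, read_block_from_disk):
    total = 0
    for bid in block_ids:
        block = cache.get(bid)               # "is this block cached?" on every access
        if block is None:
            block = read_block_from_disk(bid)  # slow path: fetch the block
            cache[bid] = block                 # and now: which block should be evicted?
        total += sum(block)
    return total

def pinned_scan(blocks):
    # Every block is known to be in RAM, so the residency test vanishes.
    return sum(sum(block) for block in blocks)

# Tiny demo: both paths compute the same answer, via very different code.
blocks = {bid: list(range(bid, bid + 4)) for bid in range(3)}
print(cached_scan(blocks.keys(), {}, lambda bid: blocks[bid]))  # 30
print(pinned_scan(blocks.values()))                             # 30
```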

When working with data sets that are too big to fit completely in the cache, a conventional cache will keep only the most recently used data in memory, causing inconsistent query performance. A query that once ran quickly can suddenly perform orders of magnitude slower as the data it needs is evicted from memory by a query that needs a different subset of the data, and this problem becomes worse as concurrency increases. An in-memory system, on the other hand, is explicitly told which data to hold in memory and will keep it there until explicitly commanded to remove it. This provides consistent and predictable query performance, as the toy cache below demonstrates.
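Here is a toy least-recently-used (LRU) cache in Python that demonstrates the inconsistency: a query whose working set fits the cache runs “fast” (all hits) until a competing query with a different working set evicts its blocks. The capacity and block numbering are arbitrary illustrative choices:

```python
# Toy LRU cache showing the eviction behaviour described above.

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity, self.store, self.misses = capacity, OrderedDict(), 0

    def read(self, block_id):
        if block_id in self.store:
            self.store.move_to_end(block_id)   # hit: refresh recency
        else:
            self.misses += 1                   # miss: simulates a disk read
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)  # evict least recently used block
            self.store[block_id] = True

cache = LRUCache(capacity=100)

def query(block_ids):
    """Run a 'query' over the given blocks; return how many disk reads it cost."""
    before = cache.misses
    for bid in block_ids:
        cache.read(bid)
    return cache.misses - before

print(query(range(100)))       # 100 misses: cold cache
print(query(range(100)))       # 0 misses: the "fast" query
print(query(range(100, 200)))  # 100 misses: competing query evicts everything
print(query(range(100)))       # 100 misses: the same query is now slow again
```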

In a Kognitio system, the data loaded into memory is not just a simple copy of a disk block. Instead, the data is held in structures specifically designed to take advantage of the low-latency random access nature of RAM. When combined with Kognitio’s Dynamic Machine Code Generation, this significantly reduces the executing code path length, thereby improving query performance.

Because Kognitio software has been engineered from its earliest versions to work against data held in RAM, all of its algorithms for processing data (e.g. joins, sorts, grouping, etc.) have been specifically optimized to fully exploit the random access nature of RAM, along with modern CPU instruction optimizations applied as queries are compiled; this is what we call “Dynamic Machine Code Generation”. The same is not true of other databases that were fundamentally designed to work against disk-based data and have introduced, at a later date, extended caches or faster I/O sub-systems that have been inappropriately labelled as in-memory data storage.
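Kognitio compiles queries down to machine code, which a short Python example cannot reproduce directly, but the following sketch illustrates the underlying idea of query-time code generation: rather than a generic evaluator that re-inspects the filter condition for every row, a function specialized for the exact predicate is generated and compiled once per query. All names here are illustrative, and generating Python bytecode stands in for generating real machine code:

```python
# Sketch of query-time code generation: compare a generic interpreter,
# which dispatches on the operator for every row, with a function
# generated and compiled once for the specific predicate.

def interpret_filter(rows, column, op, constant):
    """Generic evaluator: re-examines the predicate for every single row."""
    out = []
    for row in rows:
        value = row[column]
        if op == ">" and value > constant:
            out.append(row)
        elif op == "==" and value == constant:
            out.append(row)
    return out

def generate_filter(column, op, constant):
    """Emit source code specialized for this exact predicate, then compile it."""
    src = (f"def specialized(rows):\n"
           f"    return [r for r in rows if r[{column!r}] {op} {constant!r}]\n")
    namespace = {}
    exec(compile(src, "<generated>", "exec"), namespace)
    return namespace["specialized"]

rows = [{"qty": q} for q in range(10)]
fast_filter = generate_filter("qty", ">", 6)   # generated once per query
print(fast_filter(rows))                       # [{'qty': 7}, {'qty': 8}, {'qty': 9}]
print(fast_filter(rows) == interpret_filter(rows, "qty", ">", 6))  # True
```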

In-memory vs. solid state disk drives

Solid state disk drives (SSDs) use silicon storage to replace the mechanical disk drive. Although they do make disk-based systems faster, SSDs cannot be considered equivalent to computer memory or RAM, and there are several reasons why they certainly do not deliver anything like the same level of performance as in-memory platforms:

  • SSDs do not use DRAM; instead, they use a much slower memory technology called flash
  • SSDs mimic conventional disk drives; as such, they are block access devices
  • SSDs typically connect via standard controller interfaces and not via the main front-side bus
  • Server class SSDs are still very expensive
  • SSDs have lower capacities than traditional mechanical spinning disks

On paper, SSDs appear to have a significant performance benefit over mechanical disks. In fact, the bulk of this performance benefit comes from the elimination of seek times. As previously discussed, analytical databases are relatively seek time immune, so the performance gains are not as dramatic. An application that involves mainly random disk access (OLTP database, file server, Windows, etc.) may see a 10–20× performance increase from using SSD, while Kognitio’s testing of SSDs in an analytical platform showed a more modest 2–3× increase in performance over conventional hard drives when scanning data on disk.

Whilst this is still significant, it produces nowhere near the performance level of DRAM. The high cost of server-class SSDs also means that, terabyte-for-terabyte, DRAM is not much more expensive. So why do people use SSDs? Because the vast majority of applications were designed to work with disk drives and are unable to exploit large amounts of RAM; simply replacing the mechanical device with a solid state device delivers noticeable performance gains without any re-engineering of the application. The complication is that removing the performance bottleneck at the disk I/O level exposes the code’s inability to parallelize across all the available CPU cores and its inherently inefficient use of CPU cycles, which means that a significant amount of the potential performance gain available from deploying SSDs is not realized.

4. True parallelism is all about the CPUs

Having all the data of interest held in computer memory does not, in itself, make an analytical platform fast.
