Kognitio analytical platform:
tech profile

7. Kognitio technology features and capabilities

In this section, we explain some of the specific workings of Kognitio in more detail.

Pinning data into memory

If a query is run against data that is held on the Kognitio internal disk subsystem or an external platform (via External Tables), Kognitio will automatically fetch the data it needs into memory and execute the query. On completion of the query, the data is automatically dropped from memory. This is acceptable for infrequent access, but for frequent access disk I/O becomes the limiting factor and CPU efficiency is lost. For the best possible performance, Kognitio allows all or portions of the data in a table to be permanently pinned into RAM; these pinned copies are called “images”. Queries that access imaged data never have to touch disk at all, totally eliminating slow disk I/O.

Simple one-line extensions to standard ANSI SQL are used to define and create memory images. Issuing these commands causes data to be read from disk (internal or external) and loaded into RAM. Memory images can be defined in a number of different ways; this allows for the optimum use of the available memory for a full range of analytical operations. Equivalent simple commands can be used to drop images when no longer required – an instantaneous action.
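
As a minimal sketch of the style of these extensions (the statements below are illustrative and the table name is hypothetical; consult the Kognitio SQL reference for the exact syntax):

    create table image sales;  -- read the sales table from disk and pin it into RAM
    drop table image sales;    -- instantly release the memory image; the disk data is untouched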

Memory distributions

Kognitio provides a number of memory distribution algorithms to support a wide variety of use cases. Available distributions include random, replicated, partitioned, sorted, hashed and partial hashed. Some of these may be combined, e.g. hashed with partitions. Data distributions, combined with an intelligent optimizer, help to reduce scan I/O (only the required partitions are scanned, not the whole table), improve join performance for large-table to large-table joins (hashed and partial hashed) and small-table to large-table joins (random), accelerate aggregation (hashed and partitioned) and accelerate sorts (sorted). Distributions can be easily and quickly changed to meet variable operational needs or the changing nature of the data and data model.
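
As an illustrative sketch of how a distribution might be declared when an image is created (the clause names are indicative rather than definitive, and all table and column names are hypothetical):

    -- hashed on a join key, co-locating matching rows for large-table to large-table joins
    create table image sales hashed on (customer_id);

    -- replicated to every node, a typical choice for small dimension tables
    create table image currency_codes replicated;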

Fragmentation

Any table, or a portion of a table, can be pinned into RAM as a memory image. The table can be loaded into RAM in its entirety, or user-selected vertical and/or horizontal fragments of the data can be imaged.

[Figure: fragment illustration]

Vertical fragmentation is used when there are columns in the table that are infrequently used for analysis, e.g. long address text fields or comment fields.
Horizontal fragmentation is often used when the majority of analysis is performed on a range of data, such as the most recent data, while the remaining (historic) data is accessed less frequently. For example, the most recent year of transactions would be held in RAM and the remainder left on disk.

For data that is stored on Kognitio local disk, horizontal fragments are created by simply adding a “where” filter clause to the image creation statement, as sketched below. When a table is horizontally fragmented, the system will automatically execute the query in memory if all the data it needs is memory-resident. If it is not, it will automatically perform the necessary disk scans to fetch any additional required data in order to fulfil the query.
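
A minimal sketch, assuming a transactions table with a tx_date column (both names hypothetical; syntax indicative):

    create table image transactions
    where tx_date >= date '2018-01-01';  -- only the most recent rows are pinned into RAM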

A powerful feature of Kognitio is the ability to image views. Standard relational views (a virtual table defined via an SQL query) can be used to perform complex data selections and manipulations, the results of which can then be pinned into RAM (a “view image”). View images can be created for any view, including complex table joins and aggregates; the imaged content is a read-only snapshot of the results of the view query at the time it was imaged.
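
A sketch of the pattern (view, table and column names hypothetical; syntax indicative):

    create view sales_by_region as
        select region, sum(amount) as total_sales
        from sales
        group by region;

    create view image sales_by_region;  -- pins a read-only snapshot of the view's result set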

Data from external tables can be easily pinned into memory either by using

insert select … into <a RAM-only table>;

to create a RAM-only table, or via appropriately defined and imaged views.
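
As a sketch, assuming an external table ext_orders and a RAM-only table ram_orders with a matching column layout have already been defined (both names hypothetical; the RAM-only DDL is omitted here):

    insert into ram_orders
    select * from ext_orders;  -- rows are pulled from the external platform straight into RAM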

Updatable vs. read-only

A benefit of view images is that they carry a very small row overhead (which can be zero) when imaged in RAM; however, the underlying data cannot be updated. Table images (full or fragmented), for tables stored on Kognitio internal disk, are fully updatable in real time. When updates are applied to a local table that has a table image or fragmented table image, the update is applied to both the memory image and the disk-based data. The corresponding advantage of view images is that they use less memory than table images for the equivalent data, as they contain no links back to the underlying disk-based rows.
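
For example, an ordinary update against an imaged local table keeps both copies consistent (table and column names hypothetical):

    update orders
    set status = 'SHIPPED'
    where order_id = 1234;  -- applied to both the RAM image and the disk-based data in one operation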

Partitioning

Partitioning is a feature that can reduce the amount of data scanned for each query. Kognitio Partitioning works in a different way to conventional database partitioning. Conventional database partitioning allows the data to be distributed across the available infrastructure based on some key in the data. Queries that only work on a particular key will execute only on the hardware that contains that key. Kognitio partitioning also allows data to be distributed on a key in the data, but partitioning happens within each memory chunk. This allows all available cores to participate in each query but reduces the amount of data scanned within each memory chunk, thereby increasing query speeds.
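
A sketch of in-chunk partitioning combined with a hashed distribution (clause names indicative, all names hypothetical):

    create table image sales hashed on (customer_id) partitioned on (sale_month);

    -- a query filtered on the partition key still runs on all cores,
    -- but each core scans only the matching partitions within its memory chunk
    select count(*) from sales where sale_month = 201812;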

Intelligent parallelism

Kognitio’s Intelligent Parallelism allows granular queries that operate on a small subset of the total data to be isolated to one or more CPU cores, so that hundreds of these queries can be satisfied simultaneously. Even queries that need to hit all cores during an initial data scan can execute subsequent steps on smaller subsets of cores, again increasing concurrency. These features allow even a moderately sized Kognitio platform to support a workload of thousands of queries per second.

Dynamic Machine Code Generation

Kognitio uses a technique called Dynamic Machine Code Generation (or Code Generation for short) to further increase the speed at which data can be queried.

Compared to the normal interpretive techniques used by other databases, code generation reduces the code path, and hence the number of CPU cycles, for the compute-intensive inner loops of a query by a factor of ten.

What is code generation?

As in most SQL databases, when a SQL query is submitted to Kognitio it is first compiled and optimized to produce an efficient query plan. The query plan tells the run-time engine the steps needed to satisfy the query. However, unlike most SQL databases, Kognitio then passes the query plan to a process that turns each step of the plan into a piece of optimal machine code, with the parameters of the query hard-coded into it. Kognitio produces machine code for all query steps.

The machine code produced is then distributed across the available cores and executed. The machine code is able to run as a very tight loop with no wasted machine cycles. Without code generation, a large number of cycles would be wasted by the software interpreter, decoding and executing the query plan.

Because the machine code is generated dynamically for each query, code generation, when coupled with RAM-based operation, has the additional advantage that the size of the data row can be used to form an offset between the fields of interest. The machine code can then use this offset to jump between the memory locations that contain the relevant data, further increasing query performance.

Loading data directly into Kognitio

Kognitio allows data to be imported from external platforms into memory only, to combined internal disk and memory, or to internal disk only. Data is typically loaded via multiple parallel streams into the same table or different tables, and tables can be queried while they are being loaded. An analytical platform is always expected to be updated and refreshed with ever-changing data.

The bulk data loader is designed to be very fast, supporting demonstrable load rates into RAM of 14TB per hour on a moderately sized system of 24 servers with 10GbE networking. Subject to the limits of the source platform and the delivery network’s bandwidth, the load rate scales as the Kognitio platform size increases. The load rate to disk is a function of the available disks’ write speed multiplied by the number of physical disk volumes present, and scales linearly, again subject to the limits of the source platform and the delivery network’s bandwidth.

The Kognitio client-side (source platform) bulk loader sends data to the Kognitio platform in the data’s original format. Conversion is performed on a massively parallel basis within the Kognitio platform, dramatically reducing the CPU impact on the source system sending the data. Kognitio also supports trickle loading for continuous delivery and real-time environments, with the ability to commit data by row count and/or time window, e.g. every five seconds or every 1000 rows, whichever comes first.

Compressed disk maps

Compressed disk maps (CDMs) are an optional mechanism for increasing the speed at which data can be loaded from Kognitio internal file systems into memory. They cannot be used with external tables. CDMs eliminate from disk scans any blocks of data that contain no rows of interest to the query in question. They do this by maintaining an optional bitmap index for selected columns. For every block on disk, the bitmap gives a simple TRUE/FALSE indication of whether that block contains any record with the value of interest.

As each block is condensed down to a single bit, these indices are very small, and they are reduced further in size by a very aggressive compression algorithm. The block size used is variable, so where the data clusters in large chunks the block size is larger than where clustering is less useful. The compression can be aggressive because it does not matter if the index gives a few false hits during decompression; this only results in a few extra blocks being read unnecessarily.

Individual bitmap indices are unique to each server in the cluster and are small enough to be stored in memory. Access to the index is therefore very quick and does not cause any extra disk head movement.

Compressed bitmap indices work best when the data is clustered and the indexed columns are reasonably selective. They can also be combined to further increase selectivity. For example, if data is loaded on a daily basis and you wish to look at transactions for a given period, the transactions for that period will be clustered tightly together on disk, because Kognitio uses a linear file system. If the analysis were further refined to look at transactions for a given region, the indices for the date column and the region column could be combined to indicate which blocks contain transactions for that period AND that region.
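
As an illustration (hypothetical table and column names), a query such as the following allows the two indices to be combined, so that only blocks whose bitmaps show a possible match for both predicates are read from disk:

    select sum(amount)
    from transactions
    where tx_date between date '2018-10-01' and date '2018-12-31'
      and region = 'EMEA';  -- blocks with no candidate rows for the period AND region are skipped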

For certain applications, compressed bitmap indices can massively reduce the time taken to retrieve data from disk. If a compressed bitmap index exists for a column, it is maintained during insert, update and delete operations, with no performance penalty for doing so.

External Tables

Kognitio can seamlessly access data stored on external platforms using a feature called “External Tables”, which allows Kognitio to pull data from an external storage system or persistence layer. External Tables use “External Connectors” to implement connectivity to the external source. The External Connectors are designed for very high-speed data ingestion and support multiple parallel load streams. The degree of parallelism depends on the source system, but Kognitio External Connectors are designed to provide the highest possible data throughput rates. The connector framework allows Kognitio to rapidly develop new connectors. Kognitio has connectors for Hadoop, Cloud Storage, Data Warehouses, File Systems and Operational Systems, and we continue to add connectors on a regular basis.

Access to an external persistence layer is set up by the system administrator, or a privileged user, by defining what data, in which systems, will be accessible in Kognitio as an external table. The access information can be set up using the GUI of Kognitio’s Administration Console, but as all metadata in Kognitio is stored in relational system tables, it can also be set up using simple SQL or a SQL-generating tool. Once the metadata has been configured, the external data will appear to users with the appropriate privileges as non-memory-resident relational tables. At this stage, although the external table and its metadata are visible to the user, the data is not yet resident in the Kognitio platform and still exists only on the external source system; Kognitio holds only metadata about the external table, along with the appropriate access controls.
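
A minimal sketch of such a definition (the connector name, target string and all table and column names are hypothetical, and the exact DDL is indicative only):

    create external table ext_sales (
        tx_id    bigint,
        tx_date  date,
        region   varchar(16),
        amount   decimal(12,2)
    ) from HDFS target 'file /data/sales/*.csv';  -- mapped, not loaded: only metadata is stored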

When a user submits a query that involves an external table, Kognitio will automatically pull the data it needs from the source system and put it into RAM. The data will stay memory-resident only for the duration of that query. In this scenario, the speed at which the query can be satisfied is typically dictated by the speed at which data can be pulled from the external source. To eliminate this generally expensive data-pull step, the usual approach on a Kognitio system is for the user (or administrator) to pre-load or “pin” the data of interest into memory as an image. This data will then stay memory-resident in Kognitio until it is explicitly dropped.

For very complex queries, such as those involving intensive mathematical operations on large datasets, the query execution phase can be significantly longer than the data-pull operation. In these circumstances, pinning data in memory does not significantly improve performance, so Kognitio can be left to automatically pull the data as and when it needs it. Not pinning large datasets in RAM also leaves more memory available for the query computation phases and any intermediate result sets.

Integrating Hadoop Data into Kognitio

Using the External Table functionality, Kognitio users can map collections of files stored in Hadoop HDFS / MapR-FS as non-memory-resident tables which they can then query directly, or alternatively instruct Kognitio to pin selected portions of the data into memory as view images or RAM-only tables for high-frequency access. The connectivity and tight integration are provided by two high-speed Hadoop connectors, the MapReduce Connector and the HDFS / MapR-FS connector. Kognitio supports a range of HDFS / MapR-FS file formats; the list of currently supported formats is available in the Kognitio documentation.

The Hadoop connectors can be used to access data regardless of whether Kognitio is running on the Hadoop cluster (Kognitio on Hadoop or Kognitio on MapR) or running on a separate connected cluster (Kognitio standalone).

The MapReduce connector wraps Kognitio’s data filtering and projection code, together with the bulk loader tools, into a MapReduce job, submitted on demand in the standard Hadoop manner. The filtering and projection code is the same code that runs inside Kognitio itself when processing data; it filters out just the rows and columns required before sending them to Kognitio memory. The MapReduce job executes on all Hadoop nodes in parallel, and Kognitio exploits this to send data on a massively parallel basis, with every Hadoop node sending data in parallel to every Kognitio node. This is all transparent to the end user.

For example, a Hadoop dataset comprising thousands of files may contain five years of historic data, but the user is only interested in the last quarter and does not need any of the description fields. As an external table, the user sees the entire data set but can tell Kognitio to load only the data required into memory via a simple SQL “where” clause that gets passed into the External Table, as sketched below. Kognitio will ask Hadoop to do the gathering and filtering prior to sending the data. This tight integration means that Hadoop does what it is good at, namely filtering rows, while Kognitio does what it does best: providing a platform for low-latency, high-frequency, high-complexity queries for interactive data analytics.
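
A sketch of this pattern, reusing the hypothetical ext_sales external table from the earlier sketch (the view and column names are likewise hypothetical):

    create view recent_sales as
        select tx_id, tx_date, region, amount   -- description fields omitted
        from ext_sales
        where tx_date >= date '2018-10-01';     -- the filter is pushed to the Hadoop side

    create view image recent_sales;  -- only the last quarter is pulled into RAM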

While the MapReduce Connector works very well when large subsets of data have to be filtered out of massive Hadoop file stores, the limitations of MapReduce mean that there is always a fixed ‘batch’ overhead, in the order of tens of seconds, to read even the smallest data set from Hadoop. For this reason, Kognitio offers an alternative connector that bypasses MapReduce and reads data directly from files in the Hadoop HDFS / MapR-FS file system. Whole files, or sets of files, must be pulled, and no filtering is applied until the data is present in Kognitio, although a degree of granular filtering can be achieved if the data is stored split across files and directories with referenceable naming conventions.

This connector is primarily designed to allow smaller data sets, such as dimension tables, to be pulled very quickly into RAM.

This bypassing of the standard Java-heavy ways of extracting data from Hadoop provides simplicity and achieves scalable, high-throughput data transfer between HDFS / MapR-FS-based data and Kognitio memory. Because of this scalability, the actual data rates depend on the relative sizes of the platform and the core network; throughput rates of tens of terabytes per hour are readily achievable on moderately sized solutions.

External Connectivity

The Kognitio in-memory analytical platform has been designed to be as open as possible so that it can work with a whole range of front-end (query and visualization) and back-end (ETL, ELT, DI) tools. To this end, Kognitio supports ANSI standard SQL via the ODBC (Open Database Connectivity) and JDBC (Java Database Connectivity) standard APIs. Virtually all tool and application vendors support these well-defined standards. Kognitio has been verified against, and has partnerships with, many of the key analytics and data visualization vendors.

Kognitio SQL support is syntactically and functionally very rich. Kognitio supports the core features of ANSI SQL:2008 (with certain exceptions) and many of its optional features. The ANSI SQL:2008 standard encompasses SQL-92, SQL:1999, SQL:2003 and SQL:2006. In addition, Kognitio supports a number of the new optional features present in the ANSI SQL:2011 standard.

Additionally, Kognitio supports the most common non-standard Oracle SQL syntax variations and many of the non-standard Oracle SQL functions. This support simplifies the process of getting applications that were written or customized for Oracle to run against Kognitio.

8. Advanced analytics

Because of its ability to deploy large amounts of raw processing power against any given data set, Kognitio has always been an ideal technology for complex analytics.
