Using Kognitio Memory

Kognitio memory is declarative: you decide what data resides in memory. This is similar to planning indexes on a conventional database, but much simpler.

This page describes the concepts in some detail; for a quick introduction see:

All Kognitio processing is done in memory - even when data is processed from disk, it is loaded into memory, processed and then discarded. There are optimisations that allow data to be streamed, so that it is not necessary to load the whole table at once or to build a complete intermediate result set, but since the streaming process is transparent to users we will ignore it here.

The foundation of Kognitio memory management is the memory image (usually just shortened to image) which is an in-memory copy of a table, a view or part of a table.

An image is either updateable or static:

1. Static images are derived from views and are generally referred to as view images. They contain a snapshot of the data in the view at the time the image was created. They are more memory efficient than updateable images because they don’t require links to the data source or the metadata associated with CRUD operations.

2. Updateable images are images based directly on internal tables; CRUD operations on the underlying tables are immediately reflected in the images. They are usually referred to as table images. It is also possible to have a memory-only table, which is effectively a table image with no disk element. This gives very good CRUD performance, but if the data needs to be persisted it will have to be inserted into a disk-based table or unloaded to a file. A sketch of creating each kind of image follows this list.
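As a minimal sketch of the two kinds of image - the object names here are made up and the exact image syntax should be checked against the documentation for your Kognitio version:

  -- illustrative object names; image syntax may vary by Kognitio version
  -- static image: a snapshot of the view's result set at creation time
  create view image retail.v_recent_sales;

  -- updateable image: an in-memory copy of an internal table that
  -- reflects subsequent CRUD operations on the table
  create table image retail.sales;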

The rows in an image can be distributed in a variety of ways and this can affect query performance.

Kognitio Memory Architecture

Figure: images within ramstores within nodes.

To understand how distributing rows in an image can affect query performance you need to know a little bit about how Kognitio stores images.

A Kognitio instance consists of one or more nodes (on Hadoop a node is effectively a YARN container). Each node is internally split into a number of “ramstores” which are logical processing units (memory + CPU).

Images are distributed across ramstores such that every user ramstore contains some or all of the rows from every image.

When a query is executed, some or all of the ramstores will participate in the query (see Location Hash Values and Asymmetric processing for details of optimisations that allow a subset of ramstores to fulfil a query). Unless stated otherwise, we will assume all ramstores participate in the query in the explanations below.

There are also system ramstores, which are reserved for system tables and queries; we will also ignore them in the explanations below.

Data Distribution in Memory

Images can be distributed across ramstores in three different ways, and the choice of distribution affects performance, particularly for queries involving joins, group by operations and asymmetric processing:

  • random - (the default) rows are distributed evenly between ramstores.
  • hashed - rows are distributed to ramstores such that all rows with the same combined hash value for the columns specified in the hashed on clause end up in the same ramstore.
  • replicated - every ramstore contains all the rows in the image.
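For illustration, the three distributions could be requested as follows - alternative definitions for a hypothetical retail.sales table image, with assumed syntax that should be checked against your Kognitio version:

  -- alternative definitions for a hypothetical retail.sales table image
  -- random distribution (the default)
  create table image retail.sales;

  -- hashed distribution on one or more columns
  create table image retail.sales hashed on (customer_id);

  -- replicated: every ramstore holds a full copy of the image
  create table image retail.sales replicated;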

What to Optimise

Distribution is worth spending some time on, as it can both improve query speed and reduce overall system load. However, the query compiler will redistribute rows as necessary, so you don’t need to tune distribution just to run queries.

If your workload is unknown, you can analyse the queries that are being run by querying the sys.ipe_command table and then using the explain, picture and, if necessary, diagnose commands to see how queries are being executed.
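A rough sketch of that workflow - the select is left unqualified because the columns of sys.ipe_command vary between versions, and the query being explained is a made-up example:

  -- inspect recently executed commands
  select * from sys.ipe_command;

  -- show how the compiler plans to execute a candidate query
  -- (retail.sales is a made-up example table)
  explain select customer_id, sum(amount)
  from retail.sales
  group by customer_id;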

If, however, you have relationships between large tables that often require a join on a particular key, then hashing on that key is a sensible starting point.

Joins

Join performance can be optimised by using the correct distribution for a query.

Consider a simple join between two images using the customer_id column. If you hash distribute the rows for both images on the customer_id column, all the rows to be joined will be local to each ramstore. This means the join step can complete without any communication with other ramstores. If the rows weren’t distributed like this, the query compiler would automatically redistribute the rows to allow the join to happen. The redistribution is efficient, but it still takes some time.
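A sketch of that situation, with hypothetical table names and assumed image syntax - both images are hashed on the join key, so the join below runs without a redistribution step:

  -- hypothetical tables, both hashed on the join key
  create table image retail.customers hashed on (customer_id);
  create table image retail.orders hashed on (customer_id);

  -- matching rows are already co-located in the same ramstore
  select c.customer_id, o.order_date, o.amount
  from retail.customers c
  join retail.orders o on o.customer_id = c.customer_id;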

In another situation you may have a product image that you want to join to a transaction image using product_id. The product image is small and the transaction image large. In this scenario, you can replicate the product image - this means there is a complete copy of the product image on each ramstore, so the join can again be performed entirely within the ramstore with no redistribution. In this case the distribution of the transaction image is immaterial, so it can be distributed in whatever way is most appropriate for the rest of the query or for other queries.
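Sketched with hypothetical names and assumed syntax - the small product image is replicated, while the large transaction image keeps whatever distribution suits other queries:

  -- hypothetical small dimension: a full copy in every ramstore
  create table image retail.products replicated;

  -- large fact image: distribution chosen to suit other queries
  create table image retail.transactions hashed on (customer_id);

  -- the join to products needs no redistribution
  select p.product_name, sum(t.amount)
  from retail.transactions t
  join retail.products p on p.product_id = t.product_id
  group by p.product_name;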

Example: Retail data distribution


The diagram shows a very simple retail table layout where we have a transaction table, an item table containing the items in each transaction, and a product table.

In this case the distribution strategy is quite easy to determine - we can hash distribute the transaction and item tables on the transaction key and replicate the product table (the corresponding image definitions are sketched after the list below). This makes the following assumptions:

  • The transaction key is well distributed - if it’s a sequence number this will be the case.
  • The number of items per transaction is reasonably constant. This is usually the case, but if the data is for a wholesaler there may be tens of thousands of items in some transactions, which can cause skew; generally, though, it evens out.
  • The product table is small enough to replicate - again, for a large wholesaler this may not be the case, and in that situation it may be best to hash distribute it on the product key.
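Under those assumptions, the image definitions might look like this (illustrative names, assumed syntax):

  -- illustrative image definitions for the retail example
  create table image retail.transactions hashed on (transaction_key);
  create table image retail.items hashed on (transaction_key);
  create table image retail.products replicated;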

For a more in-depth discussion of strategies for optimising join processing, see section 2.6 of the Kognitio Guide (PDF).

Group by

Group by performance can be improved if the image is hash distributed on the columns being grouped by. When this is the case, all the rows for a given group are located in the same ramstore, so the group by results from each ramstore just need concatenating with the results from the other ramstores rather than combining, which is a more expensive operation.
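For instance, with a hypothetical sales image hashed on customer_id (assumed syntax), the grouping below completes within each ramstore and the per-ramstore results only need concatenating:

  -- hypothetical image, hashed on the group by column
  create table image retail.sales hashed on (customer_id);

  select customer_id, count(*) as transactions, sum(amount) as total_spend
  from retail.sales
  group by customer_id;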

Optimising joins will generally make more difference than optimising group by operations, so join distribution should be prioritised.

Asymmetric processing

See Location Hash Values and Asymmetric processing for more details.

References

Section 2.6 of the Kognitio Guide (PDF).