Simple performance checks against your hardware cluster

24 Mar 2017 | Posted by Simon Darkin

Kognitio have a lot of experience commissioning clusters of new hardware for our MPP software product. As part of that process, we’ve developed a number of steps for validating the performance of new clusters, and these are the topic of this blog entry.

 

There are many Linux-based benchmarking tools on the market; however, they are not usually installed by default. In their absence, some simple command line tools can be used to quickly establish whether a potential hardware issue warrants further investigation. The following hardware components are covered by this topic:

  • CPU
  • Disk
  • Networking
  • RAM

 

CPU   

A slow CPU or core can have an adverse effect on query performance, so basic command line tools can be used to help identify laggards. A background ‘for’ loop can be employed to ensure all cores/threads are tested simultaneously.

 

Integer arithmetic test

Invoked 8 times to run simultaneously against 8 cores

 

for i in `seq 1 8`; do time $(i=0; while (( i < 999999 )); do (( i ++ )); done)& done; wait

 

This will return the time taken to increment an integer over the specified range. A comparison of the time taken by each core will help identify outliers.

 

real    0m8.908s
user    0m8.765s
sys     0m0.140s

real    0m8.943s
user    0m8.789s
sys     0m0.156s

real    0m8.997s
user    0m8.761s
sys     0m0.112s

real    0m9.000s
user    0m8.853s
sys     0m0.144s

real    0m9.023s
user    0m8.881s
sys     0m0.140s

real    0m9.028s
user    0m8.861s
sys     0m0.168s

real    0m9.034s
user    0m8.857s
sys     0m0.176s

real    0m9.073s
user    0m8.781s
sys     0m0.156s

 

Whilst the test is running you can check that each core is under load by running top and pressing ‘1’ to expand the output to show all cores. If you do encounter outliers in the arithmetic test, you can use the output from top to identify which core(s) remain busy when others have finished.

 

Cpu0  : 98.3%us,  1.7%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

Cpu1  : 99.0%us,  1.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

Cpu2  : 98.7%us,  1.3%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

Cpu3  : 99.3%us,  0.7%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

Cpu4  : 98.7%us,  1.3%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

Cpu5  : 98.0%us,  2.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

Cpu6  : 98.3%us,  1.7%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

Cpu7  : 98.7%us,  1.3%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
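
If a lingering loop needs tying back to a specific core without an interactive top session, ps can report the last CPU each process ran on. A minimal sketch, assuming procps ps (the PSR column is the processor the process last ran on):

# show the busiest processes along with the core (PSR) each one last ran on
ps -eo pid,psr,pcpu,comm --sort=-pcpu | head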

 

Compression test

As with the arithmetic test, this example loops 8 times so that 8 cores are tested simultaneously. The compressed output is written to /dev/null to avoid any overhead associated with disk IO.

 

for i in `seq 1 8`; do dd if=/dev/zero bs=1000 count=1000000 | gzip >/dev/null&  done; wait

 

This will return the rate at which each core is able to compress 1 GB of data.

 

1000000000 bytes (1.0 GB) copied, 11.9277 seconds, 83.8 MB/s

1000000000 bytes (1.0 GB) copied, 11.9277 seconds, 83.8 MB/s

1000000000 bytes (1.0 GB) copied, 11.9545 seconds, 83.7 MB/s

1000000000 bytes (1.0 GB) copied, 11.9799 seconds, 83.5 MB/s

1000000000 bytes (1.0 GB) copied, 11.9831 seconds, 83.5 MB/s

1000000000 bytes (1.0 GB) copied, 12.0085 seconds, 83.3 MB/s

1000000000 bytes (1.0 GB) copied, 12.0382 seconds, 83.1 MB/s

1000000000 bytes (1.0 GB) copied, 12.2655 seconds, 81.5 MB/s

 

With Kognitio software installed you can use the wxtool command to run the compression test simultaneously against all database nodes, which aids comparison across the cluster as a whole. You can download the software for free at http://kognitio.com/free-download/

 

wxtool -a '{can DB}' -S 'for i in `seq 1 8`; do dd if=/dev/zero bs=1000 count=1000000 | gzip >/dev/null&  done; wait'

 

For node kap1-1 (ecode 0, 866 bytes):

1000000000 bytes (1.0 GB) copied, 11.9422 seconds, 83.7 MB/s

1000000000 bytes (1.0 GB) copied, 11.9659 seconds, 83.6 MB/s

1000000000 bytes (1.0 GB) copied, 11.9876 seconds, 83.4 MB/s

1000000000 bytes (1.0 GB) copied, 12.0142 seconds, 83.2 MB/s

1000000000 bytes (1.0 GB) copied, 12.1293 seconds, 82.4 MB/s

1000000000 bytes (1.0 GB) copied, 12.3754 seconds, 80.8 MB/s

1000000000 bytes (1.0 GB) copied, 12.4132 seconds, 80.6 MB/s

1000000000 bytes (1.0 GB) copied, 12.4386 seconds, 80.4 MB/s

For node kap1-3 (ecode 0, 864 bytes):

1000000000 bytes (1.0 GB) copied, 11.8398 seconds, 84.5 MB/s

1000000000 bytes (1.0 GB) copied, 11.8661 seconds, 84.3 MB/s

1000000000 bytes (1.0 GB) copied, 11.8893 seconds, 84.1 MB/s

1000000000 bytes (1.0 GB) copied, 11.9165 seconds, 83.9 MB/s

1000000000 bytes (1.0 GB) copied, 11.946 seconds, 83.7 MB/s

1000000000 bytes (1.0 GB) copied, 11.953 seconds, 83.7 MB/s

1000000000 bytes (1.0 GB) copied, 11.9637 seconds, 83.6 MB/s

1000000000 bytes (1.0 GB) copied, 12.2996 seconds, 81.3 MB/s

For node kap1-3 (ecode 0, 866 bytes):

1000000000 bytes (1.0 GB) copied, 11.8757 seconds, 84.2 MB/s

1000000000 bytes (1.0 GB) copied, 11.8846 seconds, 84.1 MB/s

1000000000 bytes (1.0 GB) copied, 11.9178 seconds, 83.9 MB/s

1000000000 bytes (1.0 GB) copied, 11.9243 seconds, 83.9 MB/s

1000000000 bytes (1.0 GB) copied, 11.9377 seconds, 83.8 MB/s

1000000000 bytes (1.0 GB) copied, 11.9834 seconds, 83.4 MB/s

1000000000 bytes (1.0 GB) copied, 12.3367 seconds, 81.1 MB/s

1000000000 bytes (1.0 GB) copied, 12.3942 seconds, 80.7 MB/s

For node kap1-4 (ecode 0, 864 bytes):

1000000000 bytes (1.0 GB) copied, 11.91 seconds, 84.0 MB/s

1000000000 bytes (1.0 GB) copied, 11.9291 seconds, 83.8 MB/s

1000000000 bytes (1.0 GB) copied, 11.9448 seconds, 83.7 MB/s

1000000000 bytes (1.0 GB) copied, 11.9498 seconds, 83.7 MB/s

1000000000 bytes (1.0 GB) copied, 12.1232 seconds, 82.5 MB/s

1000000000 bytes (1.0 GB) copied, 12.3896 seconds, 80.7 MB/s

1000000000 bytes (1.0 GB) copied, 12.4449 seconds, 80.4 MB/s

1000000000 bytes (1.0 GB) copied, 12.4504 seconds, 80.3 MB/s

 

 

 

DISKS

Having just one underperforming disk in the system can significantly impact query performance against disk based tables. Here are some simple tests to help identify any anomalies.

 

Iterative write speed test with dd

 

WARNING: THIS WRITE TEST WILL OVERWRITE DATA AT THE SPECIFIED DESTINATION AND SO MAKE SURE YOU AVOID AREAS THAT CONTAIN SYSTEM DATA OR USER DATA THAT YOU WISH TO KEEP

 

for i in `seq 1 3`; do echo "Loop $i"; dd if=/dev/zero of=/dev/cciss/c0d0p2 bs=10000 count=100000 conv=fsync; echo ""; done

 

This will return the duration and the rate at which data can be written out to disk. In this example 1 GB of data is repeatedly written to a raw partition. Note that conv=fsync is used to flush the writeback cache, ensuring the data is written to the physical media.

 

Loop 1
100000+0 records in
100000+0 records out
1000000000 bytes (1.0 GB) copied, 13.6466 seconds, 73.3 MB/s

Loop 2
100000+0 records in
100000+0 records out
1000000000 bytes (1.0 GB) copied, 12.8324 seconds, 77.9 MB/s

Loop 3
100000+0 records in
100000+0 records out
1000000000 bytes (1.0 GB) copied, 12.4271 seconds, 80.5 MB/s

 

With Kognitio software installed, the test can be expanded to run on all database nodes, allowing for easy comparison of all disks in the system.

 

wxtool -a '{can DB}' -S 'for i in `seq 1 3`; do echo "Loop $i"; dd if=/dev/zero of=/dev/cciss/c0d0p2 bs=10000 count=100000 conv=fsync; echo ""; done'

 

Iterative read speed test with dd

 

for i in `seq 1 3`; do let skip=$i*5000; echo "Loop $i - skip = $skip"; sync ; echo 3 >/proc/sys/vm/drop_caches; dd if=/dev/cciss/c0d0p2 of=/dev/null bs=1000 count=1000000 skip=$skip ;echo ""; done

 

This will return the rate at which data can be read from disk. In this example 1 GB of data is read from a raw partition, adjusting the offset and dropping the page cache on each iteration to ensure data is read from the physical media.

 

Loop 1 - skip = 5000
1000000+0 records in
1000000+0 records out
1000000000 bytes (1.0 GB) copied, 14.4355 seconds, 69.3 MB/s

Loop 2 - skip = 10000
1000000+0 records in
1000000+0 records out
1000000000 bytes (1.0 GB) copied, 12.9884 seconds, 77.0 MB/s

Loop 3 - skip = 15000
1000000+0 records in
1000000+0 records out
1000000000 bytes (1.0 GB) copied, 12.6045 seconds, 79.3 MB/s

 

With Kognitio software installed, the test can be expanded to run on all database nodes to aid comparison across the entire system.

 

wxtool -a '{can DB}' -S 'for i in `seq 1 3`; do let skip=$i*5000; echo "Loop $i - skip = $skip"; sync ; echo 3 >/proc/sys/vm/drop_caches; dd if=/dev/cciss/c0d0p2 of=/dev/null bs=1000 count=1000000 skip=$skip ;echo ""; done'

 

Iterative read speed test with hdparm

 

for i in `seq 1 3`; do echo "Loop $i"; hdparm --direct -t /dev/cciss/c0d0p2; echo ""; done

 

This will return the rate at which data can be read sequentially from disk without any file system overhead.

 

Loop 1
/dev/cciss/c0d0p2:
Timing O_DIRECT disk reads:  236 MB in  3.01 seconds =  78.40 MB/sec

Loop 2
/dev/cciss/c0d0p2:
Timing O_DIRECT disk reads:  236 MB in  3.02 seconds =  78.09 MB/sec

Loop 3
/dev/cciss/c0d0p2:
Timing O_DIRECT disk reads:  230 MB in  3.01 seconds =  76.30 MB/sec

 

With Kognitio software installed, the test can be expanded to run on all database nodes to aid comparison across the entire system.

 

wxtool -a '{can DB}' -S 'for i in `seq 1 3`; do echo "Loop $i"; hdparm --direct -t /dev/cciss/c0d0p2; echo ""; done'

 

Disk based table scan

 

If the cluster is running Kognitio database software, you can initiate a scan of a large disk-based table and review the output from wxtop in order to spot any disk store processes that remain busy for a significant period after others have finished. For accurate results, ensure there is no concurrent activity when performing this test.

 

select *
from <large disk based table>
where <condition unlikely to be true>;

 

Monitor the output from wxtop and look out for any disk store processes that remain busy when all or most others have finished.

 

PID       NODE        PROCESS                           SIZE      TIME
15784       kap1-1      WXDB(55): Diskstore             258036       100
22064       kap1-2      WXDB(18): Diskstore             257176        86
25179       kap1-3      WXDB(73): Diskstore top         258200        84
31237       kap1-4      WXDB(37): Diskstore             258068        77

 

If a disk store process does appear to lag behind, you should eliminate the possibility of it being attributable to data skew by checking the row counts across all of the disks using the following query:

 

select
mpid,
sum(nrows) nrows
from ipe_ftable
where table_id = <table_id being scanned>
group by 1
order by 2 desc;

 

 

NETWORKING

You can test the network links between nodes using some simple netcat commands.  This will allow you to spot links that are underperforming.

 

Link speed test using dd and netcat

 

The name and options of the netcat binary will depend on the Linux installation; however, with Kognitio software installed you can use wxnetread and wxnetwrite for the data transfer regardless.

 

Set up a listening process on the node performing the read:

 

netcat -l -p 2000 > /dev/null &

 

Use dd to generate some data and pipe it through netcat to the IP address and port of the node performing the read:

 

dd if=/dev/zero bs=1000 count=1000000 | netcat 14.4.21.2 2000

 

This will return the rate at which data can be copied over the link.

 

1000000000 bytes (1.0 GB) copied, 8.54445 seconds, 117 MB/s

 

The same test as above, this time using wxnetread/wxnetwrite:

 

wxnetread -l -p 2000 > /dev/null &

 

dd if=/dev/zero bs=1000 count=1000000 | wxnetwrite 14.4.21.2 2000

 

1000000000 bytes (1.0 GB) copied, 8.5328 seconds, 117 MB/s
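
To check every link from a given node in one pass, the same transfer can be wrapped in a loop over its peers. This is only a sketch with example peer IP addresses; a listener must already be running on port 2000 on each peer, and depending on your netcat variant the sender may need an option such as -q 0 to exit at end of input:

# example peer addresses only; substitute the IPs of your own database nodes
# (start "netcat -l -p 2000 > /dev/null &" on each peer first)
for peer in 14.4.21.2 14.4.21.3 14.4.21.4; do
    echo "Link to $peer"
    dd if=/dev/zero bs=1000 count=1000000 | netcat $peer 2000
done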

 

Shape tests

With Kognitio software installed you can run Shape tests to measure the speed at which RAM-based data is redistributed between nodes.

 

wxtester -s <dsn> -u sys -p <password> -Ishape 5000 9000 1

 

Once the tests have been running for a few minutes you can navigate to the logs directory and check the data rate:

 

cd `wxlogd wxtester`
grep TSTSHN results | gawk '{ if ($5==64) print ((($3*$5)/$6)/<number of database nodes>)/1048576 }'

 

With older generation hardware you can expect to see performance of 40MB/s/node given sufficient network bandwidth. With newer hardware, for example HP Gen9 servers with 2x 56Gb/s links per node, this increases to 90MB/s/core.

 

 

RAM

Benchmarking RAM performance is best left to a dedicated test suite; however, you can perform a very simple write/read speed test using dd in conjunction with a temporary file storage facility (tmpfs) in RAM, which at the very least can show up a mismatch in performance between nodes.

 

Write and read speed test using dd

 

mkdir RAM
mount tmpfs -t tmpfs RAM
cd RAM
dd if=/dev/zero of=ram_data bs=1M count=1000

 

This will return the rate at which data is written to RAM.

 

1048576000 bytes (1.0 GB) copied, 1.06452 seconds, 985 MB/s

 

dd if=ram_data of=/dev/null bs=1M count=1000

 

This will return the rate at which data is read from RAM.

 

1048576000 bytes (1.0 GB) copied, 0.6346 seconds, 1.7 GB/s
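
Once the test is complete you may wish to remove the test file and release the tmpfs mount; a minimal cleanup, assuming the RAM mount point created above:

cd ..
rm RAM/ram_data
umount RAM
rmdir RAM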

 

Monitoring Kognitio from the Hadoop Resource Manager and HDFS Web UI

03 Jan 2017 | Posted by Alan Kerr

If you’ve already installed Kognitio on your Hadoop distribution of choice, or are about to, then you should be aware that Kognitio includes full YARN integration allowing Kognitio to share the Hadoop hardware infrastructure and resources with other Hadoop applications and services.

Latest resources for Kognitio on Hadoop:

Download:  http://kognitio.com/on-hadoop/

Forum:   http://www.kognitio.com/forums/viewforum.php?f=13

Install guide: (including Hadoop pre-requisites for Kognitio install):

http://www.kognitio.com/forums/Getting%20started%20with%20Kognitio%20on%20Hadoop.pdf

This means that YARN (https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html), Hadoop’s preferred resource manager, remains in control of the resource allocation for the Kognitio cluster.

Kognitio clusters can be monitored from the Apache YARN resource manager UI and the HDFS name node UI.

You can reach the YARN resource manager UI from your Hadoop management interface -> YARN -> Web UI, or by pointing your browser at the node running the resource manager on the default port, 8088.
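
If you prefer to check from the command line, the resource manager exposes the same information over a REST API on the same port; a minimal sketch, with the resource manager hostname left as a placeholder:

# list running YARN applications (the Kognitio cluster will appear here once started)
curl -s "http://<resource-manager-host>:8088/ws/v1/cluster/apps?states=RUNNING"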

[Screenshot: YARN resource manager web UI showing running applications]

The major Hadoop distributions, Cloudera, Hortonworks, MapR, and IBM, all support the Apache YARN resource manager.

From the Cloudera management interface reach the YARN Web UI from:

[Screenshot: Cloudera Manager clusters menu]

The HDFS UI is typically accessible by pointing your browser at the name node on port 50070:

[Screenshot: HDFS name node web UI directory browser]

Or, use the Kognitio Console external data browser:

[Screenshot: Kognitio Console external data browser showing the HDFS file structure]

The Kognitio on Hadoop cluster

Kognitio is designed as a persistent application running on Hadoop under YARN.

A Kognitio cluster can be made up of one or more application containers. Kognitio uses Apache Slider (https://slider.incubator.apache.org/) to deploy, monitor, restart, and reconfigure the Kognitio cluster.

A single Kognitio application container must fit onto a single data node, and it is recommended not to size Kognitio containers smaller than 8GB RAM. All application containers within a Kognitio cluster are sized the same. YARN decides where to place the application containers, and it is possible to have multiple application containers from the same Kognitio cluster running on the same data node.

For example, to size for a 1TB RAM Kognitio instance you could choose one of the following options:

  • 64 x 16GB RAM application containers
  • 32 x 32GB RAM application containers
  • 16 x 64GB RAM application containers
  • 8 x 128GB RAM application containers
  • 4 x 256GB RAM application containers
  • 2 x 512GB RAM application containers

Of course, the choice is restricted by the Hadoop cluster itself: the size of, and available resources on, the data nodes.

Starting a Kognitio on Hadoop cluster

YARN creates an application when a Kognitio cluster is started. This application will be assigned an ApplicationMaster (AM). A slider management container is launched under this application. The slider manager is responsible for the deployment, starting, stopping, and reconfiguration of the Kognitio cluster.

The slider manager runs within a small container allocated by YARN and it will persist for the lifetime of the Kognitio cluster. Requests are made from the slider manager to YARN to start the Kognitio cluster containers. The YARN ApplicationMaster launches each container request and creates application containers under the original application ID. The Kognitio package and server configuration information will be pulled from HDFS to each of the application containers. The Kognitio server will then start within the application container. Each container will have all of the Kognitio server processes running within it (ramstores, compilers, interpreters, ionodes, watchdog, diskstores, smd).

It should be noted here that Kognitio is dynamically sized to run within the container memory allocated to it. This includes a 7% default fixed pool of memory for external processes such as external scripts. If you have a requirement to run memory-intensive external scripts, consider increasing the fixed pool size and also increasing the container memory size to improve script performance.

If there is not enough resource available for YARN to allocate an application container then the whole Kognitio cluster will fail to start: the “kodoop create cluster…” command will not complete, and Slider will continue to wait for all the application containers to start. It is advisable to exit at this point and verify resource availability and how the YARN resource limits have been configured on the Hadoop cluster.

The YARN defaults for Hadoop 2.7.3 are listed at: https://hadoop.apache.org/docs/r2.7.3/hadoop-yarn/hadoop-yarn-common/yarn-default.xml

Settings of interest when starting Kognitio clusters/containers:

  • yarn.resourcemanager.scheduler.class (default: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler). The class to use as the resource scheduler.
  • yarn.scheduler.minimum-allocation-mb (default: 1024). The minimum allocation for every container request at the RM, in MB; memory requests lower than this will throw an InvalidResourceRequestException.
  • yarn.scheduler.maximum-allocation-mb (default: 8192). The maximum allocation for every container request at the RM, in MB; memory requests higher than this will throw an InvalidResourceRequestException.
  • yarn.nodemanager.resource.memory-mb (default: 8192). Amount of physical memory, in MB, that can be allocated for containers.
  • yarn.nodemanager.pmem-check-enabled (default: true). Whether physical memory limits will be enforced for containers.
  • yarn.nodemanager.vmem-check-enabled (default: true). Whether virtual memory limits will be enforced for containers.
  • yarn.nodemanager.vmem-pmem-ratio (default: 2.1). Ratio between virtual memory and physical memory when setting memory limits for containers; container allocations are expressed in terms of physical memory, and virtual memory usage is allowed to exceed this allocation by this ratio.

NOTE: These are YARN default values, not recommended Kodoop settings

The default settings are going to be too small for running Kognitio. As mentioned already, Kognitio containers are typically sized between 16GB and 512GB RAM, or higher. The ‘yarn.nodemanager.resource.memory-mb’ setting should be large enough to accommodate the container(s) allocated to a node. With other services running on the Hadoop cluster, a site-specific value here to limit the memory allocation for a node or group of nodes may be necessary.
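
A quick way to confirm what a node is actually configured to offer (a minimal sketch; the yarn-site.xml path varies by distribution):

# memory limits currently configured for YARN
grep -A1 'allocation-mb\|resource.memory-mb' /etc/hadoop/conf/yarn-site.xml

# list node managers, then inspect one to see its memory capacity and current usage
yarn node -list
yarn node -status <node-id-from-the-list-above>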

Once the Kognitio cluster containers have been allocated by the YARN ApplicationMaster, each container transitions to a RUNNING state. Once the Kognitio server has started within each of the application containers, an SMD master will be elected for the Kognitio cluster on Hadoop in the same way as SMD works on a multi-node Kognitio stand-alone appliance. The Kognitio cluster will then run through a “newsys” to commission the system.

[Screenshot: YARN web UI showing application queues]

From the Kognitio edge node (via the command line) you can stop | start | reconfigure the Kognitio cluster. Stopping a Kognitio cluster changes the YARN application state to FINISHED, and all of the application containers and the slider manager container are destroyed. Restarting a Kognitio cluster creates a new YARN ApplicationMaster along with new slider management and application containers.

[Screenshot: YARN web UI showing running applications]

Because the data persists on HDFS, when a Kognitio cluster is restarted all of the existing metadata and database objects remain. Memory images will not be recoverable after a Kognitio cluster restart, although they will be recoverable after a Kognitio server start.

What if YARN kills the Kognitio on Hadoop cluster?

It is possible for YARN to kill the Kognitio cluster application, for example to free up memory resources on the Hadoop cluster. If this happens it should be treated as though the “kodoop cluster stop” command had been submitted: the HDFS data for the cluster will persist, and it is possible to start, reconfigure, or remove the cluster.

[Screenshot: YARN application list showing a killed Kognitio application]

Slider Logs

As a resource manager, YARN can “giveth resource and can also taketh away”. The Kognitio server application processes run within the Kognitio application container process group, and the YARN ApplicationMaster for each Kognitio cluster will monitor the container process groups to make sure the allocated resource is not exceeded.

In a pre-release version of Kognitio on Hadoop a bug existed whereby too many processes were being started within a Kognitio application container. This made the container susceptible to growing larger than its original resource allocation when the Kognitio cluster was placed under load, and the YARN ApplicationMaster would terminate the container. If this happened it would be useful to check the slider logs to determine the root cause of why the container was killed.

The slider logs for the Kognitio cluster can be accessed from the YARN web UI.

[Screenshot: YARN application attempt view showing the application containers]

The image shows that a Kognitio container has been restarted: the container ID, which increments sequentially as containers are added to the Kognitio cluster, is missing (“container_1477582273761_0035_01_000003”), and a new container (“container_1477582273761_0035_01_000004”) has been started in its place. It is possible to examine the slider management container log to determine what happened to the container that is no longer present in the running application.

[Screenshot: YARN resource manager container logs]
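
If YARN log aggregation is enabled on the cluster, the same container logs can also be pulled from the command line using the application ID visible in the web UI, for example:

# fetch aggregated logs for the whole application (slider manager and Kognitio containers)
yarn logs -applicationId application_1477582273761_0035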

With Kognitio auto-recovery enabled, if a container is terminated due to running beyond physical memory limits then the cluster will not restart. It is advised to determine the cause of the termination before manually restarting the cluster. If Kognitio suffers a software failure with auto-recovery enabled, then Kognitio will automatically restart the server.

In the event of a container being terminated, use the slider logs to scroll to where the container was killed. In this example case it was because the container had exceeded its resource limit.

containerID=container_1477582273761_0035_01_000003] is running beyond physical memory limits. Current usage: 17.0 GB of 17 GB physical memory used; 35.5 GB of 35.7 GB virtual memory used. Killing container.

The slider log also contains a dump of the process tree for the container. The RSSMEM_USAGE (PAGES) column for the processes showed that the resident memory for the process group exceeded 18.2GB for a 17GB allocated application container.

Copy the process tree dump from the slider log to a file and sum the rssmem pages to get a total:
awk 'BEGIN{FS=" ";pages=0;rssmem=0;}{pages+=$10;rssmem=(pages*4096);print(pages,rssmem,$10, $6)}' file
