Presto Performance Benchmarks

As part of our ongoing TPC-DS benchmarking of SQL on Hadoop platforms we have recently been looking at Presto. Fully integrating a new platform into our benchmarking process is somewhat time-consuming, so in this blog I wanted to share our initial findings.

Presto has good SQL support, as you would expect from an open source project supported by Teradata, a major data warehouse vendor. However, Presto’s performance over the TPC-DS query set at the 1TB scale was disappointing.


We used an AWS EMR cluster deployment for the benchmark; Presto version 0.170 is available in the initial checklist of applications when creating an EMR cluster. We commissioned a 4-node system with similar resources to the infrastructure we used recently in our Hive LLAP benchmarking:

  • 1 edge node – m3.xlarge instance
  • 4 data nodes – r4.16xlarge instances as worker nodes

AWS Cluster Setup for Presto benchmark

We reviewed the deployment section of Presto documentation to ensure the EMR deployment suited our benchmarking needs. We also installed the presto-admin tool on the cluster for easier administration.

Presto configuration

Finding information on how to configure Presto to run optimally on specific hardware proved somewhat challenging. We followed the instructions in this Configuring Presto article, amending settings to match the resources of our system.
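To give a sense of the kind of settings involved, below is a sketch of a worker-node configuration of the sort the Presto deployment documentation describes. The values are illustrative assumptions sized for a large-memory worker such as an r4.16xlarge, not our exact settings, and `coordinator-host` is a placeholder.

```properties
# etc/config.properties on a worker node (illustrative values only)
coordinator=false
http-server.http.port=8080
# total distributed memory a single query may consume across the cluster
query.max-memory=200GB
# per-worker memory limit for a single query
query.max-memory-per-node=50GB
discovery.uri=http://coordinator-host:8080
```

```properties
# etc/jvm.config (heap sized to leave headroom for the OS and other services)
-Xmx100G
-XX:+UseG1GC
```

The key trade-off is sizing `query.max-memory-per-node` and the JVM heap against the node's physical RAM so that large joins and aggregations do not spill or fail.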

The 1TB TPC-DS data was held in Hive ORC files with SNAPPY compression (see Facebook blog for details). The tables were partitioned on date columns where possible.
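As an illustration of this layout, a TPC-DS fact table in Hive might be declared as below. The table and column names come from the TPC-DS schema, but the exact DDL we used may differ; columns are abbreviated here.

```sql
-- Illustrative Hive DDL sketch for one TPC-DS fact table, ORC with SNAPPY,
-- partitioned on the date surrogate key (columns abbreviated)
CREATE TABLE store_sales (
  ss_item_sk       BIGINT,
  ss_customer_sk   BIGINT,
  ss_quantity      INT,
  ss_net_paid      DECIMAL(7,2)
  -- ... remaining TPC-DS columns ...
)
PARTITIONED BY (ss_sold_date_sk BIGINT)
STORED AS ORC
TBLPROPERTIES ("orc.compress" = "SNAPPY");
```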

SQL syntax support

The ability to migrate existing SQL workloads to run on Hadoop-based data is essential for any organisation wishing to utilise Hadoop for big data storage. The first section of our TPC-DS benchmarking is therefore an evaluation of the SQL supported.

Each of the 99 TPC-DS queries is categorised as one of:

  • runs “out of the box” – no changes to SQL required
  • small syntax changes only – renaming columns, derived table aliases, date syntax, etc.
  • no support – could not execute the query without major syntax changes

For Presto, 4 of the 5 unsupported queries failed due to issues parsing the GROUPING function associated with OLAP-style queries. The Presto documentation on GROUPING states that “grouping arguments must match exactly the columns referenced in the corresponding GROUPING SET”. We tried changing the syntax to reflect this, but without success; further testing may allow these queries to run on Presto in the future. The remaining errored query failed due to NULL values in a semi-join.
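For reference, the shape the documentation describes looks like the sketch below: the arguments to grouping() must exactly match the columns referenced in the GROUPING SETS clause. This is an illustrative query over the TPC-DS store_sales table, not one of the benchmark statements.

```sql
-- grouping() arguments match the columns used in GROUPING SETS,
-- as the Presto documentation requires
SELECT ss_store_sk,
       ss_item_sk,
       grouping(ss_store_sk, ss_item_sk) AS grp,
       sum(ss_net_paid) AS total_paid
FROM store_sales
GROUP BY GROUPING SETS ((ss_store_sk), (ss_store_sk, ss_item_sk));
```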

Platform         Presto   Kognitio
Out of box         73        76
Minor changes      21        23
No support          5         0

Running TPC-DS at scale (single stream)

The next step towards evaluating SQL on Hadoop platforms for enterprise level workloads is running at scale. The TPC-DS benchmark was run in a single stream at the 1TB scale.

Moving to this larger 1TB scale is where Presto started to struggle in our evaluation. There were 8 out of the 99 queries that we classed as long running as they failed to complete within 1 hour.

Platform              Presto   Kognitio
Queries run             86        99
Long / error             8         0
No support               5         0
Fastest query count      0        99

Kognitio completed all 99 queries at this scale, each faster than Presto. The longest query (Q67) took 12m 54s, but 90 of the 99 queries took less than 30s and 64 took less than 10s.

For Presto, only 7 queries completed in under 30 seconds.

The figure below is our standard way of comparing the performance of two platforms. Each query is represented by a horizontal block, and the faster platform for a given query gets the larger share of that block. Overall, the more one colour dominates the figure, the better that platform performs.

Kognitio vs Presto graph

The solid blue block at the top of figure 1 represents the 13 queries that Kognitio can run but that Presto either does not support or that were long running at 1TB. Kognitio runs all of the remaining 86 queries faster than Presto, with 56 queries over 10x faster and 11 queries over 100x faster.

More work required on configuring and tuning Presto

Although Presto is designed to run well out of the box, it is clear that we need to investigate these disappointing initial results further and tune our system. We plan to look at the configuration settings in much more detail and conduct a thorough analysis of Presto’s properties before we move to the third stage of our benchmarking: concurrency evaluation.

If you have experience in configuring and tuning Presto on clusters then I would love to hear from you regarding optimizing our deployment. I have a post in the Presto Google Group for any suggestions or you can add a comment on this blog or contact me at
