This is a commentary on the InformationWeek article of 27 Feb 2013.

Interesting article.

While there is a great deal more noise about in-memory lately, it has indeed been around a long time.  The fact that SAP is marketing its version heavily, and that Accenture is earning a fair amount implementing it in SAP installed accounts to remedy issues around BW, obscures the much larger capabilities beyond typical database operations: those of more advanced analytics.

Having data in memory is not new; IBM systems from the 1960s did that in essence.  What has changed of late is the precipitous DROP in the cost of commodity DRAM.  True in-memory computing is not about SSD or cache; it is about pure MPP use of RAM.

Optimized, in-memory MPP RDBMS systems have been maturing for 20 years, but the market awareness generated by “Big Data” and Hadoop has brought them to the forefront, since that technology needs an “accelerator” for ad hoc, on-demand analysis.  Persistence is still important, and as the persistent store moves (rapidly) away from the Data Warehouse to Hadoop, it enables the separation of the storage and the analysis of data, so each can use the most suitable architecture: in-memory MPP for analytics and commodity MPP on open-source Hadoop for the storage (persistence).
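As a rough illustration of that split (not Kognitio's or SAP's actual interfaces, and with made-up table and column names), the pattern looks something like this in plain Python: a flat file on disk stands in for the Hadoop persistence layer, and an in-memory SQLite database stands in for the in-memory analytics engine.

    import csv
    import sqlite3

    # 1. Write some sample rows to the "persistent store" (stand-in for HDFS).
    rows = [("2013-02-01", "web", 120.0),
            ("2013-02-01", "store", 340.5),
            ("2013-02-02", "web", 98.25)]
    with open("sales_persisted.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["day", "channel", "revenue"])
        writer.writerows(rows)

    # 2. Pull the data of interest off persistent storage into RAM for analysis.
    mem = sqlite3.connect(":memory:")
    mem.execute("CREATE TABLE sales (day TEXT, channel TEXT, revenue REAL)")
    with open("sales_persisted.csv", newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        mem.executemany("INSERT INTO sales VALUES (?, ?, ?)", reader)

    # 3. Ad hoc, on-demand analysis runs entirely against the in-memory copy.
    for day, total in mem.execute(
            "SELECT day, SUM(revenue) FROM sales GROUP BY day ORDER BY day"):
        print(day, total)

In a real deployment the in-memory side would of course be a distributed MPP engine loading from HDFS in parallel, not a single-process database, but the division of labour is the same: cheap commodity storage holds everything, and RAM holds the working set being analysed.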

Some interesting (albeit mundane) case studies exist at www.kognitio.com/tra, www.kognitio.com/BT and www.kognitio.com/AIMIA.