Hi Friends, Are you sure you have tested SQL Server 2014 In-Memory OLTP the right way?
In one of my recent conversations with a customer, a senior DBA on the customer's side said that he does not see 'exponential' performance gains when he converts on-disk tables to In-Memory tables. He went on to say that the 'leaps and bounds' improvement claimed by Microsoft is more marketing than reality.
Is this your opinion too?
Well, instead of a knee-jerk reaction, I was interested in knowing how exactly he tested the In-Memory feature of SQL Server 2014. And after he explained his ‘basic’ test, I kind of figured out where he went wrong.
So here are the two versions of the test. One that he did, and the other that I told him to do!
And for the purpose of this blog post, I am using the CodePlex sample provided by Microsoft (of course, I do not have the customer's data or scripts).
So, below are the two scripts: one inserts data (10,000 records) into disk-based tables and the other inserts the same amount of data into In-Memory tables (both tables are identical). Note the usage of the @use_inmem parameter: based on its value, the stored procedure will insert data either into disk-based tables or into memory-optimized tables.
-- on-disk test
EXEC Demo.usp_DemoInsertSalesOrders
    @use_inmem = 0,
    @order_count = 10000,
    @include_update = 0;

-- in-mem test
EXEC Demo.usp_DemoInsertSalesOrders
    @use_inmem = 1,
    @order_count = 10000,
    @include_update = 0;
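By the way, if you want to capture the elapsed time of each run more precisely than by watching the SSMS status bar, you can wrap the call in a simple timer, roughly like this (only the stored procedure call comes from the CodePlex sample; the timing wrapper is just an illustration):

-- rough timing wrapper around the sample procedure
DECLARE @start datetime2 = SYSDATETIME();

EXEC Demo.usp_DemoInsertSalesOrders
    @use_inmem = 0,       -- flip to 1 for the memory-optimized run
    @order_count = 10000,
    @include_update = 0;

-- elapsed seconds for this run
SELECT DATEDIFF(SECOND, @start, SYSDATETIME()) AS elapsed_seconds;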
And after executing the above scripts in SSMS, the execution times were as follows:
On-Disk tables: 34 seconds
In-Memory tables: 15 seconds
While that’s a fairly nice improvement, the DBA expected performance improvement by ‘leaps & bounds’ 🙂
And here is what I told him to do:
I told him to run exactly the same test, but with around 100 concurrent users, and I advised him to use Adam Machanic's SQLQueryStress or the RML utilities to simulate those 100 users, as shown below. So here are the results (this time I used 100 users, each inserting 1,000 records, not 10K, as that takes a loooong time for disk-based tables).
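For example, with the RML utilities you can fire the same stored procedure call from 100 connections using a single ostress command, roughly like this (the server and database names are placeholders; adjust them and the authentication switches for your environment):

ostress.exe -S"MyServer" -d"InMemDemoDB" -E -n100 -r1 -q -Q"EXEC Demo.usp_DemoInsertSalesOrders @use_inmem = 0, @order_count = 1000, @include_update = 0"

Here -n is the number of connections, -r the number of iterations per connection, -E uses Windows authentication and -q suppresses query output. Run it once with @use_inmem = 0 and once with @use_inmem = 1, and compare the total elapsed time ostress reports.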
Results of 100 concurrent users, each inserting 1000 records:
Execution times:
On-Disk tables: 4 mins 2 secs
In-Memory tables: 26 seconds
Conclusion?
Well, In-Memory tables are a perfect fit for tables that experience high latch contention. That's just one case. With a single user there is hardly any latch contention, so you are going to see only a moderate performance improvement. With 100 concurrent users competing to insert data (the classic last-page insert scenario in SQL Server), the engine will see heavy latch contention on the disk-based tables. In-Memory tables use a latch-free design, so transactional throughput is much higher, as the above results clearly show.
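Just to make the latch-free point concrete, here is a minimal sketch of what a memory-optimized table definition looks like in SQL Server 2014 (this is not the actual table from the CodePlex sample; the names are made up, and the database needs a MEMORY_OPTIMIZED_DATA filegroup before this will run):

-- illustration only: a simple memory-optimized table
CREATE TABLE dbo.SalesOrderDemo
(
    OrderID     int        NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerID  int        NOT NULL,
    OrderDate   datetime2  NOT NULL,
    Amount      money      NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

Rows in such a table are accessed through lock- and latch-free structures instead of buffer pool pages, which is exactly why the hot-spot insert pattern above stops being a bottleneck.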
Before running each script/test, you can clear the latch statistics as follows:
DBCC SQLPERF ('sys.dm_os_latch_stats', CLEAR);
And after each script/test completes, you can record latch stats as follows and compare the difference:
SELECT * FROM sys.dm_os_latch_stats ORDER BY waiting_requests_count DESC
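Also keep in mind that the buffer page latches behind the last-page insert problem show up as PAGELATCH_% waits, so it is worth clearing and checking sys.dm_os_wait_stats as well during the disk-based run:

-- clear the wait stats before the test
DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR);

-- after the test: PAGELATCH_EX / PAGELATCH_SH should pile up for the disk-based run
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'PAGELATCH%'
ORDER BY wait_time_ms DESC;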
My intention behind writing this post was to make sure that you test a feature the right way. Of course, there is a lot more to be said and discussed about In-Memory tables; it's a huge topic, and this post is just a quick know-how.
Unfortunately you don’t actually show that the reason the on-disk table runs were slow was due to latching issues. It could have simply been log buffer flushes or one of many other issues. It would have been very easy to show one way or the other.
Hi Kevin, thanks for the feedback. I did mention at the end of the post that you can observe the latch stats using the DMV. When I redo this while recording a video, I shall show the latch stats too. Regards, Amit.
Hi Amit,
You can try creating a natively compiled stored procedure for the in-memory table and improve performance a lot. I have done a few tests and it can be a huge benefit.
Best regards,
Yes indeed.
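For anyone who wants to try that, a natively compiled procedure skeleton looks roughly like this (a minimal sketch with made-up names, not the procedure from the CodePlex sample):

-- illustration only: skeleton of a natively compiled stored procedure (SQL Server 2014)
CREATE PROCEDURE dbo.usp_InsertSalesOrderDemo
    @OrderID    int,
    @CustomerID int,
    @OrderDate  datetime2,
    @Amount     money
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

    -- the target table must be memory-optimized (see the sketch earlier in the post)
    INSERT INTO dbo.SalesOrderDemo (OrderID, CustomerID, OrderDate, Amount)
    VALUES (@OrderID, @CustomerID, @OrderDate, @Amount);
END;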