SQL Server 2014 – Memory Optimized Tables

Our Team's Need:
We have databases ranging from about 5 GB to 500 GB, and only a few tables (roughly 10) are high-volume, high-concurrency tables. At peak volume, we may write a few thousand rows per second per table.

We are in the business of providing K-12 summative tests ( http://emetric.net/ ). When an online test goes live, kids from across the entire state take it, which means very high-volume writes plus heavy reads for scoring. Our scenario: data for tests that are currently live needs to be in memory, while the rest of the data can stay on disk.

It is like partitioning by test, where a few tests' data needs to live in memory and the rest of the data can stay on disk. This is critical for our business success. I am exploring the possibility of using Memory Optimized Tables to support high-stakes, high-concurrency tests. This is my journey of evaluating this feature in SQL Server 2014.

Memory Optimized Tables:
SQL Server 2014 introduced a feature called Memory Optimized Tables (In-Memory OLTP). As the name suggests, the table lives in memory, so no disk I/O is required to read or write its rows.
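To make that concrete, here is a minimal sketch of what setting one up looks like. The database name, file path, table, and columns below are placeholders I made up for illustration, not our actual schema; the one prerequisite is that the database has a memory-optimized filegroup before such a table can be created.

-- Add a memory-optimized filegroup and its container to a hypothetical database.
ALTER DATABASE TestDB ADD FILEGROUP TestDB_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE TestDB
    ADD FILE (NAME = 'TestDB_mod_dir', FILENAME = 'C:\Data\TestDB_mod_dir')
    TO FILEGROUP TestDB_mod;
GO

-- Create a durable memory-optimized table; the hash index bucket count is sized
-- roughly to the expected number of rows.
CREATE TABLE dbo.StudentResponse
(
    ResponseId  BIGINT IDENTITY(1,1) NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    StudentId   INT            NOT NULL,
    TestId      INT            NOT NULL,
    ItemId      INT            NOT NULL,
    Response    NVARCHAR(2000) NULL,
    SubmittedAt DATETIME2      NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);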

In layman's terms, accessing data from memory is like driving a Ferrari, while accessing it from disk is like riding a bullock cart. You get the point.

The next thing your mind jumps to: what happens if my server crashes? What happens if I am in the middle of a transaction, haven't committed or rolled back, and things go south?

Oh, well... SQL Server 2014 has that covered: the data is also written to disk (the transaction log plus checkpoint files) for durability.
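Concretely, durability is controlled per table via the DURABILITY option. A rough sketch with made-up table names:

-- SCHEMA_AND_DATA: rows are hardened to the transaction log and checkpoint files,
-- so committed data survives a crash or restart.
CREATE TABLE dbo.LiveTestScore
(
    ScoreId BIGINT       NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    Score   DECIMAL(9,2) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- SCHEMA_ONLY: only the table definition survives a restart; the rows do not.
-- Useful for scratch or staging data you can afford to lose.
CREATE TABLE dbo.ScoringScratch
(
    RowId   BIGINT         NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    Payload NVARCHAR(2000) NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);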

Microsoft link for Memory Optimized Tables:

http://msdn.microsoft.com/en-us/library/dn511014.aspx

Introduction from the MSDN article:

Memory-optimized tables reside in memory. Rows in the table are read from and written to memory. The entire table resides in memory. A second copy of the table data is maintained on disk, but only for durability purposes.
In-Memory OLTP is integrated with SQL Server to provide a seamless experience in all areas such as development, deployment, manageability, and supportability. A database can contain in-memory as well as disk-based objects.

Latch Contention

A typical scenario is contention on the last page of an index when inserting rows concurrently in key order. Because In-Memory OLTP does not take latches when accessing data, the scalability issues related to latch contentions are fully removed.
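Before assuming that is our bottleneck, it is worth measuring it. A quick (if rough) way to see whether last-page insert contention is actually hurting the current disk-based tables is to look at cumulative PAGELATCH waits; the query below is generic, nothing in it is specific to our schema:

-- Latch waits accumulated since the last SQL Server restart. Heavy PAGELATCH_EX
-- time on insert-heavy workloads is the classic last-page hotspot symptom.
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'PAGELATCH%'
ORDER BY wait_time_ms DESC;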

Spinlock Contention

Because In-Memory OLTP does not take spinlocks when accessing data, the scalability issues related to spinlock contention are fully removed.

Locking Related Contention

If your database application encounters blocking issues between read and write operations, In-Memory OLTP removes the blocking issues because it uses a new form of optimistic concurrency control to implement all transaction isolation levels. In-Memory OLTP does not use TempDB to store row versions.

If the scaling issue is caused by conflict between two write operations, such as two concurrent transactions trying to update the same row, In-Memory OLTP lets one transaction succeed and fails the other transaction. The failed transaction must be re-submitted either explicitly or implicitly, re-trying the transaction. In either case, you need to make changes to the application.

If your application experiences frequent conflicts between two write operations, the value of optimistic locking is diminished. The application is not suitable for In-Memory OLTP. Most OLTP applications don't have write conflicts unless the conflict is induced by lock escalation.
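Since the failed writer has to be resubmitted, the application (or a T-SQL wrapper) needs a retry loop. Below is a rough sketch of the pattern I would start from; the table is the placeholder from the earlier example, and the error numbers (41302 for the update conflict, plus the 41301/41305/41325 dependency and validation failures) are the In-Memory OLTP errors I would treat as retryable:

DECLARE @retries INT = 3;

WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;

        -- Placeholder write that two sessions might attempt on the same row.
        UPDATE dbo.LiveTestScore WITH (SNAPSHOT)
        SET    Score = Score + 1
        WHERE  ScoreId = 42;

        COMMIT TRANSACTION;
        SET @retries = 0;                  -- success, stop looping
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;

        IF ERROR_NUMBER() IN (41301, 41302, 41305, 41325) AND @retries > 1
            SET @retries = @retries - 1;   -- conflict: try the write again
        ELSE
        BEGIN
            SET @retries = 0;
            THROW;                         -- not retryable, or out of retries
        END
    END CATCH
END;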


Scalability

Most scaling issues in SQL Server applications are caused by concurrency issues such as contention in locks, latches, and spinlocks.

It almost looks like the MEMORY storage engine from the MySQL world. I have been using MySQL InnoDB for a while for a homegrown application. I will migrate the data to memory-optimized tables and run our stress-testing suite to see how they perform under different loads compared to regular disk-based tables. It seems like a good option to try for our needs.
