Beyond Weak Memory Consistency: The Challenges of Memory Persistency
Emerging non-volatile memory (NVM) technologies provide fast access to persistent data (guaranteed to endure power failures and crashes) at a performance comparable to that of volatile memory (RAM). NVM (a.k.a. persistent memory) is expected to supplant RAM in the near future, leading to substantial changes in software and its engineering.
However, the performance gains of NVM are difficult to exploit correctly. A key challenge lies in ensuring correct recovery after a crash by maintaining the consistency of the data in persistent memory. This requires an understanding of the underlying (weak) persistency model, describing the order in which stores are propagated to NVM. The problem is that CPUs are not directly connected to memory; instead there are multiple non-persistent caches in between. Consequently, memory stores are not propagated to NVM at the time and in the order issued by the processor, but rather at a later time and in the order decided by cache coherence protocols.
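To make this concrete, here is a minimal toy model (illustrative only, not the formal semantics covered in the tutorial) in which stores first land in a volatile cache and may reach NVM in any order, unless the program forces a write-back with an explicit flush. The names `Memory`, `store`, `flush`, and `crash_states` are invented for this sketch.

```python
# Toy model of weak persistency: stores land in a volatile cache and
# reach NVM later, in an order chosen by the hardware, not the program.
import itertools

class Memory:
    def __init__(self):
        self.cache = {}  # volatile: lost on a crash
        self.nvm = {}    # persistent: survives a crash

    def store(self, loc, val):
        self.cache[loc] = val  # buffered; not yet persistent

    def flush(self, loc):
        if loc in self.cache:
            self.nvm[loc] = self.cache[loc]  # force write-back to NVM

    def crash_states(self):
        # At a crash, any subset of the buffered stores may already
        # have been written back: enumerate all possible NVM contents.
        pending = list(self.cache.items())
        states = []
        for r in range(len(pending) + 1):
            for subset in itertools.combinations(pending, r):
                s = dict(self.nvm)
                s.update(subset)
                states.append(s)
        return states

m = Memory()
m.store("data", 42)
m.store("flag", 1)  # program order: data before flag
# Without flushes, a crash may persist flag=1 but NOT data=42:
bad = any(s.get("flag") == 1 and "data" not in s for s in m.crash_states())
print(bad)   # True: the inconsistent state is reachable

m2 = Memory()
m2.store("data", 42)
m2.flush("data")     # explicit write-back before publishing the flag
m2.store("flag", 1)
bad2 = any(s.get("flag") == 1 and "data" not in s for s in m2.crash_states())
print(bad2)  # False: flag=1 now implies data is persistent
```

The first run shows why naive code is unsafe under weak persistency; the second shows the standard remedy of flushing data before publishing a flag (on real Intel x86 hardware this corresponds to instructions such as CLFLUSHOPT followed by a fence).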
In this tutorial, we demonstrate three facets of persistency research:
We present the formal persistency semantics of the ubiquitous Intel x86 architecture.
We outline how model checking and program logics can be used for proving safety properties of persistent programs.
We present persistent serializability and persistent linearizability as correctness conditions for persistent transactions and algorithms, respectively.
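As a flavour of the safety properties these techniques establish, the following self-contained sketch (again a toy model, with invented names such as `crash_states` and `recover`) checks that a commit-flag update protocol recovers to a consistent value after every possible crash:

```python
# Sketch of a crash-consistency safety property: after ANY crash,
# recovery restores a consistent state. Illustrative model only.
import itertools

def crash_states(nvm, cache):
    # At a crash, any subset of unflushed stores may have reached NVM.
    pending = list(cache.items())
    for r in range(len(pending) + 1):
        for subset in itertools.combinations(pending, r):
            s = dict(nvm)
            s.update(subset)
            yield s

# Crash-consistent update of x via a commit flag:
#   1. stage the new value in a log slot and flush it,
#   2. only then set the (still unflushed) commit flag.
nvm = {"x": 0}
cache = {}
cache["log"] = 1           # step 1: stage new value
nvm["log"] = 1             # ... and flush it (modelled as a write-back)
cache["committed"] = True  # step 2: publish, not yet flushed

def recover(state):
    # Recovery: adopt the logged value only if the commit flag persisted.
    return state["log"] if state.get("committed") else state["x"]

# Safety property: every reachable crash state recovers to the old
# value 0 or the new value 1, never a torn intermediate state.
results = {recover(s) for s in crash_states(nvm, cache)}
print(results)  # → {0, 1}
```

Model checking explores such crash states exhaustively, while program logics establish the same property by deductive proof; the tutorial covers both approaches.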
Tue 22 Jun, 13:30 - 16:15 (Eastern Time, US & Canada)