As SAP HANA environments continue to grow and the demand for real-time data analytics increases, managing memory efficiently becomes critical. In the first part of this series, we explored strategies to handle database size challenges effectively. Now, let’s take it a step further by diving into SAP HANA’s Native Storage Extension (NSE) — a powerful, built-in solution that allows organizations to seamlessly offload less frequently accessed data to disk without sacrificing performance. Coupled with SAP HANA’s Fast Restart option, which leverages persistent memory techniques to speed up database reloads, these features can dramatically reduce downtime and optimize system responsiveness. With the support of modern storage platforms like Pure Storage FlashArray™, NSE and Fast Restart together offer a smart, flexible approach to extending memory capacity, accelerating restarts, and keeping your SAP HANA environment running at peak performance. In this post, I’ll share my hands-on experience testing both solutions, practical tips on data tiering, and the impressive results I’ve observed.
SAP HANA Native Storage Extension
Another fantastic way to manage database growth in memory is the use of SAP HANA’s Native Storage Extension (NSE). NSE is a built-in warm data tiering solution for SAP HANA that allows organizations to store less-frequently accessed data on disk—rather than fully in-memory—without investing in additional hardware. It’s very easy to set up, and thanks to the speed of my Pure Storage FlashArray™, the performance penalty is minimal.

Let’s set the table for my testing. My SAP HANA database is sitting on a 1TB memory system, and I loaded up 35 tables with about 800GB of data. That’s roughly 100,000,000 rows of data in the system – not too shabby! NSE is activated by simply marking a table, column, or partition as page loadable. Rather than the data being fully loaded into main memory, pages are brought into the memory buffer cache on access, and once the data is unloaded (whether manually or through a system restart) it will be retrieved from the persistent storage layer going forward.
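For illustration, the three granularities look like this (MY_TABLE, MY_COLUMN, and the partition number are placeholders; note that a column-level change restates the column’s data type):

```sql
-- Whole table (CASCADE applies the change to dependent objects such as indexes)
ALTER TABLE MY_TABLE PAGE LOADABLE CASCADE;

-- Single column (the existing data type must be repeated)
ALTER TABLE MY_TABLE ALTER (MY_COLUMN NVARCHAR(100) PAGE LOADABLE);

-- Single partition of a partitioned table
ALTER TABLE MY_TABLE ALTER PARTITION 2 PAGE LOADABLE;
```

Switching back to fully in-memory behavior is the same statement with COLUMN LOADABLE in place of PAGE LOADABLE.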
So now the question arises: what data should I store in the warm tier vs. load in memory, and how do I determine this? That’s really going to be a joint IT/business discussion for every organization looking to tier data – similar to that dreaded archiving discussion, but with much different results. I’ve seen customers go by age – all sales orders older than 10 years can be offloaded to storage, for example. I’ve also worked with customers who have kept it to just the technical tables (BALDAT, for example) so as not to affect any business activity. It’s really up to you. SAP does provide an NSE Advisor tool as part of SAP HANA Cockpit – you can run it for an extended amount of time, and it’ll monitor read access and give you a report of what it thinks you can safely tier to storage.
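If you prefer driving the advisor from SQL rather than the Cockpit, a sketch looks like the following – the parameter and view names below are to the best of my knowledge for SAP HANA 2.0 SPS 04+, so check your revision’s documentation:

```sql
-- Turn on access-statistics collection so the NSE Advisor has data to work with
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('cs_access_statistics', 'collection_enabled') = 'true' WITH RECONFIGURE;

-- After a representative workload period, review the recommendations
SELECT * FROM M_CS_NSE_ADVISOR;
```

Let it observe a genuinely representative workload window (month-end close included, if that matters to you) before acting on its recommendations.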
Now, back to my database: I went ahead and set seven tables to page loadable, each with just short of 3,000,000 rows – a total of 20.8 million rows tiered, or almost 21% of the total data. As you can see below, this saved 167.2GB of memory in the main store of the database, although the buffer cache does still use up 63.7GB.
ALTER TABLE WEBPAGESASCII_X01 PAGE LOADABLE CASCADE;

SELECT
  t.TABLE_NAME,
  t.LOAD_UNIT,
  t.MEMORY_SIZE_IN_MAIN AS MEMORY_SIZE,
  p.DISK_SIZE AS DISK_SIZE
FROM M_CS_TABLES t
JOIN M_TABLE_PERSISTENCE_STATISTICS p
  ON t.SCHEMA_NAME = p.SCHEMA_NAME AND t.TABLE_NAME = p.TABLE_NAME
WHERE t.TABLE_NAME LIKE 'WEBPAGE%' AND t.LOAD_UNIT = 'PAGE';
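To keep an eye on the buffer cache consumption mentioned above, the M_BUFFER_CACHE_STATISTICS monitoring view can be queried – the column list here is a best-effort sketch, so verify it against your revision:

```sql
-- How much of the NSE buffer cache is allocated and actually in use
SELECT HOST, PORT, MAX_SIZE, ALLOCATED_SIZE, USED_SIZE
  FROM M_BUFFER_CACHE_STATISTICS;
```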

As you can see below, the physical data size of the column store is just over 800GB while the Physical Page-Loadable Data is at 167.2GB – or just under 21%.

What are the effects of this on the actual database? Well, I restarted my database five times before enabling NSE. On average it loaded in about 200 seconds and filled 806.8GB of memory. This database isn’t tuned for fast load times and is on a fairly old FlashArray, so I was loading at about 4GB/s – which still isn’t bad considering the top of SAP’s typical range works out to 3.33GB/s (SAP Note 2127458 states “typical acceptable reload throughputs are between 12 and 200GB/minute”).
Duration (s) | Memory Loaded (GB)
-------------|-------------------
207.4        | 807.1
198.48       | 806.8
198.51       | 806.73
198.58       | 806.68
198.63       | 806.72
Once I enabled NSE, that roughly 21% savings meant I was loading 643.3GB of data into memory, and my average restart time (after five more restarts) went down to 157 seconds – about 21% faster (which makes sense, given that I tiered 21% of the data). That’s a huge win for those times when downtime is necessary – whether planned or unplanned. I’ve talked to SAP customers who need to be very careful with how and when they take downtime due to the sheer size of their databases and how long they take to load into memory. And of course, the more aggressive you can be with your data tiering, the more you can control memory growth at your compute layer.
Duration (s) | Memory Loaded (GB)
-------------|-------------------
155.63       | 643.56
160.58       | 643.83
158.39       | 643.84
157.26       | 643.7
157.24       | 643.48
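As a quick sanity check on the two tables, the averages and the quoted improvement can be reproduced with a few lines of Python:

```python
# Restart timings recorded above (seconds), before and after enabling NSE
before_nse = [207.4, 198.48, 198.51, 198.58, 198.63]
after_nse = [155.63, 160.58, 158.39, 157.26, 157.24]

avg_before = sum(before_nse) / len(before_nse)    # ~200.3 s
avg_after = sum(after_nse) / len(after_nse)       # ~157.8 s
speedup_pct = (1 - avg_after / avg_before) * 100  # ~21.2 %

print(f"{avg_before:.1f}s -> {avg_after:.1f}s ({speedup_pct:.1f}% faster)")
```

The ~21% restart speedup lines up almost exactly with the ~21% of data tiered, which is what you would expect when load time scales with the volume of column-loadable data.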
Bonus: SAP HANA Fast Restart Option
As an added bonus, I decided to take this database to the next level and test out SAP’s Fast Restart option for SAP HANA. SAP HANA can use persistent memory to store main data so that it doesn’t need to be fetched and loaded at start-up. With Intel killing its Optane persistent memory program, there aren’t a lot of persistent memory options out there. SAP has taken its persistent memory functionality and adapted it to use a temporary file system – native Linux tmpfs – storing data directly in system RAM. Now, obviously a server reboot would wipe this memory, but if you’re just restarting your SAP HANA environment, you can see real benefits in reload times by taking advantage of tmpfs.
Here’s how to set it up, keeping in mind you must be on SAP HANA 2.0 SPS 04 or later to take advantage of this feature. First, you want to see how much memory each of your CPU sockets (NUMA nodes) has:
cat /sys/devices/system/node/node*/meminfo | grep MemTotal | awk 'BEGIN {printf "%10s | %20s\n", "NUMA NODE", "MEMORY GB"; while (i++ < 33) printf "-"; printf "\n"} {printf "%10d | %20.3f\n", $2, $4/1048576}'

You’ll want to create a tmpfs directory for each NUMA node that your server has. I created mine in /hana/tmpfs0/RA1.
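With RA1 as the SID from my example, preparing the mount point might look like this – a sketch; the ra1adm user and sapsys group follow the usual &lt;sid&gt;adm convention and are an assumption here:

```shell
# Create one tmpfs mount point per NUMA node (this system has one)
mkdir -p /hana/tmpfs0/RA1

# Make it writable by the HANA OS user -- <sid>adm, here assumed to be ra1adm
chown ra1adm:sapsys /hana/tmpfs0/RA1
```

On a multi-socket server you would repeat this for /hana/tmpfs1/&lt;SID&gt;, /hana/tmpfs2/&lt;SID&gt;, and so on, one per NUMA node.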
Add the following to your /etc/fstab file and then mount the directory. You can control how much data you store in tmpfs through the size option – I did 250GB for now.
tmpfsRA10 /hana/tmpfs0/RA1 tmpfs rw,relatime,mpol=prefer:0,size=250G 0 0
mount -a
Now, change your persistent memory options in your SAP HANA database’s global.ini file. In the [persistence] section, basepath_persistent_memory_volumes should point to your persistent memory mounts, with colon-separated values if you have more than one (I only needed one). Also, under the [persistent_memory] section, set table_default to on.
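Based on the description above, the resulting global.ini entries for my single-node setup look like this (with multiple NUMA nodes, basepath_persistent_memory_volumes would list all mounts, colon-separated):

```
[persistence]
basepath_persistent_memory_volumes = /hana/tmpfs0/RA1

[persistent_memory]
table_default = on
```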
It’s as easy as that. You’ll need to restart your SAP HANA system for this to take effect, and you’ll see the tmpfs directory start to grow during the restart.

Once it’s filled, you’ll see the benefits on future restarts. I restarted my database five more times and recorded the results, and they were very good – on average I was able to shave another 40% off load times on my database, as seen below.

As you can see, combining NSE and the SAP HANA Fast Restart option allowed me to reduce my load times from 200 seconds all the way down to 94 seconds. By utilizing these methods, organizations can significantly optimize their SAP HANA environments. Not only does this approach enhance database performance by reducing load times, but it also provides greater flexibility in managing memory growth and minimizing downtime. These strategies empower businesses to maintain a highly efficient and responsive SAP HANA system, ensuring critical operations run smoothly and data is always readily accessible.