How to enable hugepages on Linux

Computer memory is allocated to processes as pages. Usually these pages are rather small, meaning that a process consuming a lot of memory also consumes a lot of pages. Managing and looking up that multitude of pages adds overhead and can result in system slowdowns, which is why some servers can benefit from enabling huge pages.


This is a companion discussion topic for the original entry at https://linuxconfig.org/how-to-enable-hugepages-on-linux
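Before changing anything, it is worth checking what the system currently uses. A quick look, assuming nothing beyond standard /proc and getconf (the numbers you see will of course differ per machine):

    # Regular page size in bytes (typically 4096, i.e. 4 kB)
    getconf PAGE_SIZE

    # Current hugepage configuration: size per hugepage, total reserved, free
    grep -i huge /proc/meminfo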

STOP. Do not use the values in this article if you are on a desktop system; the value given is orders of magnitude too large. You can probably get away with writing the value at runtime, because the system is already loaded and the kernel won't find that much free memory to reserve. But if you write the proposed values to sysctl.conf, your computer may no longer boot: it will try to allocate every spare megabyte of memory to hugepages before the system loads, leaving none for the OS.

Bear in mind that with the default hugepage size of 2 MB, setting vm.nr_hugepages to 512 already allocates a gigabyte of memory. The value given in this article allocates 200 gigabytes.
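A safer pattern is to test a value at runtime and only persist it once you have verified the result. A minimal sketch, assuming the default 2 MB hugepage size; the page count of 512 (1 GB) is purely illustrative, not a recommendation:

    # Confirm the default hugepage size (typically 2048 kB, i.e. 2 MB)
    grep Hugepagesize /proc/meminfo

    # Reserve pages at runtime first: 512 x 2 MB = 1 GB
    sudo sysctl -w vm.nr_hugepages=512

    # Check how many the kernel actually managed to reserve
    grep HugePages_Total /proc/meminfo

    # Only once the value is proven sane, persist it (filename is arbitrary)
    echo 'vm.nr_hugepages = 512' | sudo tee /etc/sysctl.d/10-hugepages.conf

If HugePages_Total comes back lower than what you asked for, the kernel could not find enough contiguous memory; it reserves hugepages on a best-effort basis, which is exactly why testing at runtime first is the safer order of operations.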

I can’t post actual links, but see the Red Hat documentation for vm.nr_hugepages for more info. It applies to Debian-based distros as well.


Thanks so much for this infinitely precious comment! I hope the author of that article really, really corrects their silly mistake.

When I did a sudo ls -la /sys/devices/system/node/node0/hugepages/ and saw a directory named hugepages-1048576kB, I knew something had to be wrong.

Fortunately for me, I did not reboot!

Or else I’d be banging my head against this laptop I’ve got, desperately trying to log in to my pretty useless remote server…
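For reference, those per-size directories can also be read without sudo; the directory name encodes the page size in kB, so hugepages-1048576kB corresponds to the 1 GB hugepage size. The nr_hugepages file inside each directory shows how many pages of that size are actually reserved (the paths below assume the sizes your hardware exposes):

    # Hugepage sizes the kernel knows about; names encode the size in kB
    ls /sys/kernel/mm/hugepages/

    # Pages of each size currently reserved on NUMA node 0
    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages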