Most of the time, you don't need to think about memory management in Linux. It happens automatically, and quite intelligently too. Sometimes, though, you might need to deliberately "clear" some memory for a specific task, such as benchmarking. If you want to compare the performance of programs, it's best to start each run from conditions that are as similar as possible.
But first, what does it mean to "clear out memory" in Linux? There are a couple of facets to this.
Freeing up Memory by Closing or Killing Processes
The most obvious meaning of "free up memory" is about removing processes that are using inordinate amounts of it. There are many ways to do this, but one of my favorite programs is htop - and I've written an article about how to use it to identify processes and kill them.
Beyond this however, is the concept of the "page cache" or the "buffer cache".
In-built Memory Freeing System and Process Killer
Even if you don't explicitly manage memory, Linux has a built-in mechanism called the OOM killer (out-of-memory killer), which automatically kills processes when the system runs out of memory and writes a message into the kernel's ring buffer. For example, take the following command, which consumes memory without bound:
tail -f /dev/zero
Eventually it will be killed off by the OOM killer, and we can see the message in the kernel log by looking at the output of the "dmesg" command:
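If you want to search for OOM-killer activity specifically, you can filter the ring buffer with grep (the exact wording of the message varies between kernel versions, and dmesg may require root on systems where kernel.dmesg_restrict is enabled):

```shell
# Search the kernel ring buffer for OOM-killer messages.
# Falls back to a note if nothing has been killed yet.
dmesg 2>/dev/null | grep -i "out of memory\|killed process" || echo "no OOM events logged"
```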
But what if you want to manually empty the page or buffer cache in Linux? Here's a quick explanation of what that means.
How Page/Buffer Cache Works in Linux
Disk operations are expensive: spinning drives require the physical movement of a seek head, and even SSDs have high latency compared to reading data from RAM. Therefore, whenever you read or write a file on disk, Linux keeps that data in memory (RAM) for faster future access.
In the following example, you can see that the initial page/buffer cache is 62MB. Then I create and read a 10MB file. Immediately after, the cache grows to 73MB.
This shows that the file is now in RAM and will be accessed much faster next time.
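The experiment above can be reproduced with free and dd (the file name "testfile" here is arbitrary):

```shell
# Show memory usage before; the "buff/cache" column is what we're watching
free -m

# Create a 10MB file and read it back, pulling it into the page cache
dd if=/dev/urandom of=testfile bs=1M count=10
cat testfile > /dev/null

# buff/cache should now be roughly 10MB larger than before
free -m
```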
This Doesn't Mean We Need to Free the Memory!
Based on the above, you might think that freeing up the page/buffer cache is a good thing. After all, it's using up space that could be better used by other processes. However, this is untrue. Look at the example again. While the "buff/cache" memory has increased, and the "free" memory has gone down by the same amount, look at the value for "available" memory. It's pretty much the same! How is this possible?
This is because the page/buffer cache has the lowest priority: it is released as soon as another process requires the memory, so in reality it isn't hurting performance at all. All that's happening is that otherwise idle memory is being used to potentially speed up the system. If a process ever needs more memory than is free, the kernel instantly reclaims space from the page/buffer cache.
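You can see this relationship directly in the output of free: the "available" column is the kernel's estimate of how much memory can be handed to new processes, and it counts most of buff/cache as reclaimable. A one-liner to pull out just the relevant columns:

```shell
# Print the three interesting columns from free -m.
# "available" stays high even when "free" is low, because the
# page/buffer cache can be reclaimed on demand.
free -m | awk '/^Mem:/ {printf "free=%sMB buff/cache=%sMB available=%sMB\n", $4, $6, $7}'
```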
Manually Clearing the Page/Buffer Cache
Despite the above, you might sometimes need to clear the page cache: for benchmarking, for example, or to guarantee that a file is read from disk rather than served from the cache. To do this, we write a value to a special file - /proc/sys/vm/drop_caches. To empty the page/buffer cache, we type:
sync; echo 1 > /proc/sys/vm/drop_caches
The "sync" command is important: it ensures that all unwritten ("dirty") data in the cache is first flushed to disk. Note that writing to /proc/sys/vm/drop_caches requires root privileges. To see how this works, look at the before and after values of buff/cache:
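For completeness, the kernel's sysctl documentation defines three values for drop_caches; 1 (the page cache) is usually what you want. A sketch with a root check, since a plain redirect fails for unprivileged users:

```shell
sync  # flush dirty pages to disk first
if [ "$(id -u)" -eq 0 ]; then
    echo 1 > /proc/sys/vm/drop_caches   # drop the page cache
    # echo 2 drops reclaimable dentries and inodes; echo 3 drops both
else
    echo "run as root (or via sudo) to write to drop_caches"
fi
```

Another common pattern is `echo 1 | sudo tee /proc/sys/vm/drop_caches`, which works where `sudo echo 1 > ...` does not, because the redirect there is performed by your unprivileged shell.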
You can see that a huge chunk of the buff/cache has been freed. Note, however, that the "available" memory has remained roughly the same, as expected.
As mentioned earlier, most of the time you don't need to do anything. The page cache is emptied and filled automatically as needed. But in case you need to manually wipe it out, you have that option as well.