Free memory really low!

Adam, how are you defining 'better off'? What metrics are you testing to observe that? htop doesn't report cache/buffer usage the way free does, so if you rely on htop you're overlooking cache/buffer usage entirely, even though it comes into play.
 
@Adam Howard

Then you don't understand how Linux works, I'm afraid.
Indeed, the memory buffers and cache are there to alleviate slower disk-based access. Having few or no pages in the buffers or cache means every first access has to hit the disk (not a really big deal if you have fast SSD-based disks, but second and subsequent accesses are still always faster, served from buffers/cache).
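
On older free versions (like the CentOS one in the outputs below), the '-/+ buffers/cache:' line is where real application memory usage lives; a hypothetical one-liner to pull just those numbers out (newer free versions replaced this line with an 'available' column):

Code:
 free -m | awk '/buffers\/cache/ {print "apps used: "$3" MB, actually free: "$4" MB"}'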

Quick test on my KVM-based VPS with 512MB memory and an SSD-cached disk (so uncached performance isn't dramatically different from cached/buffered performance, but there is still a measurable difference).

1. Memory state after a sync and drop_caches reset

Code:
 free -m
            total      used      free    shared    buffers    cached
Mem:          499        105        393          0          0        44
-/+ buffers/cache:        61        437
Swap:            0          0          0
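
For reference, the sync and drop_caches reset used throughout refers to the standard procedure (run as root; the exact command isn't shown in the original session):

Code:
 sync                                  # flush dirty pages to disk first
 echo 3 > /proc/sys/vm/drop_caches     # drop page cache plus dentries/inodes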

2. Use dd to create a 256MB file called testfile.txt

Code:
 time dd if=/dev/zero bs=256k count=1024 of=testfile.txt
1024+0 records in
1024+0 records out
268435456 bytes (268 MB) copied, 0.518246 s, 518 MB/s

real    0m0.526s
user    0m0.000s
sys    0m0.420s

This raised cached memory from 44MB to 300MB, while real memory used only rose from 61MB to 65MB:

Code:
 free -m
            total      used      free    shared    buffers    cached
Mem:          499        366        132          0          0        300
-/+ buffers/cache:        65        433
Swap:            0          0          0
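
If you want to confirm it's the test file itself sitting in the page cache, a third-party tool like vmtouch (not part of the original test, and assuming it's installed) can report how many of a file's pages are resident:

Code:
 vmtouch testfile.txt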

3. Now test file transfer performance in the cached state - rsyncing the file took 4.000 seconds

Code:
 time rsync testfile.txt testfile2.txt

real    0m4.000s
user    0m1.865s
sys    0m1.610s

Relatively fast, as the rsync hit the memory cache, raising cached memory from 300MB to 337MB while real memory usage stayed at 64MB:

Code:
 free -m
            total      used      free    shared    buffers    cached
Mem:          499        402        96          0          0        337
-/+ buffers/cache:        64        434
Swap:            0          0          0
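
A read-only variant (not part of the original test) that isolates the cache effect even more cleanly is to time a raw read of the file; with the file cached, this should complete at memory speed:

Code:
 time dd if=testfile.txt of=/dev/null bs=256k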

4. Let's remove both testfile.txt and testfile2.txt, then sync and drop_caches to reset memory back to this state, with 61MB of memory used:

Code:
 free -m
            total      used      free    shared    buffers    cached
Mem:          499        99        399          0          0        38
-/+ buffers/cache:        61        437
Swap:            0          0          0

5. Recreate the 256MB testfile.txt file

Code:
 time dd if=/dev/zero bs=256k count=1024 of=testfile.txt
1024+0 records in
1024+0 records out
268435456 bytes (268 MB) copied, 0.368768 s, 728 MB/s

real    0m0.390s
user    0m0.003s
sys    0m0.284s

6. This time, invalidate the buffer/cache via a sync and drop_caches reset BEFORE running the timed rsync test, so the rsync file transfer isn't hitting buffered/cached memory.

Slower transfer at 4.168s vs 4.000s:

Code:
 time rsync testfile.txt testfile2.txt

real    0m4.168s
user    0m1.886s
sys    0m1.587s

Invalidate the cache again and run another rsync file transfer; again slower at 4.327s vs 4.000s:

Code:
 time rsync testfile.txt testfile2.txt

real    0m4.327s
user    0m1.894s
sys    0m1.853s

7. Now rerun the rsync file transfer while still hitting the buffer/cache from the previous rsync run.

Faster than the uncached rsync transfers at 4.083s, with real memory usage = 63MB and 336MB cached:

Code:
 time rsync testfile.txt testfile2.txt

real    0m4.083s
user    0m1.791s
sys    0m1.800s

Code:
free -m
            total      used      free    shared    buffers    cached
Mem:          499        401        98          0          0        336
-/+ buffers/cache:        63        435
Swap:            0          0          0

Rerun rsync once more while hitting the memory cache; yet again faster at 3.993s:

Code:
time rsync testfile.txt testfile2.txt

real    0m3.993s
user    0m1.899s
sys    0m1.643s



The above illustrates what buffer/cache memory is for.
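
Putting the whole comparison in one place, a minimal sketch of the cold-vs-warm test loop above (assuming root for drop_caches, and that testfile.txt already exists):

Code:
 #!/bin/bash
 # cold run: flush and drop caches so rsync has to read from disk
 sync; echo 3 > /proc/sys/vm/drop_caches
 time rsync testfile.txt testfile2.txt
 rm -f testfile2.txt
 # warm run: testfile.txt is now in the page cache, so this read avoids disk
 time rsync testfile.txt testfile2.txt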

Adam, you're saying you get better performance when buffers and cache aren't in use. For that to be the case, it could only apply to the very first access, since any subsequent access is buffered/cached regardless of what htop reports - htop doesn't report buffer/cache usage (well, not on CentOS at least).

But yes, when htop reports 100% memory used, the free command will also report 100% real memory used, and in that case you'd indeed be hitting swap.
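
To confirm whether swap is actually being hit at that point, the si/so (swap-in/swap-out) columns of vmstat are the usual check; anything persistently non-zero there means active swapping:

Code:
 vmstat 1 5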


Another way to look at this is to run the sar command for memory stats at a 1-second interval while creating the 256MB file: you can see the change in %memused as kbcached and kbbuffers increase from around 17:44:40 onwards. Note that sar reports used memory as including cache and buffers, in line with the correct explanation of the differences in how Linux reports memory usage at http://www.linuxatemyram.com/.
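
The sar invocation would be something along these lines (from the sysstat package; -r reports memory utilisation, sampling every second):

Code:
 sar -r 1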

Code:
17:44:12    kbmemfree kbmemused  %memused kbbuffers  kbcached  kbcommit  %commit
17:44:13      402712    108328    21.20      1932    42672    425868    83.33
17:44:14      402712    108328    21.20      1932    42704    425868    83.33
17:44:15      402712    108328    21.20      1932    42704    425868    83.33
17:44:16      402712    108328    21.20      1932    42704    425868    83.33
17:44:17      402712    108328    21.20      1932    42704    425868    83.33
17:44:18      402712    108328    21.20      1932    42704    425868    83.33
17:44:19      402712    108328    21.20      1932    42704    425868    83.33
17:44:20      402712    108328    21.20      1932    42704    425868    83.33
17:44:21      402712    108328    21.20      1932    42704    425868    83.33
17:44:22      402588    108452    21.22      1932    42704    425868    83.33
17:44:23      402588    108452    21.22      1932    42728    425868    83.33
17:44:24      402588    108452    21.22      1932    42728    425868    83.33
17:44:25      402588    108452    21.22      1932    42728    425868    83.33
17:44:26      402588    108452    21.22      1932    42728    425868    83.33
17:44:27      402588    108452    21.22      1932    42728    425868    83.33
17:44:28      402588    108452    21.22      1940    42724    425868    83.33
17:44:29      402216    108824    21.29      2024    42800    425868    83.33
17:44:30      402216    108824    21.29      2024    42800    425868    83.33
17:44:31      402216    108824    21.29      2024    42800    425868    83.33
17:44:32      402216    108824    21.29      2024    42800    425868    83.33
17:44:33      402216    108824    21.29      2296    42800    425868    83.33
17:44:34      402216    108824    21.29      2304    42796    425868    83.33
17:44:35      402216    108824    21.29      2304    42800    425868    83.33
17:44:36      402216    108824    21.29      2304    42800    425868    83.33
17:44:37      402216    108824    21.29      2304    42800    425868    83.33
17:44:38      402216    108824    21.29      2304    42800    425868    83.33
17:44:39      402216    108824    21.29      2304    42800    425868    83.33
17:44:40      207148    303892    59.47      2684    233928    426428    83.44
17:44:41      134988    376052    73.59      2832    305132    425868    83.33
17:44:42      134988    376052    73.59      2832    305132    425868    83.33
17:44:43      135112    375928    73.56      2832    305132    425868    83.33
17:44:44      135112    375928    73.56      2832    305132    425868    83.33
17:44:45      134988    376052    73.59      2840    305128    425868    83.33
17:44:46      134988    376052    73.59      2840    305132    425868    83.33
17:44:47      134996    376044    73.58      2840    305132    425868    83.33
 