Are we still mostly using 4 KiB pages though? I would think 64-bit architectures and desktop systems would be using 16 KiB or even 64 KiB pages by now. What is the page size on your PC?
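(A quick way to answer that on any POSIX system is the sketch below; the shell one-liner `getconf PAGE_SIZE` reports the same value. On most x86-64 Linux desktops it still prints 4096.)

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* sysconf(_SC_PAGESIZE) reports the base page size the kernel
       exposes to user space (huge pages are separate from this). */
    long page_size = sysconf(_SC_PAGESIZE);
    printf("page size: %ld bytes\n", page_size);
    return 0;
}
```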
Under Linux a large allocation can be done with an anonymous mapping using mmap. The mapping is initially backed by the kernel's zero page, which is read-only and managed by the OS; as long as you don't write to a page, no page frame is allocated for it. And you won't read garbage from an untouched page, since the zero page is all zeros. Only when you write does the copy-on-write mechanism kick in and a page frame get allocated. That is the point at which physical RAM is actually used.
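(A minimal sketch of what this describes, using standard mmap flags; the 1 GiB size is just an illustrative choice:)

```c
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    size_t len = (size_t)1 << 30; /* reserve 1 GiB of address space */

    /* MAP_ANONYMOUS | MAP_PRIVATE: no backing file; untouched pages
       are mapped copy-on-write onto the shared zero page. */
    unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* A read hits the zero page: no page frame allocated, value is 0. */
    printf("first byte reads as: %d\n", p[0]);

    /* The first write to the page triggers the copy-on-write fault;
       only now does the kernel allocate a physical frame for it. */
    p[0] = 42;

    munmap(p, len);
    return 0;
}
```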
Ivan says:
I think this post is highly confusing for junior developers.
There is a point in micro-optimizing memory allocation if you do many small allocations. They do not magically become cheap just because 100 of them fit in a 4 KiB page. Those allocations still need to be accounted for so that malloc and free work; they are not free.
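(A rough illustration of that bookkeeping cost, as a sketch assuming glibc, whose malloc_usable_size reports how large a block really is: tiny requests get rounded up for metadata and alignment, so the overhead per allocation can exceed the payload.)

```c
#include <malloc.h> /* malloc_usable_size (glibc-specific) */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Each allocation carries allocator metadata and alignment
       padding, so asking for 1 byte costs well more than 1 byte. */
    for (size_t request = 1; request <= 64; request *= 2) {
        void *p = malloc(request);
        if (!p) return 1;
        printf("requested %2zu bytes, usable block is %zu bytes\n",
               request, malloc_usable_size(p));
        free(p);
    }
    return 0;
}
```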