4 bytes for memory management
4 bytes object hash code? I don’t recall
4 bytes class pointer for the object type (8 bytes if the heap limit is > 32 GB)
4 bytes length (signed, hence the maximum array size is ~ 2^31)
So 16-20 bytes of overhead, then rounded up to a multiple of 8 for memory alignment and the compressed pointer trick. (Regular objects: 12-16 bytes, as they don’t have an array length; the object size is known via the class pointer.)
That’s with default settings. It might be possible to tune compressed OOPs to use 32-bit pointers up to 64 GB RAM at the cost of increasing the alignment to 16 bytes. I’m not sure whether you could go down to a 16 GB limit with 4-byte alignment; there may be other places where 8-byte memory alignment is desirable (you might know better than me which CPUs want this kind of alignment). 8 bytes seems to be the best trade-off.
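To make the arithmetic concrete, here is a minimal Java sketch of that estimate, assuming the default 64-bit HotSpot layout described above (16-byte array header, 8-byte alignment, compressed oops). The constants are assumptions about the defaults, not values queried from a running VM; a tool such as OpenJDK’s JOL can dump the actual layout on a given JVM.

```java
/**
 * Back-of-the-envelope size estimate for a Java array on a 64-bit HotSpot
 * JVM with compressed oops (heap <= 32 GB) and the default 8-byte alignment.
 */
final class ArraySizeEstimate {
    static final int HEADER = 16;  // 16-byte array header as estimated above
    static final int ALIGN  = 8;   // -XX:ObjectAlignmentInBytes default

    static long arrayBytes(long elementSize, long length) {
        long raw = HEADER + elementSize * length;
        return (raw + ALIGN - 1) / ALIGN * ALIGN;  // round up to the alignment
    }

    public static void main(String[] args) {
        System.out.println(arrayBytes(1, 0));   // new byte[0]  -> 16
        System.out.println(arrayBytes(4, 3));   // new int[3]   -> 32 (16 + 12, padded)
        System.out.println(arrayBytes(8, 10));  // new long[10] -> 96
    }
}
```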
me says:
What are the overheads in C and C++?
I’d assume that even C malloc needs to keep track of memory allocations, so there *will* be some overhead. I have no idea about the current glibc. I know that optimized allocators exist (not least in template libraries), that memory alignment is common, and I once debugged a poor memory allocator for an MMU-less ARM SoC that simply stored the length before and after each allocated chunk (plus a “free” bit), i.e. 8 bytes of overhead on 32 bit – which was of course incredibly prone to corruption by out-of-bounds writes…
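For illustration, here is a toy model of that header/footer scheme, written in Java over a ByteBuffer so the examples in this thread stay in one language. The class name and layout are my own guesses about such an allocator, not the actual SoC code.

```java
import java.nio.ByteBuffer;

/**
 * Toy model of the allocator described above: each chunk stores its length
 * (with a "free" flag packed into the low bit) both before and after the
 * payload, i.e. 8 bytes of bookkeeping per allocation on a 32-bit system.
 */
final class HeaderFooterHeap {
    private final ByteBuffer heap;
    private int top = 0;                   // bump pointer for new chunks

    HeaderFooterHeap(int size) { heap = ByteBuffer.allocate(size); }

    /** Returns the payload offset, or -1 if the heap is exhausted. */
    int alloc(int payloadBytes) {
        int len = (payloadBytes + 3) & ~3; // keep payloads 4-byte aligned
        int needed = 4 + len + 4;          // header + payload + footer
        if (top + needed > heap.capacity()) return -1;
        int payload = top + 4;
        heap.putInt(top, len);             // header: length, free bit = 0
        heap.putInt(payload + len, len);   // footer: same word again
        top += needed;
        return payload;
    }

    /** Marks the chunk as free by setting the low bit in header and footer. */
    void free(int payload) {
        int len = heap.getInt(payload - 4) & ~1;
        heap.putInt(payload - 4, len | 1);
        heap.putInt(payload + len, len | 1);
    }
}
```

Every allocation pays 8 bytes of bookkeeping, and a single out-of-bounds write past the payload silently clobbers the footer – exactly the kind of corruption described above.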
I’d assume that for C++ with OOP there will also be some type information involved. So I’d expect that on 64-bit systems, overheads of >= 16 bytes for arrays are common across OOP languages, too.
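The Java half of that cross-language claim can be checked roughly from inside the VM. The sketch below just compares heap usage before and after allocating a large batch of tiny arrays; the measurement is noisy and GC-dependent, so treat the result as an order-of-magnitude check rather than an exact figure.

```java
public class ArrayOverheadProbe {
    public static void main(String[] args) {
        final int count = 1_000_000;
        int[][] keep = new int[count][];      // hold references so nothing is collected
        Runtime rt = Runtime.getRuntime();
        System.gc();
        long before = rt.totalMemory() - rt.freeMemory();
        for (int i = 0; i < count; i++) {
            keep[i] = new int[1];             // 4 bytes of payload per array
        }
        long after = rt.totalMemory() - rt.freeMemory();
        // Expect roughly 24 bytes per int[1] (16-byte header + 4-byte payload,
        // padded to 8-byte alignment) plus 4 bytes for the reference in 'keep',
        // assuming 64-bit HotSpot defaults with compressed oops.
        System.out.printf("~%d bytes per int[1] (live references: %d)%n",
                (after - before) / count, keep.length);
    }
}
```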
Well known. And now give the VM 33GB RAM!
You might put in some explanations, too.