Great question! Honestly, this is a bit outside of my systems knowledge, but I would assume that each primitive type would only take up as much space as it needs (i.e. 32 bits for an integer). That said, I'll defer to the systems experts on this one.
Here's a nice discussion that might better address your question.
In addition, I think we can take this question a step further and ask: "Why not always use something like BigInteger?"
I have done a little research on the matter.
First of all, even on 64-bit systems, memory addresses have byte granularity (this is why you can index an individual char in a char array in C). Because of this, variables narrower than 64 bits can be referenced directly, so using an int instead of a long actually saves memory.
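To make the savings concrete, here is a small Java sketch (Java seems to be the context, given the BigInteger discussion). The language spec fixes the primitive widths, so the per-element cost of an int array versus a long array follows directly:

```java
public class PrimitiveSizes {
    public static void main(String[] args) {
        // Java fixes primitive widths: int is 32 bits, long is 64.
        System.out.println("int:  " + Integer.BYTES + " bytes"); // 4 bytes
        System.out.println("long: " + Long.BYTES + " bytes");    // 8 bytes

        // So a million-element int[] needs ~4 MB of payload,
        // while a long[] of the same length needs ~8 MB.
        System.out.println(1_000_000L * Integer.BYTES); // 4000000
        System.out.println(1_000_000L * Long.BYTES);    // 8000000
    }
}
```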
In regards to runtime and performance: arithmetic on 32- and 64-bit integers (on a 64-bit CPU) is done directly in hardware by the ALU. The time difference should not be significant; still, adding two 32-bit ints may be slightly faster than adding longs because of hardware optimizations.
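A rough way to see this for yourself is a micro-benchmark sketch like the one below (hedged heavily: this is not a rigorous benchmark — for real measurements you'd want something like JMH to account for JIT warm-up — but it illustrates that both loops run as single hardware add instructions and land in the same ballpark):

```java
public class AddSketch {
    // Sum 0..n-1 with int arithmetic (n kept small so the int sum
    // does not overflow: sum of 0..9999 is 49,995,000).
    static long sumInt(int n) {
        int s = 0;
        for (int i = 0; i < n; i++) s += i;
        return s;
    }

    // Same sum with long arithmetic.
    static long sumLong(int n) {
        long s = 0;
        for (long i = 0; i < n; i++) s += i;
        return s;
    }

    public static void main(String[] args) {
        int n = 10_000;
        long t0 = System.nanoTime();
        long a = sumInt(n);
        long t1 = System.nanoTime();
        long b = sumLong(n);
        long t2 = System.nanoTime();
        // Both produce the same result; timings are typically close.
        System.out.println("int  sum: " + a + " in " + (t1 - t0) + " ns");
        System.out.println("long sum: " + b + " in " + (t2 - t1) + " ns");
    }
}
```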
In regards to BigInteger: I have to disagree on this one. BigInteger is an abstraction implemented at the software level. It is represented by something like an array of machine words, so arithmetic on a BigInteger holding a value N is not O(1) in time or space — the array has O(log N) words, and addition has to process every one of them.
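For instance, a quick Java sketch of a value that a hardware add can't handle in one instruction:

```java
import java.math.BigInteger;

public class BigIntegerAdd {
    public static void main(String[] args) {
        // 2^100 does not fit in a 64-bit long. BigInteger stores the
        // magnitude as an array of words, so add() walks that whole
        // array rather than executing a single ALU instruction.
        BigInteger a = BigInteger.valueOf(2).pow(100);
        BigInteger sum = a.add(BigInteger.ONE);
        System.out.println(sum); // 1267650600228229401496703205377
    }
}
```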
Thanks for the follow-up! I think a lot of what you shared matched my intuition, although I didn't know the details. Thanks again.
In terms of the BigInteger topic, I totally agree. I was trying to pose the question "why not always go bigger?", and you nailed the response. I figured the answer to that question would be similar to your original question.