Lane Wagner

Originally published at qvault.io

Don’t Go To Casting Hell; Use Default Native Types in Go


Go is strongly typed, and with that, we get many options for simple variable types like integers and floats. The problem arises when we have a uint16, and the function we are trying to pass it into takes an int. We find code riddled with int(myUint16) that can become slow and annoying to read.
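Here's a minimal sketch of what that noise looks like. takeInt and port are made up for illustration; takeInt just stands in for any function that wants a plain int.

package main

import "fmt"

// takeInt stands in for any function, ours or the standard library's,
// that accepts a plain int.
func takeInt(n int) {
	fmt.Println(n * 2)
}

func main() {
	var port uint16 = 8080

	// takeInt(port) // compile error: cannot use port (type uint16) as int
	takeInt(int(port)) // every call site needs an explicit conversion
}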

Go’s basic types are:

bool
string
int  int8  int16  int32  int64
uint uint8 uint16 uint32 uint64 uintptr
byte // alias for uint8
rune // alias for int32
     // represents a Unicode code point
float32 float64
complex64 complex128

There are five types that can represent a signed integer, five for an unsigned integer (six if you count uintptr), two for a float, and two for a complex number. The compiler has defaults of a sort (untyped constants default to int, float64, and complex128), and the standard library plays favorites even more obviously.

For example, the math/cmplx package, which does math with complex numbers, accepts and returns complex128 exclusively.

With floats, the vast majority of the math package has function signatures that use float64. In the same package, ints are usually just the plain int type, and unsigned integers are typically uint32.
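As a quick illustration (a sketch of those signatures in use, not code from the original post), both packages can be called with the default types and nothing needs converting:

package main

import (
	"fmt"
	"math"
	"math/cmplx"
)

func main() {
	// math/cmplx works exclusively with complex128
	c := complex(3, 4)        // untyped constants default to complex128 here
	fmt.Println(cmplx.Abs(c)) // 5

	// math leans heavily on float64
	fmt.Println(math.Sqrt(2)) // 1.4142135623730951
}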

These are what I’ve come to refer to as the “default native types”:

bool
string
int
uint32
byte
rune
float64
complex128

Why Do We Care About Defaults?

There is a good reason that the majority of code uses these values. In all of the above cases, the choice of a specific sub-type is based on range and precision. An int8 can store values between -128 and 127, while an int64 ranges from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. At the same time, an int8 uses a single byte while an int64 uses eight times that.
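A quick sketch of that trade-off, using unsafe.Sizeof and the math limit constants (the constants need Go 1.17 or later; the specific prints are mine, not the original post's):

package main

import (
	"fmt"
	"math"
	"unsafe"
)

func main() {
	// Range: int8 tops out at 127, int64 at over nine quintillion
	fmt.Println(math.MinInt8, math.MaxInt8) // -128 127
	fmt.Println(int64(math.MaxInt64))       // 9223372036854775807

	// Size: int8 occupies one byte, int64 occupies eight
	fmt.Println(unsafe.Sizeof(int8(0)), unsafe.Sizeof(int64(0))) // 1 8
}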

The defaults above were chosen in the standard library (and by the vast majority of Gophers) because they are the common-sense, works-most-of-the-time, big-enough-range values. Exposing a rounding function for float32 simply wouldn't be as useful as one for float64; it couldn't handle as many values.

func Round(x float64) float64

If you have a float32 that you want to round, you first need to cast it:

math.Round(float64(myFloat32))

This is not only clunky to read, it isn't free: each conversion costs an extra operation, and converting a whole collection of values means allocating new memory for the copies. My advice is to use the default type (float64 in the case of floats) in your applications unless you have a compelling reason not to.
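Put concretely, keeping float64 end to end makes the conversions disappear. circleArea below is a made-up helper, not something from the post or the standard library:

package main

import (
	"fmt"
	"math"
)

// circleArea takes and returns float64, so it composes with the
// math package with no conversions at any call site.
func circleArea(radius float64) float64 {
	return math.Pi * radius * radius
}

func main() {
	fmt.Println(math.Round(circleArea(2.5))) // 20
}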

When Not To Use Default Types

Performance and Memory.

That’s about it. The only reason to deviate from the defaults is to squeeze out every last bit of performance when you are writing an application that is resource-constrained. (Or, in the special case of uint64, when you need an absurd range of unsigned integers.)

For example, I probably wouldn’t swap out a single uint32 for a uint8, even if I was certain the values would fit in 8 bits. However, if I have a slice of uints that can potentially hold thousands of values, I may see significant memory savings by doing a few type conversions and using a []uint8 instead.
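The arithmetic, as a rough sketch (the slice lengths here are arbitrary, and this counts only the backing arrays, not the slice headers):

package main

import (
	"fmt"
	"unsafe"
)

func main() {
	// A single value: the difference is three bytes, rarely worth a conversion
	fmt.Println(unsafe.Sizeof(uint32(0)), unsafe.Sizeof(uint8(0))) // 4 1

	// Thousands of values: the difference adds up
	big := make([]uint32, 100000)
	small := make([]uint8, 100000)
	fmt.Println(len(big) * int(unsafe.Sizeof(big[0])))     // 400000 bytes of backing array
	fmt.Println(len(small) * int(unsafe.Sizeof(small[0]))) // 100000 bytes of backing array
}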

A good example of this is the pair of packages I maintain, go-tinydate and go-tinytime. Usually I encourage users NOT to use them and to just use the default time.Time. However, in my backend career, there have been applications that went from requiring 16GB of RAM down to less than 4GB by making the swap.

Use Defaults

Make your life and the lives of your coworkers easy. Use the defaults unless you have a very compelling reason not to.

Thanks For Reading

Hit me up on Twitter @wagslane if you have any questions or comments.

Follow me on Dev.to: wagslane


Top comments (2)

Riccardo Bernardini

I program in Ada and we (most Ada programmers) have a different view: define your types based on what they actually mean and leave to the compiler the problem of mapping them to native types (unless you have special needs like fitting them into a packet).

For example, a type representing a dice outcome could be defined as:

type Dice_Outcome is range 1..6;

This says that Dice_Outcome is an integer that assumes values in the range from 1 to 6. What is the underlying native type? Is it a byte (to save space) or maybe an int64 (that the processor handles more efficiently)? I do not know and (in most cases) I do not care. (If I care, I can instruct the compiler to make a specific choice.)

What about the "casting hell"? Well, my experience is that if you define your types correctly, you usually do not have much need for casting. If a function expects a TCP port number as an argument and you are trying to pass a Dice_Outcome... Hmmm... the thing is fishy... and it is more probable that you made a mistake somewhere. Sure, every now and then you need conversions, but they are quite rare. Using types with a "meaning" prevents mixing and the need for casts.

Freedom

Your post is an unpopular opinion on Reddit, and this thread explains why:

reddit.com/r/golang/comments/gnge5...