On my journey to pick up a new language that would be useful to my career and interests, this time I've been having a go at Go. As first impressions go, it's pretty nice.
This is not a guided tour and, arguably, not written for anyone other than myself - just some personal reminders.
I gave myself a small project for it called Os-Release-Q. My intention was to have a binary on any system I manage, such that I can print out exactly the information I need, without needing to parse or eye-grep for it.
First hurdle: import
Searching the web turns up a lot about importing other people's packages, but very little about organising one's own code. Even the docs focus on go get rather than separation of concerns.
I encounter this hurdle quite a bit in every language, as each has its own idiosyncratic philosophy on how to go about it, and its own limitations that it has or imposes.
Of all the activities I undertook in learning the basics, coming from a predominantly python background, splitting my code into multiple files was what took me the longest to get answers to. In summary, I found the following:
- top level needs a go.mod declaring module module-name
- I can then set a src/ directory at top level, and a src/main.go in which to place my main function, with a package main declaration at the top
- putting code in other files is as simple as creating a file like src/others.go with a package main declaration
- all functions and variables become available directly in any other file of package main, but the files need to be explicitly stated on the go build FILES call - e.g. go build src/main.go src/others.go
For local submodules, the submodule must reside in a folder. It can declare a package submodule-name. Say it is in src/submod/, with the main implementor in src/submod/submod.go. In main.go we do import "module-name/src/submod" (with module-name pulled from go.mod). And then we can call submod.SomeFunction().
Note that submodule functions are only available to importers if their name starts with a capitalised letter. So no doing submod.myFunction() - it has to be submod.MyFunction().
There are surely other considerations around submodules and imports, but as far as keeping code organised and segregated, this is the essentials.
To keep things sane, I'm tempted to have only one file declaring package main, and to isolate the rest into submodules - these get imported automatically without needing to be declared in the go build FILES list of files.
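A minimal sketch of that layout, using the placeholder names from above (module-name, submod, SomeFunction - nothing special about them):

// go.mod, at the top level
module module-name

// src/submod/submod.go
package submod

import "fmt"

// Exported because the name starts with a capital letter.
func SomeFunction() {
    fmt.Println("hello from submod")
}

// src/main.go
package main

import "module-name/src/submod"

func main() {
    submod.SomeFunction()
}

With that in place, go build src/main.go produces the binary, and the submodule comes along without appearing in the FILES list.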
Doing basic tasks
After I had resolved this specificity of Go, the rest fell into place quite easily. For every basic task there was of course a StackOverflow entry, or a GoByExample.com page, and, more basically, the Go language reference.
- String handling is done via the strings package
- Array handling has a number of native functions, chief among them the base_array = append(base_array, item1, item2) pattern - it also works for extending an array with the values of another via append(base, other_array...) (a short sketch follows this list)
- Error handling is typically done by passing out error objects, but not necessarily
- a "log" lib exists for a handy pre-configured no-faffing log. It includes a log.Fatal(message) call which logs an error as well as immediately exiting
- Calling subprocesses is easy via the "os/exec" library, using the exec.Command(base, args...) pattern
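A small sketch of the append and logging bits together (the values are arbitrary, just to show the shape of the calls):

package main

import (
    "log"
    "strings"
)

func main() {
    // append individual items to a slice
    base := []string{"a"}
    base = append(base, "b", "c")

    // extend with the contents of another slice
    more := []string{"d", "e"}
    base = append(base, more...)

    // the pre-configured logger needs no setup
    log.Println(strings.Join(base, ", ")) // a, b, c, d, e

    // log.Fatal would log and exit immediately, e.g.:
    // log.Fatal("something went irrecoverably wrong")
}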
Compile-time errors were by and large intuitive, based on a basic understanding of the language (it's not the first time I've looked at it, and on the surface, it's a very simple and straightforward syntax).
Two particularly common tasks deserve their own paragraphs.
Error handling
Basic error handling is often described as cumbersome, since you literally need to handle errors in the midst of control flow. This may be anathema to programmers coming from a try/catch workflow, but handling the issue at the point where it can happen isn't so bad.
// the explicit return value `err` forces us to be aware of it,
// but being able to check it in the same breath is not so bad
result, err := someCall()
if err != nil {
    log.Fatal("Sorry.")
}
fmt.Println(result)
Compare the try/catch way:
try:
    result = someCall()
except Exception:
    print("Sorry")
    exit(1)
print(result)
I've seen hints that people are unhappy with the whole "got to check err in the middle of my code" thing, which certainly removes the ability to group-handle errors. For example, in Python:
try:
    res = do_thing()
    res = and_more(res)
    res = and_other(res)
except ErrType as e:
    print(e)
    exit(1)
If ErrType is in fact thrown by all three, this is a handy way to catch each scenario. In Go, it's a little bit more cumbersome:
func do_all_the_things() error {
    res, err := do_thing()
    if err != nil {
        return err
    }
    res, err = and_more(res)
    if err != nil {
        return err
    }
    res, err = and_other(res)
    if err != nil {
        return err
    }
    _ = res // the final result is unused in this sketch
    return nil
}

func main() {
    if err := do_all_the_things(); err != nil {
        log.Fatal("Sorry")
    }
}
This is very much predicated on long sequences of calls wanting to be grouped under the same error-handling routine, and how often that actually happens to people I cannot say. I guess I'll find out in my own time, but I remain, for now, unfazed.
Argument Parsing
I can't help but feel that the implementation of the flag library is a bit half-baked. Evidently people are used to it and OK with it, given its survival in its current form.
Calling program -flag arg1 arg2 gives us the toggle that -flag is set up to do, and positionals := flag.Args() returns us the array ["arg1", "arg2"].
However, calling program arg1 arg2 -flag does not toggle whatever -flag is supposed to do, and instead gives us positionals as ["arg1", "arg2", "-flag"], wherein the flag was not parsed.
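A minimal sketch of that behaviour with the standard flag package (the flag name and program name are just illustrative):

package main

import (
    "flag"
    "fmt"
)

func main() {
    toggle := flag.Bool("flag", false, "an example toggle")
    flag.Parse() // parsing stops at the first non-flag argument
    fmt.Println("toggle:", *toggle)
    fmt.Println("positionals:", flag.Args())
}

// ./program -flag arg1 arg2  ->  toggle: true   positionals: [arg1 arg2]
// ./program arg1 arg2 -flag  ->  toggle: false  positionals: [arg1 arg2 -flag]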
This may be useful for passing in a sub-call like program colorize ls -l, where the ls -l is passed down literally - so I can see a use case.
It's just that most programs out there allow flag arguments anywhere around positional items: ls dir1/ -l dir2/ is the same as ls -l dir1/ dir2/, and this is a convention that holds for the vast majority of Unix and Linux commands.
It may be just that this is something to get used to - and worth calling out.
Purpose and use case of Go
The file import paradigm aside, I found it pretty easy to get my basic application implemented. Anything I did wrong felt fairly obvious and the errors were meaningful. It really does feel like I can just focus on "getting things done."
From my very meagre amount of use so far, and taking my specific needs into account, I can see the following advantages:
- easy to get started
- compiled binary, no runtime dependency
- simple language with types is a step up from shell scripting
- allegedly easy multiprocessing support
I thought having sparse types instead of objects and inheritance would be a hindrance, but so far so good. I get by without them in other languages, so I suppose when I do get around to defining interfaces and types, it will feel like a step up from Lua and bash. I hope.
One of the reasons I wanted to explore a compiled-to-native language was to be able to produce binaries that could easily be shunted around, without needing to rely on a particular version of a runtime being present.
A colleague once walked up to my desk in dismay, trying to work out how to get Java 17 onto an old Node base image that was based on Debian 10. Either he'd have to upgrade the Node version to get a newer base image, use a new Debian base image and install and configure Node manually, or scour the internet for a custom repo hosted by goodness-knows-who for a goodness-knows-if-hacked Java 17 that would run on Debian 10. Each solution would have ended up entailing further refactoring of the code base.
How much easier if the deployed software had no such conflicting runtime dependencies...
From an ops point of view, the one big gain I stand to feel is this: I can easily write code, build an ELF binary, and deploy it on "arbitrary system X" without having to contend with ensuring the right version of a given runtime is in place, or with managing conflicting dependencies.
I've seen it touted before as an alternative glue language to replace shell scripting, and I can just about see that. One of the big advantages of shell scripting is the ability to pipe outputs from one command directly into another - but most often, the things that sit in that pipe are sed, grep, awk and other text-massaging utilities. These can be replaced by language-native functions, and my basic experience here with getting output from a command has been pretty good. The only impediment in this scenario is that the Go program needs to be compiled, whereas the shell script can just run. It really will depend on what the deployment requirements are, but as soon as "runtime dependency" kicks in, I would be fairly tempted to reach for Go.
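As a rough illustration, a shell pipe like mount | grep ext4 could be replaced with string functions on captured output (the command and the search string here are arbitrary stand-ins):

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    // run the command and capture its stdout instead of piping it
    out, err := exec.Command("mount").Output()
    if err != nil {
        fmt.Println("error:", err)
        return
    }
    // filter lines natively rather than shelling out to grep
    for _, line := range strings.Split(string(out), "\n") {
        if strings.Contains(line, "ext4") {
            fmt.Println(line)
        }
    }
}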
I'm sure there are other benefits, and I have heard a great deal said about the ease of use of multithreading and multiprocessing in Go, and I do intend to cook up a mini project to explore that as a next step - probably something that might listen for inputs on multiple channels, and perform some basic tasks in response. I have had a use-case for that in some test automation tasks before, so it's not alien to me at this point.
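For my own future reference, a tiny preview of what that might look like with goroutines and select (everything here is hypothetical, not part of the project above):

package main

import (
    "fmt"
    "time"
)

func main() {
    ticks := make(chan string)
    jobs := make(chan string)

    // one goroutine emitting periodic events
    go func() {
        for {
            time.Sleep(300 * time.Millisecond)
            ticks <- "tick"
        }
    }()

    // another goroutine feeding a finite stream of work
    go func() {
        for _, j := range []string{"job-1", "job-2"} {
            time.Sleep(500 * time.Millisecond)
            jobs <- j
        }
        close(jobs)
    }()

    // select reacts to whichever channel delivers next
    for {
        select {
        case t := <-ticks:
            fmt.Println("got", t)
        case j, ok := <-jobs:
            if !ok {
                fmt.Println("all jobs done")
                return
            }
            fmt.Println("running", j)
        }
    }
}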
Top comments (2)
Thanks for sharing your experience with Go! How do you feel about the error handling style in Go compared to Python after using it for a while?
Somehow I have only just seen this comment...
I've put the Go experiments on hold whilst I dive a level deeper with Zig, but I do expect to come back to this at some point.
As noted in the main post, I haven't yet hit scenarios where the immediate piecemeal error handling has been an actual detriment in any way.
A fairly common pattern I find in try/catch error handling, though, is putting entire blocks in the try section, which I am increasingly trying to move away from - at which point it is very similar to immediate handling anyway...