Stories sliced, goals set, it is now time to spin up the cart.
Config and project setup have always been awkward for me. You only bootstrap a project once in its lifetime, so none of the steps you have to take to get it code-ready live in my muscle memory or my brain memory. After all, I'm not constantly starting greenfield projects.
But start we must. So in the beginning, here's what we did:
Bootstrapped a Gradle project using https://start.spring.io/. We added the JPA, Devtools, and Postgres dependencies, all of which you can search for on the site. We will be using MyBatis to run the migrations, but not as a mapper; mapping will be handled "under the hood" by the JPA repository library.
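For reference, the dependencies section of the generated build.gradle looks roughly like this (exact coordinates and versions come from start.spring.io, so treat the specifics as assumptions; the MyBatis Migrations tool is a standalone CLI and doesn't appear here):

```groovy
// build.gradle (sketch) -- dependency coordinates as generated by start.spring.io
dependencies {
    // JPA: Hibernate + Spring Data repositories
    implementation 'org.springframework.boot:spring-boot-starter-data-jpa'

    // Postgres JDBC driver, only needed at runtime
    runtimeOnly 'org.postgresql:postgresql'

    // Devtools: automatic restarts and live reload during development
    developmentOnly 'org.springframework.boot:spring-boot-devtools'
}
```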
Initiated a Git repo (remote and local) for Fruit Cart and performed our first commit of the initial codebase.
Installed and ran a Postgres server. Followed these instructions to create a new database 'fruitcart' with superuser 'fruitcart':
Start the Postgres server and set up the database:

* Check that the fruitcart superuser exists: run "psql -c '\du'"
* If the user does not exist: run "c" (no need to set a password; it defaults to 'postgres')
* Create the fruitcart database: run "./gradlew initdb"
* Create migrations for the fruit table with columns id, description, and name, then run "./gradlew createdb"
* To add some data: run "psql -h localhost -U fruitcart --password -d fruitcart < db/backup.sql"
Ran a clean build to see if there were any errors (spoiler alert: there were--the database wasn't configured properly).
Created Gradle tasks to initialize and create our database on clean build. After we got our database up and running correctly on port 5432, we were ready to start on our first tests.
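A rough sketch of what tasks like ours could look like. The task names (initdb, createdb) come from earlier in this post; the command bodies and the db working directory are assumptions, and the wiring that hooks them into the clean build is omitted:

```groovy
// build.gradle (sketch) -- database tasks; commands and paths are assumptions
task initdb(type: Exec) {
    description = 'Create the fruitcart database'
    commandLine 'createdb', '--owner=fruitcart', 'fruitcart'
}

task createdb(type: Exec) {
    description = 'Run pending MyBatis migrations'
    // Migrations must run from the directory that holds the
    // migration environment (more on that below)
    workingDir 'db'
    commandLine './mybatis/bin/migrate', 'up'
}
```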
But let's back up a second. Let's talk takeaways.
First, migrations must be created and run from the command line in the same directory where your database environment is located. I did know this before, but it bears repeating because I routinely forget it (hey, this blog is a resource for me too).
./mybatis/bin/migrate new name-of-migration
This should create a file (whose name begins with a timestamp--migrations run in the order in which they were created) in the scripts directory. Use this file to add your SQL statements.
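For the fruit table mentioned above, the filled-in migration might look something like this (the filename and column types are assumptions--the post only names the columns; the "-- //@UNDO" marker is how MyBatis Migrations separates the rollback statements):

```sql
-- scripts/20190101000000_create-fruit-table.sql (hypothetical filename)
CREATE TABLE fruit (
    id          BIGSERIAL PRIMARY KEY,  -- assumed auto-incrementing key
    name        VARCHAR(255) NOT NULL,
    description TEXT
);

-- //@UNDO

DROP TABLE fruit;
```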
Second: If you treat it right, .gitignore is your friend.
So there are certain files we don't want to appear in our remote repo. I usually think of .gitignore as a convenient place to hide secrets, but more practically it lets us exclude the pieces of our project that are specific to our local builds and unnecessary for building the project in other environments--files that track changes in our workspace or build, for example.
We used https://gitignore.io/ to generate the correct .gitignore file for IntelliJ, our IDE for this project. It's a little overkill: we don't have Jira or the Crashlytics plugin enabled (pure IntelliJ CE for us), but it gives us a good sense of what should be included. So we just copied and pasted it into our very own .gitignore. And all is well.
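A heavily trimmed sample of what that generated file contains (the real gitignore.io output for IntelliJ is much longer; this is just the flavor of it):

```
# IntelliJ workspace state -- local to each developer
.idea/workspace.xml
.idea/tasks.xml
.idea/usage.statistics.xml

# Build output
.gradle/
build/
out/
```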
Well, not so much for Jeff.
One of the .idea files--workspace.xml--kept escaping from the .gitignore. He would try to commit/push his code, and there would be a failure: there were changes to his workspace.xml file that weren't tracked. But of course there were changes, and of course they shouldn't be tracked: it's a file that records his location in the IDE and other internal state particular to his workspace. For some reason, .gitignore was not telling git to ignore this file.
Turns out, if you have already committed that file and pushed it to your remote repo, it will haunt you. In Jeff's words, "It's a piece of sh**".
Solution: Delete the .idea folder (it will be generated automatically by IntelliJ anyway), commit your changes, then add .idea/workspace.xml to your .gitignore, and add/commit/push all that up.
tl;dr: if a file has already been committed to your repo, it will be tracked regardless of whether or not you put it in .gitignore.
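That whole cycle--commit the file, ignore it too late, untrack it--can be sketched end to end. Here it is demonstrated in a throwaway repo so the commands are safe to run as-is (the key step is `git rm --cached`, which removes the file from the index without deleting it from disk):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

# Reproduce the problem: workspace.xml committed before it was ignored
mkdir .idea
echo '<project/>' > .idea/workspace.xml
git add .idea/workspace.xml
git commit -qm "oops: committed workspace.xml"

# Adding it to .gitignore now is NOT enough -- git keeps tracking it
echo '.idea/workspace.xml' > .gitignore

# The fix: remove it from the index (it stays on disk), then commit
git rm -q --cached .idea/workspace.xml
git add .gitignore
git commit -qm "stop tracking workspace.xml"

# workspace.xml still exists locally but is no longer tracked
git ls-files | grep workspace.xml || echo "untracked"
```

In our case the simpler route was to delete the whole .idea folder as described above, since IntelliJ regenerates it; `git rm --cached` is the surgical version when you want to keep the file locally.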
Third: there may be manual set up of SQL with your Postgres DB.
Our build was failing. Turns out our DB wasn't being created. So weird, given that Postgres was running (I think it always is on my machine; that and Docker). But the database wasn't being created. It just wasn't there.
Turns out we had to write a shell script to actually create the database, and when we did we assigned it an owner, fruitcart, that didn't actually exist.
Our shell script called createdb just fine, and we executed the task in our build, but creating the database would fail because there was no superuser--or even a user with write permissions. Basically, we hadn't followed the first two bullets of the instructions at the beginning of this post. Once we had, our fruit cart snapped into existence.
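A sketch of what a script that does both steps--user first, then database--could look like. The user and database names come from this post; the exact commands are assumptions, and since the script needs a live Postgres server, it's written to a file here rather than executed:

```shell
tmp=$(mktemp -d)
cat > "$tmp/initdb.sh" <<'EOF'
#!/bin/sh
set -e

# Create the fruitcart superuser only if it doesn't exist yet
psql -tAc "SELECT 1 FROM pg_roles WHERE rolname = 'fruitcart'" | grep -q 1 \
  || createuser --superuser fruitcart

# Now createdb won't fail: the owner actually exists
createdb --owner=fruitcart fruitcart
EOF
chmod +x "$tmp/initdb.sh"
```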
Now all this took so much longer than we thought. And this is going to be a running theme with us: we spike, go into internet wormholes, and spend a lot of time exploring. We looked at getting Jetty working and discarded it after numerous failures. We spent a long time with workspace.xml. We tinkered in Postgres. It's getting better. But it's hard to even know what questions to ask when you're learning. It's hard to even know where to look when you don't know what you don't know.
And after all, that's what this is: learning.