
Brandon Allmark


Cloud Resume Challenge pt 5: Exploring Cosmos DB and Query Languages

Please check out my resume at https://resume.allmark.me!
All feedback, good or bad, is appreciated 😄

Progress thus far:
Certification ✔️
HTML ✔️
CSS ✔️
Static Website ✔️
HTTPS ✔️
DNS ✔️
JavaScript 🚧
Database 🚧
API ❌
Python ❌
Python Tests ❌
Infrastructure as Code 🚧
Source Control ❌
Backend CI/CD ❌
Frontend CI/CD ❌
Blog Post 🚧

What was done

My plan for this part was to configure my database, connect it to my Static Web App, and then create the IP address monitor of my Visitor Counter. Development would happen in the following phases:

1. Phase 1: Collect the visitor's public IP address.
2. Phase 2: Check whether that IP already exists in the database.
3. Phase 3: Update the database: increment the visit tally if the IP exists, otherwise add it with a tally of 1.

This is a big deal because it is one half of my final function: the function that collects the visitor's public IP, increments its visit tally by 1, and, if the IP isn't already in the database, adds it with a visit tally of 1.

How it was achieved

To do this, I first had to connect my database to my new Static Web App. This was pretty simple, as Azure SWAs have a preview feature that allows a direct, secure connection to a database, which means I do not have to configure an API for my project.

Creating a connection between the two required selecting the Cosmos DB database in the portal and creating two files:

  1. staticwebapp.database.config.json
  2. staticwebapp.database.schema.gql
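
Here's a minimal sketch of what the config file can look like. The database name, container name, and "Visitor" entity are assumptions for illustration, and the connection string is pulled from an environment variable:

```json
{
  "$schema": "https://github.com/Azure/data-api-builder/releases/latest/download/dab.draft.schema.json",
  "data-source": {
    "database-type": "cosmosdb_nosql",
    "connection-string": "@env('DATABASE_CONNECTION_STRING')",
    "options": {
      "database": "VisitorDb",
      "schema": "staticwebapp.database.schema.gql"
    }
  },
  "entities": {
    "Visitor": {
      "source": "visitors",
      "permissions": [
        { "role": "anonymous", "actions": ["*"] }
      ]
    }
  }
}
```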

After adding these two files to my repo, the GitHub Actions workflow started detecting my build as Node.js and then failing. I resolved this by deleting the package-lock.json file that was generated when I ran the npm commands to install the Static Web Apps CLI tools.

Phase 1 turned out to be remarkably easy, thanks to the API from ipify.org: a simple API that returns the public IP address of the client making the request. The best part is that it can be queried millions of times a minute without issue, which should be sufficient for my future scaling requirements.
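
Phase 1 really does boil down to a single call. A minimal sketch using the Fetch API:

```javascript
// Phase 1: ask ipify for the visitor's public IP address.
// The ?format=json parameter makes ipify return {"ip": "..."} instead of plain text.
async function getVisitorIp() {
  const response = await fetch('https://api.ipify.org?format=json');
  const data = await response.json();
  return data.ip;
}
```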

Phase 2 started off only half complete. The process queried the entire database for the list of visitor IPs and then checked whether the visitor was in that list. This wasn't ideal, as it displayed every visitor in the console, but it did report whether or not the visitor was in the database.
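
As a rough sketch of that first pass (the "Visitor" entity name, its fields, and the /data-api/rest endpoint shape are assumptions based on the database connections preview):

```javascript
// Phase 2, first pass: pull back every visitor and check membership client-side.
// The data-api wraps result rows in a "value" array.
async function visitorExists(visitorIp) {
  const response = await fetch('/data-api/rest/Visitor');
  const data = await response.json();

  console.log(data.value); // this is what dumped every visitor to the console

  return data.value.some(row => row.ip === visitorIp);
}
```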

The code for Phases 1 and 2 was set aside for later. I had significant difficulties with Phase 3 and decided to focus first on writing the code that manipulates my database. This meant I wasn't interested in recording IP addresses yet; I just had a set of static buttons on my website that performed specific GET, LIST, UPDATE, and DELETE operations.

Phase 3 was easily the most difficult phase of this block. I estimate completing it took me about 20 hours of research, troubleshooting, and trial and error. Creating a new entry in the database was a relatively simple exercise; it was the "check if exists" part that caused me the most grief. In the end, I changed from Cosmos DB to Azure SQL, which let me drop the schema file and query my DB using REST rather than GraphQL.


The journey of coding the JavaScript to increase a value in my database by 1 was long and painful, and I savoured my success by continuously mashing the "Add One" button.
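
For reference, here's a rough sketch of the shape that "Add One" logic takes (the "Visitor" entity, its "ip" key, and its "visits" column are assumptions; the /data-api/rest routes come from the database connections feature):

```javascript
// Phase 3 sketch: bump the tally for a known IP, or create the row if it's new.
async function addOne(visitorIp) {
  // Point read: /data-api/rest/<entity>/<key-field>/<key-value>
  const getResp = await fetch(`/data-api/rest/Visitor/ip/${visitorIp}`);
  const found = getResp.ok ? await getResp.json() : { value: [] };

  if (found.value && found.value.length > 0) {
    // Existing visitor: PATCH the incremented tally back.
    await fetch(`/data-api/rest/Visitor/ip/${visitorIp}`, {
      method: 'PATCH',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ visits: found.value[0].visits + 1 }),
    });
  } else {
    // New visitor: create the row with a tally of 1.
    await fetch('/data-api/rest/Visitor', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ ip: visitorIp, visits: 1 }),
    });
  }
}
```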

Difficulties encountered

The most challenging part was writing the JavaScript that communicates with my database. Thinking about it, even a simple query has a lot of components: JavaScript that uses the Fetch API to send a JSON query via REST/GraphQL to an Azure-hosted SQL database. Each of those items was a potential issue, and it was difficult homing in on the exact cause.

Querying my database for its entire contents was a simple procedure; the hard part was asking for a specific IP address and getting back its visit count. Attempting to do this resulted in a sea of 400 errors, mainly caused by a bad schema as defined in staticwebapp.database.schema.gql. While troubleshooting the 400 errors, I then encountered a 500 error.
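
For context, this is roughly the per-IP lookup I was trying to get working against the GraphQL endpoint (the entity and field names are assumptions; Data API builder exposes filtering through a filter argument and returns rows under items):

```graphql
# Sketch of the per-IP query I was attempting (names are assumptions).
query VisitorByIp {
  visitors(filter: { ip: { eq: "203.0.113.7" } }) {
    items {
      ip
      visits
    }
  }
}
```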

Subsequent research advised me that a 500 error relates to the server, not so much the client. This sent me in the completely wrong direction, and the next 10-15 hours of pain can be summarised as:
1. Reverting to old 'working' code
2. Rebuilding the database in another subscription
3. Updating CORS rules to whitelist my domain
4. Checking health advisory for any issues
5. Rebuilding the database AGAIN, this time with serverless capacity rather than provisioned
6. Adding my primary key as an environment variable to my Static Web App
7. Removing my mutations and custom queries from the schema

It turns out it was a bad schema. Which means it was a client issue. Caused by a simple typo I made while troubleshooting the 400 error.


With the schema situation sorted, I continued working on the 400 error. At this point, I was still stuck trying to create a query that would take the visitor's public IP address and ask the server whether it exists.

After another 10 hours of reading documentation and getting no results, I gave up and decided to try using REST to query my DB instead of GraphQL. This meant spinning up an Azure SQL resource, so I deployed the Azure SQL server and Azure SQL database via the Azure portal. IaC be damned, I just wanted my website to work.

Azure SQL also doesn't require a schema file to connect to the SWA, which significantly reduced my failure surface. This ended up working well, and I was able to write the queries to my database using REST rather than GraphQL.

Reflecting back on what I learned

The first thing I learned is that your schema is also where you define what kinds of queries you're going to make. If you don't define a query, it can't be made and will throw a 400 error. The schema file essentially defines how you plan to interact with the database, including things you directly reference within your code. It's important to keep your code and your schema in sync, because a bad config here causes Bad Request errors.
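
For illustration, a minimal sketch of such a schema file, assuming the same hypothetical "Visitor" entity:

```graphql
# staticwebapp.database.schema.gql sketch. The entity and field names
# ("Visitor", "ip", "visits") are assumptions for illustration. Queries your
# code sends must line up with what this schema exposes, or the data-api
# answers with 400 Bad Request.
type Visitor @model {
  id: ID!
  ip: String!
  visits: Int!
}
```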

I'm pretty happy to learn that and I'm just as happy to throw that knowledge in the bin because I moved to a database that doesn't require it.

I also now understand the hate for Microsoft documentation. I used to think it was great, but I have learned that I was wrong.
