
Brian Crites


Twilio Hackathon Project: REACH Out

What We Built

During the global COVID-19 pandemic and the shelter-in-place orders that have followed, we have seen large numbers of people cut off from their traditional social safety networks. One area where this is particularly pronounced is among independent seniors, of whom there are over 11 million in the US alone [1]. This group has not only been cut off from its traditional social safety network but is also the most at risk when it comes to leaving home to reach out to that network.

At the same time, there has been a huge increase in the number of people reaching out to every form of social safety network: food banks, government agencies, doctors and hospitals, and so on. This has put an increasing strain on these networks, requiring them to provide more and more support with dwindling resources.

To try to bridge that gap we developed REACH Out. REACH stands for Remote Elderly Assessment of Care and Health, and it is a system to help social safety networks of all types (doctors, nurses, social workers, non-profits, etc.) automatically check in regularly with large numbers of seniors and identify those who may be most at risk, so they can focus their scarce resources on effective interventions.

Link to Code

brrcrites/reach-out on GitHub: https://github.com/brrcrites/reach-out

Twilio x Dev.to Hackathon Submission

How We Built It

TL;DR

  • MERN stack (MongoDB, ExpressJS, React, NodeJS)
  • Webpack for bundling
  • Docker containers (and docker-compose) for orchestration
  • Slack for communication
  • GitHub for version control and issue tracking

Our Process

There were only two of us working on the project, and in addition to respecting social distancing measures and shelter-in-place orders we are also geographically distributed (one in San Jose and one in Santa Monica). This meant that while in the past we could have sat down together and hashed much of this out, this time we needed a more asynchronous process. On top of this, Jeffrey was still working his full-time lecturing job remotely due to the COVID-19 campus closure, and Brian was actively applying for a new position, having been part of a round of COVID-19 related layoffs at Bird.

All of this meant that we were working on the project at very different times and only able to work sporadically throughout the hackathon period. To help us coordinate, we set up a dedicated Slack channel to communicate and coordinated our changes through GitHub. We created issues and a Kanban board using GitHub's Issues and Projects features to keep track of our progress and who was working on which pieces at any given time. We set up our branches and workflow to minimize conflicts and allow us to work as independently and efficiently as possible. Here are some of the things we found useful for coordination, many of which we have used in other projects as well:

We made all changes to master through pull requests

Generally we used a feature branching scheme where new features and patches each had their own branch off of master, which was merged back into master through a pull request. We tried to keep features and patches atomic and related to a single issue, and we used the "Squash & Merge" option to clean up the final commit message going into master.

We (tried to) write good and consistent commit messages

Brian has always been a fan of this advice from Chris Beams suggesting rules for good commit messages, so we generally followed it in this project. The combination of well-written commit messages and actively using issue tracking for features and bugs meant that we generally had a good sense of recent changes (both over the life of the project and when looking at an individual branch).

We locked the master branch behind passing tests

One of the things we added fairly early in the process, after we had put together a skeleton of the project, was continuous integration. We used Travis CI since we both had experience working with it previously. We then locked the master branch so that PRs could not be merged unless they passed all the tests, to guard against regressions. Since we had limited time there isn't as much test coverage as we would like, but we tried to cover the major components that were likely to regress. It didn't catch everything, but it helped.

We didn’t require code reviews on all pull requests

This might sound crazy to people who have worked in companies with very strong code review requirements, but hear me out. Since both of us had limited hours, we tried to only request reviews for portions of the code that we felt needed an extra set of eyes to look out for possible bugs, or that the other person needed to understand in order to be effective in creating their next feature.

We put in some temporary mocks to keep each other unblocked

There were several times when portions of the system necessary for a full feature had not been completed yet. An example might be an endpoint to retrieve some data from the database when the new table hasn't been finished. Rather than being blocked on the new feature, we would build in a temporary mock that allowed us to move forward, such as returning static data from an endpoint until the new table was complete. When the underlying component was complete, we wired it in and updated anything downstream that was affected by having had incomplete information (for example, updating the frontend because the format of the static data didn't quite match the format of the database data).
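
As a rough illustration of the pattern (not our actual code), a mocked Express endpoint can simply return hard-coded data until the real database query is ready; the route and payload here are made up:

// Hypothetical example of a temporary mock: serve static data from the
// endpoint until the real MongoDB-backed query is wired in.
const express = require('express');
const app = express();

app.get('/messages', (req, res) => {
  // TODO: replace with a real query once the messages collection exists
  res.json([
    { id: 1, to: '+15555550100', body: 'Did you take your vitamins?', sent: true }
  ]);
});

app.listen(8081, () => console.log('mock server listening on 8081'));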

Our Journey

By the end of the project we had ended up with a MERN stack (MongoDB, ExpressJS, React, NodeJS) for our development, with Webpack creating the bundles and everything running inside of Docker containers. We had three Docker containers in total: the client, the server, and the database. These containers were built, networked, and brought up using docker-compose to make local development easier. While this feels like a very typical stack for the type of system we built, we essentially started out with "let's just use JavaScript throughout the whole thing" and figured out the rest as we went.

Jeff's Side Note: I had brought up a couple of Docker containers the week before in an effort to create autograders hosted on Gradescope for my Embedded Systems and Computer Architecture courses at UC Riverside. That and some previous JS programming experience were the only familiarity I had with this technology stack.

Brian's Side Note: I had been working with React for a frontend project at Bird and wanted to leverage what I had learned there in this project. Neither of us has a strong JavaScript background, but I didn't want to be switching languages every time I moved between the frontend and the backend, so it was my suggestion that we use JavaScript throughout.

A Tangled Web of Docker Containers

When we first started working on this project we were building the system through npm/webpack commands directly. While this made development fairly quick, we wanted to leverage Docker to make the build process consistent across everyone's machines (both ours in the short term and users' in the longer term). As we started moving to Docker we built separate containers for the server and the client, and we were originally bringing up each container separately and having them communicate through exposed ports.

# build and run the client, exposing port 8080
$ cd client; docker build .
$ docker run --rm -d -p 8080 <image from build>
# build and run the server, exposing port 8081
$ cd ../server; docker build .
$ docker run --rm -d -p 8081 <image from build>

Any changes required us to bring down the client and/or server and bring it back up. The --rm flag removes the container when it exits, preventing a lot of dead containers from piling up. This was already a cumbersome process, and when we first looked into integrating the database we decided it would be inefficient. Jeffrey happened upon this Dev.to post on how to use Docker containers while building MERN apps.

Jeff’s Side Note: I forgot what blog post I got this information from (lots of tabs open at the time), so I had to go back to the commit history to find when I integrated docker-compose then back in my browser history to that date to find what article first mentioned it.

The post described how to create multiple containers and then bring them up together using the docker-compose system, so Jeff started building out an initial docker-compose.yml file. This docker-compose.yml file brought up a MongoDB container, a server container, and a client-side container, and connected all of them through a Docker network. This gave us a much easier [2] build and development process, requiring only a single command to bring the project up or tear it down:

$ docker-compose up --build     # bring up the project
$ docker-compose down           # tear down the project
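
We won't reproduce our exact configuration here, but a minimal docker-compose.yml for this kind of three-container layout looks roughly like the following (service names, ports, and paths are illustrative, not necessarily what our repository uses). Compose puts all services on a shared network where they can reach each other by service name:

version: "3"
services:
  client:             # React frontend bundled by Webpack
    build: ./client
    ports:
      - "8080:8080"
  server:             # Express API
    build: ./server
    ports:
      - "8081:8081"
    depends_on:
      - mongo
  mongo:              # MongoDB, reachable from the server as "mongo"
    image: mongo
    ports:
      - "27017:27017"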

From Logging to Texting

The database gave us a persistent layer to store our messages across executions, meaning we didn't need to re-generate test data each time we spun up the server. The core server functionality of our app was built around a system that sends recurring messages out to users and correlates responses with the messages they are in response to. For this system we chose cron-style task scheduling, more specifically the node-schedule package, to avoid having to re-shim cron ourselves.
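
For a sense of what that looks like, here is a minimal node-schedule sketch (illustrative only, not our exact scheduling code) using an object-literal recurrence rule, where any field left unspecified matches every value:

// Minimal node-schedule sketch: schedule a recurring daily check-in.
const schedule = require('node-schedule');

// Runs every day at 09:00 server time; fields left out match every value.
const job = schedule.scheduleJob({ hour: 9, minute: 0 }, () => {
  console.log('Time to send the check-in message');
});

// Jobs can be cancelled later, which is what the "delete" in our CRUD
// operations ultimately calls.
// job.cancel();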

For our initial development we had the cron job simply log to the console that it executed correctly rather than actually send a text message. This was primarily done to avoid using up all our Twilio credits, and also so our phones weren't vibrating every minute during testing. It was especially useful in early testing when we accidentally created crons that ran every second! Once we had the major bugs ironed out, rather than simply replacing the console logging with Twilio SMS messaging we kept both and added a selector field to the endpoint that creates the crons. This way we could still use console logging for the majority of our debugging and only use the SMS option for "live fire" testing (a feature you can see in the app today). We created a class around the cron jobs, making it easy to perform CRUD operations on them and to act as a central point for logging messages that had been sent to MongoDB.
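
Conceptually the selector boils down to something like this (a hedged sketch; the field names and helper are hypothetical, and the Twilio calls assume the standard Node helper library):

// Hypothetical sketch of the console-vs-SMS selector.
const twilio = require('twilio');
const client = twilio(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN);

async function deliver(body, to, mode) {
  if (mode === 'sms') {
    // "Live fire" path: actually send the text through Twilio.
    await client.messages.create({
      body,
      to,
      from: process.env.TWILIO_FROM_NUMBER
    });
  } else {
    // Default debugging path: just log what would have been sent.
    console.log(`[dry run] to ${to}: ${body}`);
  }
}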

Brian's Side Note: One of the difficulties here is timezones (isn't that always the case). The user could be in any timezone while the server runs on a UTC clock, so we need to do extra work to make sure the user gets their message when they expect it and not at that time in UTC. The short-term patch was to convert the time to UTC on the client before sending it to the server, but a longer-term fix that properly accounts for timezones (and time in general) is still on the horizon.
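
The short-term patch amounts to something like this on the client (an illustrative helper, not our actual code): build a Date from the local wall-clock time the user picked and read it back out in UTC before sending it to the server:

// Hypothetical helper: convert a local hour/minute into UTC hour/minute.
function toUtcSchedule(hourLocal, minuteLocal) {
  const now = new Date();
  const local = new Date(now.getFullYear(), now.getMonth(), now.getDate(),
                         hourLocal, minuteLocal);
  return { hour: local.getUTCHours(), minute: local.getUTCMinutes() };
}

// e.g. 10:00 in US Pacific Daylight Time (UTC-7) becomes { hour: 17, minute: 0 }
console.log(toUtcSchedule(10, 0));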

To receive an SMS response we needed to create a hook for Twilio to send responses back to our web server, but our API was only hosted locally. We couldn't find an easy way to get the docker run or docker-compose process to set up a proper ngrok tunnel, so we opted to run the Twilio CLI command:

twilio phone-numbers:update <number> --sms-url http://localhost:8081/sms-response

This sets up an ngrok tunnel endpoint for Twilio to hit that gets forwarded to our localhost, and we run it in addition to bringing up the containers through docker-compose. It's slightly less elegant, but since you can keep the command running in another tab relatively long-term and can reboot the Docker containers without rebooting the tunnel, it isn't a huge overhead.

Both the sending and receiving of messages have their own endpoints, which log the sent/received message to MongoDB for long-term storage (either directly in the case of received messages, or through the cron-containing class in the case of sent messages). We also developed a number of retrieval endpoints to pull the messages and running crons from the server for use in the frontend.
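
For reference, an inbound-SMS endpoint like our /sms-response hook looks roughly like this (a sketch, not our exact code; the database call is stubbed out and the request fields come from Twilio's standard webhook parameters):

// Hypothetical sketch of the inbound-SMS webhook.
const express = require('express');
const twilio = require('twilio');

const app = express();
app.use(express.urlencoded({ extended: false })); // Twilio posts form-encoded data

app.post('/sms-response', async (req, res) => {
  const { From: from, Body: body } = req.body;

  // Persist the reply so it can be correlated with the outgoing check-in.
  // await messages.insertOne({ from, body, receivedAt: new Date() });

  // Respond with empty TwiML so Twilio doesn't text anything back automatically.
  const twiml = new twilio.twiml.MessagingResponse();
  res.type('text/xml').send(twiml.toString());
});

app.listen(8081);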

Bug Hunting

At this point our system was more or less complete, with the following testing flow:

  • Schedule a message to be sent some time in the future
  • Wait for the message to be sent, check to make sure the scheduled job shows up everywhere it should and nowhere it shouldn’t
  • Receive message, check to make sure the message shows up everywhere it should and nowhere it shouldn’t
  • Reply to message and, you guessed it, check to make sure it shows up everywhere it should and nowhere it shouldn’t

This all seemed straightforward to us, and since it was about 9:53 AM at the time, Jeffrey decided to schedule a message for 10:00 AM (easier than changing both the hour and the minutes) asking "Did you take your vitamins?", which he hadn't, hence the reminder. At 10:00 AM (after taking his vitamins) he received a text message... and then another at 10:01 AM... and then at 10:02 AM...

It turns out that if you leave a value null in the cron timer rules, for instance the minutes as null, it schedules the job to run every minute. This is specified in the node-schedule documentation, and Brian had written a sensible-seeming ternary operator to check whether the value existed before setting it and, if it didn't, to use null. However, that turned out to be a problem: a minute value of 0 was evaluated as false, causing it to use null instead, which led to the message being sent every minute. So much for not spending all our Twilio credits in one place.
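
In miniature, the bug looked something like this (reconstructed for illustration, not the original code):

// node-schedule treats a null field in the recurrence rule as "every value",
// so a null minute means the job fires every minute.
const minute = 0; // the user scheduled the top of the hour

const buggyRule = { hour: 10, minute: minute ? minute : null };       // 0 is falsy -> null -> every minute!
const fixedRule = { hour: 10, minute: minute != null ? minute : null };

console.log(buggyRule); // { hour: 10, minute: null }
console.log(fixedRule); // { hour: 10, minute: 0 }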

Luckily, Jeffrey was using Postman to do his testing, and it didn't take him past 10:02 AM to look up and send the /delete-recurring POST with the ID of the runaway job to stop it. Of course, it would have taken him even less time to hit Ctrl-C on the terminal running his Docker containers to bring down his "servers", as Brian pointed out at 10:05 AM, but hey, a fix is a fix.

The last step we took when putting together our final submission for this hackathon was to get a person outside the team to go through our readme instructions and try to launch and use the system. This is something we highly recommend to anyone, as it is the best way to avoid "works on my machine" situations. Through this process we refactored our readme to make it clearer and to include some initialization values that we had originally omitted.

Aside from these and a few other minor issues [3], the end-to-end system was working. That meant all that was left was some minor cleanup, double-checking the submission guidelines, and writing this blog post.

Deployment

There is currently no deployment process... we forgot that part and focused on local development. Luckily, since it is developed entirely in Docker containers the deployment effort is somewhat reduced, in theory... according to blog posts I've read.

References & Footnotes

[1] Information on the number of seniors living alone in America from the Institute on Aging: https://www.ioaging.org/aging-in-america

[2] Unfortunately, I was never able to connect the Webpack build process inside the Docker container to the state of the files outside the container... meaning we did not have hot reloading during development. This definitely delayed development, especially when making minor changes for bug fixes near the end of the hackathon. This is supposed to be possible using volumes, but alas, a task for another day.

[3] One issue that did come from our relatively lax review system and asynchronous schedules is that we got into the habit of leaving pull requests open overnight so the other person could read them the next morning and merge. This became a problem when Brian opened a work-in-progress pull request with the prefix WIP and left it open overnight. Jeff read it, missed the WIP tag, and merged it even though it hadn't been completed. Brian then reverted the merge (no big deal, you would think), but something happened after the revert (we still aren't quite sure what) and some of the changes that were in that pull request disappeared. This led to lots of small bugs popping up after the full pull request was merged, since things that had been in it were now randomly missing.

