
Discussion on: I used to work with vessel tracking data. Ask Me Anything!

Dustin King

How are edge computing, new CDN services and concepts, and new web features (offline mode) affecting this space?

At least for the part of the industry I was in, not a lot. My team's web apps didn't use a CDN, but then they were mostly only available on the Coast Guard network. I know the more public-facing ones used one, but beyond that I don't know how it was set up.

As far as the client side of the web apps goes, I don't think we had much need for edge computing or offline mode. The end users for those were on dry land.

Cloud servers were beginning to be a thing where I worked as of a few years ago, but they were hosted in an in-house datacenter. I think AWS was starting to become viable and certified for government work, but we weren't using it yet.

Elsewhere in the industry, there's the privately-run MarineTraffic, which might take advantage of newer front-end stuff.

My team didn't run it, but we had a viewer like that, and it used Microsoft Silverlight. I left almost two years ago, so everything I'm saying here could have changed (but government work moves slowly, so probably a lot hasn't).

I imagine the industry always found ways to solve offline sync issues, but it seems like some of the standards or general-purpose services are catching up with the needs?

The offline sync issues we had to deal with weren't with web interfaces, but we did have to handle message drops and lost connections along the route the data took from the receivers, to our main server, to the database. The original system was set up with freshness of the data in mind: if someone was getting a feed from us, it wouldn't matter if a few messages were lost, as long as the data they were seeing was current (a lot of messages were duplicates, and anyway a moving ship would send a new one every few seconds).

But since our mission was to store everything, we had processes at each receiver, and at our main and DR servers, that would stream everything to a flat file. Then we had processes that would check for missing data, grab the files for each hour (always from the main and DR servers, and from the receivers if there had been any connection loss), merge them, and load them into the database, replacing what had been saved as it streamed in.

The fetch-and-merge work was mostly Perl scripts written by our sysadmin (who was also a skilled programmer), even though previous contractors were supposed to have delivered some Java code to do it.
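To make that merge step concrete, here's a minimal sketch in Perl, assuming one flat file per source per hour and a tab-separated "epoch timestamp + raw message" line format; the paths, file naming, and format are made up for illustration and aren't the actual scripts:

```perl
#!/usr/bin/perl
# Minimal sketch of an hourly merge/dedupe pass over the flat files.
# Paths, file naming, and line format here are hypothetical.
use strict;
use warnings;

my $hour = shift(@ARGV) // die "usage: $0 YYYYMMDDHH\n";

# Assumed layout: one flat file per source (main, DR, each receiver) per hour.
my @files = glob("/data/ais/*/$hour.log");

my %seen;      # dedupe on the raw message payload
my @merged;

for my $file (@files) {
    open(my $fh, '<', $file) or die "cannot read $file: $!\n";
    while (my $line = <$fh>) {
        chomp $line;
        # Assumed line format: "<epoch_seconds>\t<raw AIS sentence>"
        my ($ts, $msg) = split /\t/, $line, 2;
        next unless defined $msg;
        next if $seen{$msg}++;    # same message already seen from another source
        push @merged, [ $ts, $msg ];
    }
    close $fh;
}

# Put the hour back in receive order before handing it to the loader.
for my $rec ( sort { $a->[0] <=> $b->[0] } @merged ) {
    print join( "\t", @$rec ), "\n";
}
```

The real scripts also had to fetch the hourly files from the remote servers and hand the merged result to the database load step that replaced the streamed-in rows; the sketch only shows the merge-and-dedupe idea.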