Josiah Bryan

Personal Safety, GPS, and Machine Learning: Are You Running from Danger?

Imagine that you're getting a text every minute from your best friend, and all the text contains is their current speed. Then you have to write back what you think they're doing - are they walking, running, driving, or sitting still?

Walk Safe

That's exactly the scenario I'm addressing in an app I'm making. I get a GPS speed reading from the user, and I want to know if they're walking, running, etc. The app is called "WalkSafe", and I'm making it available for free in the Play Store and App Store. (Not published yet - it's still in review, which is why I have time to blog while waiting for the reviewers to approve it!)

I decided to create WalkSafe after my sister and her young son moved into an apartment where she felt very unsafe. It was a good move for her, but as a single mom who was sometimes out alone at night - well, she felt vulnerable. My family lived nearby, but if something happened, she might not be able to whip out her phone and call. Enter the idea for "WalkSafe."

With WalkSafe, you can set a timer when you're in danger. If the timer goes off before you stop it, an SMS and a voice phone call are sent to your emergency contacts with your location and any notes you entered. Of course, if you get where you're going safely, you just stop the timer and all is well! But if you can't stop it for whatever reason, our cloud servers monitor your timer, and if it goes off, the SOS is sent immediately. That means that even if your phone is destroyed, offline, or without service, the SOS still gets sent.

When you set the timer in WalkSafe, it starts recording your GPS location and streaming it to the server for the duration of the timer. No GPS is stored before or after - only while you're in danger. However, I felt like simply logging the GPS while in danger wasn't enough. I thought there might be some way I could use the GPS to tell whether the person using the app is in danger (or safe) without their interaction.

Drawing the Line

That's how we arrive at the example at the start - how do we interpret a stream of speeds coming in with no other context? How do we decide whether it represents running, driving, walking, etc.?

Sure, sitting still is easy. Less than 0.5 m/s? Probably sitting still. What about driving? Over 15 m/s? Yeah, probably driving. But then it gets fuzzy. Where do you draw the line for walking? For running? How do you tell running from driving based on speed alone?

To answer those questions, you can do one of two things (or three, but I'll get back to that). You can either:

  1. Write a bunch of if/then statements, taking into account the user's last few speed readings, how long they've been at that speed, what they did this time yesterday, etc.
  2. Train a simple neural network to classify data for you while you sit and drink tea.

Obviously, since this post is tagged #machinelearning, I decided to use a neural network.

In my case, I used the excellent brain.js library, since I was writing my server in JavaScript. I've also used brain.js in the past, and I've found it incredibly easy to use and quick to pick up and implement in a project.

All in all, going from "Hey, I've got some GPS points being streamed to my server" to "real-time machine learning classification triggering push notifications" took me less than a day of coding. Here's basically how I did it.

Client-side, I'm using the Cordova project to make the Android/iOS apps, writing my UI in React, and utilizing the excellent @mauron85/cordova-plugin-background-geolocation plugin to stream GPS to my server in the background.
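
For reference, the plugin setup for that streaming might look roughly like this - a sketch based on the plugin's documented options, where the endpoint URL and post template are my assumptions, not WalkSafe's actual config:

// Runs client-side after Cordova's 'deviceready' event; BackgroundGeolocation
// is the global exposed by @mauron85/cordova-plugin-background-geolocation
BackgroundGeolocation.configure({
    locationProvider: BackgroundGeolocation.RAW_PROVIDER,
    desiredAccuracy: BackgroundGeolocation.HIGH_ACCURACY,
    interval: 1000,        // ask for a reading every ~1000ms
    fastestInterval: 1000,
    url: 'https://example.com/api/gps-log',  // hypothetical ingest endpoint
    postTemplate: {
        lat: '@latitude',
        lon: '@longitude',
        speed: '@speed',   // the one field the classifier really needs
        time: '@time',
    },
});

BackgroundGeolocation.start();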

Server-Side Magic

The server is where the magic happens.

Everyone knows that to train a neural network you need labeled data. You put data in, run the training, get a trained set of weights, then use them later. Pretty simple, yes? Well, allow me to walk you through how I did it and the interesting parts along the way.

Gathering Data

I started by just logging a ton of GPS points from my own usage of the app. Over the course of two days, I logged points while walking, running, driving, walking to my car and then driving, running up to my car and then driving, parking and then walking, and many other scenarios. I also kept a notebook with timestamps of when I did each action.

Labeling Data

Later, I dumped the timestamps and speeds to CSV files and applied a simple naïve pre-labeling of the speeds (e.g. 0 m/s=STILL, <2 m/s=WALKING, <10 m/s=RUNNING, >10 m/s=DRIVING). Then I opened each CSV file and compared the timestamps to my notebook, making sure the naïve labels were correct. I changed a lot of DRIVING labels to RUNNING (and vice versa) where I had been driving slowly, stuff like that. When I was done, I had ~5,000 speed measurements in CSV files, all hand-labeled with one of STILL, WALKING, RUNNING, or DRIVING.
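
The naïve pre-labeling itself is just a threshold function; a minimal sketch (the function name is mine, not from the actual codebase):

// Naïve first-pass labeling by speed thresholds alone -
// these labels get hand-corrected against the notebook afterwards
function naiveLabel(speed) {
    if (speed === 0) return 'STILL';
    if (speed < 2)   return 'WALKING';
    if (speed < 10)  return 'RUNNING';
    return 'DRIVING';
}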

Formatting Data: N-Grams

Now I had a set of speed measurements in sequence, looking something like:

[ 0, 1.2, 0.78, 1.9, 2.1, 1.8, 2.8, 3.3, 3.6, 4.1, 3.3, 4.9, 5.7 ]

Can you see anything interesting in that? (Assume the values are meters per second.) If you look carefully, you'll notice an uptick where the values start to trend above 2 m/s for a while - right there is where I started to run. Before that, I was walking.

In order to capture sequentiality in my data, I decided to train my network on sets of points representing the previous X values, with the final value being the "current" point we're classifying. This is similar in concept to n-grams in language modeling, where a sequence of text is broken up into a set of finite item sets. E.g. given "abcd" and an n-gram size of two, we could generate "ab", "bc", "cd".

Therefore, I wrote a simple makeNgramsTrainingNN routine that took the raw stream of speeds and packaged it into sets of speed readings. It was a lot like taking a sliding window of a fixed size, running it over my data set one item at a time, and recording each set of data inside the window as a new "n-gram". So my makeNgramsTrainingNN routine would take an array of speed objects (speed and label) and return a new array that looked like this:

[
  { input: { speed0: 0, speed1: 1.2, speed2: 0.78 }, output: { WALKING: 1 } },
  { input: { speed0: 1.2, speed1: 0.78, speed2: 1.9 }, output: { WALKING: 1 } },
  { input: { speed0: 0.78, speed1: 1.9, speed2: 2.1 }, output: { WALKING: 1 } }
]

The label is always the label from my hand-edited data set for the last speed value in the n-gram.
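
Here's a minimal sketch of that sliding-window idea (the real makeNgramsTrainingNN is presumably a bit more involved):

// Slide a fixed-size window over the labeled speed stream and emit one
// brain.js training record per window position. The label of each n-gram
// is the hand-applied label of its last (most recent) reading.
function makeNgrams(samples, ngramSize) {
    const ngrams = [];
    for (let i = 0; i + ngramSize <= samples.length; i++) {
        const window = samples.slice(i, i + ngramSize);
        const input = {};
        window.forEach((sample, idx) => input[`speed${idx}`] = sample.speed);
        ngrams.push({ input, output: { [window[ngramSize - 1].label]: 1 } });
    }
    return ngrams;
}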

Training the Neural Network

Then, I had to decide how I wanted to train my network - and what type of network to use. After much trial and error, I found that brain.CrossValidate worked amazingly well to reduce error rates.

Once I had all my n-grams in a nice big ngrams array, all I had to do to train the network was this:

const brain = require('brain.js');

const trainingOptions = {
    iterations: 35000,
    learningRate: 0.2,
    hiddenLayers: [ngramSize + 2],
    log: details => console.log(details),
};

// Use CrossValidation because it seems to give better accuracy
const crossValidate = new brain.CrossValidate(brain.NeuralNetwork, trainingOptions);

// Found it doesn't do us any good to specify kfolds manually
const stats = crossValidate.train(ngrams, trainingOptions);

// Convert the CV to a neural network for output (below)
const net = crossValidate.toNeuralNetwork();

Once I had the network trained, I saved it to a JSON file so I could use it in real time to classify GPS readings:

// Stringify the neural network and save it to disk
const fs = require('fs');

const json = JSON.stringify(net.toJSON());
const outFile = 'gps-speed-classifier.net.json';
fs.writeFileSync(outFile, json);

It was pure trial and error to discover that 35,000 iterations was a good number, and that a hidden layer sized at ngramSize + 2 worked best - all just testing and re-testing and seeing what error rates came out.

For what it's worth, I'm using an ngramSize of 6 - which means my neural network sees 6 speed readings at once to make its classification decision. I've configured the GPS plugin client-side to try to send me GPS readings every 1000ms, so an ngram size of 6 means approximately 6 seconds of data is used in training and classification. It's important to note that I must use the same ngram size when using the trained network in production.

Proving to Myself it Worked

To test the error rates, I first bucketed all my training ngrams by class and tested the recall rate on each class. I considered the training a success when I got a >95% recall rate for every class.
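
A rough sketch of that per-class recall check (the helper name is mine; maxClass is the small utility described further down):

// For each class, count what fraction of its labeled n-grams the
// trained network actually recalls (predicts correctly)
function recallByClass(net, ngrams) {
    const totals = {}, hits = {};
    for (const { input, output } of ngrams) {
        const actual = Object.keys(output)[0];       // e.g. 'WALKING'
        const predicted = maxClass(net.run(input));  // highest-scoring label
        totals[actual] = (totals[actual] || 0) + 1;
        if (predicted === actual)
            hits[actual] = (hits[actual] || 0) + 1;
    }
    return Object.fromEntries(
        Object.keys(totals).map(c => [c, (hits[c] || 0) / totals[c]])
    );
}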

The final test I ran on every trained network was to take a single "session" of data and run it through as if it were being streamed live, then compare the predicted labels with the hand-labeled data. Once I hit over 90% accuracy on that, I was happy.

Getting from hand-labeling data sets to finally having a trained network I was happy with took roughly six hours of testing and trial and error.

Integrating the Trained Network into the App

Integrating it into the app was a very quick process by comparison - maybe two hours, if that. I created a "simple" class I call GpsActivityClassifier that loads the trained network weights from gps-speed-classifier.net.json. This class is responsible for classification and for updating the user's "motionState".
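
Loading the saved weights back is straightforward with brain.js; a minimal sketch of what the classifier's setup might do:

const fs = require('fs');
const brain = require('brain.js');

// Rehydrate the trained network from the JSON weights saved during training
const net = new brain.NeuralNetwork();
net.fromJSON(JSON.parse(fs.readFileSync('gps-speed-classifier.net.json', 'utf8')));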

The app's API into the GpsActivityClassifier is deceptively simple:

const result = await GpsActivityClassifier.updateUserMotionState(gpsLogEntry);

The gpsLogEntry is our internal database record for the current GPS entry. Really, the only things the classifier needs from the log entry are the speed, the current timer, and the user we're classifying.

Internally, updateUserMotionState is rather simple, though the code looks a bit more complex, so I'll break it down here. It goes something like this:

  1. Take the timestamp of the given gpsLogEntry and load the previous ngramSize entries for the current timer
  2. Convert that list of X entries (which looks like [{speed:0.1,...},{speed:0.5,...}, {speed:1.23,...}, ...]) into a single ngram object that looks like {speed0:0.1, speed1:0.5, speed2:1.23, ...}. The conversion code looks like:
const ngram = {};
Array.from(speedValues)
    .slice(0, TRAINED_NGRAM_SIZE)
    .forEach((value, idx) => ngram[`speed${idx}`] = value);

After making the ngram, it uses the preloaded brain.js NeuralNetwork object (with weights already loaded from disk) to run the ngram like this:

const rawClassification = this.net.run(ngram);
const classification = maxClass(rawClassification);

The utility maxClass(...) just takes the raw output of the network's final layer and returns the predicted class label with the highest probability.
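
brain.js returns the output layer as an object of label → score, so a minimal maxClass could be as simple as this sketch:

// Given e.g. { STILL: 0.01, WALKING: 0.07, RUNNING: 0.9, DRIVING: 0.02 },
// return the label with the highest score ('RUNNING' here)
function maxClass(rawClassification) {
    return Object.entries(rawClassification)
        .reduce((best, entry) => entry[1] > best[1] ? entry : best)[0];
}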

Pressure to Change

At this point, we have a predicted label (predictedState) for the gpsLogEntry. But here's where we do that "third thing" I hinted at earlier in this post.

Instead of just applying the predictedState directly to the user and calling it that user's current motionState, we apply a little bit of hard logic to the state.

We don't want the user's motionState to oscillate wildly if the classification changes quickly from one point to the next, so I built in a simple "pressure" mechanism whereby the prediction must stay stable for at least CLASSIFICATIONS_NEEDED_TO_CHANGE consecutive readings. Through trial and error, I found 5 to be a good number.

That means that for a given gpsLogEntry, the classifier may return RUNNING. Only after it returns RUNNING for five consecutive GPS readings do we update the user's motionState. Should the classifier return a different classification before it hits 5, the counter starts over. (For example, if on the 3rd point the classifier returns DRIVING, we reset the counter and wait for 5 consecutive DRIVING points before we actually set the user's motionState to DRIVING.)
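
In code, that pressure mechanism could be as simple as a counter kept per user - a sketch under assumed names, not WalkSafe's actual implementation:

const CLASSIFICATIONS_NEEDED_TO_CHANGE = 5;

// Only commit a new motionState after the same prediction has been
// seen CLASSIFICATIONS_NEEDED_TO_CHANGE times in a row
function applyPressure(user, predictedState) {
    // Prediction matches the committed state - nothing pending
    if (predictedState === user.motionState) {
        user.pendingState = null;
        user.pendingCount = 0;
        return { changed: false };
    }

    if (predictedState !== user.pendingState) {
        // Prediction flipped to a new candidate state - restart the count
        user.pendingState = predictedState;
        user.pendingCount = 0;
    }

    if (++user.pendingCount < CLASSIFICATIONS_NEEDED_TO_CHANGE)
        return { changed: false };

    // Candidate state held for enough consecutive readings - commit it
    const previousState = user.motionState;
    user.motionState = predictedState;
    user.pendingState = null;
    user.pendingCount = 0;
    return { changed: predictedState, previousState };
}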

Change is Good (or Bad)

Once the counter to change motionStates is actually met, we update the user record in the database with the new motionState and return an object like { changed: "DRIVING", confidence: 0.98, previousState: "RUNNING" } to the caller of our GpsActivityClassifier.updateUserMotionState method. I consider this an "event", since we only get a return value of { changed: truthy } if the user's motionState ACTUALLY changed. At all other times - if the classification stayed the same or was "about to change" - the object looks like { changed: false, ... }.

So what do we do with a changed event when it occurs?

In the case of WalkSafe, what we do with this event is run a bit of "business logic" when the change happens. We take the stateFrom (previousState) and the stateTo (changed), build a simple transition map (txMap) that defines valid/useful transitions, and then react accordingly.

For kicks and grins, here's what our txMap looks like in WalkSafe:

const { WALK, RUN, DRIVE, STILL } = GpsActivityClassifier.CLASSIFIER_STATES,
    OK_30   = 'OK_30',
    OK_60   = 'OK_60',
    SAFE_60 = 'SAFE_60',
    SAFE_5  = 'SAFE_5',
    NOOP    = 'NOOP',
    txMap   = {
        [ WALK + RUN  ]: OK_30,
        [STILL + RUN  ]: OK_30,
        [DRIVE + RUN  ]: OK_60,
        [STILL + DRIVE]: SAFE_60,
        [ WALK + DRIVE]: SAFE_60,
        [  RUN + DRIVE]: SAFE_60,
        [  RUN + WALK ]: SAFE_5,
        [  RUN + STILL]: NOOP,
        [ WALK + STILL]: NOOP,
        [DRIVE + STILL]: NOOP,
        [STILL + WALK ]: NOOP,
        [DRIVE + WALK ]: NOOP,
    };

Then we just query the txMap with the from and to states when the user's motionState changes, and react accordingly. For illustration's sake, here's what that looks like as well:

const txTest = stateFrom + stateTo,
    txAction = txMap[txTest];

if(!txAction) {
    // Should never happen, but if we hit a transition we haven't defined,
    // we throw, which should be caught by Sentry and dashboarded/emailed
    throw new Error(`Undefined transition from ${stateFrom} to ${stateTo}`);
}

switch(txAction) {
    case OK_30:
    case OK_60: {
        const time = txAction === OK_60 ? 60 : 30;
        return await this._txAreYouInDanger({ time, stateTo, stateFrom, ...props });
    }
    case SAFE_60:
    case SAFE_5: {
        const time = txAction === SAFE_60 ? 60 : 60 * 5;
        return await this._txAreYouSafe({ time, stateTo, stateFrom, ...props });
    }
    default: 
        // NOOP;
        break;
}   

I won't go into detail on the _txAreYouSafe or _txAreYouInDanger functions, but they basically add to (if safe) or set (if in danger) the remaining time on the running timer, then send a push notification to the user's device via Firebase.

To tie a bow on it, though, here's what it looks like to send the push notification shown in the screenshot at the top of this article:

// Triggered possible danger scenario, so reduce time remaining
// to only `time` seconds...
await timer.setSecondsRemaining(time);

// Alert the user to this change ...
user.alert({
    // Channel is Android-specific and MUST EXIST OR 
    // NO NOTIFICATION DELIVERED on Androids. 
    // See list in client/src/utils/NativePushPlugin of valid channels.
    channel: "sos",
    title: "Are you running??",
    body:  `
        If you're not okay, KEEP RUNNING! We'll send an SOS in 
        less than a minute unless you stop the timer or add more time. 
        Don't stop unless it's safe to do so!
    `,

    // onClick is base64-encoded and sent via Firebase 
    // as the action URL for this push notification
    onClick: {
        // This event key is "special":
        // When the user clicks on the notification,
        // our app will emit this event on the ServerStore object...
        // Any other properties in this onClick handler are passed as
        // a data object to the event. This is emitted in PushNotifyService.
        // Obviously, the event does nothing unless some other part of the
        // app is listening for it.
        event:  'gps.areYouInDanger',
        // Extra args for the event:
        timerId: timer.id,
        stateTo, 
        stateFrom,
    },
});

Walk Safely but Run if Needed, We've Got You

All of this combines to form an additional safeguard for people using WalkSafe. If they set a danger timer but start running in the middle of it, the server will recognize the state change and reduce the time left on the timer, so an SOS is sent right away if they are in fact running from danger.

And that's how we tie Personal Safety, GPS, and Machine Learning together to improve the real-world safety of people who use a simple personal safety SOS timer!

Beta Testers Wanted

If you want to test out this app, send me a message. Or if you're interested in working with me on the app, I'd be open to talking! And if you're interested in hiring me for consulting work - drop me a line as well! You can reach me at josiahbryan@gmail.com. Cheers and crackers!

Top comments (1)

Josiah Bryan

The app is now available for beta testing on Android in the Google Play Store: play.google.com/store/apps/details... - the iOS app is still pending review!