bijanGh

Analyzing recent mentions of a user on Twitter with TensorflowJs Part 2

Hello everyone! Now it's time to use some TensorflowJs magic in our front-end app to process the tweets we received from our Twitter API in the previous post.
In this part, I'll use a pre-trained sentiment-analysis TensorFlow model to grade each tweet and show the results in this minimal NextJs app. I wanted to walk through setting up the front end as well, but that part was unnecessary and rather long; instead, you can browse the repository, it's a simple React SSR app.

So let's dive into the main part of this tutorial (TL;DR: head to the Code section):

You can also take a look at the demo here.

  • MachineLearning/AI
  • TensorflowJs
  • Code
  • Saying something meaningful at the end

MachineLearning/AI

Let me put it simply: it's the ability of a machine/program to determine and execute tasks in situations we did not program it for in a deterministic way. Basically, it's a program that takes its environment as input and outputs a non-deterministic (not always right) judgment, and, like us, it can learn and improve itself in various ways, even by forgetting things. And yes, AI is a good fit for tasks where you can tolerate mistakes.

TensorflowJs

Tfjs is the web's gateway into AI and the countless possibilities it makes available to us. Its own description reads: "Develop ML models in JavaScript, and use ML directly in the browser or in Node.js." But trust me, as of now it's still mainly for using ML models in the browser; you'll develop your ML models somewhere else. Let's get into the code to see how it's done with Tfjs.
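Before we touch the sentiment model, here's a minimal sketch (the file name is mine, it's not part of the project) just to show that the same Tfjs API runs in Node or in the browser; all it does is build a tensor and run a simple op on it:

//hello-tfjs.js -- minimal sketch, not part of the project

const tf = require("@tensorflow/tfjs");

// tensors (multi-dimensional arrays) are Tfjs's core data structure
const xs = tf.tensor2d([[1, 2], [3, 4]]);

// run a simple element-wise op and print the result to the console
xs.mul(2).print(); // [[2, 4], [6, 8]]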

CODE!

As I said, we're not going to train a model here; we're here to use one. Let's start with a simple JS file (TL;DR: you can see it in full here). First, we need to import our model. Models are pre-trained algorithms for AI computation and decision-making; I chose the simple demo sentiment-analysis model from Tfjs, which is trained on IMDB reviews. It's not a great model to use, but it'll do for now; you can use anything you want, and I'd recommend Google's BERT. I'll make another post about adapting a pre-trained model to a specific use case. For example, I want to use this model on non-English tweets; what should I do? Train a model in French? No, that's too expensive. We can take any sentiment-analysis model and repurpose it for another case. So let's import our model:

//index.js

const tf = require("@tensorflow/tfjs");

// you can also use the LSTM version if you want
const loadModel = async () => {
  const url = `https://storage.googleapis.com/tfjs-models/tfjs/sentiment_cnn_v1/model.json`;
  const model = await tf.loadLayersModel(url);
  return model;
};

// we'll get to the metadata in a minute
const getMetaData = async () => {
  const metadata = await fetch(
    "https://storage.googleapis.com/tfjs-models/tfjs/sentiment_cnn_v1/metadata.json"
  );
  return metadata.json();
};

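Since the model and its metadata are two independent network requests, you can load them in parallel; here's a small usage sketch (the setup wrapper is just for illustration, it's not in the repo):

// usage sketch: fetch the model and its metadata in parallel
const setup = async () => {
  const [model, metadata] = await Promise.all([loadModel(), getMetaData()]);
  return { model, metadata };
};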

Now it's going to get a bit more involved: we first need to prepare our input before feeding it to the model for prediction. First, we write our padSequences function. As the name TensorFlow suggests, the model works with tensors, which are basically multi-dimensional arrays. With padSequences we make sure all input sequences have the same length so they can be processed correctly, and we need our model's metadata here (it tells us the expected max_len) to achieve that.

//index.js 

// each sequence is basically an array of word indexes

const padSequences = (sequences, metadata) => {
  return sequences.map((seq) => {
    if (seq.length > metadata.max_len) {
      seq.splice(0, seq.length - metadata.max_len);
    }
    if (seq.length < metadata.max_len) {
      const pad = [];
      for (let i = 0; i < metadata.max_len - seq.length; ++i) {
        pad.push(0);
      }
      seq = pad.concat(seq);
    }
    return seq;
  });
};

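To make the padding behavior concrete, here's a tiny illustrative run with a made-up metadata object whose max_len is 5 (the real value comes from metadata.json): shorter sequences are left-padded with zeros and longer ones are trimmed from the front.

// illustrative only: a fake metadata object with max_len = 5
const fakeMeta = { max_len: 5 };

padSequences([[3, 7]], fakeMeta);             // -> [[0, 0, 0, 3, 7]]
padSequences([[9, 8, 7, 6, 5, 4]], fakeMeta); // -> [[8, 7, 6, 5, 4]]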

Now we can use the model to predict:

//index.js

const predict = (text, model, metadata) => {
// text should be sanitized before sequencing and chunked word by word
  const trimmed = text
    .trim()
    .toLowerCase()
    .replace(/(\.|,|!|#|@)/g, "")
    .split(" ");
// prepare word indexes as sequences
  const sequence = trimmed.map((word) => {
    const wordIndex = metadata.word_index[word];
    if (typeof wordIndex === "undefined") {
      return 2; //oov_index
    }
    return wordIndex + metadata.index_from;
  });

//padding sequences 
  const paddedSequence = padSequences([sequence], metadata);
  const input = tf.tensor2d(paddedSequence, [1, metadata.max_len]);

// I really don't know why Tfjs guys added this extra step in api
  const predictOut = model.predict(input);
// finally our prediction
  const score = predictOut.dataSync()[0];
// always clean up after
  predictOut.dispose();
  return score;
};


It gives us a score between 0 and 1, which we interpret in code like this:

// index.js

const getSentiment = (score) => {
  if (score > 0.66) return `POSITIVE`;
  else if (score > 0.4) return `NEUTRAL`;
  else return `NEGATIVE`;
};


Also remember to exclude all URLs and links from the tweets before feeding them to our prediction method:

//index.js

const sentimentAnalysis = (text, model, metadata) => {
  let sum = 0;
  // strip URLs from the tweet, then split it into words
  const words = text.replace(/(?:https?|ftp):\/\/[\n\S]+/g, "").split(" ");
  // score each word and average the results
  for (const word of words) {
    sum += predict(word, model, metadata);
  }
  return getSentiment(sum / words.length);
};


You can run it with Node on some data received from our Twitter API (but be patient if you're testing it in Node).
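Here's a rough end-to-end sketch of that; the sample tweet text is made up, and everything else uses the functions defined above (note that fetch is only available globally in newer Node versions, so on older ones you'd need a polyfill such as node-fetch):

// run.js -- quick end-to-end test; the sample text is made up
const run = async () => {
  const model = await loadModel();
  const metadata = await getMetaData();

  const sampleTweet = "loving the new release, great work https://example.com";
  // prints POSITIVE, NEUTRAL or NEGATIVE
  console.log(sentimentAnalysis(sampleTweet, model, metadata));
};

run();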

Conclusion

Our conclusion here is the result, and the result for me is a working demo to show: Twitter-Sentiment-Analysis-With-TensorflowJS

If you'd like to see its implementation in the front-end app provided in the demo, leave a comment and I'll try to put it together in Part 3.
