Henk Pelk

Machine Learning, Neural Networks and Algorithms

Following the previous article, “The Core of the Modern Chatbot,” this article gives a deeper understanding of the technologies needed for chatbots.

Machine Learning

NLP (Natural Language Processing) and Machine Learning are both fields in computer science related to AI (Artificial Intelligence). Machine learning can be applied in many different fields. NLP takes care of “understanding” the natural language of the human the program (e.g. a chatbot) is trying to communicate with. This understanding enables the program (e.g. a chatbot) to both interpret input and produce output in the form of human language.

The machine “learns” and applies its algorithms through supervised and unsupervised learning. Supervised learning means training the machine to translate the input data into a desired output value: it assigns an inferred function to the data so that new examples of data will give the same output for that “learned” interpretation. Unsupervised learning means discovering new patterns in the data without any prior information or training; the machine itself assigns an inferred function to the data through careful analysis and extrapolation of patterns from the raw data. In a neural network, the data is analyzed in a hierarchical way across layers: hidden layers, which sit between the network's input and output, extract features from the data through supervised or unsupervised learning.
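As a rough illustration, the sketch below contrasts the two approaches on the same data. It assumes scikit-learn is available; the synthetic dataset and all parameters are made up for the example and are not from this article.

```python
# A minimal sketch of supervised vs. unsupervised learning with scikit-learn.
# The synthetic dataset and all parameters are illustrative only.
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Synthetic 2-D points grouped around three centers.
X, y = make_blobs(n_samples=300, centers=3, random_state=42)

# Supervised: the model is trained on inputs X together with the known labels y.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised predictions:", clf.predict(X[:5]))

# Unsupervised: the model sees only X and has to discover the groups itself.
km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
print("unsupervised cluster labels:", km.labels_[:5])
```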

Featured CBM: Building an IBM Watson Powered AI Chatbot

Neural Networks
Neural networks are one of the learning algorithms used within machine learning. They consist of different layers for analyzing and learning from data.


Hidden learning layers and neurons by Nvidia

Every hidden layer tries to detect patterns in the picture. When a pattern is detected, the next hidden layer is activated, and so on. The picture of the Audi A7 above illustrates this perfectly: the first layer detects edges, the following layers combine the edges found in the data, and eventually a later layer attempts to detect a wheel pattern or a window pattern. Depending on the number of layers, the network will or will not be able to determine what is in the picture, in this case a car. The more layers a neural network has, the more it learns and the more accurate its pattern detection becomes. Neural networks learn by attributing weights to the connections between the different neurons each time the network processes data. This means that the next time it comes across such a picture, it will have learned that a particular section of the picture is probably associated with, for example, a tire or a door.
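To make the idea of layered pattern detection and weighted connections concrete, here is a minimal NumPy sketch of a forward pass through one hidden layer. The layer sizes, random weights and activation function are arbitrary choices for illustration, not the network from the picture above.

```python
# A toy forward pass: input -> hidden layer -> output, using NumPy.
# Layer sizes and random weights are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

x = rng.random(4)            # a tiny "input picture" of 4 features
W1 = rng.random((4, 3))      # weights connecting input neurons to 3 hidden neurons
W2 = rng.random((3, 2))      # weights connecting hidden neurons to 2 output neurons

def relu(z):
    return np.maximum(0, z)  # simple activation: a neuron "fires" when its input is positive

hidden = relu(x @ W1)        # the hidden layer detects simple patterns in the input
output = relu(hidden @ W2)   # the next layer combines those patterns into higher-level ones

print("hidden activations:", hidden)
print("output activations:", output)
# During training, the weights W1 and W2 are adjusted each time data passes
# through the network, which is the "learning" described above.
```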

Featured CBM: Unsupervised Deep Learning for Vertical Conversational Chatbots

Machine Learning Algorithms

This section describes some of the most important machine learning algorithms; more information about algorithms can be found via the following links. [1][2][3]

Decision Tree Algorithms
In this algorithm a decision tree is used to map decisions and their possible consequences, including chance outcomes, costs and utilities. This method allows the problem to be approached logically and stepwise to arrive at the right conclusion. An important algorithm that evolved from the decision tree is the Random Forest algorithm, which uses multiple trees to avoid the overfitting that often occurs with a single decision tree.
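A small sketch with scikit-learn shows a single decision tree next to a random forest that averages many trees to reduce overfitting; the Iris dataset and the parameters are just illustrative choices, not part of the original article.

```python
# A minimal sketch: a single decision tree vs. a random forest.
# Dataset and parameters are illustrative only.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# A forest of many trees usually generalizes better than one deep tree.
print("single tree accuracy:", tree.score(X_test, y_test))
print("random forest accuracy:", forest.score(X_test, y_test))
```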

Bayesian Algorithms
Bayesian algorithms apply Bayes' theorem to regression and classification problems that involve probability. They attempt to model the probabilistic relationship between different variables and to determine, given those variables, which category an observation most likely belongs to.
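As an example, a Gaussian Naive Bayes classifier in scikit-learn applies Bayes' theorem to estimate which class a new observation most likely belongs to. The dataset and settings below are illustrative assumptions, not from the article.

```python
# A minimal Naive Bayes sketch: Bayes' theorem used for classification.
# Dataset and split are illustrative only.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

nb = GaussianNB().fit(X_train, y_train)

# predict_proba returns, for each sample, the probability of belonging to each class.
print("class probabilities:", nb.predict_proba(X_test[:3]))
print("predicted classes:", nb.predict(X_test[:3]))
```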

Regression Algorithms
Well suited to statistical machine learning, regression algorithms seek to model the relationship between variables. By observing these relationships, you aim to establish a function that more or less mimics the relationship. This means that when you observe new variables, you can say with some confidence, and with a margin of error, where they may lie along the function.
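A short linear regression sketch shows the idea: fit a function to observed variable pairs, then use it to predict where new observations should lie. The data values below are made up for the example.

```python
# A minimal linear regression sketch with made-up data.
import numpy as np
from sklearn.linear_model import LinearRegression

# Observed pairs of an input variable and an output variable (illustrative values).
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

reg = LinearRegression().fit(X, y)

# The fitted function roughly mimics the relationship y ~ slope * x + intercept.
print("slope:", reg.coef_[0], "intercept:", reg.intercept_)
print("prediction for x=6:", reg.predict([[6.0]])[0])
```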

Support Vector
The support vector algorithm is used to separate groups of points in a high-dimensional space. The grouping is done by creating a hyperplane that separates the groups with a margin that is as wide as possible. This helps with classification and is used, for example, in advertising or in human RNA splicing.
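The sketch below fits a linear support vector classifier and inspects the support vectors that define the widest separating margin. The synthetic data and parameters are assumptions made for illustration.

```python
# A minimal support vector machine sketch on synthetic 2-D data.
# Data and parameters are illustrative only.
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two well-separated groups of points.
X, y = make_blobs(n_samples=100, centers=2, random_state=6)

# A linear kernel looks for the hyperplane separating the groups with the widest margin.
svm = SVC(kernel="linear", C=1.0).fit(X, y)

# The support vectors are the points closest to the hyperplane; they define the margin.
print("support vectors per class:", svm.n_support_)
print("hyperplane coefficients:", svm.coef_, "intercept:", svm.intercept_)
```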

Ensemble Methods
Ensemble methods combine several weaker supervised learning models. A combination of very different models will usually produce better results: by combining them you can compensate for the bias of individual models, reduce the variance and reduce overfitting by averaging the predictions out.
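A simple voting ensemble in scikit-learn combines several different models so that their individual errors average out. The choice of models, dataset and parameters is only an illustration.

```python
# A minimal ensemble sketch: combining different models with majority voting.
# Models, dataset and parameters are illustrative only.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Three quite different models; their combined vote is usually more robust
# than any single one of them.
ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("bayes", GaussianNB()),
        ("tree", DecisionTreeClassifier(random_state=0)),
    ],
    voting="hard",
)
ensemble.fit(X_train, y_train)
print("ensemble accuracy:", ensemble.score(X_test, y_test))
```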

Clustering Algorithms
The main purpose of this type of algorithm is to cluster the available data into groups, where the data points within a group are more similar to each other than to those in other groups. The most important clustering methods are hierarchical, centroid-based, distribution-based and density-based.
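The sketch below contrasts a centroid-based method (k-means) with a density-based method (DBSCAN) on the same synthetic points; the data and parameter values are assumptions for illustration.

```python
# A minimal clustering sketch: centroid-based vs. density-based grouping.
# Data and parameters are illustrative only.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, DBSCAN

X, _ = make_blobs(n_samples=300, centers=3, random_state=1)

# Centroid-based: each point is assigned to the nearest of k cluster centers.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=1).fit(X)

# Density-based: clusters are dense regions of points; sparse points become noise (-1).
dbscan = DBSCAN(eps=0.8, min_samples=5).fit(X)

print("k-means labels:", kmeans.labels_[:10])
print("DBSCAN labels:", dbscan.labels_[:10])
```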

Association Rule Learning Algorithms
This is about the rules that can be established between itemsets and the transactions that contain them: the relation between X and Y, that is, the probability that when you obtain X you also obtain Y. These rules are found by observing the itemsets, and the items they contain, in the database.
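The idea can be shown without any library: given a list of transactions, compute the support of an itemset and the confidence of a rule X → Y. The example transactions below are made up.

```python
# A minimal association-rule sketch: support and confidence computed by hand.
# The transactions are made-up example data.
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk", "butter"},
    {"bread", "milk"},
]

def support(itemset):
    """Fraction of transactions that contain every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(x, y):
    """Probability of obtaining Y given X: support(X union Y) / support(X)."""
    return support(x | y) / support(x)

# Rule: customers who buy bread also buy milk.
print("support({bread, milk}):", support({"bread", "milk"}))
print("confidence(bread -> milk):", confidence({"bread"}, {"milk"}))
```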

Artificial Neural Network Algorithms
Artificial Neural Network algorithms are inspired by the human brain. The artificial neurons are interconnected and communicate with each other. Each connection is weighted by previous learning events, and with each new input of data more learning takes place. Many different algorithms are associated with artificial neural networks, and one of the most important is deep learning, which is especially concerned with building much larger and more complex neural networks. An example of deep learning can be seen in the picture above.
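As a sketch, scikit-learn's MLPClassifier builds a (small) multi-layer network whose connection weights are adjusted on every pass over the data; the dataset, layer sizes and other parameters are illustrative assumptions.

```python
# A minimal multi-layer neural network sketch with scikit-learn.
# Dataset, hidden layer sizes and other parameters are illustrative only.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers; adding more (and larger) layers is what "deep" learning refers to.
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
mlp.fit(X_train, y_train)  # the connection weights are updated as the data is processed

print("test accuracy:", mlp.score(X_test, y_test))
```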

Dimensionality Analysis Algorithms
Dimensionality refers to the number of variables in the data and the dimensions they span. This type of analysis aims to reduce the number of dimensions, and the associated variables, while retaining as much of the information as possible. In other words, it seeks to remove the less meaningful data while preserving essentially the same end result.
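Principal component analysis (PCA) is a common example of this: the sketch below reduces a 64-dimensional dataset to a handful of dimensions while keeping most of the information. The dataset and the number of components are illustrative choices.

```python
# A minimal dimensionality reduction sketch using PCA.
# Dataset and number of components are illustrative only.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)   # 64 features per sample

pca = PCA(n_components=10)            # keep only 10 dimensions
X_reduced = pca.fit_transform(X)

print("original shape:", X.shape, "reduced shape:", X_reduced.shape)
# How much of the original variance (information) the 10 components retain:
print("variance retained:", pca.explained_variance_ratio_.sum())
```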

This concludes the third part of the chatbot series. If you'd like to share your opinion or have any feedback or suggestions, please do so in the comment section or through email. You can reach me at henk.pelk@linkit.nl.
