Rory Murphy for APIDNA

Simplifying the Fundamentals of Machine Learning

With the recent advancements in Artificial Intelligence through the emergence of chatbots like ChatGPT, you’ve likely heard the term Machine Learning being discussed. It is often misunderstood and confused with Artificial Intelligence itself, despite having been around for decades and serving as the backbone of many modern applications. Whether it’s Google search, YouTube suggestions, or your email spam folder, machine learning is working away behind the scenes.

What is Machine Learning?

The easiest way to understand the role of machine learning is to think of it as the underlying technology behind artificial intelligence. Modern artificial intelligence aims to model the human brain and its behaviour, and machine learning provides the mathematical and logic-based machinery for that behaviour.


Simply put, machine learning enables computers to perform pattern recognition: algorithms learn from example data how to complete a task, without requiring explicit instructions for how to do it. Each task requires a different set of algorithms, which detect the patterns involved in completing that specific task.

Let’s begin then by taking a deeper look into the building blocks behind machine learning.

The Building Blocks of Machine Learning

There are four main ingredients of machine learning:

  • Data: the input values.
  • Tasks (Algorithms): problems that require a mapping from data to desired outputs (insights).
  • Features: characteristics of the data used to describe domain objects.
  • Models: encode the required task mapping.


Data: The Fuel

At the heart of every machine learning model lies data. Data is the fuel that powers machine learning algorithms. Developers work with two main types of data in machine learning: labelled and unlabelled.

Labelled data comes with a tag or description. For instance, a dataset of images of cats and dogs where each image is labelled as “cat” or “dog.”
Unlabelled data, on the other hand, lacks these tags. It’s raw data that hasn’t been categorised or identified. This could be the same collection of cat and dog images, but with no labels attached.
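
As a rough sketch in Python (with made-up values), the difference looks like this:

```python
# Labelled data: each example is paired with a tag ("cat" or "dog").
labelled_data = [
    ({"height_cm": 25, "ear_length_cm": 4}, "cat"),
    ({"height_cm": 60, "ear_length_cm": 10}, "dog"),
]

# Unlabelled data: the same kind of examples, but with no tags attached.
unlabelled_data = [
    {"height_cm": 30, "ear_length_cm": 5},
    {"height_cm": 55, "ear_length_cm": 9},
]

for features, label in labelled_data:
    print(features, "->", label)
```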

Features: Data’s Building Blocks

Features are the individual measurable attributes of each item in a dataset. For example, in a dataset of cats and dogs, features could include height, colour, or ear length. Features like these serve as the inputs that the model uses to make predictions or decisions.

Features are crucial: without them, pattern recognition is impossible, especially if the data is unlabelled to begin with.
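
For illustration, here is a minimal, hypothetical sketch of turning raw records into numeric feature vectors that a model can consume (the feature names and values are invented):

```python
# Each animal is described by the same ordered set of features.
feature_names = ["height_cm", "ear_length_cm", "weight_kg"]

records = [
    {"height_cm": 25, "ear_length_cm": 4, "weight_kg": 4.0},
    {"height_cm": 60, "ear_length_cm": 10, "weight_kg": 20.0},
]

# A feature vector is just the feature values in a fixed order.
feature_vectors = [[r[name] for name in feature_names] for r in records]
print(feature_vectors)  # [[25, 4, 4.0], [60, 10, 20.0]]
```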

Types of Tasks (or Algorithms): The Brains

Algorithms are the core of machine learning. These are sets of rules and procedures that guide the model in processing and understanding data. They make sense of the data, detect patterns, and make predictions or decisions based on that information.

These are the different types of tasks that are used in machine learning:

  • Classification: Separating data into distinct groups.
    • An example of binary classification would be distinguishing people at risk of heart disease (+) from those not at risk (−).
  • Regression: Mapping data items to real values.
    • e.g. Quantifying the risk of heart disease on the basis of personal health records.
  • Clustering: Separating data into different clusters/concepts on the basis of their characteristics.
    • e.g. Grouping people according to their genetic characteristics.
  • Collaborative filtering: Making recommendations based on associations between similar users.
    • e.g. “Recommending” treatments on the basis of patient records & diseases of similar patients.
  • Dimensionality reduction: Reducing data to fewer variables, often so it can be visualised.
    • e.g. Projecting high-dimensional data onto two axes to plot a graph.
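
To make the regression task concrete, here is a minimal sketch of fitting a straight line by least squares in plain Python (toy numbers, not real health data):

```python
# Toy regression: map a single input (e.g. age) to a real-valued risk score.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # here the true relationship is y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form simple linear regression: slope and intercept.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def predict(x):
    """Predicted value for a new, unseen input."""
    return slope * x + intercept

print(predict(5.0))  # 10.0
```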

Models: The Process

A machine learning model is the result of an algorithm applied to a dataset. It represents the machine’s understanding of the underlying patterns in the data. Think of a model as a student who learns from a teacher (the algorithm) using textbooks (the dataset) to gain knowledge about a specific subject.

Types of Machine Learning

Now that we’ve laid the groundwork, let’s explore the various types of machine learning. Machine learning can be categorised into three main types: supervised, unsupervised, and reinforcement learning.


Supervised Learning: Learning from Labelled Data

In this approach, the algorithm is given a human-labelled dataset, and its task is to learn the underlying patterns that link the features to the labels. It does this by taking a set of inputs with the corresponding correct outputs, and learns by comparing its own predictions against those correct outputs to find errors. Once the model has learned, it can make predictions on new, unseen data.

This type of machine learning is commonly used for tasks where the correct output is known for the example inputs. Supervised learning is used for tasks such as image classification, spam email detection, and language translation.
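
As a minimal sketch of the supervised idea, here is a toy one-nearest-neighbour classifier in plain Python (the features and values are invented for illustration):

```python
# Labelled training data: (height_cm, ear_length_cm) -> label.
training = [
    ((25, 4), "cat"),
    ((28, 5), "cat"),
    ((60, 10), "dog"),
    ((55, 9), "dog"),
]

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(features):
    """Predict the label of the closest training example."""
    _, label = min(training, key=lambda pair: distance(pair[0], features))
    return label

print(predict((27, 4)))   # cat
print(predict((58, 11)))  # dog
```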

Unsupervised Learning: Exploring the Unknown

In unsupervised learning, the algorithm deals with unlabelled data. It must identify patterns, clusters, or structures within the data without any predefined guidance. This approach is often used for tasks such as data clustering, anomaly detection, and recommendation systems, where data is categorised into groups with other similar data points.
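
As a minimal sketch of the unsupervised idea, here is a toy one-dimensional k-means clustering loop in plain Python (invented data with two obvious groups):

```python
# Unlabelled 1-D data: two natural groups, around 2 and around 10.
points = [1.0, 2.0, 3.0, 9.0, 10.0, 11.0]
centres = [0.0, 12.0]  # initial guesses for the two cluster centres

for _ in range(10):  # a few refinement rounds is enough here
    # Assign each point to its nearest centre.
    clusters = [[], []]
    for p in points:
        nearest = min(range(2), key=lambda i: abs(p - centres[i]))
        clusters[nearest].append(p)
    # Move each centre to the mean of its assigned points.
    centres = [sum(c) / len(c) for c in clusters]

print(centres)  # [2.0, 10.0]
```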

However, the choice isn’t always between full supervision and none; learning can also be semi-supervised. This is where only a small subset of the data is labelled. It becomes particularly relevant when labelling the data is expensive, since labelling could otherwise bottleneck the efficiency of the entire workflow.

Reinforcement Learning: Learning Through Interaction

Reinforcement learning is heavily inspired by reinforcement theory in psychology: the idea of learning by doing. The algorithm interacts with an environment, taking actions to achieve a specific goal. It receives feedback in the form of rewards or penalties based on its actions. Over time, it learns to make decisions that maximise its rewards, producing the desired outcome.
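
As a minimal sketch of this reward-driven loop, here is a toy two-armed bandit learner in plain Python (the reward values are invented, and real environments are far noisier than this):

```python
import random

random.seed(0)

# A two-armed bandit: each action pays a fixed reward, unknown to the learner.
true_rewards = [0.2, 0.8]

estimates = [0.0, 0.0]  # learned value estimate for each action
counts = [0, 0]         # how many times each action has been tried
epsilon = 0.1           # exploration rate

# Try each action once so every estimate starts from real feedback.
for action in range(2):
    counts[action] = 1
    estimates[action] = true_rewards[action]

for step in range(1000):
    # Mostly exploit the best-looking action, occasionally explore.
    if random.random() < epsilon:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: estimates[a])
    reward = true_rewards[action]  # feedback from the environment
    counts[action] += 1
    # Incremental average: nudge the estimate towards the observed reward.
    estimates[action] += (reward - estimates[action]) / counts[action]

best_action = max(range(2), key=lambda a: estimates[a])
print(best_action)  # 1
```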

This type of machine learning is prevalent in fields like robotics and game playing. It’s the technology behind self-driving cars, game-playing AI, and even virtual assistants.

The Machine Learning Workflow

To bring the fundamentals of machine learning into sharper focus, it’s essential to understand the typical workflow involved in developing a machine learning application.

Let’s dive into the key steps!


Data Collection and Pre-Processing

The first step is gathering data relevant to the problem you want to solve. There are many existing datasets already out there that could provide you with the exact data required to train your model. The quality of your data is crucial. Inaccurate or biased data can lead to incorrect or unfair predictions.

Although in principle, models can uncover patterns in any data, in practice it is easier if this data is structured and pre-processed by applying some human intelligence:

  • Input normalisation rescales values onto a more sensible, comparable scale, which also aids visual inspection of the data, for example when the emerging patterns are logarithmic.
  • Imputing is simply filling in data that is missing from the dataset.
  • Feature construction and selection is when domain knowledge is incorporated to select features from the data that are likely to be useful, based on visual inspection and consideration of the task being completed.
  • Dimensionality reduction can be useful for complex data: many raw variables are reduced to a few new variables that represent the multi-dimensional data in a more meaningful way and account for most of the variance.
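
Two of these steps can be sketched in a few lines of plain Python. Here is mean imputation followed by min-max normalisation, with toy values:

```python
# Toy dataset with a missing value (None) in one record.
heights = [150.0, None, 170.0, 180.0]

# Imputation: fill the gap with the mean of the observed values.
observed = [h for h in heights if h is not None]
mean_height = sum(observed) / len(observed)
filled = [h if h is not None else mean_height for h in heights]

# Normalisation: rescale values onto the range [0, 1].
lo, hi = min(filled), max(filled)
normalised = [(h - lo) / (hi - lo) for h in filled]

print(filled)
print(normalised)
```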

Model Selection

Once you have your data and features ready, you must choose an appropriate algorithm to apply to your problem. The choice of algorithm depends on the nature of the task, the data, and your desired outcomes, and sometimes the simplest models are the most suitable.

Overly complex models might overfit the data, meaning they perform well on the training data but poorly on new data. Simpler models might underfit the data and fail to capture important patterns. Machine learning models can inherit biases present in the data they are trained on. Ensuring fairness and ethical considerations are vital, especially when deploying machine learning in sensitive areas like finance or healthcare.

The simplest models are functions that return a constant number, visually represented as a horizontal straight line. However, such models are likely to have a large degree of error and bias for most tasks.

More complex models that utilise more data points tend to fit training samples much better. However, if the model becomes “too” complex, small changes in the training data can lead to large differences in the trained model, known as high variance.

Training and Testing

In supervised learning, you’ll split your dataset into two parts: the training set and the testing set. The model learns from the training set and is then tested on the testing set to evaluate its performance. This can be based on simple metrics such as accuracy (the proportion of correctly predicted instances) and error (the proportion of incorrectly predicted instances), or more meaningful metrics such as precision (the proportion of positive predictions that are correct) and recall (the proportion of positive instances that are predicted).
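
These metrics are simple to compute by hand. Here is a minimal sketch in plain Python, with invented labels and predictions:

```python
# Ground-truth labels and a model's predictions on a held-out test set.
actual    = [1, 1, 1, 0, 0, 0, 1, 0]
predicted = [1, 1, 0, 0, 1, 0, 1, 0]

tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
correct = sum(1 for a, p in zip(actual, predicted) if a == p)

accuracy = correct / len(actual)  # proportion predicted correctly
precision = tp / (tp + fp)        # of predicted positives, how many are correct
recall = tp / (tp + fn)           # of actual positives, how many were found

print(accuracy, precision, recall)  # 0.75 0.75 0.75
```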

Once you have a trained and well-performing model, you can deploy it in a real-world environment, where it can make predictions or decisions based on new, unseen data.

Conclusion

Machine learning is a powerful technology with the potential to revolutionise industries and create innovative solutions to complex problems. For developers, understanding the fundamentals is the first step in harnessing the full potential of this field.

As you embark on your journey into machine learning, remember that learning by doing is one of the most effective approaches. Experiment, make mistakes, and continuously expand your knowledge. Machine learning is a dynamic field, and staying up-to-date with the latest advancements is essential.

At APIDNA we’re thrilled to be introducing our cutting-edge AI-powered API integration platform, designed to simplify the integration process like never before. With our platform, you’ll be able to accomplish complex integrations in a matter of minutes, freeing up your valuable time to focus on what truly matters—creating innovative software that will shape the digital landscape in the years to come.

So, let us be your trusted partner in your API integration endeavours. Together, we can unlock the full potential of your projects and drive them towards success. Sign up to the APIDNA mailing list today to be the first to hear about our launch and updates!

Reading List:

Fundamentals of Machine Learning

Microsoft Fundamentals of Machine Learning Course

Machine Learning – Fundamentals
