
Roksolana Kryshtanovych


Self-supervised Learning, Future of AI

[Figure: self-supervised learning via patch positions]

Self-supervised learning is one of those recent ML techniques that have made waves in the data science community but have so far been flying under the radar as far as the Entrepreneurs and Fortunes of the world go; the general public has yet to learn about the concept, yet many AI practitioners deem it revolutionary.

The paradigm holds vast potential for enterprises too, as it can help tackle deep learning’s most daunting issue: sample inefficiency and the costly training that comes with it.

We strongly believe that you, as a business owner, should acquaint yourself with this inherently complex subject, and we’ll gladly help.

Hand-crafted Feature Learners and Deep Learning Systems: AI Past vs AI Future

These days, when someone mentions AI’s transformative potential, they’re likely speaking about machine learning. And when you’re reading another sensationalist post about the groundbreaking progress in ML, the authors are probably referring to deep learning (and supervised deep learning in particular).

Since hand-crafted features tend to be brittle, the AI community is increasingly embracing deep learning systems for a growing number of tasks; these systems can learn data’s distinctive properties by themselves, given an objective function.

These neural nets aren’t magic, though; they’re just cleverly applied statistics and linear algebra.

The basic build of a deep learning system is a succession of linear and point-wise non-linear operators. A neural net first encodes every input, turning it into a vector (a list of numbers), then multiplies the received values by the weight matrices that make up its layers, passes the resulting vectors through a bank of non-linear functions (such as ReLU), and feeds them to a classifier at the very end of the architecture.
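
To make that succession of operators concrete, here is a minimal sketch of such a forward pass in plain NumPy. The dimensions (a 784-dimensional input such as a flattened 28×28 image, one 128-unit hidden layer, ten classes) are hypothetical, chosen only for illustration:

```python
import numpy as np

def relu(x):
    # Point-wise non-linearity: zeroes out negative values
    return np.maximum(0.0, x)

def softmax(z):
    # Classifier head: turns raw scores into class probabilities
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)

# Hypothetical dimensions: 784-dim input, 128-unit hidden layer, 10 classes
W1, b1 = rng.normal(0, 0.01, (128, 784)), np.zeros(128)
W2, b2 = rng.normal(0, 0.01, (10, 128)), np.zeros(10)

x = rng.normal(size=784)        # the input, encoded as a vector
h = relu(W1 @ x + b1)           # linear operator + point-wise non-linearity
probs = softmax(W2 @ h + b2)    # final linear layer + classifier
print(probs.argmax())           # predicted class
```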

When the machine fails to output what is expected of it (the correct class/label, in supervised learning), data scientists tweak the network’s parameters (all the modules are trainable) until the results are satisfactory. This is done via gradient descent, with the gradients computed by the backpropagation algorithm.
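
As a rough illustration of that training loop, here is a minimal sketch using PyTorch (an assumption; the article names no framework). It runs one gradient-descent step on a stand-in batch of labeled examples, with backpropagation computing the gradients:

```python
import torch
import torch.nn as nn

# Hypothetical setup: a small classifier; every module's parameters are trainable
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(32, 784)           # stand-in batch of encoded inputs
labels = torch.randint(0, 10, (32,))    # stand-in human-generated labels

logits = model(inputs)                  # forward pass
loss = loss_fn(logits, labels)          # how far the output is from what's expected

optimizer.zero_grad()
loss.backward()                         # backprop computes the gradients
optimizer.step()                        # gradient descent tweaks the parameters
print(loss.item())
```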

It turns out computers are exceptionally good at learning functions that map inputs to human-generated labels, under one condition: an enormous amount of labeled data must be fed to them first. There’s another drawback: these models, trained only to sort inputs into categories, don’t learn much about the inherent properties of the inputs themselves. The feedback the machine receives in supervised learning is scarce, so the networks are naturally very sample inefficient.

This creates a significant issue: high-quality data is often hard to come by for a lot of companies and obtaining an annotated dataset can prove too costly an undertaking even for large organizations.

