You can find Part 1 of this post here:
Unsupervised Learning aims to find patterns like similarity, structure, or correlation in our data. Thus, we don't need to select an attribute of our dataset as a target output. Typical Unsupervised Learning tasks include clustering, dimensionality reduction, and association rule discovery.
Clustering is an unsupervised learning task that groups instances from a dataset according to their similarity. In clustering, we don't need to provide examples of which group a sample belongs to. However, depending on the chosen clustering algorithm, we need to set parameters like the number of clusters to be found and the similarity criterion used to compare the instances.
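To make this concrete, here is a minimal sketch of k-means, one of the simplest clustering algorithms. The data and the `kmeans` helper are invented for illustration: we only set the number of clusters (`k`) and a similarity criterion (Euclidean distance), never the group labels themselves.

```python
import numpy as np

def kmeans(points, k, n_iters=100, seed=0):
    """A minimal k-means sketch: group points by Euclidean similarity."""
    rng = np.random.default_rng(seed)
    # Initialize centroids by picking k distinct random points.
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iters):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(points[:, None] - centroids[None, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        centroids = np.array([points[labels == c].mean(axis=0) for c in range(k)])
    return labels, centroids

# Toy data with two obvious groups: values near 0 and values near 10.
data = np.array([[0.0], [0.5], [1.0], [10.0], [10.5], [11.0]])
labels, centroids = kmeans(data, k=2)
```

The algorithm discovers the two groups without ever being told which point belongs where; in practice, a library implementation such as scikit-learn's `KMeans` adds refinements like smarter initialization and convergence checks.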
Clustering has many real-world applications. Marketing uses it to find groups of customers who share interests and preferences. Data mining uses it to group documents by similar topics. These applications extend to preparation tasks, like exploratory data analysis or sampling. Whenever we don't have a clear expected output, clustering helps us gain insights about our dataset.
We call datasets with a large number of attributes high-dimensional datasets. As the number of a dataset's dimensions increases, analyzing and processing it becomes harder, making many learning algorithms inefficient. This phenomenon is known as the curse of dimensionality.
Dealing with the curse of dimensionality involves reducing the number of attributes of a dataset. However, manually selecting the most representative attributes of a high-dimensional dataset is impractical. Thus, we can use dimensionality reduction algorithms to rank the most significant dimensions of a dataset automatically. Once we have sorted the attributes by their rank, we can select the most important ones and reduce our dataset's dimensionality.
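A common way to do this automatically is Principal Component Analysis (PCA). The sketch below, with invented data and a hypothetical `top_components` helper, ranks directions by how much variance they explain and keeps only the top ones:

```python
import numpy as np

def top_components(X, n_components):
    """A minimal PCA sketch: rank directions by explained variance, then project."""
    X_centered = X - X.mean(axis=0)
    # SVD returns singular values sorted from most to least variance explained,
    # which gives us the ranking of the most significant dimensions for free.
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    explained_variance = S**2 / (len(X) - 1)
    # Keep only the most important directions to reduce dimensionality.
    return X_centered @ Vt[:n_components].T, explained_variance

rng = np.random.default_rng(42)
# 100 samples with 5 attributes, but most variance lives in the first attribute.
X = rng.normal(size=(100, 5)) * np.array([10.0, 1.0, 0.5, 0.2, 0.1])
X_reduced, variance = top_components(X, n_components=2)
```

The reduced dataset keeps the two directions that carry most of the information, shrinking five attributes down to two without hand-picking any of them.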
In Reinforcement Learning, we train an agent to interact with an environment. Typical reinforcement learning applications are autonomous vehicles, intelligent game players, and recommender systems.
In general, the Reinforcement Learning paradigm is useful when training data is not available. Instead of feeding our training algorithm with training data, we run many simulations where the agent performs actions that alter the environment's state. Based on the action and the new state, the environment returns a reward signal to indicate if the action was positive, negative, or neutral.
An exciting characteristic of Reinforcement Learning algorithms is that they can learn not just from immediate rewards but also from future ones. This capability is especially interesting for environments that require strategic planning. For example, it's common practice in chess to give up pieces in exchange for a better position or victory. Another good example is stock trading. Often we need to absorb some temporary losses to achieve higher profits in the long term. Reinforcement Learning algorithms are suitable for these cases because the reward for an action propagates to the previous ones, so the agent is capable of making long-term decisions.
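This reward propagation can be sketched with Q-learning in a tiny invented environment: a corridor of five states where only reaching the last state pays a reward. Every intermediate move earns nothing, yet after many simulated episodes the agent learns that moving right from the very first state is worthwhile, because the final reward propagates backwards through the Q-value updates. All names and parameters here are illustrative assumptions, not a specific library's API.

```python
import random

# A tiny corridor: states 0..4; only reaching state 4 yields reward 1.
N_STATES, ACTIONS = 5, ["left", "right"]
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(state - 1, 0) if action == "left" else state + 1
    # Reward only at the goal; every other move is neutral.
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(500):  # run many simulated episodes
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # The update blends the immediate reward with the discounted future value,
        # so the goal's reward gradually propagates back to earlier states.
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt
```

After training, the Q-value for moving right at state 0 exceeds the one for moving left, even though that move itself was never rewarded: the agent has learned to value an action for the future it leads to.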
Choosing the correct Machine Learning paradigm is crucial for the success of your project. We can solve the same business problem using different approaches or even a combination of them. Thus, we need to understand each paradigm's details and have a deep comprehension of the business problem we need to solve to make the right choice.
Introduction to Reinforcement Learning: the Frozen Lake Example: