Different approaches to Machine Learning

AGICortex Team


Continual learning

Getting real benefits from automation with robots requires them to learn independently – without people. To make them fully autonomous, we need to combine multiple approaches to Machine Learning.

Below you will find descriptions of the main approaches to ML: supervised, unsupervised, and reinforcement learning.

But what if we could combine and even extend them, making robots able to learn and operate without our participation?

This capability is precisely what stands between us and a higher robot adoption rate – and the benefits robots could bring to our daily lives.

To learn continually, a machine must deal with streams of newly observed data on its own. It needs to impose structure where none existed before.

When processing audiovisual data, this means segmenting the visible scene into separate objects, or the incoming audio signal into separate sounds or words. Only then is it possible to recognize them and update the memory bank accordingly.
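Below is a minimal sketch of such a memory-bank update in Python. It assumes each segmented object or sound has already been turned into an embedding vector by a perception model; the `MemoryBank` class, the running-average update, and the similarity threshold are illustrative assumptions, not a description of any particular system.

```python
import numpy as np

class MemoryBank:
    """Toy prototype memory: one stored embedding per known concept."""

    def __init__(self, similarity_threshold=0.8):
        self.prototypes = []                 # list of (unit vector, count)
        self.threshold = similarity_threshold

    def observe(self, embedding):
        """Match an incoming embedding to stored prototypes, or add a new one."""
        embedding = embedding / np.linalg.norm(embedding)
        for i, (proto, count) in enumerate(self.prototypes):
            if float(proto @ embedding) >= self.threshold:
                # Known concept: refine its prototype with a running average.
                updated = (proto * count + embedding) / (count + 1)
                self.prototypes[i] = (updated / np.linalg.norm(updated), count + 1)
                return i
        # Unseen concept: remember it without any human label.
        self.prototypes.append((embedding, 1))
        return len(self.prototypes) - 1
```

A matched observation refines the stored prototype; an unmatched one becomes a new entry – learned without any human label.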

Continual learning requires efficient memory management. In the case of robotic devices, it does not end with perception: we also need to define action procedures.

While supervised and unsupervised Machine Learning methods may be extended with sophisticated external memory – for the proper management of available and incoming information – machine activity in response to the state of the environment can be regulated by something more powerful than traditional Reinforcement Learning.

In the biological brain, many more factors influence our behavior than just a reward function modulated by dopamine. We have multiple other neuromodulators that help us generalize our internal response to the state of the external environment.

By extending the set of such numerical parameters, we can allow machines to decide the optimal way to navigate the external environment and their internal memory resources, what has the highest priority at the moment, and how much attention should be paid to various activities.
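As a toy illustration of the idea, the sketch below combines a dopamine-like value estimate with two extra modulatory parameters: an "urgency" signal that sharpens the choice and a "curiosity" signal that rewards rarely tried actions. The names, formulas, and constants are assumptions made for illustration only.

```python
import numpy as np

def select_action(action_values, urgency=0.5, curiosity=0.1, visit_counts=None):
    """Toy neuromodulated action selection (all names/formulas illustrative).

    `action_values` plays the role of a dopamine-like reward estimate,
    `urgency` sharpens or softens the choice (an inverse temperature),
    and `curiosity` adds a bonus for rarely tried actions.
    """
    values = np.asarray(action_values, dtype=float)
    if visit_counts is not None:
        # Curiosity bonus: prefer actions that were explored less often.
        values = values + curiosity / (1.0 + np.asarray(visit_counts, dtype=float))
    logits = values * (1.0 + 4.0 * urgency)   # urgency scales decisiveness
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(np.random.choice(len(values), p=probs))
```

For example, `select_action([0.2, 0.9, 0.4], urgency=0.8, visit_counts=[10, 3, 0])` favours the high-value second action but still occasionally tries the never-visited third one.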

This is how we can make the machines autonomous – by making them more similar to biological intelligence.

Supervised learning

The traditional approach to Machine Learning is lengthy and costly. Moreover, it works only when you have access to rich labeled datasets – or the resources to create them.

Data preparation is a significant part of the job of ML practitioners.

You provide the data and the correct answers, and the ML model learns to recognize the underlying patterns. To make it accurate, you need to supply many examples, so the model can generalize beyond previously seen samples.
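In code, this workflow is short. Here is a minimal supervised-learning example using scikit-learn and its built-in iris dataset (any labeled dataset would work the same way):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Labeled data: measurements (X) paired with correct answers (y).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier().fit(X_train, y_train)

# With enough examples, the model generalizes to unseen samples.
print("accuracy on unseen data:", model.score(X_test, y_test))
```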

If anything changes, you need to repeat the process.

This requires a dedicated maintenance team that takes care of ML models and data preparation.

Supervised learning comes with a set of common problems. You need a lot of labeled examples – which takes time, costs money, and in some cases is extremely hard to achieve.

It works well enough in the digital world, where data is processed in a cloud environment and the incoming information is universal enough to produce satisfying results – recognizing words or generating videos.

But if you deal with the physical world, you need frequent updates, high accuracy, and low latency.

The truth is that supervised learning is simply not suitable for physical machines deployed in varied and frequently changing environments.

Unsupervised learning

When there is no label or correct answer provided by a human supervisor, the ML model needs to rely on the existing knowledge structure.

It needs to consider previously observed data samples and determine how the input data is associated with them.

There are multiple techniques for unsupervised learning; the most popular are various kinds of clustering and neural networks called autoencoders.

Autoencoders capture hidden patterns in data by learning to reconstruct their input: they compress it and then decompress it, tuned so that the reconstructions are satisfying across a large enough set of examples.
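Here is a minimal sketch of that idea in PyTorch, using random tensors as a stand-in for a batch of flattened images; the layer sizes are arbitrary illustrative choices.

```python
import torch
from torch import nn

# A tiny autoencoder: compress 784-dim inputs down to 32 dims, then reconstruct.
encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32))
decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 784))
model = nn.Sequential(encoder, decoder)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(64, 784)            # stand-in for a batch of flattened images
optimizer.zero_grad()
reconstruction = model(x)
loss = loss_fn(reconstruction, x)  # no labels: the input is its own target
loss.backward()
optimizer.step()
```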

Clustering and contrastive learning approaches check how similar or different an input sample is from previously observed ones in order to categorize it.
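For instance, with k-means clustering in scikit-learn, unlabeled samples are grouped purely by similarity, and a new sample is categorized by its nearest cluster centre. The two-group synthetic data below exists only for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled samples: two loose groups in a 2-D feature space.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

# k-means groups the samples by similarity – no correct answers are given.
kmeans = KMeans(n_clusters=2, n_init=10).fit(data)

# A new sample is categorized by the cluster centre it is closest to.
print(kmeans.predict([[4.8, 5.1]]))
```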

This is similar to how people learn – if we do not have clarity about a specific object, we use deduction to define a set of probable answers and make the final decision.

Self-supervised learning is an interesting related method. It also learns from unlabeled data – generating its own supervision signal from the data itself – and needs only a small set of human-annotated data for guidance.

As a result, an ML model requires only a fraction of the data necessary for the supervised approach. Data augmentation techniques may enhance the learning process by transforming the original input into changed versions: re-scaled, rotated, blurred, with altered colors, and so on. In this way, the model is exposed to multiple variants of the existing data and learns to handle more situations during the inference phase.
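A typical augmentation pipeline, sketched here with torchvision, might look like this; the file name `example.jpg` and the specific transform parameters are placeholders, not recommendations:

```python
from PIL import Image
from torchvision import transforms

# Each pass through the pipeline produces a randomly changed variant:
# re-scaled and cropped, flipped, color-shifted, and blurred.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
    transforms.GaussianBlur(kernel_size=5),
])

image = Image.open("example.jpg")              # placeholder input image
variants = [augment(image) for _ in range(8)]  # eight variants of one photo
```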

Because of the cost and effort of obtaining enough labeled data, unsupervised and self-supervised learning mechanisms are crucial to developing accurate ML models. In the constantly changing physical world, they are a must – and probably the most significant factor influencing the robot adoption rate in our daily lives.

Reinforcement learning

Learning to predict the correct label (supervised learning) or the relations between data samples (unsupervised learning) is fairly straightforward. You show the data, and the machine learns to recognize the underlying patterns or relations.

But what if we need to deal with sequences of actions? There may be many ways for things to play out, with various options and numerous steps on the way to completing a task.

Reinforcement Learning is used when a system must learn, and then apply, knowledge about how to behave to reach a target goal. It has been applied to board games such as Chess and Go, and to computer games, where ML models were trained to compete with and ultimately beat human players.

RL is based on a reward concept inspired by the activity of a neurochemical called dopamine. The closer the agent gets to the target, the higher the reward value, which motivates the system to repeat the steps that lead to the desired outcome, even in complex situations and tasks.
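A classic minimal instance of this idea is tabular Q-learning. The toy corridor environment below is invented for illustration: the agent is rewarded only upon reaching the rightmost state, and the update rule gradually reinforces the steps that lead there.

```python
import numpy as np

# Tabular Q-learning on a toy corridor: the agent starts at state 0 and
# receives its only reward (the "dopamine signal") at state 4.
n_states, n_actions = 5, 2           # actions: 0 = step left, 1 = step right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for _ in range(500):                 # episodes
    state = 0
    while state != n_states - 1:
        # Mostly exploit the best known action, occasionally explore.
        if np.random.rand() < epsilon:
            action = np.random.randint(n_actions)
        else:
            action = int(Q[state].argmax())
        next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Reinforce transitions that bring the agent closer to the goal.
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(Q)  # after training, "right" has the higher value in every state
```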

Does your agent need to plan strategically (as in games) or express complex behavior (e.g., multi-part movement)? Then Reinforcement Learning can teach it to act optimally in specific situations.

RL can help deal with the complexity of the physical world, support autonomous decisions, and allow for real-time learning (within the task scope). While other Machine Learning techniques support perception, RL provides the ability to initiate actions that lead toward a specified goal.