
Company details

SIA AGICortex

Audeju street 15-4, LV-1050 Riga, Latvia

contact@agicortex.com

A new type of Machine Learning

Most AI solutions we see in the world are based on the same procedure: collect a dataset → train a model → evaluate → re-train to optimize the results. This is called supervised learning, because we guide the AI with our answers.

But what are the other ways? In Machine Learning we can distinguish the following types of solutions:

In supervised learning we provide many examples to teach machines to recognize the most useful patterns common to specific elements, such as the shapes of digits we want to read or the physical objects in the world.

If we do a good job, the result allows for some degree of generalization, so it becomes possible to recognize previously unseen objects of the same class.

In reality, supervised learning often suffers from so-called “model degradation”, where a previously prepared solution no longer performs well on real-world data. That is why ML solutions often need to be updated (re-trained).
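As a toy illustration of the supervised procedure described above, here is a minimal sketch, assuming nothing beyond plain Python: a nearest-centroid classifier that averages the labeled examples of each class and assigns a new point to the closest average. The data and names are invented for the sketch; real systems use far richer models.

```python
# Supervised-learning sketch: a nearest-centroid classifier trained on
# labeled 2D points (a toy illustration, not a production model).

def train(examples):
    """examples: list of ((x, y), label). Returns one centroid per class."""
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Assign the label of the closest class centroid."""
    px, py = point
    return min(centroids,
               key=lambda label: (centroids[label][0] - px) ** 2 +
                                 (centroids[label][1] - py) ** 2)

# Labeled dataset: two small clusters standing in for two object classes.
data = [((0.0, 0.1), "cat"), ((0.2, 0.0), "cat"),
        ((1.0, 1.1), "dog"), ((0.9, 1.0), "dog")]
model = train(data)
print(predict(model, (0.1, 0.2)))  # a previously unseen "cat"-like point
```

Because the model only stores averages, it generalizes to unseen points near a known class, but it will degrade exactly as described above if the real-world data drifts away from the training examples.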

With unsupervised learning we don’t say anything about the data and provide no labels. The machine’s goal is to find underlying structures and build a meaningful representation in the form of encoded vectors or clusters of similar data.

The ability to work without human supervision is a long-term goal of many AI researchers. However, unsupervised learning methods are not yet as developed as supervised ones.
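A minimal sketch of the clustering idea mentioned above, again in plain Python: a two-cluster k-means that receives only raw numbers, no labels, and still recovers the two underlying groups. The deterministic initialization (min and max) is a simplification chosen for the sketch.

```python
# Unsupervised-learning sketch: two-cluster k-means on unlabeled 1D data.
# No labels are given; the algorithm discovers the group structure itself.

def kmeans_1d(points, iters=20):
    """Cluster a list of numbers into two groups; returns (centers, labels)."""
    c0, c1 = min(points), max(points)  # simple deterministic initialization
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        g0 = [p for p in points if abs(p - c0) <= abs(p - c1)]
        g1 = [p for p in points if abs(p - c0) > abs(p - c1)]
        # Update step: move each center to the mean of its group.
        c0 = sum(g0) / len(g0)
        c1 = sum(g1) / len(g1)
    labels = [0 if abs(p - c0) <= abs(p - c1) else 1 for p in points]
    return (c0, c1), labels

data = [0.1, 0.2, 0.0, 5.0, 5.2, 4.9]  # unlabeled measurements
centers, labels = kmeans_1d(data)
print(labels)  # the two underlying groups, found without supervision
```

The cluster indices (0 and 1) are meaningless on their own; attaching human-readable names to such discovered structures is exactly where labels come back into play.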

Reinforcement learning is very different from the two previous types. It is used for tasks where the machine needs to pursue a goal and is rewarded for progress. It tries to maximize the sum of future rewards, and therefore optimizes its own behavior by forming a kind of plan.

Unlike supervised and unsupervised learning methods, which in many cases are used for processing and generating sensory data (images, videos, sounds, etc.), RL is focused on action.

It can help build an algorithm whose goal is to win a game, or to learn to move the virtual or physical parts of an agent’s body.

It learns during operation, from experience rather than from a carefully created dataset. This makes it one of the most promising areas of future research, with many potential benefits but also some drawbacks.
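The reward-maximizing loop described above can be sketched with tabular Q-learning, the classic textbook algorithm, on a made-up 4-state corridor: the agent starts at state 0, reaching state 3 pays a reward of 1, and it learns purely from interaction, with no dataset. All parameters here are illustrative choices.

```python
import random

# Reinforcement-learning sketch: tabular Q-learning on a 4-state corridor.
# The agent learns from trial and error which action to take in each state.

def q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    n_states, goal = 4, 3
    q = [[0.0, 0.0] for _ in range(n_states)]  # q[state][action]; 0=left, 1=right
    for _ in range(episodes):
        s = 0
        while s != goal:
            # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: q[s][x])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == goal else 0.0
            # Q-update: nudge the estimate toward reward + discounted future value.
            q[s][a] += alpha * (r + gamma * max(q[s2]) * (s2 != goal) - q[s][a])
            s = s2
    return q

q = q_learning()
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(3)]
print(policy)  # the learned plan: move right in every non-goal state
```

The discount factor gamma is what makes the agent weigh the *sum of future rewards*: the value of moving right from state 0 is roughly gamma squared, because the reward is two steps away.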

And now it is time for continuous learning, the most mysterious type, as it exists mostly at the concept stage of various projects. This is the area in which AGICortex specializes. We use it in our solutions to continuously grow a machine’s knowledge and its ability to deal with the constantly changing real world through its actions.

A few years ago we realized that to move forward, ML/AI needs to combine the benefits of the various types of learning and add the missing ones. The human brain contains multiple functional areas that all use neurons, but serve different purposes.

In our variant of continuous learning you don’t need separate training sessions. The data is processed in real time, learning takes place, and the source may be discarded without being saved, ensuring the privacy of people within range of the camera or microphone.

You may provide labels for the data at any time, but the algorithms are able to learn without them, just like humans, who can observe something and understand it without knowing its name. Once they learn the name, they simply attach it to what they have already learned.
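To make the label-at-any-time idea concrete, here is a generic online-learning sketch, our own illustration and not AGICortex’s actual algorithm: prototypes are updated one observation at a time with a running mean, the raw observation is discarded immediately after the update, and a human-readable name can be attached to an already-learned prototype later. All class names and parameters are invented for the sketch.

```python
# Continuous-learning sketch (a generic illustration, NOT AGICortex's system):
# learn cluster prototypes one observation at a time, keep no raw data,
# and allow a label to be attached to a learned prototype at any time.

class OnlinePrototypes:
    def __init__(self, radius=1.0):
        self.radius = radius
        self.protos = []  # each: {"center": (x, y), "count": n, "label": ...}

    def observe(self, point):
        """Incorporate one observation, then forget it (no dataset is kept)."""
        px, py = point
        for p in self.protos:
            cx, cy = p["center"]
            if (cx - px) ** 2 + (cy - py) ** 2 <= self.radius ** 2:
                n = p["count"] + 1  # running-mean update of the prototype
                p["center"] = (cx + (px - cx) / n, cy + (py - cy) / n)
                p["count"] = n
                return p
        p = {"center": point, "count": 1, "label": None}  # unnamed concept
        self.protos.append(p)
        return p

    def name(self, point, label):
        """Attach a label to whichever prototype this point falls into."""
        self.observe(point)["label"] = label

    def recognize(self, point):
        """Keep learning while recognizing; return the prototype's label."""
        return self.observe(point)["label"]

m = OnlinePrototypes()
for pt in [(0.0, 0.0), (0.1, 0.1), (5.0, 5.0), (5.1, 4.9)]:
    m.observe(pt)            # learning happens without any labels
m.name((0.05, 0.05), "mug")  # a label arrives later and is simply attached
print(m.recognize((0.1, 0.0)))
```

Note that `recognize` also calls `observe`, so the model keeps adapting during operation; and since `observe` never stores the raw point, the source data can be discarded immediately, which mirrors the privacy property described above.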

Although some people try, it would be hard to contain the real world in a dataset. We believe in a future where machines are more like humans: quickly adapting and most effective at dealing with the nearby environment.

Able to combine multiple types of data, work autonomously, and explain their own decisions and behavior.

The future of Machine Learning is automated and equipped with continuous learning ability.

If you are curious about this topic, there is a free book that summarizes the techniques related to Automated Machine Learning:

Automated Machine Learning: Methods, Systems, Challenges by Frank Hutter, Lars Kotthoff, and Joaquin Vanschoren