A moment of realization
Our journey began a few years ago with the realization that current technology cannot fulfill its promise as a path to truly powerful Artificial Intelligence.
Despite many successes – especially in audiovisual media generation and other narrow tasks – it lacks so many attributes of intelligence that it is unlikely similar approaches will still be in use in the distant future.
The problems range from the need for a huge number of examples to learn anything, enormous power consumption, and the inability to adapt to new knowledge instantly or work autonomously, all the way to a complete lack of transparency.
Imagine that running a network with a number of neurons comparable to the human brain would require as much electricity as a large city consumes. That is not something you could one day put in a robotic body.
It would also be enormously expensive (affordable only to wealthy organizations) and harmful to the environment…
Our point of view has also been confirmed by leading figures in the Deep Learning world.
“My view is throw it all away and start again”
“I don’t think it’s how the brain works”
“We clearly don’t need all the labeled data”
– Geoff Hinton, the godfather of Deep Learning
“Deep learning is an amazing technology and hugely useful in itself, but in my opinion it’s definitely not enough to solve AI, [not] by a long shot”
– Demis Hassabis, DeepMind CEO
Thankfully, we had been challenged in the past by projects that restricted the use of cloud servers for machine learning and required autonomous learning and operation on mobile devices.
That opened the door to new ideas and possibilities – and to a crazy idea: we could try to do it better, building AI that is energy-efficient, transparent, and self-learning.
While reading many papers, books, and other materials about neuroscience, it became clear that nature took a very different approach. At the same time, it was evident that we do not need to build something as complex as the human brain, which as a biological organ must do far more than process data – for example, sustain itself and work within the physical limitations of the skull.
A plan without a deadline
As the CEO of AGICortex, I created a plan without a deadline: to read, learn, experiment, and find better ways to one day realize powerful Artificial General Intelligence. Achieving it within 20 years would be a huge success.
It took around 5 years of really hard work to find and code prototype solutions that can mimic the most crucial attributes of human intelligence, such as:
1) autonomous real-time learning and decision making
2) combining multiple data types together in a single architecture
3) explaining its own decisions in natural language or structured data
4) simulating multiple variants of potential actions and picking the best one
5) utilizing only the most useful parts of a potentially huge neural architecture
After building these early prototypes, we secured our first pre-seed investment and began work on our first product.
The operating system for audiovisual AI assistants
We aim to do for AI what Windows did for computers. Our assistants will have rich awareness of their environment and support people across a whole range of tasks.
The first version of the product will target individual users from the general population. In later phases, we also plan to develop a tool for AI professionals, allowing them to achieve even more with our technology.
Both groups will have access to a convenient application with a visual interface to the contents of the neural networks. Our goal is to establish the true meaning of Explainable AI.
This is not even our most ambitious goal – you can read more about the next ones in the summary of our strategy.
The current shape of the product was formed by countless discussions within the team and with external mentors, by design and coding experiments, and by talks with the first potential users.
The initial, complex UI concepts were replaced by simpler ideas, so that almost anyone can interact with Artificial Intelligence – even without coding skills.
It works autonomously, but you can always check how it makes decisions or what it stores in memory. If you need something more, you can extend its capabilities with a new AI skill.
We plan to gradually increase the number of things that can be configured via the graphical user interface, adjusting it to the needs of our users.