Towards Assumption-free AI
The future of artificial intelligence
In July 2019, we published our White Paper on the Future of Artificial Intelligence, in which we describe the potential we see in interdisciplinary research between mathematics and theoretical physics, informatics, and neuroscience. Our starting point is the (old) hypothesis of neuroscientists that a single, still-mysterious algorithm is executed in a highly parallel fashion in the brain’s neocortex (hence ‘cortical algorithm’) and that its central task is to create so-called invariant representations – you could also call them ‘abstractions’ – of the things that surround us and of the sensory perceptions they create. The conjecture that this algorithm exists is supported by the anatomical homogeneity of the neocortex and by its impressive plasticity, i.e. the ability of cortical tissue to adapt to very different tasks such as visual or auditory processing or motion control.
A central proposition of our White Paper is that, given the enormous anatomical complexity of the brain, it seems unlikely that one can understand the cortical algorithm by observation; rather, one must invent it. And the starting point for this endeavor can only be to (mathematically) define the problem which the algorithm solves. Vaguely speaking, this problem seems to be learning ‘entities’ (which could be everyday things like physical objects, words, songs, etc.) and their allowed transformations (or ‘invariances’, e.g. changes in position or color) from almost arbitrary types of data input streams. That means the algorithm should work on visual information as well as on auditory perception, and one can speculate whether it would even work for exotic non-human senses (like the echolocation of a bat, for example). It also implies that we should not engineer our algorithm to perform in a specific domain. Even seemingly trivial assumptions – such as the fact that images are two-dimensional or that we perceive the three color channels red, green and blue – are specific to certain types of sensory perception and should ideally be encoded in the algorithm neither explicitly nor implicitly. I propose the term ‘assumption-free AI’ for this approach, though it is a slight overstatement: certain very basic assumptions about the world will still be necessary, e.g. that it (approximately) preserves time continuity or that it consists of hierarchically structured entities.
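As a toy illustration of what ‘invariant representation’ means here (our own example, not taken from the White Paper): a histogram of pixel values ignores *where* each pixel sits, so it is unchanged under translations of an image, whereas the raw pixel array is not. The sketch below, assuming grayscale images and cyclic shifts as the ‘allowed transformation’, makes this concrete:

```python
import numpy as np

# Illustrative toy only: a gray-value histogram as a representation that is
# invariant under translation, one of the 'allowed transformations' above.

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8))

# The allowed transformation: a cyclic shift (translation) of the image.
shifted = np.roll(image, shift=(2, 3), axis=(0, 1))

def histogram_representation(img, bins=16):
    """Map an image to a descriptor that ignores pixel positions."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    return hist

# The raw pixel representation changes under the transformation ...
assert not np.array_equal(image, shifted)

# ... but the histogram representation does not: it is invariant.
assert np.array_equal(histogram_representation(image),
                      histogram_representation(shifted))
```

A histogram is of course far too crude for real perception – it is also invariant to transformations that *should* change the representation – but it shows the pattern: learning entities means finding representations that stay fixed exactly under the entity’s allowed transformations.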
Taking these considerations into account, our research strategy can be summed up as follows: search for statistical principles that can uncover entities and invariances in a wide range of possible data streams, implement those principles as software prototypes to test them, and finally check whether they match experimental evidence from neuroscience.
Now, three months after we published this bold approach in our White Paper, our AI Research group at Merck KGaA, Darmstadt, Germany, is on board and actively pursuing the strategy described above – and slowly but surely this vision is coming to life. In this article, we share a few preliminary results which give us hope that our strategy – as philosophical as it may appear – will lead to practically useful results.
One of the most basic tasks that the conjectured cortical algorithm needs to perform is segmenting data: recognizing an entity (like a physical object or a spoken sentence) implies that we can distinguish this entity from all the other things around us. In the case of vision, this means that we can distinguish between an object and its background. In the case of spoken language, it is our ability to pick out the words of a conversation from the background noise at a party.
We have now devised an algorithm that performs this type of data segmentation and comes very close to being assumption-free (in the sense described above). The algorithm consists of a single layer of artificial ‘neurons’ equipped with an additional mechanism that is unusual in the machine-learning field but which we believe might be biologically plausible. The algorithm works in a completely unsupervised way and has been trained (only) on a set of natural-scene photos.
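To give a flavor of how a single layer of unsupervised units can induce a segmentation at all, here is a generic sketch – emphatically *not* our actual algorithm, whose additional mechanism is not disclosed in this article. It uses plain competitive learning (winner-take-all units that learn patch prototypes); segmenting then means labeling each image patch with the index of its winning unit:

```python
import numpy as np

# Generic sketch of unsupervised single-layer segmentation via competitive
# learning. This stands in for the undisclosed mechanism of the algorithm
# described above; it is an illustration of the general idea only.

def extract_patches(image, size=4):
    """Cut a grayscale image into non-overlapping size x size patches."""
    h, w = image.shape
    return np.array([image[i:i + size, j:j + size].ravel()
                     for i in range(0, h - size + 1, size)
                     for j in range(0, w - size + 1, size)])

def train_layer(patches, n_units=2, epochs=10, lr=0.1):
    """Unsupervised competitive learning on a single layer of units."""
    # Farthest-point initialization keeps the prototypes distinct.
    weights = [patches[0].astype(float).copy()]
    for _ in range(n_units - 1):
        dists = np.min([np.linalg.norm(patches - w, axis=1) for w in weights],
                       axis=0)
        weights.append(patches[np.argmax(dists)].astype(float).copy())
    weights = np.array(weights)
    for _ in range(epochs):
        for x in patches:
            winner = np.argmin(np.linalg.norm(weights - x, axis=1))
            weights[winner] += lr * (x - weights[winner])  # pull winner to input
    return weights

def segment(patches, weights):
    """Label each patch by its winning unit: a crude segmentation map."""
    return np.array([np.argmin(np.linalg.norm(weights - x, axis=1))
                     for x in patches])

# A synthetic 'scene': a bright square 'object' on a dark background.
image = np.zeros((16, 16))
image[4:12, 4:12] = 1.0

patches = extract_patches(image)  # 16 patches of 4x4 pixels
labels = segment(patches, train_layer(patches))
# Object patches receive a different label than background patches.
```

Even this bare-bones version already separates figure from ground without any labels or domain knowledge; the interesting question – which our algorithm addresses with its additional mechanism – is how to make such a layer generalize across very different kinds of input.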
We have tested the algorithm on different types of images and found that it exhibits interesting properties, which seem to be novel in this combination, especially given the unsupervised and assumption-free approach. The two figures above show different examples of input images (top row) together with the respective output (bottom row), where the color coding indicates the segmentation of the image. The algorithm not only creates a meaningful segmentation of the photographic images, it also generalizes to different abstractions of the image in a way that seems natural to the human eye.
In its present form, the algorithm is, at best, a small component of the hypothetical cortical algorithm, but it demonstrates that the idea of ‘assumption-free AI’ can inspire new approaches with interesting outcomes. We are thrilled to see where this journey will take us, what new capabilities we can teach to a computer, and what we might ultimately learn about the human brain.