As if out of nowhere, ChatGPT emerged last year into the consciousness of a wider public that vacillates between fear and fascination. It cannot be overlooked that this generative text AI has already taken a firm place in many people's lives: with students who use ChatGPT for homework; in cars, where Mercedes and Volkswagen now deploy it as an extended voice assistant; in politics, with some inglorious examples; and in companies that use it to speed up their processes.
Not only are adoption rates exceptionally high, but the pace at which AI itself develops is also astonishing. The models hallucinate less, answer more carefully, pass even the most difficult tests with flying colors, and create increasingly realistic images and videos that can no longer be distinguished from reality. And all within just a few months. People who only recently insisted that "AI will never be able to do that" are proven wrong within weeks or months.
But why is AI developing so rapidly? First of all, the field itself is already several decades old. The founders of artificial intelligence in the 1950s assumed that a few students, guided by professors, could recreate the human brain over a single summer; disillusionment soon followed. Several AI winters, many setbacks, and wasted billions later, breakthroughs came in quicker succession from the early 2010s onward: the ImageNet competition, AlphaGo, self-driving cars, and then ChatGPT, which caught the public's attention and imagination.
In other words, AI did not appear overnight; it suffered a number of setbacks on its decades-long journey to where it is today. Bill Buxton, a senior scientist at Microsoft, calls this the Long Nose of Innovation (I wrote about it in my book Foresight Mindset, among others). Every technological innovation announces itself long before its breakthrough, flying under the radar, visible only to experts. An innovation typically exists for at least 15 years before it rises above the radar, so to speak, and becomes visible to a wider audience, and then takes another five years to unfold its effects.

We are therefore just above the radar, in the initial phase in which the technology is beginning to take effect. This visibility draws in more resources: more people want to be part of it and seize the opportunities the new technology offers, and ever larger sums of money are being invested.
It also helps that many algorithms and methods known for decades, not least neural networks and deep learning, could only prove their power once computing speed and the amount of available data had caught up. By winning the ImageNet competition in 2012, AlexNet showed experts that this combination was the most promising route to usable AI, and the whole field began to reorient itself.
Critics will also point out that a lack of regulation, the disregard of copyrights, and bias in the data have contributed to what they see as a reckless and rapid development that leaves collateral damage in its wake.
But there may be another factor behind the rapid pace of AI progress, as Yann LeCun points out. Meta's Chief AI Scientist notes a peculiarity of how progress in his field is published: scientific papers and the source code for AI systems appear almost exclusively on arXiv or OpenReview, platforms originally intended as public archives for preprints of studies before they are officially published following the peer-review process that is standard in the sciences.
In some scientific fields, peer review takes months or even years before a paper is finally printed and published after submission, and then mostly in journals that research institutes or practitioners must pay expensive subscriptions to access. This process now paralyzes entire areas of science and their practical application, which can be seen in how slowly new findings spread in many disciplines.
In computer science and AI, this is neither possible nor practicable, for several reasons. The exponential growth of computing power, described by Intel co-founder Gordon Moore in his Moore's Law, makes it possible to test computationally intensive methods in short periods of time. And with the Internet, data has been generated and made accessible in huge quantities, which is crucial for algorithms that require massive amounts of data.
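The scale of that exponential growth is easy to underestimate. A back-of-the-envelope sketch of Moore's Law (one doubling roughly every two years; the doubling period is the textbook rule of thumb, not an exact industry figure):

```python
def moore_factor(years: float, doubling_period: float = 2.0) -> float:
    """Growth factor after `years` of doubling once per period.

    Illustrative Moore's Law arithmetic only; real transistor
    counts and compute budgets do not follow the rule exactly.
    """
    return 2.0 ** (years / doubling_period)

# Over a single decade, compute grows roughly 32-fold...
print(moore_factor(10))
# ...and over the ~40 years between early neural-network research
# and AlexNet, by roughly a million-fold.
print(moore_factor(40))
```

That million-fold difference in available compute is a large part of why ideas from the 1980s suddenly worked in the 2010s.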
In recent months, we have often seen improvements in AI within days of a paper appearing on arXiv. Anyone who wants to become and remain relevant as an AI researcher has no choice but to pre-publish there. Interestingly, peer review then happens in public, above all through other groups applying the algorithms and methods themselves. If the findings are valid, they are put into practice immediately.
This speed of development, and the perceived importance of AI for companies regardless of whether they build or merely use it, is also breaking up rigid in-house processes. Apple, for example, had no choice but to allow its in-house AI researchers and developers to publish their own studies and papers quickly: a cultural change for the otherwise secretive company.
But the next acceleration, beyond arXiv, is already imminent, and it has to do with a special feature of this field: its product can contribute to its own improvement. AI can recognize new patterns and explore different paths far more quickly than humans can, which allows it to improve itself.

For example, Google DeepMind's AlphaDev has already found new sorting algorithms that run up to 70 percent faster on short sequences than the human-written benchmarks. We are therefore moving from exponential to super-exponential development. Humans already struggle to grasp exponential growth; super-exponential growth is another dimension harder still.
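A toy model makes the difference tangible. Exponential growth doubles capability once per fixed period; in the super-exponential case, each doubling arrives faster than the last because the technology improves its own improvement process. All numbers below are illustrative assumptions ("capability" is an abstract unit, and the 20-percent shrink rate is invented for the sketch), not measured AI benchmarks:

```python
def exponential(periods: int, base: float = 2.0) -> float:
    """Capability doubles once per fixed period."""
    return base ** periods

def super_exponential(years: float, first_doubling: float = 1.0,
                      shrink: float = 0.8, max_doublings: int = 60) -> float:
    """Capability doubles repeatedly, but each doubling takes only
    `shrink` times as long as the previous one, so growth accelerates.
    `max_doublings` caps the loop, because the shrinking intervals
    form a geometric series and doublings would otherwise pile up
    without bound in finite time.
    """
    capability, t, interval = 1.0, 0.0, first_doubling
    for _ in range(max_doublings):
        if t + interval > years:
            break
        t += interval
        capability *= 2.0
        interval *= shrink
    return capability

# The two curves start out close, then the super-exponential
# one pulls away rapidly.
for years in (2, 3, 4):
    print(years, exponential(years), super_exponential(years))
```

Note the design choice: with `shrink < 1` the doubling times sum to a finite limit, which is exactly the runaway, hard-to-grasp quality of super-exponential growth the text describes.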
The development of AI will therefore not slow down in the foreseeable future; on the contrary, it will actually accelerate. So there will be no respite for us.
I wrote a lot more about the latest developments in (generative) AI and where it will lead in my (German, sorry!) book Kreative Intelligenz, which was published a month ago. Of course I say: Buy!
KREATIVE INTELLIGENZ
Much has been written about ChatGPT recently: the artificial intelligence that can write entire books and that is already accused of putting legions of authors, copywriters, and translators out of work. And ChatGPT is not alone; the AI family is constantly growing. DALL-E paints pictures, Face Generator simulates faces, and MusicLM composes music. What are we witnessing? The end of civilization, or the beginning of something entirely new? Futurist Dr. Mario Herger puts the latest developments from Silicon Valley into context and shows which, in some cases groundbreaking, changes are just around the corner.

