The arrival of ChatGPT 15 months ago, and what this chatbot can do, surprised so many people that the AI continues to dominate public debate. What makes us human? Do we need to regulate AI? Where will this lead us?
In fact, in my current AI book Kreative Intelligenz I gave an outlook on how generative AI will develop and what we can expect. I compared it to Web 1.0/2.0/3.0: from a static internet, to one with online banking, payment systems, mobile internet, and dynamic content, to one with cryptocurrencies, the metaverse, AR, and VR.
AI is currently undergoing a similar transformation. While ChatGPT was a stand-alone AI in the first phase, in the second phase we see it integrated with other software systems such as Microsoft Office, GitHub, video tools, or Adobe Photoshop, where the AI can independently take over many tasks that were previously done by humans. We call this second stage autonomous AI or AI assistants.
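Under the hood, such an AI assistant is often just a language model that is allowed to call functions in the host application. The following Python sketch is a minimal, hypothetical example of that pattern using the tool-calling interface of the OpenAI chat API; the `create_calendar_event` function, the model name, and the user request are invented for illustration and are not taken from any of the products mentioned above.

```python
# Minimal sketch of the "AI assistant" pattern: an LLM decides when to call
# a function exposed by the host application (here: a fake calendar tool).
# Model name, tool, and user request are illustrative assumptions only.
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def create_calendar_event(title: str, date: str) -> str:
    """Stand-in for a real integration, e.g. into an Office calendar."""
    return f"Created event '{title}' on {date}"

tools = [{
    "type": "function",
    "function": {
        "name": "create_calendar_event",
        "description": "Create a calendar event for the user",
        "parameters": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "date": {"type": "string", "description": "ISO date, e.g. 2024-03-15"},
            },
            "required": ["title", "date"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[{"role": "user", "content": "Put 'project review' in my calendar for 2024-03-15."}],
    tools=tools,
)

# If the model chose to call the tool, execute it on its behalf.
for call in response.choices[0].message.tool_calls or []:
    if call.function.name == "create_calendar_event":
        args = json.loads(call.function.arguments)
        print(create_calendar_event(**args))
```

The point of the pattern is that the model only decides which function to call and with which arguments; the host application stays in control of actually executing it.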

In the third phase, the AI is given a body. It is integrated into cars as a voice assistant, into the AI Pin from hu.ma.ne as a small wearable device, or into robots.
While it took around 25 years for the web to get from Web 1.0 to today’s Web 3.0, AI has gone from version 1.0 to version 3.0 in a tenth of that time. Fifteen months after OpenAI launched ChatGPT, and after competitors such as Google, Meta, and Anthropic followed suit with their own language models such as Gemini, Llama, and Claude, we are now seeing the first examples of these AIs being integrated into robots.

The robotics company Figure.AI, based in Sunnyvale, California, just a 45-minute drive south of San Francisco, has published genuinely impressive videos of its humanoid robot Figure 01, which has ChatGPT integrated and can carry out human activities as well as hold conversations.
In the following video, we see the robot describe, in spoken language, the objects on the table in front of it; hand the apple to the human instructor when asked for something edible; sort trash into a basket while explaining why it chose the apple (it was the only edible item on the table); and finally explain what should probably happen next with the cup and plate in front of it: they belong in the drying rack.
With AI, robotics development is suddenly accelerating dramatically. First, language models like ChatGPT generate natural-sounding text and can use it to hold conversations with people. Second, thanks to generative AI, objects can be recognized and their context retrieved. And finally, the approaches of generative AI can be used to let robots train themselves, so that humans no longer have to painstakingly teach them how to move. The following video shows how a robotic hand uses AI to teach itself to move its fingers and manipulate objects skillfully.
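To give a rough idea of what "teaching itself" means in code terms, here is a deliberately toy, self-contained sketch: a policy, represented as a handful of joint-angle offsets, is improved purely by trial and error against a stand-in reward function. The "simulation", the number of joints, and the target pose are all made up for illustration.

```python
# Toy illustration of trial-and-error self-training: a policy (a vector of
# joint-angle offsets) is improved by random perturbation, keeping whichever
# variant scores better in a stand-in "simulation". The reward function is
# invented for illustration; real robot training uses physics simulators
# and reinforcement learning.
import random

NUM_JOINTS = 5
TARGET_GRIP = [0.8, 0.6, 0.6, 0.7, 0.9]  # fictitious "good grasp" pose

def simulated_grasp_reward(policy: list[float]) -> float:
    """Higher is better: negative squared distance to the fictitious target pose."""
    return -sum((p - t) ** 2 for p, t in zip(policy, TARGET_GRIP))

policy = [0.0] * NUM_JOINTS
best_reward = simulated_grasp_reward(policy)

for step in range(2000):
    # Propose a small random change to one joint and keep it if it helps.
    candidate = policy[:]
    joint = random.randrange(NUM_JOINTS)
    candidate[joint] += random.gauss(0, 0.05)
    reward = simulated_grasp_reward(candidate)
    if reward > best_reward:
        policy, best_reward = candidate, reward

print(f"learned pose: {[round(p, 2) for p in policy]}, reward: {best_reward:.4f}")
```

Real systems replace this caricature with reinforcement learning in physics simulators and millions of trials, but the principle is the same: the robot improves through self-generated experience instead of hand-coded motion.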
We can now watch, live, the AI development that I laid out on a timeline in my book Kreative Intelligenz, published three months ago. And the fact that this video appeared on the same day the EU AI Act was adopted shows that AI development is already a big step ahead of legislation, and that the EU AI Act really ought to be revised and amended again.
Incidentally, alongside my second AI book Kreative Intelligenz, which is hot off the press, my first AI book, Wenn Affen von Affen lernen, originally published in 2020, has just been released in paperback. In it, I already dealt with many of the philosophical questions surrounding AI.
In other words: it's going to stay exciting, and my books are a good place to start!


