AI: From Painter and Conversationalist to Architect and Mechanical Engineer

ChatGPT, DALL-E, and other Generative Pre-trained Transformers (GPTs) have been public for only a few weeks and months, and the discussions show no sign of stopping. On the one hand, there is amazement at what these AI systems can do, creatively producing paintings and texts from just a few simple instructions; on the other hand, some already see the end of art and literature looming on the horizon.

The discussion is not a new one. With the invention of photography, the end of painting was predicted. But the opposite occurred: not only were painters brought in to colorize photographs in the black-and-white era, the demand for pictures and illustrations grew. Even today, with the emergence and prevalence of smartphone cameras, the number of professional photographers has not declined; in the U.S. it increased from 160,000 in 2002 (before smartphones) to 230,000 in 2021.

In its February 2023 issue, Wired magazine looks at how artists today are already actively using image transformers such as DALL-E, Midjourney, Stable Diffusion, and Artbreeder to create novel images, and very quickly at that. These artificial intelligences thus become nothing more than the modern equivalent of a paintbrush or a camera. Just as the paintbrush added painting on a surface to carving in stone or clay, and the camera gave artists entirely new angles of vision, these GPTs are new creative tools in the artists’ arsenal.

High school teacher Cherie Shields in Oregon told the New York Times podcast Hard Fork how she came across ChatGPT and immediately started using it in the classroom. She teaches her students to use ChatGPT as a tool for finding a starting point for their essays. In doing so, she also points out the limitations of the software and thus teaches her students digital media literacy. ChatGPT should be viewed the same way cheat sheets or text summaries are sometimes used today.

I also use DALL-E all the time to quickly create cover images for my blog posts. And it turns out that you have to learn how to give the AI the right “prompts” – the right instructions – to get good results. This is another area where artists already set themselves apart, having spent hundreds of hours tweaking their prompts and shaping the AI’s results to their liking.
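To make the idea of a “prompt” concrete, here is a minimal sketch of how such an image request might look programmatically, using the OpenAI Python library’s image endpoint. The prompt text, image size, and exact interface details are illustrative assumptions and may differ depending on the library version you use.

```python
# Minimal sketch: generating a blog cover image from a text prompt.
# Assumes the OpenAI Python library (pre-1.0 interface) and an API key
# set in the OPENAI_API_KEY environment variable; details may vary by version.
import openai

# The "prompt" is just a textual instruction; small wording changes
# (style, medium, mood) can change the result dramatically.
prompt = (
    "A watercolor painting of a robot holding a paintbrush, "
    "standing at an easel in a sunlit studio, soft pastel colors"
)

response = openai.Image.create(
    prompt=prompt,
    n=1,                # number of image variations to generate
    size="1024x1024",   # square sizes such as 256x256, 512x512, 1024x1024
)

# The API returns a URL to the generated image.
print(response["data"][0]["url"])
```

Iterating on that prompt text – adding style cues, changing the scene, naming a medium – is exactly the kind of tweaking the artists mentioned above spend hundreds of hours on.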

What has been demonstrated with the millions of images and billions of texts on which these AIs were trained can expand the possibilities in other fields. For example, feeding an AI with known compounds and their effects could speed up the design of new drugs and even lead to personalized medicine, where drugs are tailored to the individual patient.

A DALL-E for videos or virtual worlds like the Metaverse could give us AI-generated movies and video games. An Ingmar Bergman-style drama with a touch of Quentin Tarantino as an anime? Why not? A mysterious world like in Avatar, only set in the Middle Ages? Just tell the system.

But why stay in the digital world and not move on to the physical world right away? An ArchitectureGPT could be fed with millions of plans from building projects around the world and across eras, and then generate houses or office towers to choose from – complete with parts lists and detailed plans.

Or why do we still sit down and design machines by hand? Just tell MachineGPT, trained on millions of design plans, what product you want, and the AI spits out the plans for such a machine. In the process, it is already optimizing energy and material use.

It will soon seem as anachronistic to do all this by hand, without the help of such AIs, as it now seems that telephone operators once connected calls manually, that draftsmen designed houses bent over large tables with pens, rulers, and slide rules, or that bridge spans were calculated with thick spreadsheets. ChatGPT, DALL-E, and the like are the harbingers of a revolution in how creative work will soon look in many professions.
