Botshit instead of bullshit?

We have heard more than enough about disinformation (intentionally false information) and misinformation (unintentionally false information). In recent years we have also received more than enough bullshit: content deliberately put forward by someone who does not really care what others think of it, or of them.

Thanks to AI, bullshit is now being followed by what we used to euphemistically describe as hallucinations and now call “botshit”. It differs from bullshit in several respects. First, it is generated by an AI, which cannot help but be indifferent to what people think of the content or of the AI itself; second, the AI presents it with a conviction that is second to none.

A more scientific definition of botshit is provided in the study Beware of Botshit: How to Manage the Epistemic Risks of Generative Chatbots:

Chatbots can produce coherent-sounding but inaccurate or made-up content, known as “hallucinations”. When people use this untrue content for tasks, it becomes what we call “botshit”.

The authors of the study, from Canada, Italy, and the UK, compare the definitions, types, and insights of bullshit and botshit:

Definition

Bullshit: Human-generated content that has no regard for the truth, which a human then applies for communication and decision-making tasks.
Botshit: Chatbot-generated content that is not grounded in truth (i.e., hallucinations) and is then uncritically used by a human for communication and decision-making tasks.

Types

Bullshit:
– Pseudo-profound bullshit: statements that appear profound and meaningful
– Persuasive bullshit: statements that aim to impress or persuade
– Evasive bullshit: statements that strategically circumvent the truth
– Social bullshit: statements that tease, exaggerate, joke, or troll

Botshit:
– Intrinsic botshit: the human application of a chatbot response that contradicts the chatbot’s training data
– Extrinsic botshit: the human application of a chatbot response that cannot be verified as true or false by the chatbot’s training data

Insights

Humans are more likely to generate and use bullshit:
– the more unintelligent, dishonest, and insincere they are
– when the expectations for them to have an opinion are high and they expect to get away with it
– if their bosses frequently spout bullshit

Humans are more likely to believe and spread bullshit:
– if they have a low capacity for analytical thinking
– if they think it was made by a scientist
– if it is appealing, aligned with existing beliefs, and seems credible

Chatbots are more likely to generate hallucinations, which humans then use and turn into botshit, when:
– data collection, preprocessing, and tokenization problems limit factual alignment between the training data and the desired response
– ambiguous prompts misdirect the chatbot
– there are problems with the training and modeling choices of the LLM transformer
– fine-tuning efforts suffer from uncertainty around the ground truth

The study arranges the checking of chatbot responses in a matrix along two dimensions: how crucial the veracity of the response is, and how easily the response can be verified:

Crucial veracity, difficult to verify – Authenticated chatbot work:
Users skeptically submit tasks to chatbots and then meticulously verify responses for factual accuracy, logical coherence, and truthfulness.
Examples: legal, safety, and budgetary tasks.

Crucial veracity, easily verifiable – Automated chatbot work:
Users systematically assign routine and standard tasks to chatbots and then use the responses for efficient and detached execution.
Examples: application assessment and selection tasks.

Unimportant veracity, difficult to verify – Augmented chatbot work:
Users openly prompt chatbots to generate ideas and concepts, and then evaluate, organize, combine, and select from the generated responses.
Examples: brainstorming and idea-generation tasks.

Unimportant veracity, easily verifiable – Autonomous chatbot work:
Users selectively delegate tasks to chatbots that have the appropriate domain training and expertise, and then allow the chatbots to learn and adapt.
Examples: support and assistance tasks.
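For readers who think in code, the matrix above can be sketched as a tiny lookup: two boolean dimensions (is veracity crucial? is the response easily verifiable?) select one of the four work modes. This is my own illustrative sketch, not code from the study; the function name and boolean parameters are my invention.

```python
# Illustrative sketch of the study's 2x2 matrix (my own framing, not from
# the paper): the two dimensions -- veracity importance and verifiability --
# select one of the four chatbot work modes described above.

def chatbot_work_mode(veracity_crucial: bool, easily_verifiable: bool) -> str:
    """Return the chatbot work mode for one cell of the 2x2 matrix."""
    if veracity_crucial and not easily_verifiable:
        return "authenticated"  # e.g. legal, safety, and budgetary tasks
    if veracity_crucial and easily_verifiable:
        return "automated"      # e.g. application assessment and selection
    if not veracity_crucial and not easily_verifiable:
        return "augmented"      # e.g. brainstorming and idea generation
    return "autonomous"         # e.g. support and assistance tasks

print(chatbot_work_mode(veracity_crucial=True, easily_verifiable=False))
# -> authenticated
```

The point of the sketch: only one quadrant (crucial and hard to verify) forces meticulous human verification, which is exactly where unchecked responses are most likely to turn into botshit.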

This definition, breakdown, and categorization should help users and developers of generative text AIs better classify chatbot statements, understand where double-checking becomes critical, and build appropriate verification mechanisms into autonomous AI assistants. The authors have also prepared a clearer visual presentation of the material.

If you want to dive even deeper into generative AI,
let me recommend my latest book:
Kreative Intelligenz: Wie ChatGPT und Co die Welt verändern werden.
Available in bookstores, from the publisher, and on Amazon.

KREATIVE INTELLIGENZ

Much has been written about ChatGPT lately: the artificial intelligence that can write entire books and is already suspected of putting legions of authors, copywriters, and translators out of work. And ChatGPT is not alone; the AI family keeps growing. DALL-E paints pictures, Face Generator simulates faces, and MusicLM composes music. What are we witnessing? The end of civilization, or the beginning of something entirely new? Futurist Dr. Mario Herger puts the latest developments from Silicon Valley into perspective and shows which, in some cases groundbreaking, changes are just around the corner.

