NEW! Workshop Series "Linguistics Meets ChatGPT: From Prompt to Theory"
***** Call for Data Submission: AI Hallucinations *****
Hallucinations — confident but incorrect outputs produced by Large Language Models (LLMs) and related AI systems — are one of the most pressing challenges in the responsible use of AI.
Most research on hallucinations is machine-centred: it evaluates LLM outputs and labels errors as failures of reasoning or logic.
While this view may help technical specialists, it can mislead others by suggesting that LLMs are "dangerous" because they are expected to reason like humans but inevitably fail to do so.
At MANOVA AI, we take a different perspective.
We explain hallucinations through the mechanics of tokenisation — how LLMs split input into subword units (tokens) and predict tokens rather than words.
This view makes errors intelligible: for example, the tokenisation of numbers clarifies why models often struggle with counting and calculations.
By showing how hallucinations emerge from token-based processing, we offer a transparent and practical account of their sources.
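As a minimal sketch of this point (assuming Python and the openly available tiktoken library, which implements the byte-pair-encoding tokenisers used by several OpenAI models; the exact splits shown are illustrative and depend on the tokeniser chosen):

    import tiktoken  # BPE tokeniser library; assumed installed via `pip install tiktoken`

    # Load a byte-pair-encoding tokeniser (cl100k_base is used by GPT-3.5/GPT-4 models).
    enc = tiktoken.get_encoding("cl100k_base")

    for text in ["1234567", "strawberry", "How many r's are in strawberry?"]:
        token_ids = enc.encode(text)
        # Decode each token id individually to show how the input is split
        # into subword units rather than into characters or whole words.
        pieces = [enc.decode([tid]) for tid in token_ids]
        print(f"{text!r} -> {pieces}")

With this tokeniser, a digit string such as "1234567" is typically split into multi-digit chunks (e.g. "123", "456", "7") rather than single digits, which illustrates why counting and arithmetic errors can emerge from token-based processing.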
Websites (currently in preparation):
PI: Dr. Stela Manova
Contact: contact@manova-ai.com
Gauss:AI is the research platform for the linguistic and theoretical findings of MANOVA AI.
It brings together three thematic Transformers, each dedicated to a core question of human and artificial intelligence:
LingTransformer — Language research in the light of mathematics and Large Language Models (LLMs)
LearningTransformer — Learning in the AI era: human learning versus machine learning
CodeTransformer — Vibe coding: natural languages versus programming languages
The project promotes interdisciplinary dialogue across linguistics, computer science, and education — continuing the Gauss-inspired spirit of uniting human and artificial intelligence.
Website: gaussaiglobal.com
PI: Dr. Stela Manova
Contact: contact@gaussaiglobal.com
Inquiries about our project management and project hosting services should be sent to office@manova-ai.com.