
When a Linguist talks to ChatGPT: Language, Meaning, and AI
I have a background in Applied Linguistics. My postgraduate studies and research focused on sentence structure and meaning. I followed the principles of Systemic Functional Grammar, especially the Cardiff Model, and I’ve used this knowledge for years in instructional design and digital product design.
In my recent explorations, I've discovered exciting intersections between these linguistic principles and prompt engineering—particularly when working with tools like ChatGPT.
I've found that prompts structured around functional linguistic roles produce more accurate, coherent, and useful AI outputs.
Systemic Functional Grammar and The Cardiff Model
The Cardiff Grammar (CG; R. Fawcett, 2000, 2008) builds on M.A.K. Halliday's (1985) systemic-functional theory. It simplifies and expands this theory by being a generative, formal, and functional grammar; that is, a grammar oriented toward the generation of texts. (Villar, M.A. (2012). “Modelización Contextual y Lingüística de los Folletos de Promoción Turística en Línea y Aplicaciones Pedagógicas Informatizadas” [Master's thesis], Universidad Nacional de Cuyo).
The linguistic model of the CG consists of four basic components: 1) networks of semantic features, 2) representation of selected options from those networks, 3) realization rules, and 4) form. When we engage in a communicative act, we start with the semantic potential of the language. Then, we explore networks of semantic features. We select options from among the eight "strands of meaning" recognized by Cardiff Grammar (experiential, logical relations, interpersonal, polarity, validity establishment, affective, thematic, and informational) (Fawcett, 2008b: 45). These options appear as a set of chosen semantic features. They serve as input for the realization component, which contains the realization rules. These rules allow the selected semantic options to be turned into forms. (Villar, 2012)
In short, the Cardiff Grammar focuses on the analysis of clauses/sentences as meaning-making devices.
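To make the four-component pipeline concrete, here is a deliberately toy sketch (my own illustration, not Fawcett's actual formalism): a miniature feature network, a set of selected options, and realization rules that turn those options into form.

```python
# Toy illustration of the Cardiff Grammar pipeline:
# (1) networks of semantic features, (2) selected options,
# (3) realization rules, (4) resulting form.
# The features and rules below are invented for demonstration only.

# (1) A tiny network: one or two features per strand of meaning.
NETWORKS = {
    "interpersonal": ["statement", "question"],
    "polarity": ["positive", "negative"],
}

# (3) Realization rules: each selected feature maps to an operation on form.
REALIZATION_RULES = {
    ("interpersonal", "question"): lambda s: s.rstrip(".") + "?",
    ("interpersonal", "statement"): lambda s: s.rstrip(".") + ".",
    ("polarity", "negative"): lambda s: s.replace("is", "is not", 1),
    ("polarity", "positive"): lambda s: s,
}

def realize(base: str, selections: dict) -> str:
    """(2) Take the chosen semantic features and (4) apply the
    realization rules to produce a surface form."""
    for strand, feature in selections.items():
        assert feature in NETWORKS[strand], f"unknown feature: {feature}"
        base = REALIZATION_RULES[(strand, feature)](base)
    return base

print(realize("the climate is changing",
              {"polarity": "negative", "interpersonal": "question"}))
# → the climate is not changing?
```

Real CG networks are of course far richer, but the shape is the same: semantics in, rules applied, form out.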

Clause Analysis based on the Cardiff Grammar Model. Excerpt from Villar, 2012, Master's Thesis.
My Systemic-Functional-Grammar-Inspired Prompt Formula
This is an overview of how I adapted functional sentence elements to prompt engineering:

Simplified version of the Prompt Structure Framework I created based on SFG (Cardiff Grammar) to use with my team
This structure reflects how we create meaning in language: set context → identify action → define target → add conditions.
The AI, trained on large human-language corpora, responds better to prompts that align with this semantic logic.
Example:
Common Prompt: "Explain climate change."
SFG-Based Prompt: "As a science communicator, explain climate change in simple terms for a general audience. Include the main causes, current impacts, and practical ways individuals can help. Write in a clear, engaging tone with real-world examples."
Notice the difference:
- The first is broad and under-specified.
- The second guides the AI to “think” with context, role, focus, and style, producing a more coherent and actionable response.
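The four-slot structure (context → action → target → conditions) can also be sketched as a small template function. This is a hypothetical helper of my own, not part of any library; the slot names mirror the framework above.

```python
# Hypothetical sketch of the SFG-inspired prompt structure:
# context -> action -> target -> conditions.
# Slot names and assembly logic are illustrative assumptions.

def build_prompt(context: str, action: str, target: str,
                 conditions: list[str]) -> str:
    """Assemble a prompt from its four functional slots."""
    # Context sets the role, action names the process, target its scope.
    parts = [f"As {context}, {action} {target}."]
    # Conditions add circumstances: content, tone, constraints.
    parts += [c.rstrip(".") + "." for c in conditions]
    return " ".join(parts)

prompt = build_prompt(
    context="a science communicator",
    action="explain",
    target="climate change in simple terms for a general audience",
    conditions=[
        "Include the main causes, current impacts, and practical "
        "ways individuals can help",
        "Write in a clear, engaging tone with real-world examples",
    ],
)
print(prompt)  # reproduces the SFG-based prompt shown above
```

Filling the slots forces the writer to make context, role, focus, and style explicit, which is exactly what separates the two prompts in the example.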
Key Takeaway
I have been working on turning these findings into a functional prompt-building framework to use with my team. The framework is based on Systemic Functional Grammar (Cardiff Grammar) and is shaped by real-world AI interactions. The results have been both consistent and scalable across writing, design, and learning experiences.
To summarize my findings:
Applied Linguistics isn’t just relevant to AI; it’s foundational to how we shape its outputs.
As NLP and AI continue to evolve, I believe linguistic experts—especially those trained in meaning-driven grammar—play a key role in bridging the gap between human intention and machine response.
This is the first article of my AI + Learning Design series. If you’re curious about the future of AI in learning, or you’re exploring this space yourself, follow along—I’d love to hear your thoughts along the way.