Toward Robust Evaluation of Emirati Dialect Capabilities in Arabic LLMs


Arabic is one of the most widely spoken languages in the world, with hundreds of millions of speakers across more than twenty countries. Despite this global reach, Arabic is not a monolithic language. Modern Standard Arabic coexists with a rich landscape of regional dialects that differ significantly in vocabulary, syntax, phonology, and cultural grounding. These dialects are the primary medium of daily communication, oral storytelling, poetry, and social interaction. However, most existing benchmarks for Arabic large language models focus almost exclusively on Modern Standard Arabic, leaving dialectal Arabic largely under-evaluated and under-represented.

This gap is especially problematic as large language models increasingly interact with users in informal, culturally grounded, and conversational settings. A model that performs well on formal newswire text may still fail to understand a greeting, an idiomatic expression, or a short anecdote expressed in a local dialect. To address this limitation, our team introduces Alyah الياه (which means North Star ⭐️ in Emirati), an Emirati-centric benchmark designed to evaluate how well Arabic LLMs capture the linguistic, cultural, and pragmatic aspects of the Emirati dialect.



Benchmark Motivation and Scope

The Emirati dialect is deeply intertwined with local culture, heritage, and history. It appears in everyday greetings, oral poetry, proverbs, folk narratives, and expressions whose meanings cannot be inferred through literal translation alone. Our benchmark is intentionally designed to probe this depth. Rather than testing surface-level lexical knowledge, it challenges models on their ability to interpret culturally embedded meaning, pragmatic usage, and dialect-specific nuances.

The benchmark covers a wide range of content, including common and uncommon local expressions, culturally grounded greetings, short anecdotes, heritage-related questions, and references to Emirati poetry. The goal is not only to measure correctness, but also to understand where models systematically succeed or fail when confronted with authentic Emirati language use.



Dataset Structure

Following further development and consolidation, the benchmark has been unified into a single dataset called Alyah. The final benchmark contains 1,173 samples, all collected manually from native Emirati speakers to ensure linguistic authenticity and cultural grounding. This manual curation step was essential to capture expressions, meanings, and usages that are rarely documented in written resources and are difficult to infer from Modern Standard Arabic alone.

Each sample is formulated as a multiple-choice question with four candidate answers, exactly one of which is correct. Large language models were used to synthetically generate the distractor choices, which were then reviewed to ensure plausibility and semantic closeness to the correct answer. To avoid positional bias during evaluation, the index of the correct answer follows a randomized distribution across the dataset; a minimal sketch of this format is shown below.
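The sketch below illustrates this format in Python. The field names and the example question are hypothetical, not the dataset's actual schema; the point is the shuffling step that makes the correct answer's index uniform across positions.

```python
import random

# Hypothetical record for one Alyah sample (illustrative schema, not the real one).
sample = {
    "question": "وش معنى كلمة «سنع» في اللهجة الإماراتية؟",   # "What does 'sanaa' mean in the Emirati dialect?"
    "correct_answer": "الأدب وحسن التصرف",                     # "etiquette and good conduct"
    "distractors": ["السرعة في المشي", "الطبخ الشعبي", "النوم المبكر"],
    "category": "Language & Dialect",
}

def to_multiple_choice(sample, rng=random):
    """Shuffle the four candidates so the correct index is uniform over 0-3."""
    choices = [sample["correct_answer"], *sample["distractors"]]
    rng.shuffle(choices)
    return {
        "question": sample["question"],
        "choices": choices,
        "answer_index": choices.index(sample["correct_answer"]),
        "category": sample["category"],
    }
```

Below is the distribution of word count per question and candidate answers.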

[Figure: distribution of word count per question and per candidate answer]

Alyah spans a broad spectrum of linguistic and cultural phenomena within the Emirati dialect, ranging from everyday expressions to culturally sensitive and figurative language. The distribution across categories is summarized below.

| Category | Number of Samples | Difficulty |
| --- | --- | --- |
| Greetings & Daily Expressions | 61 | Easy |
| Religious & Social Sensitivity | 78 | Medium |
| Imagery & Figurative Meaning | 121 | Medium |
| Etiquette & Values | 173 | Medium |
| Poetry & Creative Expression | 32 | Difficult |
| Historical & Heritage Knowledge | 89 | Difficult |
| Language & Dialect | 619 | Difficult |

Below are examples of each category:

[Figure: example questions from each category]

This composition allows Alyah to jointly evaluate surface-level conversational fluency and deeper cultural, semantic, and pragmatic understanding, with a particular emphasis on dialect-specific language phenomena that remain difficult for current models.



Model Evaluation Setup

We evaluated a total of 53 language models, comprising 22 base models and 31 instruction-tuned models, spanning several architectural and training paradigms. These include Arabic-native LLMs such as Jais and ALLaM, multilingual models with strong Arabic support such as Qwen and LLaMA, and adapted or regionally specialized models such as Fanar and AceGPT. For each family, both base and instruction-tuned variants were evaluated in order to understand the impact of alignment and instruction tuning on dialectal performance.

All models were evaluated under a consistent prompting and scoring protocol. Responses were assessed for semantic correctness and appropriateness with respect to Emirati usage, rather than literal overlap with a reference answer. This is particularly important for dialectal evaluation, where multiple valid phrasings may exist.
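A minimal sketch of one way such an evaluation loop can be implemented is shown below, reusing the `to_multiple_choice` item format from earlier. The Arabic prompt template and the letter-extraction step are assumptions for illustration, not the exact protocol used (the actual scoring also weighed semantic appropriateness rather than letter matching alone).

```python
import re

# Hypothetical MCQ prompt: "Answer the following question by choosing
# only the correct letter," followed by the four lettered choices.
PROMPT_TEMPLATE = (
    "أجب عن السؤال التالي باختيار الحرف الصحيح فقط.\n"
    "السؤال: {question}\n"
    "أ) {c0}\nب) {c1}\nج) {c2}\nد) {c3}\n"
    "الإجابة:"
)
LETTERS = ["أ", "ب", "ج", "د"]

def score_item(generate, item):
    """Prompt a model with one item and check the letter it picks.

    `generate` is any callable mapping a prompt string to model text
    (API call, local inference, ...); it is deliberately left abstract.
    """
    prompt = PROMPT_TEMPLATE.format(
        question=item["question"],
        c0=item["choices"][0], c1=item["choices"][1],
        c2=item["choices"][2], c3=item["choices"][3],
    )
    reply = generate(prompt)
    # Simplification: take the first choice letter appearing in the reply.
    match = re.search("|".join(LETTERS), reply)
    predicted = LETTERS.index(match.group(0)) if match else -1
    return predicted == item["answer_index"]
```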

For each question category, we estimated difficulty empirically based on model performance. Categories where most models struggled were labeled as harder, while those consistently answered correctly across model families were considered easier. This approach allows difficulty to emerge from observed behavior rather than from subjective annotation alone.
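One plausible realization of this labeling, assuming per-model accuracies for each category are already available (the 0.70/0.50 cut-offs below are illustrative assumptions, not the thresholds actually used):

```python
from statistics import mean

def label_difficulty(per_model_accuracy, easy=0.70, medium=0.50):
    """Bin a category by the mean accuracy models achieve on it."""
    avg = mean(per_model_accuracy)
    if avg >= easy:
        return "Easy"
    if avg >= medium:
        return "Medium"
    return "Difficult"

# A category most model families answer correctly comes out Easy:
label_difficulty([0.82, 0.75, 0.71, 0.69])  # -> "Easy" (mean 0.74)
```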



Evaluation Results on Alyah (Emirati Dialect)

We evaluate a broad set of contemporary Arabic and multilingual large language models on Alyah, using accuracy on multiple-choice questions as the primary metric. The evaluation covers 53 models in total, including 22 base models and 31 instruction-tuned models, spanning Arabic-native, multilingual, and regionally adapted systems. Below is a radar plot showing the performance of top models of various sizes per question category.

[Figure: radar plot of top models' accuracy per question category]
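The per-category numbers behind a plot like this reduce to grouped averages over per-item correctness. A sketch, assuming each result is a (category, correct) pair such as those produced by `score_item` above:

```python
from collections import defaultdict

def per_category_accuracy(results):
    """Average per-item correctness within each category."""
    totals = defaultdict(lambda: [0, 0])  # category -> [num correct, num seen]
    for category, correct in results:
        totals[category][0] += int(correct)
        totals[category][1] += 1
    return {cat: right / seen for cat, (right, seen) in totals.items()}

# Example:
per_category_accuracy([("Poetry & Creative Expression", True),
                       ("Poetry & Creative Expression", False)])
# -> {"Poetry & Creative Expression": 0.5}
```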

These results are intended as reference measurements within the scope of Alyah, rather than absolute rankings across all Arabic benchmarks.



Base Models

| Model | Accuracy (%) |
| --- | --- |
| google/gemma-3-27b-pt | 74.68 |
| tiiuae/Falcon-H1-34B-Base | 73.66 |
| FreedomIntelligence/AceGPT-v2-32B | 67.35 |
| google/gemma-3-4b-pt | 63.17 |
| QCRI/Fanar-1-9B | 62.75 |
| tiiuae/Falcon-H1-7B-Base | 60.78 |
| meta-llama/Llama-3.1-8B | 58.23 |
| Qwen/Qwen3-14B-Base | 57.29 |
| inceptionai/jais-adapted-13b | 56.01 |
| Qwen/Qwen2.5-32B | 53.03 |
| FreedomIntelligence/AceGPT-13B | 50.81 |
| Qwen/Qwen2.5-72B | 47.91 |
| Qwen/Qwen2.5-14B | 46.80 |
| google/gemma-2-2b | 41.86 |
| tiiuae/Falcon3-7B-Base | 41.43 |
| Qwen/Qwen3-8B-Base | 40.75 |
| tiiuae/Falcon-H1-3B-Base | 40.41 |
| Qwen/Qwen2.5-7B | 36.57 |
| Qwen/Qwen2.5-3B | 35.29 |
| meta-llama/Llama-3.2-3B | 35.12 |
| inceptionai/jais-adapted-7b | 33.50 |
| Qwen/Qwen3-4B-Base | 27.45 |



Instruction-Tuned Models

| Model | Accuracy (%) |
| --- | --- |
| falcon-h1-arabic-7b-instruct | 82.18 |
| humain-ai/ALLaM-7B-Instruct-preview | 77.24 |
| google/gemma-3-27b-it | 74.68 |
| Qwen/Qwen2.5-72B-Instruct | 74.60 |
| falcon-h1-arabic-3b-instruct | 74.51 |
| CohereForAI/aya-expanse-32b | 73.66 |
| Navid-AI/Yehia-7B-preview | 73.32 |
| FreedomIntelligence/AceGPT-v2-32B-Chat | 72.80 |
| Qwen/Qwen2.5-32B-Instruct | 71.61 |
| tiiuae/Falcon-H1-34B-Instruct | 71.10 |
| meta-llama/Llama-3.3-70B-Instruct | 69.74 |
| QCRI/Fanar-1-9B-Instruct | 69.22 |
| tiiuae/Falcon-H1-7B-Instruct | 65.13 |
| CohereForAI/c4ai-command-r7b-arabic-02-2025 | 64.54 |
| silma-ai/SILMA-9B-Instruct-v1.0 | 63.94 |
| FreedomIntelligence/AceGPT-v2-8B-Chat | 63.43 |
| CohereLabs/aya-expanse-8b | 61.21 |
| yasserrmd/kallamni-2.6b-v1 | 61.13 |
| yasserrmd/kallamni-4b-v1 | 60.70 |
| microsoft/Phi-4-mini-instruct | 58.57 |
| tiiuae/Falcon-H1-3B-Instruct | 57.12 |
| silma-ai/SILMA-Kashif-2B-Instruct-v1.0 | 48.51 |
| meta-llama/Llama-3.1-8B-Instruct | 46.29 |
| google/gemma-3-4b-it | 46.12 |
| Qwen/Qwen2.5-7B-Instruct | 45.44 |
| meta-llama/Llama-3.2-3B-Instruct | 39.64 |
| yasserrmd/kallamni-1.2b-v1 | 37.77 |
| Qwen/Qwen3-4B | 26.26 |
| google/gemma-2-2b-it | 26.00 |
| Qwen/Qwen3-14B | 26.00 |
| Qwen/Qwen3-8B | 25.66 |



Evaluation and Observed Trends

Figure 1: Models’ accuracy across categories based on size.
Figure 2: Models’ accuracy across categories based on language.

Several trends emerge from the evaluation. Instruction-tuned models generally outperform their base counterparts, as shown in Figures 1 and 2. This is particularly the case on questions involving conversational norms and culturally appropriate responses (i.e., the Etiquette & Values category), and on questions that test imagery and figurative meaning. The latter can be attributed to the models' strong pre-existing ability to understand MSA-based imagery and figurative language: models can recognize patterns of non-literal description regardless of the dialect at hand. Overall, the most difficult categories were consistently Language & Dialect and Greetings & Daily Expressions across model sizes, as shown in Figure 1. These results reflect the current state of the Emirati dialect in written media; the dialect is mostly spoken and rarely written, which explains its novelty relative to the evaluated models. Nevertheless, instruction-tuned models show a clear advantage over their base counterparts in understanding the dialect (and the other evaluation categories), especially at small and medium sizes. This is particularly noticeable in the Poetry & Creative Expression category, where the large instruct models performed marginally better than the smaller ones.

Figure 3: Evaluated models average accuracy.

As shown in Figure 3, even strong multilingual models show notable degradation on the most difficult Alyah questions, suggesting that dialect-specific semantic knowledge is not easily acquired through generic multilingual training alone. It should be noted that while Arabic-native models tend to perform more robustly on culturally grounded content, their performance is not uniform across all categories (Figure 2). Specifically, questions involving implicit meanings and rare expressions remain difficult across nearly all evaluated models, highlighting a persistent gap between surface-level dialect familiarity and deeper cultural understanding. The high variance in performance across categories, where a model that excels at imagery and figurative meaning may still struggle with poetry or heritage-related creative questions, indicates that dialectal competence is multi-dimensional and cannot be captured by a single score. Figure 3 shows that the best-scoring large model is Jais-2-70B, followed by the two smaller models Jais-2-8B and ALLaM-7B-Instruct, all of which are Arabic instruction-tuned models.



Conclusion and Community Impact

This benchmark represents a step toward more realistic and culturally grounded evaluation of Arabic language models. By focusing on the Emirati dialect, we aim to support the development of models that better serve local communities, institutions, and users in the UAE. Beyond model ranking, the benchmark is intended as a diagnostic tool to guide future data collection, training, and adaptation efforts.

We invite researchers, practitioners, and the broader community to use the benchmark, explore the results, and share feedback. Community input will be essential to refining the dataset, expanding coverage, and ensuring that dialectal Arabic receives the attention it deserves in the evaluation of large language models.



Citation

@misc{emirati_dialect_benchmark_2026,
  title  = {Alyah: An Emirati Dialect Benchmark for Evaluating Arabic Large Language Models},
  author = {Omar Alkaabi and Ahmed Alzubaidi and Hamza Alobeidli and Shaikha Alsuwaidi and Mohammed Alyafeai and Leen AlQadi and Basma El Amel Boussaha and Hakim Hacid},
  year   = {2026},
  month  = {January},
}


