Words in a sentence carry a great deal of information: what they mean in the real world, how they connect to other words, how they alter the meaning of other words. Sometimes their meaning is even ambiguous and can confuse humans!
All of this must be captured to build applications with Natural Language Understanding capabilities. Three fundamental tasks help to capture different kinds of information from text:
- Part-of-speech (POS) tagging
- Dependency parsing
- Named entity recognition
Part-of-Speech (POS) Tagging

In POS tagging, we classify words into categories based on their function in a sentence. For instance, we want to distinguish a noun from a verb. This can help us understand the meaning of a text.
The most common tags are the following:
- NOUN: Names a person, place, thing, or idea (e.g., “dog”, “city”).
- VERB: Describes an action, state, or occurrence (e.g., “run”, “is”).
- ADJ: Modifies a noun to describe its quality, quantity, or extent (e.g., “big”, “happy”).
- ADV: Modifies a verb, adjective, or other adverb, often indicating manner, time, or degree (e.g., “quickly”, “very”).
- PRON: Replaces a noun or noun phrase (e.g., “he”, “they”).
- DET: Introduces or specifies a noun (e.g., “the”, “a”).
- ADP: Shows the relationship of a noun or pronoun to another word (e.g., “in”, “on”).
- NUM: Represents a number or quantity (e.g., “one”, “fifty”).
- CONJ: Connects words, phrases, or clauses (e.g., “and”, “but”).
- PRT: A particle, often part of a verb phrase or phrasal verb (e.g., “up” in “give up”).
- PUNCT: Marks punctuation symbols (e.g., “.”, “,”).
- X: Catch-all for other or unclear categories (e.g., foreign words, symbols).
These are called Universal Tags. Each language can then have more granular tags; for instance, we can expand the “noun” tag to add singular/plural information, and so on.
In spaCy, fine-grained tags are represented with acronyms like “VBD”. If you aren't sure what an acronym refers to, you can ask spaCy to explain it with spacy.explain().
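To make the two levels concrete, the sketch below hand-writes a tiny mapping from a few fine-grained Penn Treebank tags to the universal categories above. This is an illustrative toy, not spaCy's internal mapping table (spaCy exposes the two levels directly as token.tag_ and token.pos_):

```python
# Illustrative mapping from a few fine-grained Penn Treebank tags
# to the coarse universal tags discussed above.
PENN_TO_UNIVERSAL = {
    "NN": "NOUN", "NNS": "NOUN",  # singular / plural noun
    "VB": "VERB", "VBD": "VERB",  # base form / past tense
    "JJ": "ADJ",
    "RB": "ADV",
    "DT": "DET",
}

def to_universal(fine_tag: str) -> str:
    """Collapse a fine-grained tag into its universal category."""
    return PENN_TO_UNIVERSAL.get(fine_tag, "X")

print(to_universal("NNS"))  # NOUN: the plural detail is dropped
print(to_universal("VBD"))  # VERB: past tense is still a verb
```

Note how both “NNS” and “VBD” keep their coarse category while losing the extra grammatical detail.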
Let’s see some examples.
import spacy
spacy.explain("VBD")
>>> verb, past tense
Let's now analyze the POS tags of a whole sentence.
nlp = spacy.load("en_core_web_sm")
doc = nlp("I like Rome, it is the best city in the world!")

for token in doc:
    print(f"{token.text} --> {token.tag_} --> {spacy.explain(token.tag_)}")

The tag of a word depends on the nearby words, their tags, and the word itself.
POS taggers fall into three main families:
- Rule-Based Taggers: Use hand-crafted linguistic rules (e.g., “a word after ‘the’ is commonly a noun”).
- Statistical Taggers: Use probabilistic models like Hidden Markov Models (HMMs) or Conditional Random Fields (CRFs) to predict tags based on word and tag sequences.
- Neural Network Taggers: Use deep learning models like Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) networks, or Transformers (e.g., BERT) to capture context and predict tags.
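As a toy illustration of the rule-based approach (and emphatically not how spaCy's statistical tagger works), the sketch below combines a tiny hand-written lexicon with one contextual rule, “a word right after a determiner is often a noun”:

```python
# Toy rule-based tagger: a tiny lexicon plus one contextual rule.
LEXICON = {"the": "DET", "a": "DET", "runs": "VERB", "quickly": "ADV"}

def rule_based_tag(words):
    tags = []
    for i, word in enumerate(words):
        if word.lower() in LEXICON:
            tags.append(LEXICON[word.lower()])
        elif i > 0 and tags[i - 1] == "DET":
            # Contextual rule: a word right after a determiner is often a noun.
            tags.append("NOUN")
        else:
            tags.append("X")  # unknown: fall back to the catch-all tag
    return tags

print(rule_based_tag(["the", "cat", "runs", "quickly"]))
# ['DET', 'NOUN', 'VERB', 'ADV']
```

Real rule-based systems stack hundreds of such rules; statistical and neural taggers instead learn the word/context-to-tag relationship from annotated corpora.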
Dependency Parsing
With POS tagging we can categorize the words in our document, but we don't know the relationships among those words. That is exactly what dependency parsing gives us, and it helps us understand the structure of a sentence.
We can think of a dependency as a directed edge/link that goes from a parent word to a child word, defining the relationship between the two. This is why we use dependency trees to represent the structure of sentences. See the following image.

In a dependency relation, we always have a parent, also called the head, and a dependent, also called the child. In the phrase “red car”, car is the head and red is the child.

In spaCy the relation is always assigned to the child and can be accessed with the attribute token.dep_
doc = nlp("red car")
for token in doc:
    print(f"{token.text}, {token.dep_}")
>>> red, amod
>>> car, ROOT
As you can see, the main word of a sentence, usually a verb (in this case a noun), has the role of ROOT. From the root, we build our dependency tree.
It is also important to know that a word can have multiple children but only one parent.
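Because every word has exactly one parent, the whole tree can be stored as a single head pointer per token. The sketch below hand-assigns heads for “She kicked the ball” (illustrative values, not spaCy output) and derives each token's children from them:

```python
# Each token stores exactly one head; the root points to itself.
# Hand-assigned heads for "She kicked the ball" (index -> head index).
words = ["She", "kicked", "the", "ball"]
heads = [1, 1, 3, 1]  # "kicked" is the root (it is its own head)

# A token may have several children; derive them from the head pointers.
children = {i: [] for i in range(len(words))}
for child, head in enumerate(heads):
    if child != head:
        children[head].append(child)

root = next(i for i, h in enumerate(heads) if i == h)
print(words[root])                         # kicked
print([words[c] for c in children[root]])  # ['She', 'ball']
```

spaCy stores the same structure on each token: token.head is the parent and token.children iterates over the dependents.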
So, in this case, what does the relation tell us?
The amod relation applies whether the adjective modifies the noun's meaning in a compositional way or an idiomatic way.
Indeed, “red” is a word that modifies the word “car” by adding some information to it.
I will now list the most common relations you can find in a dependency parse, along with their meanings.
For a comprehensive list, check this website: https://universaldependencies.org/u/dep/index.html
- root
- Meaning: The main predicate or head of the sentence, typically a verb, anchoring the dependency tree.
- Example: In “She runs,” “runs” is the root.
- nsubj (Nominal Subject)
- Meaning: A noun phrase acting as the topic of a verb.
- Example: In “The cat sleeps,” “cat” is the nsubj of “sleeps.”
- obj (Object)
- Meaning: A noun phrase directly receiving the action of a verb.
- Example: In “She kicked the ball,” “ball” is the obj of “kicked.”
- iobj (Indirect Object)
- Meaning: A noun phrase indirectly affected by the verb, often a recipient.
- Example: In “She gave him a book,” “him” is the iobj of “gave.”
- obl (Oblique Nominal)
- Meaning: A noun phrase acting as a non-core argument or adjunct (e.g., time, place).
- Example: In “She runs in the park,” “park” is the obl of “runs.”
- advmod (Adverbial Modifier)
- Meaning: An adverb modifying a verb, adjective, or adverb.
- Example: In “She runs quickly,” “quickly” is the advmod of “runs.”
- amod (Adjectival Modifier)
- Meaning: An adjective modifying a noun.
- Example: In “A red apple,” “red” is the amod of “apple.”
- det (Determiner)
- Meaning: A word specifying the reference of a noun (e.g., articles, demonstratives).
- Example: In “The cat,” “the” is the det of “cat.”
- case (Case Marking)
- Meaning: A word (e.g., preposition) marking the role of a noun phrase.
- Example: In “In the park,” “in” is the case of “park.”
- conj (Conjunct)
- Meaning: A coordinated word or phrase linked via a conjunction.
- Example: In “She runs and jumps,” “jumps” is the conj of “runs.”
- cc (Coordinating Conjunction)
- Meaning: A conjunction linking coordinated elements.
- Example: In “She runs and jumps,” “and” is the cc.
- aux (Auxiliary)
- Meaning: An auxiliary verb supporting the fundamental verb (tense, mood, aspect).
- Example: In “She has eaten,” “has” is the aux of “eaten.”
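Put together, these labels let us pull “who did what to whom” out of a parse. The sketch below works on (text, dep_label, head_text) triples; the triples are hard-coded to mimic a parse of “She kicked the ball” (in practice you would build them from token.text, token.dep_, and token.head.text):

```python
# Extract a (subject, verb, object) triple from dependency-labelled tokens.
def extract_svo(triples):
    root = next(t for t, dep, _ in triples if dep == "ROOT")
    subj = next((t for t, dep, h in triples if dep == "nsubj" and h == root), None)
    # spaCy's English models label the direct object "dobj"; UD uses "obj".
    obj = next((t for t, dep, h in triples if dep in ("obj", "dobj") and h == root), None)
    return subj, root, obj

# Hand-written triples mimicking a parse of "She kicked the ball".
triples = [
    ("She", "nsubj", "kicked"),
    ("kicked", "ROOT", "kicked"),
    ("the", "det", "ball"),
    ("ball", "obj", "kicked"),
]
print(extract_svo(triples))  # ('She', 'kicked', 'ball')
```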
We can visualize the dependency tree in spaCy using the displacy module. Let's see an example.
from spacy import displacy
sentence = "A dependency parser analyzes the grammatical structure of a sentence."
nlp = spacy.load("en_core_web_sm")
doc = nlp(sentence)
displacy.serve(doc, style="dep")

Named Entity Recognition (NER)
A POS tag provides information about the role of a word in a sentence. When we perform NER, we look for words that represent objects in the real world: a company name, a proper name, a location, etc.
We refer to these words as named entities. See this example.

In the sentence shown, “Rome” and “Italy” are named entities, while “capital” is not, since it is a generic noun.
spaCy already supports many named entity types; to list them:
nlp.get_pipe("ner").labels
Named entities are accessible in spaCy through the doc.ents attribute.
nlp = spacy.load("en_core_web_sm")
doc = nlp("Rome is the best city in Italy based on my Google search")
doc.ents
>>> (Rome, Italy, Google)
We can also ask spaCy to provide some explanation about the named entities.
doc[0], doc[0].ent_type_, spacy.explain(doc[0].ent_type_)
>>> (Rome, 'GPE', 'Countries, cities, states')
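Entities become more useful once grouped by label. Assuming (text, label) pairs like those spaCy's ent.text and ent.label_ would return for the sentence above, a small sketch:

```python
from collections import defaultdict

def group_entities(pairs):
    """Group entity texts under their label."""
    grouped = defaultdict(list)
    for text, label in pairs:
        grouped[label].append(text)
    return dict(grouped)

# Pairs mimicking (ent.text, ent.label_) for the sentence above.
pairs = [("Rome", "GPE"), ("Italy", "GPE"), ("Google", "ORG")]
print(group_entities(pairs))  # {'GPE': ['Rome', 'Italy'], 'ORG': ['Google']}
```

In real code you would build the pairs with [(ent.text, ent.label_) for ent in doc.ents].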
Again, we can rely on displacy to visualize the results of NER.
displacy.serve(doc, style="ent")

Final Thoughts
Understanding how language is structured and how it works is key to building better tools that can handle text in meaningful ways. Techniques like part-of-speech tagging, dependency parsing, and named entity recognition help break down sentences so we can see how words function, how they connect, and what real-world things they refer to.
These methods give us a practical way to pull useful information out of text: identifying who did what to whom, or spotting names, dates, and places. Libraries like spaCy make it easy to explore these ideas, offering clear ways to see how language fits together.