SIS code: NPFL124
Semester: summer
E-credits: 4
Examination: C + Ex

NPFL124 – Natural Language Processing

About

NPFL124 provides students with knowledge and hands-on experience related to basic (mostly statistical) methods in the field of Natural Language Processing. Students will become acquainted with fundamental components such as corpora and language models, as well as with complex end-user applications such as Machine Translation.

The course consists of six two-week thematic blocks taught by five lecturers.

Scheduled time and location

  • lectures on Thursdays at 10:40 in S3, every week starting from the first week of the summer semester (note that two Thursdays in May are cancelled because of state holidays)
  • practicals on Wednesdays at 9:00, in SU2 in odd weeks of the semester (starting from the third week) and in S4 in even weeks (starting from the second week)

Lectures

1. Introduction
2. Language modeling
3. Morphological analysis
4. Syntactic analysis
5. Information retrieval
6. Information retrieval, cont.
7. Introduction to Deep Learning in NLP
8. Deep learning applications in NLP
9. Machine translation
10. Machine translation, cont.
11. Overview of Language Data Resources
12. Evaluation measures in NLP
13. Early exam


License

Unless otherwise stated, teaching materials for this course are available under CC BY-SA 4.0.

1. Introduction

February 20, 2025 · Materials: Intro to NLP · Questions

Lecturer: Jindřich Helcl

Topics:

  • Motivation for NLP.
  • Basic notions from probability and information theory.

2. Language modeling

February 27, 2025 · Materials: Language Models · Questions

Lecturer: Jindřich Helcl

Topics:

  • Language models.
  • The noisy channel model.
  • Markov models.

3. Morphological analysis

March 6, 2025 · Materials: Morphology · Questions

Lecturer: Daniel Zeman

Topics:

  • Morphological tags, parts of speech, morphological categories.
  • Finite-state morphology.

(Slides covered up to no. 46; to be completed next week.)

4. Syntactic analysis

March 13, 2025 · Materials: Syntax · Questions

Lecturer: Daniel Zeman

Topics:

  • Dependency vs. phrase-structure models.
  • Dependency parsing.

5. Information retrieval

March 20, 2025 · Materials: IR · Assignment on IR

Lecturer: Pavel Pecina

Topics:

  • Intro to IR.
  • Boolean model.
  • Inverted index.

6. Information retrieval, cont.

March 27, 2025 · Materials: IR cont. · Questions

Lecturer: Pavel Pecina

Topics:

  • Probabilistic models for Information Retrieval.

7. Introduction to Deep Learning in NLP

April 3, 2025 · Materials: Deep learning intro · Recording · Assignment on NN interpretation

Lecturer: Jindřich Libovický

Topics:

  • Neural network basics
  • Word embeddings, sequence-processing architectures
  • Pre-trained models: Word2Vec, BERT

The exercise is available as a Google Colab notebook.

8. Deep learning applications in NLP

April 10, 2025 · Materials: DL in applications · LLMs · Recording · Questions

Lecturer: Jindřich Libovický

Topics:

  • Named entity recognition
  • Answer span selection
  • Generative language models

9. Machine translation

April 17, 2025 · Materials: MT intro + Word Alignment + PBMT · Word Alignment by Philipp Koehn · Recording of the Lecture

Lab: IBM1 Word Alignment

Lecturer: Ondřej Bojar

Topics:

  • Introduction to MT.
  • MT evaluation.
  • Alignment.
  • Phrase-Based MT.

10. Machine translation, cont.

April 24, 2025 · Materials: Main Slides: Neural MT · Extra Slides: Transformer · Recording of the Lecture · Questions

Topics:

  • Fundamental problems of PBMT.
  • Neural machine translation (NMT).
    • Brief summary of NNs.
    • Sequence-to-sequence, with attention.
    • Transformer, self-attention.
    • Linguistic features in NMT.

11. Overview of Language Data Resources

May 14, 2025 (exceptionally on a Wednesday, and exceptionally in room S4!) · Materials: Data resources · Questions · Assignment on diacritics

Lecturer: Zdeněk Žabokrtský

Lecture topics:

  • Types of language data resources.
  • Annotation principles.

12. Evaluation measures in NLP

May 15, 2025 · Materials: Evaluation · Questions

Lecturer: Zdeněk Žabokrtský

Topics:

  • Purposes of evaluation.
  • Evaluation best practices, estimating upper and lower bounds.
  • Task-specific measures.

13. Early exam

Date: May 22, 2025

  • The first opportunity to take the final written exam test ("předtermín").
  • Additional exam dates can be offered in the exam period.

1. Language Identification

Deadline: April 4, 2025, 23:59 · Submission form

This assignment is an application of the topics covered in Lectures 1 and 2. Your task is to gather text data from various online sources in multiple languages and to train n-gram language models that identify the language a text is written in.

The submission will consist of a single IPython notebook (preferably a link to Google Colab), plus a filled-in checklist.

Also include any code used for data gathering. If it is not trivial to replicate the data-gathering phase, consider putting the resulting dataset at a publicly accessible URL (such as the public_html folder in your lab account) and calling !wget from the IPython notebook to retrieve it.

Proceed in the following steps:

  • Gather plain text data in multiple languages and save each language in a separate file, one file per language. The choice of languages is up to you: at least two languages plus English. If you choose to work with languages that do not use the Latin script, you can replace English with a third language; in all cases, please only work with languages that share the same script (language-specific characters like "ř" in Czech are fine).

  • Tokenize everything (you can use the Sacremoses library for this).

  • Report the size of the data in tokens and bytes. You should collect at least 200k tokens per language.

  • Split your data into training, heldout, and test sets (use 80% of the data for training and 10% each for the heldout and test sets).

  • Estimate the unigram, bigram, and trigram probabilities of character n-grams in each language separately (a Python sketch of this and the following steps appears after this list).

  • Report the 5 most common character trigrams per language, along with their counts and relative frequencies (count divided by the size of the data).

  • Estimate the "add less than one" smoothing parameter (described on slide 16 of Lecture 2) for the trigram language model. Remember to use the heldout set for this!

  • Report the values of the smoothing parameters (one per language).

  • Calculate the cross-entropies of all (trigram) language models on all test sets.

  • Write a function that identifies the language of a text by comparing the probabilities assigned by your models (the highest probability wins). The function should accept a string (of arbitrary length, possibly containing multiple words or sentences) and return a list of (probability, language) pairs ordered by probability, highest first (see the second sketch below).

  • Submit everything using the submission form.
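
The following is a minimal sketch, in Python, of how the modeling steps above might fit together: Moses tokenization (as suggested in the second step), the 80/10/10 split, character-trigram counts, add-λ ("add less than one") smoothing, and tuning the smoothing parameter on the heldout set via cross-entropy. All names are illustrative rather than prescribed by the assignment, and the simple grid search over λ is only one possible estimation procedure (the lecture slides may describe another).

    from collections import Counter
    from math import log2

    from sacremoses import MosesTokenizer  # pip install sacremoses

    def load_tokens(path, lang):
        """Read a plain-text file and tokenize it with Sacremoses
        (for large files you may prefer to tokenize line by line)."""
        tokenizer = MosesTokenizer(lang=lang)
        with open(path, encoding="utf-8") as f:
            return tokenizer.tokenize(f.read(), escape=False)

    def split_data(tokens):
        """80% training, 10% heldout, 10% test."""
        n = len(tokens)
        return (tokens[:int(0.8 * n)],
                tokens[int(0.8 * n):int(0.9 * n)],
                tokens[int(0.9 * n):])

    def char_ngrams(tokens, n):
        """All character n-grams of the text (tokens joined by spaces)."""
        text = " ".join(tokens)
        return [text[i:i + n] for i in range(len(text) - n + 1)]

    class TrigramModel:
        """Character-trigram model with add-lambda smoothing:
        P(c3 | c1 c2) = (count(c1c2c3) + lam) / (count(c1c2) + lam * V),
        where V is the number of distinct characters in the training data."""

        def __init__(self, tokens, lam=0.1):
            self.trigrams = Counter(char_ngrams(tokens, 3))
            self.bigrams = Counter(char_ngrams(tokens, 2))
            self.vocab_size = len(set(" ".join(tokens)))
            self.lam = lam

        def logprob(self, trigram):
            """log2 P(trigram[2] | trigram[:2]); Counter yields 0 for unseen."""
            return log2((self.trigrams[trigram] + self.lam)
                        / (self.bigrams[trigram[:2]] + self.lam * self.vocab_size))

    def cross_entropy(model, tokens):
        """Average negative log2-probability per trigram, in bits."""
        grams = char_ngrams(tokens, 3)
        return -sum(model.logprob(g) for g in grams) / len(grams)

    def tune_lambda(model, heldout, grid=(0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 0.9)):
        """Pick the lambda (less than one) that minimizes heldout cross-entropy,
        reusing the model's counts and changing only the smoothing weight."""
        best_lam, best_h = None, float("inf")
        for lam in grid:
            model.lam = lam
            h = cross_entropy(model, heldout)
            if h < best_h:
                best_lam, best_h = lam, h
        model.lam = best_lam
        return best_lam

With one model per language, the 5 most common trigrams are then available as, e.g., model.trigrams.most_common(5), and dividing each count by sum(model.trigrams.values()) gives the relative frequency.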
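
Continuing the sketch, here is one possible shape for the identification function from the last step, with one TrigramModel per language. Comparing raw string probabilities would underflow for long inputs, so the comparison uses total log2-probabilities; converting them into a normalized distribution over the candidate languages (a uniform-prior posterior) is my own reading of the required (probability, language) pairs.

    def identify_language(text, models):
        """models: dict mapping a language name to its TrigramModel.
        Returns a list of (probability, language) pairs, highest first."""
        # Ideally apply the same tokenization as was used for training.
        grams = char_ngrams(text.split(), 3)
        # Total log2-probability of the input under each language model.
        scores = {lang: sum(m.logprob(g) for g in grams)
                  for lang, m in models.items()}
        # Normalize in log space to avoid underflow.
        top = max(scores.values())
        weights = {lang: 2.0 ** (score - top) for lang, score in scores.items()}
        total = sum(weights.values())
        return sorted(((w / total, lang) for lang, w in weights.items()),
                      reverse=True)

    # Hypothetical usage, assuming one file per language as gathered above:
    #   models = {lang: TrigramModel(load_tokens(lang + ".txt", lang))
    #             for lang in ("cs", "en", "de")}
    #   print(identify_language("Dobrý den, jak se máte?", models))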

Pool of possible exam questions

All variants of the final written exam will be assembled exclusively from questions selected from the following list:

(warning: the question list might be subject to occasional changes during the semester; the final version will be announced here no later than three weeks before the first exam date.)

  • Basic notions from probability and information theory.
    1. What are the three basic properties of a probability function? (1 point)
    2. When do we say that two events are (statistically) independent? (1 point)
    3. Show how Bayes' Theorem can be derived. (1 point)
    4. Explain the chain rule. (1 point)
    5. Explain the notion of Entropy (formula expected too). (1 point)
    6. Explain Kullback-Leibler distance (formula expected too). (1 point)
    7. Explain Mutual Information (formula expected too). (1 point) (The standard formulas for questions 5-7 are collected in a short reference after this question pool.)
  • Language models. The noisy channel model.
    1. Explain the notion of The Noisy Channel. (1 point)
    2. Explain the notion of the n-gram language model. (1 point)
    3. Describe how Maximum Likelihood estimate of a trigram language model is computed. (2 points)
    4. Why do we need smoothing (in language modelling)? (1 point)
    5. Give at least two examples of smoothing methods. (2 points)
  • Morphological analysis.
    1. What is a morphological tag? List at least five features that are often encoded in morphological tag sets. (1 point)
    2. List the open and closed part-of-speech classes and explain the difference between open and closed classes. (1 point)
    3. Explain the difference between a finite-state automaton and a finite-state transducer. Describe the algorithm of using a finite-state transducer to transform a surface string to a lexical string (pseudocode or source code in your favorite programming language). (2 points)
    4. Give an example of a phonological or an orthographical change caused by morphological inflection (any natural language). Describe the rule that would take care of the change during analysis or generation. It is not required that you draw a transducer, although drawing a transducer is one of the possible ways of describing the rule. (1 point)
    5. Give an example of a long-distance dependency in morphology (any natural language). How would you handle it in a morphological analyzer? (1 point)
  • Syntactic analysis.
    1. Describe dependency trees, constituent trees, differences between them and phenomena that must be addressed when converting between them. (2 points)
    2. Give an example of a sentence (in any natural language) that has at least two plausible, semantically different syntactic analyses (readings). Draw the corresponding dependency trees and explain the difference in meaning. Are there other additional readings that are less probable but still grammatically acceptable? (2 points)
    3. What is coordination? Why is it difficult in dependency parsing? How would you capture coordination in a dependency structure? What are the advantages and disadvantages of your solution? (1 point)
    4. What is ellipsis? Why is it difficult in parsing? Give examples of different kinds of ellipsis (any natural language). (1 point)
  • Information retrieval.
    1. Explain the difference between information need and query. (1 point)
    2. What is an inverted index and what are the optimal data structures for it? (1 point)
    3. What is a stopword and what is it useful for? (1 point)
    4. Explain the bag-of-words principle. (1 point)
    5. What are the main advantage and disadvantage of the Boolean model? (1 point)
    6. Explain the role of the two components in the TF-IDF weighting scheme. (1 point)
    7. Explain length normalization in the vector space model. What is it useful for? (1 point)
  • Language data resources.
    1. Explain what a corpus is. (1 point)
    2. Explain what annotation is (in the context of language resources). What types of annotation do you know? (2 points)
    3. What are the reasons for the variability of even basic types of annotation, such as the annotation of morphological categories (parts of speech etc.)? (1 point)
    4. Explain what a treebank is. Why are trees used? (2 points)
    5. Explain what a parallel corpus is. What kinds of alignment can we distinguish? (2 points)
    6. What is a sentiment-annotated corpus? How can it be used? (1 point)
    7. What is a coreference-annotated corpus? (1 point)
    8. Explain how WordNet is structured. (1 point)
    9. Explain the difference between derivation and inflection. (1 point)
  • Evaluation measures in NLP.
    1. Give at least two examples of situations in which measuring a percentage accuracy is not adequate. (1 point)
    2. Explain precision and recall. (1 point)
    3. What is F-measure, what is it useful for? (1 point)
    4. What is k-fold cross-validation? (1 point)
    5. Explain BLEU (the exact formula not needed, just the main principles). (1 point)
    6. Explain the purpose of brevity penalty in BLEU. (1 point)
    7. What is Labeled Attachment Score (in parsing)? (1 point)
    8. What is Word Error Rate (in speech recognition)? (1 point)
    9. What is inter-annotator agreement? How can it be measured? (1 point)
    10. What is Cohen's kappa? (1 point)
  • Deep learning for NLP.
    1. Describe the two methods for training the Word2Vec model. (1 point)
    2. Explain the difference between Word2Vec and FastText embeddings. (1 point)
    3. Explain convolutional networks for sequence processing. (1 point)
    4. What are residual connections in neural networks? Why do we use them? (1 point)
    5. Explain layer normalization and its effect on the training process. (1 point, 2 points with formula)
    6. Explain the vanishing gradient problem in recurrent neural networks; name architectures that deal with the issue. (1 point)
    7. Describe the LSTM networks. (1 point)
    8. Use formulas to express the loss function for training a sequence labeling model. (1 point)
    9. Sketch the structure of the Transformer model. (2 points)
    10. Why do we use positional encodings in the Transformer model? (1 point)
    11. Explain the training procedure of the BERT model. (2 points)
  • Machine translation fundamentals.
    1. Why is MT difficult from the linguistic point of view? Provide examples and explanations for at least three different phenomena. (2 points)
    2. Why is MT difficult from the computational point of view? (1 point)
    3. Briefly describe at least three methods of manual MT evaluation. (1-2 points)
    4. Describe BLEU (1 point for the core properties explained, 1 point for the commented formula).
    5. Describe IBM Model 1 for word alignment, highlighting the EM structure of the algorithm. (1 point)
    6. Explain using equations the relation between Noisy channel model and log-linear model for classical statistical MT. (2 points)
    7. Describe the loop of weight optimization for the log-linear model as used in phrase-based MT. (1 point)
  • Neural machine translation.
    1. Describe the critical limitation of PBMT that NMT solves. Provide example training data and an example input where PBMT is very likely to introduce an error. (1 point)
    2. Use formulas to highlight the similarity of NMT and LMs. (1 point)
    3. Describe how words are fed to current NMT architectures and explain why this is beneficial compared to a one-hot representation. (1 point)
    4. Sketch the structure of an encoder-decoder architecture for neural MT; remember to describe the components in the picture. (2 points)
    5. What is the difference in RNN decoder application at training time vs. at runtime? (1 point)
    6. What problem does attention in NMT address? Provide the key idea of the method. (1 point)
    7. What problem/task do both RNN and self-attention resolve and what is the main benefit of self-attention over RNN? (1 point)
    8. What are the three roles each state at a Transformer encoder layer takes in self-attention? (1 point)
    9. What are the three uses of self-attention in the Transformer model? (1 point)
    10. Provide an example of an NMT improvement that was assumed to come from additional linguistic information but also occurred for a simpler reason. (1 point)
    11. Summarize and compare the strategy of "classical statistical MT" vs. the strategy of neural approaches to MT. (1 point)
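
As a quick aid for questions 5-7 of the first block above, the standard definitions (in the usual notation; this reference is not itself part of the official question list) are:

    H(X) = -\sum_x p(x) \log_2 p(x)

    D(p \| q) = \sum_x p(x) \log_2 \frac{p(x)}{q(x)}

    I(X;Y) = \sum_{x,y} p(x,y) \log_2 \frac{p(x,y)}{p(x)\,p(y)} = D(p(x,y) \| p(x)\,p(y))
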
    Homework assignments

    • There will be 3 homework assignments.
    • For each assignment, you will get points, up to a given maximum (the maximum is specified with each assignment).
    • All assignments will have a fixed deadline (usually in two weeks).
    • If you submit the assignment after the deadline, you will get:
      • up to 50% of the maximum points if it is less than 2 weeks after the deadline;
      • 0 points if it is more than 2 weeks after the deadline.
    • Once we check the submitted assignments, you will see the points you got and the comments from us.
    • To be allowed to take the test (which is required to pass the course), you need to get at least 50% of the total points from the assignments.
    • Attendance at lectures is voluntary but recommended.

    • Attendance at practicals is mandatory; no more than three absences per semester will be allowed.

    Exam test

    • There will be a written exam test at the end of the semester.
    • To pass the course, you need to get at least 50% of the total points from the test.
    • You can find a sample of test questions on the website; the list may be updated during the semester.

    Grading

    Your grade is based on your average performance; the exam test and the homework assignments are weighted 1:1.

    • ≥ 90%: grade 1 (excellent)
    • ≥ 70%: grade 2 (very good)
    • ≥ 50%: grade 3 (good)
    • < 50%: grade 4 (fail)

    For example, if you get 600 out of 1000 points for homework assignments (60%) and 36 out of 40 points for the test (90%), your total performance is 75% and you get a 2.

    No cheating

    • Cheating is strictly prohibited and any student found cheating will be punished. The punishment can involve failing the whole course, or, in grave cases, being expelled from the faculty.
    • Discussing homework assignments with your classmates is OK. Sharing code is not OK (unless explicitly allowed); by default, you must complete the assignments yourself.
    • All students involved in cheating will be punished. For example, if you share your assignment with a friend, both you and your friend will be punished.