NPTEL: NOC: Applied Natural Language Processing (Computer Science and Engineering)

Coordinator: Prof. Ramaseshan R


Lecture 1 - Introduction

Lecture 2 - Operations on a Corpus

Lecture 3 - Probability and NLP

Lecture 4 - Vector Space Models

Lecture 5 - Sequence Learning

Lecture 6 - Machine Translation

Lecture 7 - Preprocessing

Lecture 8 - Statistical Properties of Words - Part 1

Lecture 9 - Statistical Properties of Words - Part 2

Lecture 10 - Statistical Properties of Words - Part 3

Lecture 11 - Vector Space Models for NLP

Lecture 12 - Document Similarity - Demo, Inverted index, Exercise

Lecture 13 - Vector Representation of words

Lecture 14 - Contextual understanding of text

Lecture 15 - Co-occurrence matrix, n-grams

Lecture 16 - Collocations, Dense Word Vectors

Lecture 17 - SVD, Dimensionality reduction, Demo

Lecture 18 - Query Processing

Lecture 19 - Topic Modeling

Lecture 20 - Examples for word prediction

Lecture 21 - Introduction to Probability in the context of NLP

Lecture 22 - Joint and conditional probabilities, independence with examples

Lecture 23 - The definition of a probabilistic language model

Lecture 24 - Chain rule and Markov assumption

Lecture 25 - Generative Models

Lecture 26 - Bigram and Trigram Language models - peeking inside the model building

Lecture 27 - Out-of-vocabulary words and the curse of dimensionality

Lecture 28 - Exercise

Lecture 29 - Naive Bayes, classification

Lecture 30 - Machine learning, perceptron, linearly separable

Lecture 31 - Linear Models for Classification

Lecture 32 - Biological Neural Network

Lecture 33 - Perceptron

Lecture 34 - Perceptron Learning

Lecture 35 - Logical XOR

Lecture 36 - Activation Functions

Lecture 37 - Gradient Descent

Lecture 38 - Feedforward and Backpropagation Neural Network

Lecture 39 - Why Word2Vec?

Lecture 40 - What are CBOW and Skip-Gram Models?

Lecture 41 - One word learning architecture

Lecture 42 - Forward pass for Word2Vec

Lecture 43 - Matrix Operations Explained

Lecture 44 - CBOW and Skip-Gram Models

Lecture 45 - Building a Skip-Gram model using Python

Lecture 46 - Reduction of complexity - sub-sampling, negative sampling

Lecture 47 - Binary tree, Hierarchical softmax

Lecture 48 - Mapping the output layer to Softmax

Lecture 49 - Updating the weights using hierarchical softmax

Lecture 50 - Discussion on the results obtained from Word2Vec

Lecture 51 - Recap and Introduction

Lecture 52 - ANN as an LM and its limitations

Lecture 53 - Sequence Learning and its applications

Lecture 54 - Introduction to Recurrent Neural Network

Lecture 55 - Unrolled RNN

Lecture 56 - RNN-Based Language Model

Lecture 57 - BPTT - Forward Pass

Lecture 58 - BPTT - Derivatives for W, V and U

Lecture 59 - BPTT - Exploding and vanishing gradient

Lecture 60 - LSTM

Lecture 61 - Truncated BPTT

Lecture 62 - GRU

Lecture 63 - Introduction and Historical Approaches to Machine Translation

Lecture 64 - What is SMT?

Lecture 65 - Noisy Channel Model, Bayes' Rule, Language Model

Lecture 66 - Translation Model, Alignment Variables

Lecture 67 - Alignments again!

Lecture 68 - IBM Model 1

Lecture 69 - IBM Model 2

Lecture 70 - Introduction to Phrase-based translation

Lecture 71 - Symmetrization of alignments

Lecture 72 - Extraction of Phrases

Lecture 73 - Learning/estimating the phrase probabilities using another Symmetrization example

Lecture 74 - Introduction to evaluation of Machine Translation

Lecture 75 - BLEU - A Short Discussion of the Seminal Paper

Lecture 76 - BLEU Demo using NLTK and other Metrics

Lecture 77 - Encoder-Decoder model for Neural Machine Translation

Lecture 78 - RNN-Based Machine Translation

Lecture 79 - Recap and Connecting Bloom's Taxonomy with Machine Learning

Lecture 80 - Introduction to Attention based Translation

Lecture 81 - Research Paper Discussion on "Neural Machine Translation by Jointly Learning to Align and Translate"

Lecture 82 - Typical NMT architecture and models for multi-language translation

Lecture 83 - Beam Search, Stochastic Gradient Descent, Mini Batch, Batch

Lecture 84 - Beam Search, Stochastic Gradient Descent, Mini Batch, Batch

Lecture 85 - Introduction to Conversation Modeling

Lecture 86 - A few examples in Conversation Modeling

Lecture 87 - Some ideas to Implement IR-based Conversation Modeling

Lecture 88 - Discussion of some ideas in Question Answering

Lecture 89 - Hyperspace Analogue to Language - HAL

Lecture 90 - Correlated Occurrence Analogue to Lexical Semantics - COALS

Lecture 91 - Global Vectors - GloVe

Lecture 92 - Evaluation of Word vectors