Timestamps:
0:00 Intro
0:42 Problem with Self-attention
2:30 Positional Encoding Derivation
11:32 Positional Encoding Formula
13:04 How does it capture relative positions?
19:06 Concatenate or Add Positional Encoding?
21:39 Why Positional Encodings do not interfere with Word Embeddings
25:04 Outro
Transformation Matrix resource - https://blog.timodenk.com/linear-relationships-in-the-transformers-positional-encoding/
Code - https://github.com/Coding-Lane/Positional-Encoding/blob/main/Positional%20Encoding.ipynb
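For quick reference, here is a minimal NumPy sketch of the sinusoidal positional encoding formula covered in the video (PE(pos, 2i) = sin(pos / 10000^(2i/d_model)), PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model))). The function name and parameters are illustrative only; see the linked notebook for the full walkthrough.

import numpy as np

def positional_encoding(seq_len, d_model):
    # PE(pos, 2i)   = sin(pos / 10000^(2i / d_model))
    # PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))
    positions = np.arange(seq_len)[:, np.newaxis]          # shape (seq_len, 1)
    dims = np.arange(0, d_model, 2)[np.newaxis, :]         # shape (1, d_model/2)
    angles = positions / np.power(10000, dims / d_model)   # shape (seq_len, d_model/2)

    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # even dimensions use sine
    pe[:, 1::2] = np.cos(angles)   # odd dimensions use cosine
    return pe

# Example: encodings for a 10-token sequence with embedding size 16,
# which are added to (not concatenated with) the word embeddings.
pe = positional_encoding(seq_len=10, d_model=16)
print(pe.shape)  # (10, 16)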
Follow my entire Transformers playlist:
Transformers Playlist: https://www.youtube.com/watch?v=lRylkiFdUdk&list=PLuhqtP7jdD8CQTxwVsuiFYGvHtFpNhlR3&index=1&t=0s
RNN Playlist: https://www.youtube.com/watch?v=lWPkNkShNbo&list=PLuhqtP7jdD8ARBnzj8SZwNFhwWT89fAFr&t=0s
CNN Playlist: https://www.youtube.com/watch?v=E5Z7FQp7AQQ&list=PLuhqtP7jdD8CD6rOWy20INGM44kULvrHu&t=0s
Complete Neural Network: https://www.youtube.com/watch?v=mlk0rddP3L4&list=PLuhqtP7jdD8CftMk831qdE8BlIteSaNzD&t=0s
Complete Logistic Regression Playlist: https://www.youtube.com/watch?v=U1omz0B9FTw&list=PLuhqtP7jdD8Chy7QIo5U0zzKP8-emLdny&t=0s
Complete Linear Regression Playlist: https://www.youtube.com/watch?v=nwD5U2WxTdk&list=PLuhqtP7jdD8AFocJuxC6_Zz0HepAWL9cF&t=0s