Neural Computation Module Wiki
Map of the module’s weekly themes and key concepts.
Weeks
- week 01 — The Perceptron and the Optimisation Problem: perceptron, dot product, decision boundary, loss function, maximum likelihood estimation
- week 02 — Gradient Descent: How Learning Actually Happens: gradient descent, learning rate, sigmoid function, binary cross entropy, gradient descent variants
- week 03 — Multi-Layer Networks, Backpropagation, and Generalisation: multi-layer perceptron, computation graph, backpropagation, softmax, overfitting, regularisation
- week 04 — Images and Convolutional Neural Networks: image representation, activation functions, convolution, pooling, convolutional neural network, shift invariance / equivariance
- week 05 — Tricks of the Trade and Dense Prediction: weight initialisation, normalisation, data augmentation, dropout, residual connections, transfer learning, upsampling, U-Net
- week 06 — Unsupervised Learning, Autoencoders, and Contrastive Learning: autoencoder, latent representation, representation learning, self-supervised learning, contrastive learning, pretext task, Clever Hans effect
- week 07 — Generative Models, GANs, and Conditional Generation: generative model, generative adversarial network, Bayes' theorem, conditional generative model
- week 08 — Diffusion Models: From Noise to Image, One Small Step at a Time: diffusion model, latent diffusion model
- week 09 — Language Modeling: From Counting to ChatGPT: language model, n-gram language model, perplexity, decoding strategies, word embedding, recurrent neural network, LSTM
- week 10 —
- week 11 —
- week 12 —
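As a taste of how the week 01–02 concepts fit together, here is a minimal sketch (my own illustration, not course code) of a single-neuron classifier trained by gradient descent on binary cross entropy: the dot product defines the decision boundary, the sigmoid turns it into a probability, and the learning rate scales each update.

```python
import math

def sigmoid(z):
    """Squash a real score into a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def train(data, labels, lr=0.5, epochs=200):
    """Per-sample gradient descent on binary cross entropy for w.x + b."""
    w = [0.0] * len(data[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b  # dot product + bias
            p = sigmoid(z)                                # predicted P(y = 1)
            err = p - y                                   # dBCE/dz for this sample
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Toy linearly separable data (logical AND): label 1 iff both inputs are 1.
data = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
labels = [0, 0, 0, 1]
w, b = train(data, labels)
preds = [int(sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5) for x in data]
```

The hyperparameters (`lr=0.5`, `epochs=200`) are arbitrary choices for this toy problem; on this separable data the learned boundary classifies all four points correctly.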
Cross-Week Topics
Placeholder: cross-week topics will be added once recurring patterns across the weekly notes become clear.
Exam Prep
- flashcards
- past papers
- revision guide