Why Neural Networks can learn (almost) anything

Emergent Garden
50000 subscribers
A video about neural networks, how they work, and why they're useful.

My twitter:

SOURCES
Neural network playground:
Universal Function Approximation:
- Proof:
- Covering ReLUs:
- Covering discontinuous functions:
Turing Completeness:
- Networks of infinite size are Turing complete:
- Neural Computability I & II (behind a paywall unfortunately, but is cited in the following paper)
- RNNs are Turing complete:
- Transformers are Turing complete:
More on backpropagation:
More on the Mandelbrot set:

Additional Sources:
- Neat explanation of the universal function approximation proof:
- Where I got the hard-coded parameters:

Reviewers:
Andrew Carr
Connor Christopherson

TIMESTAMPS
(0:00) Intro
(0:27) Functions
(2:31) Neurons
(4:25) Activation Functions
(6:36) NNs can learn anything
(8:31) NNs can't learn anything
(9:35) ...but they can learn a lot

MUSIC
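The sources above center on universal function approximation with ReLU neurons. As a rough intuition (a minimal sketch, not taken from the video), a single hidden layer of ReLU units can compose piecewise-linear functions; for example, two hand-set neurons reproduce the absolute-value function exactly, since |x| = relu(x) + relu(-x). The function names and weights below are illustrative choices, not anything from the listed sources.

```python
def relu(z):
    # rectified linear unit: passes positive inputs, zeroes out negatives
    return max(0.0, z)

def neuron(x, weight, bias):
    # one artificial neuron: weighted input plus bias, then activation
    return relu(weight * x + bias)

def abs_net(x):
    # two hidden ReLU neurons, each with output weight 1:
    # relu(x) fires for x > 0, relu(-x) fires for x < 0
    return neuron(x, 1.0, 0.0) + neuron(x, -1.0, 0.0)

# the tiny network matches |x| everywhere
for x in (-2.0, -0.5, 0.0, 3.0):
    assert abs_net(x) == abs(x)
```

With more hidden neurons, the same construction stacks shifted ReLU "kinks" to approximate any continuous function on an interval, which is the core of the approximation proofs cited above.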