Wednesday Oct 02, 2024

Jay McClelland | Neural Networks: Artificial and Biological

Jay McClelland is a pioneer in the field of artificial intelligence, a cognitive psychologist, and a professor at Stanford University in the psychology, linguistics, and computer science departments. Together with David Rumelhart, Jay published the two-volume work Parallel Distributed Processing, which led to the flourishing of the connectionist approach to understanding cognition.

In this conversation, Jay gives us a crash course in how neurons and biological brains work. This sets the stage for how psychologists such as Jay, David Rumelhart, and Geoffrey Hinton historically approached the development of models of cognition and, ultimately, artificial intelligence. We also discuss alternative approaches to modeling cognition, such as symbolic and purely neuroscientific ones.

Patreon (bonus materials + video chat):
https://www.patreon.com/timothynguyen


Part I. Introduction

  • 00:00:00 : Preview
  • 00:01:10 : Cognitive psychology
  • 00:07:14 : Interdisciplinary work and Jay's academic journey
  • 00:12:39 : Context affects perception
  • 00:13:05 : Chomsky and psycholinguists
  • 00:18:03 : Technical outline

Part II. The Brain

  • 00:20:20 : Structure of neurons
  • 00:25:26 : Action potentials
  • 00:27:00 : Synaptic processes and neuron firing (see the sketch after this list)
  • 00:29:18 : Inhibitory neurons
  • 00:33:10 : Feedforward neural networks
  • 00:34:57 : Visual system
  • 00:39:46 : Various parts of the visual cortex
  • 00:45:31 : Columnar organization in the cortex
  • 00:47:04 : Colocation in artificial vs biological networks
  • 00:53:03 : Sensory systems and brain maps
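
A note for technically minded listeners: the units in the connectionist models discussed later abstract away most of the biology covered in this part. Here is a minimal Python sketch (ours, not from the episode; all names are illustrative) of that standard abstraction: excitatory and inhibitory synapses become signed weights, and a squashed weighted sum stands in for the cell's firing rate.

    import numpy as np

    def unit_activation(inputs, weights, bias=0.0):
        # Connectionist abstraction of a neuron: excitatory synapses get
        # positive weights, inhibitory ones negative. The weighted sum
        # ("net input") is squashed by a logistic function, standing in
        # for the cell's firing rate rather than individual spikes.
        net = float(np.dot(weights, inputs)) + bias
        return 1.0 / (1.0 + np.exp(-net))

    # Two excitatory inputs and one inhibitory input (negative weight).
    inputs = np.array([0.9, 0.6, 0.8])
    weights = np.array([1.2, 0.7, -1.5])
    print(unit_activation(inputs, weights))  # ~0.57: mild net excitation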

Part III. Approaches to AI, PDP, and Learning Rules

  • 01:12:35 : Chomsky, symbolic rules, universal grammar
  • 01:28:28 : Neuroscience, Francis Crick, vision vs language
  • 01:32:36 : Neuroscience = bottom up
  • 01:37:20 : Jay’s path to AI
  • 01:43:51 : James Anderson
  • 01:44:51 : Geoff Hinton
  • 01:54:25 : Parallel Distributed Processing (PDP)
  • 02:03:40 : McClelland & Rumelhart’s reading model
  • 02:31:25 : Theories of learning
  • 02:35:52 : Hebbian learning (see the sketch after this list)
  • 02:43:23 : Rumelhart’s Delta rule
  • 02:44:45 : Gradient descent
  • 02:47:04 : Backpropagation
  • 02:54:52 : Outro: Retrospective and looking ahead
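
For listeners who want a concrete anchor for the learning rules above, here is a minimal Python sketch (ours, not from the episode; the data and names are illustrative) contrasting Hebbian learning with the delta rule, the single-layer gradient-descent rule that backpropagation generalizes to multi-layer networks.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))        # 100 input patterns, 3 features
    w_true = np.array([1.0, -2.0, 0.5])  # hypothetical "teacher" weights
    y = X @ w_true                       # target outputs
    lr = 0.01                            # learning rate

    # Hebbian learning: strengthen a weight when its input and the output
    # are co-active ("fire together, wire together"). There is no error
    # signal, so repeated passes grow the weights without bound unless
    # they are normalized.
    w_hebb = np.zeros(3)
    for x, t in zip(X, y):
        w_hebb += lr * t * x

    # Delta rule: adjust each weight in proportion to the error (target
    # minus output), i.e. gradient descent on squared error. This is the
    # single-layer case that backpropagation extends through hidden layers.
    w_delta = np.zeros(3)
    for _ in range(50):                  # epochs
        for x, t in zip(X, y):
            err = t - w_delta @ x
            w_delta += lr * err * x

    print("Hebbian:", np.round(w_hebb, 2))
    print("Delta:  ", np.round(w_delta, 2))  # approaches w_true

On this toy data the delta-rule weights converge to the teacher weights, while the Hebbian weights only track input-output correlations.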


Image credits:
http://timothynguyen.org/image-credits/

Further reading:

Rumelhart, D. E., McClelland, J. L., & the PDP Research Group (1986). Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vols. 1-2. MIT Press.

McClelland, J. L. (2013). Integrating probabilistic models of perception and interactive neural networks: A historical and tutorial review. Frontiers in Psychology, 4, 503.

 

Twitter: @iamtimnguyen

 

Webpage: http://www.timothynguyen.org
