Event Detail

LL Technology Office: Scaling Knowledge Processing from 2D Chips to 3D Brains

Wed May 1, 2024 1:00–2:00 PM

Description

Abstract

Artificial intelligence (AI) now advances by performing twice as many multiplications every two months, but the semiconductor industry tiles twice as many multipliers on a chip every two years. Moreover, the returns from tiling these multipliers ever more densely now diminish because signals must travel relatively farther and farther. Although travel can be shortened by stacking multipliers tiled in two dimensions in the third dimension, such a solution acutely reduces the available surface area for dissipating heat. My recent reconception of the brain’s fundamental units of computation and communication removes this thermal roadblock to parallel processing in three dimensions by moving from synaptocentric learning to dendrocentric learning. Synaptocentric learning weights activations precisely and sums them across the arbor of a dendrite to detect a spatial pattern. Using dot products to emulate synaptic weighting, current AI realizes this 60-year-old conception. Dendrocentric learning orders pulses meticulously along a short stretch of dendrite to detect a spatiotemporal pattern. Using a string of ferroelectric transistors to emulate a stretch of dendrite, I will illustrate how dendrocentric learning AI could run not with megawatts in the cloud but rather with watts on a smartphone.
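The contrast between the two learning styles the abstract describes can be sketched in a few lines of code. The toy Python comparison below is an illustration only (the function names, threshold, and order-check rule are assumptions for exposition, not the speaker's model or circuit): a synaptocentric unit thresholds a dot product over simultaneous activations to detect a spatial pattern, while a dendrocentric unit fires only when pulses arrive along the "dendrite" in a prescribed temporal order.

```python
# Illustrative sketch only: a toy contrast between synaptocentric and
# dendrocentric detection as characterized in the abstract. All names and
# parameters here are assumptions, not the speaker's actual formulation.
import numpy as np


def synaptocentric_unit(activations: np.ndarray,
                        weights: np.ndarray,
                        threshold: float = 1.0) -> bool:
    """Detect a spatial pattern: precisely weight activations and sum them
    across the dendritic arbor (a dot product, as in current AI)."""
    return float(np.dot(weights, activations)) >= threshold


def dendrocentric_unit(pulse_times: dict[str, float],
                       expected_order: list[str]) -> bool:
    """Detect a spatiotemporal pattern: fire only if pulses arrive along the
    stretch of dendrite in the expected order (timing, not weighting)."""
    times = [pulse_times.get(inp, float("inf")) for inp in expected_order]
    return all(np.isfinite(times)) and all(
        earlier < later for earlier, later in zip(times, times[1:])
    )


if __name__ == "__main__":
    # Spatial pattern: which inputs are active, regardless of when.
    print(synaptocentric_unit(np.array([1.0, 0.0, 1.0]),
                              np.array([0.6, 0.9, 0.5])))        # True

    # Spatiotemporal pattern: the same inputs, but arrival order matters.
    print(dendrocentric_unit({"A": 0.1, "B": 0.3, "C": 0.7},
                             ["A", "B", "C"]))                   # True
    print(dendrocentric_unit({"A": 0.7, "B": 0.3, "C": 0.1},
                             ["A", "B", "C"]))                   # False
```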