More from Events Calendar
- Oct 16, 4:00 PM: Richard P. Stanley Seminar in Combinatorics
  Speaker: Tom Bohman (Carnegie Mellon University)
  Title: Two-point concentration of the domination number of the random graph
  Abstract: We show that the domination number of the binomial random graph G_{n,p} with edge probability p = n^{-\gamma} is concentrated on two values for \gamma < 2/3 and not concentrated on two values for \gamma > 2/3. The main ingredient in the proof is a Poisson-type approximation for the probability that a random bipartite graph has no isolated vertices, in a regime where standard tools are not available. Joint work with Lutz Warnke and Emily Zhu. (A small numerical sketch of the concentration phenomenon appears after this list.)
- Oct 16, 4:00 PM: The Honest Truth About Causal Trees: Accuracy Limits for Heterogeneous Treatment Effect Estimation
  Speaker: Matias Cattaneo (Princeton University)
- Oct 16, 4:00 PM: Whistleblowing
  Speaker: Ayca Kaya (University of Miami), joint with Aniko Oery and Anne-Katrine Roesler
- Oct 16, 4:15 PM: Fall 2025 ORC Seminar Series
  A series of talks on OR-related topics. For more information see: https://orc.mit.edu/seminars-events/
- Oct 16, 4:30 PM: Brandeis-Harvard-MIT-Northeastern Joint Mathematics Colloquium
  Speaker: Sourav Chatterjee (Stanford)
  Title: Neural networks can learn any low complexity pattern
  Abstract: Neural networks have taken over the world, but research on why they work so well is still in its infancy. I will present a baby step in this direction, based on joint work with my student Tim Sudijono. We show, with quantitative bounds, that a certain kind of neural network can quickly learn any pattern that can be expressed as a short program. An example is as follows. Let N be a large number, and suppose we have data consisting of a sample of X's and Y's, where each X is a randomly chosen number between 1 and N, and the corresponding Y is 1 if X is a prime and 0 if not. The sample size n is negligible compared to N. If we fit a neural network to this data which is "sparsest" in a suitable sense, it turns out that the network will be able to accurately predict whether a newly chosen X is prime, with a sample of size as small as (log N)^2, even though the network does not know, a priori, that we are asking it to detect primality. (A sketch of this data-generating setup appears after this list.) The talk will be accessible to those with no background in neural networks; I will define all necessary concepts.
  Pre-reception held in 2-290 at 4:00 PM.
- Oct 16, 4:30 PM: Seminar on Arithmetic Geometry, etc. (STAGE)
  Speaker: Jane Shi (MIT)
  Title: Katz's proof of the Riemann hypothesis for curves
  Reference: Katz, A Note on Riemann Hypothesis for Curves and Hypersurfaces Over Finite Fields, Sections 1-4.
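The Bohman talk above concerns an asymptotic statement about G_{n,p}. Purely as an illustrative aid, here is a minimal Python sketch (my own, not code from the talk) that samples small binomial random graphs at p = n^{-\gamma} and computes their domination numbers exactly by brute force, so one can eyeball how the distribution clusters. The function names and the parameters n = 14, gamma = 0.4 are assumptions chosen to keep the exponential search fast; real two-point concentration is an n-to-infinity phenomenon, so small-n output is only suggestive.

```python
# Hypothetical sketch (not from the talk): tabulate the domination number
# of G(n, p) with p = n^(-gamma) by exact brute force on small graphs.
import itertools
import random

def sample_gnp_masks(n, p):
    """Closed neighborhoods of G(n, p) as bitmasks: bit v of nbr[u] is set
    iff v == u or {u, v} is an edge."""
    nbr = [1 << u for u in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if random.random() < p:
                nbr[u] |= 1 << v
                nbr[v] |= 1 << u
    return nbr

def domination_number(nbr):
    """Smallest k such that some k closed neighborhoods cover all vertices
    (exact, exponential-time search; fine only for small n)."""
    n = len(nbr)
    full = (1 << n) - 1
    for k in range(1, n + 1):
        for subset in itertools.combinations(range(n), k):
            covered = 0
            for u in subset:
                covered |= nbr[u]
            if covered == full:
                return k
    return n  # unreachable: the set of all vertices always dominates

if __name__ == "__main__":
    n, gamma, trials = 14, 0.4, 100   # gamma < 2/3: the concentration regime
    p = n ** (-gamma)
    counts = {}
    for _ in range(trials):
        d = domination_number(sample_gnp_masks(n, p))
        counts[d] = counts.get(d, 0) + 1
    print(dict(sorted(counts.items())))  # mass should cluster on few values
```

Bitmasks are used so the cover check is a handful of bitwise ORs per subset, which keeps the brute force tolerable at this size.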
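For the Chatterjee colloquium, the abstract's primality example pins down a concrete data-generating process. The following minimal Python sketch (illustrative only, not the speakers' code) builds that dataset with a sample of size about (log N)^2; the value N = 10^9 and the use of natural log are assumptions, and fitting the "sparsest" network from the talk is deliberately not attempted.

```python
# Hypothetical sketch of the dataset in the primality example:
# X uniform on {1, ..., N}, Y = 1 iff X is prime, sample size ~ (log N)^2.
# The talk's "sparsest neural network" fit is omitted here.
import math
import random

def is_prime(x):
    """Trial division; adequate for illustration at this scale."""
    if x < 2:
        return False
    for d in range(2, math.isqrt(x) + 1):
        if x % d == 0:
            return False
    return True

N = 10**9                        # range of the inputs (illustrative choice)
n = round(math.log(N) ** 2)      # sample size ~ (log N)^2, as in the abstract
data = [(x, int(is_prime(x)))
        for x in (random.randint(1, N) for _ in range(n))]
print(f"n = {n}, primes in sample = {sum(y for _, y in data)}")
```

Note how small n is relative to N (a few hundred samples against a billion candidate inputs), which is the point the abstract emphasizes.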