More from Events Calendar
- Oct 17, 11:00 AM. Statistics and Data Science Seminar
  Speaker: Navid Azizan (MIT)
  Title: Hard-Constrained Neural Networks
  Abstract: Incorporating prior knowledge and domain-specific input-output requirements, such as safety or stability, as hard constraints into neural networks is a key enabler for their deployment in high-stakes applications. However, existing methods often rely on soft penalties, which are insufficient, especially on out-of-distribution samples. In this talk, I will introduce hard-constrained neural networks (HardNet), a general framework for enforcing hard, input-dependent constraints by appending a differentiable enforcement layer to any neural network (a generic sketch of such a layer appears after this list). This approach enables end-to-end training and, crucially, is proven to preserve the network’s universal approximation capabilities, ensuring model expressivity is not sacrificed. We demonstrate the versatility and effectiveness of HardNet across various applications: learning with piecewise constraints, learning optimization solvers with guaranteed feasibility, and optimizing control policies in safety-critical systems. The framework applies even to problems where the constraints themselves are not fully known and must be learned from data in parametric form, for which I will present two key applications: data-driven control with inherent Lyapunov stability and learning chaotic dynamical systems with guaranteed boundedness. Together, these results demonstrate a unified methodology for embedding formal constraints into deep learning, paving the way for more reliable AI.
  Biography: Navid Azizan is the Alfred H. (1929) and Jean M. Hayes Assistant Professor at MIT, where he holds dual appointments in Mechanical Engineering (Control, Instrumentation & Robotics) and IDSS and is a Principal Investigator in LIDS. His research interests broadly lie in machine learning, systems and control, mathematical optimization, and network science. His research lab focuses on various aspects of reliable intelligent systems, with an emphasis on principled learning and optimization algorithms, with applications to autonomy and sociotechnical systems. His work has been recognized by several awards, including Research Awards from Google, Amazon, MathWorks, and IBM, and Best Paper awards and nominations at conferences including ACM Greenmetrics and Learning for Dynamics & Control (L4DC). He was named to CDO Magazine's list of Outstanding Academic Leaders in Data in two consecutive years, 2023 and 2024, received the 2020 Information Theory and Applications (ITA) “Sun” (Gold) Graduation Award, and was named an Amazon Fellow in AI in 2017 and a PIMCO Fellow in Data Science in 2018.
- Oct 17, 12:00 PM. MIT Mobility Forum
  The Mobility Forum with Prof. Jinhua Zhao showcases transportation research and innovation across the globe. The Forum is online and open to the public.
- Oct 17, 12:00 PM. SCSB Lunch Series with Dr. Caroline Robertson: Seeing What Matters: Semantic Drivers of Gaze in Natural Environments
  Date: Friday, October 17, 2025
  Time: 12:00pm – 1:00pm
  Location: Simons Center Conference Room 46-6011 + Zoom [https://mit.zoom.us/j/93701332166]
  Speaker: Caroline Robertson, Ph.D.
  Affiliation: Associate Professor of Psychological and Brain Sciences, Dartmouth College
  Talk title: Seeing What Matters: Semantic Drivers of Gaze in Natural Environments
  Abstract: Visual attention in everyday life is driven both by image-computable factors in the visual environment and by the latent cognitive priorities of the viewer. In this talk, I will present two naturalistic eye-tracking studies that leverage computational language models to uncover the cognitive priorities guiding the gaze behavior of individuals with and without autism. First, using eye-tracking in immersive VR, we find that individuals with and without autism exhibit stable “semantic fingerprints” in their gaze when the targets of their visual attention are modeled in the representational space of a large language model. Second, in dyadic conversations, mobile eye-tracking shows that gaze to the conversation partner’s face is modulated by the ongoing semantic context of the conversation, including linguistic surprisal (a short sketch of how per-word surprisal is computed appears after this list). Together, these findings position gaze as a window into the semantic and predictive processes that guide attention, offering new leverage for modeling individual differences in natural contexts.
- Oct 17, 2:45 PM. MIT@2:50 - Ten Minutes for Your Mind
  Ten minutes for your mind @2:50, every day at 2:50 pm in multiple time zones:
  Europa@2:50, EET, Athens, Helsinki (UTC+2) (7:50 am EST): https://us02web.zoom.us/j/88298032734
  Atlantica@2:50, EST, New York, Toronto (UTC-4): https://us02web.zoom.us/j/85349851047
  Pacifica@2:50, PST, Los Angeles, Vancouver (UTC-7) (5:50 pm EST): https://us02web.zoom.us/j/85743543699
  Almost everything works better again if you unplug it for a bit, including your mind. Stop by and unplug. Get the benefits of mindfulness without the fuss. @2:50 meets at the same time every single day for ten minutes of quiet together. No prerequisite, no registration needed.
  Visit the website to view all @2:50 time zones each day: at250.org or at250.mit.edu
- Oct 17, 3:30 PM. MechE Colloquium: Professor Ming Guo on Pushing Multicellular Living Systems to Extreme: Reality and Virtual World
  Multicellular tissues are sculpted by the spatial and temporal coordination of cells and their interactions. Yet the organizational principles that govern these events, and their disruption in disease, remain poorly understood. In this talk, I will first discuss our recent experimental work investigating multicellular dynamic organization in several physiologically relevant systems, including cells on engineered curved surfaces, growing human lung alveolospheres, and mouse embryos. Next, I will present our vision for using deep learning to predictively model multicellular developmental processes. I will introduce our recently developed geometric deep-learning method, MultiCell, which can predict single-cell behaviors (neighbor swapping, division, etc.) 30 minutes into the future at single-cell resolution during embryogenesis (a generic sketch of this style of model appears after this list).
  Bio: Ming Guo is currently an associate professor in the Department of Mechanical Engineering at MIT and associated faculty in the MIT Physics of Living Systems Center and the Center for Multi-Cellular Engineered Living Systems. His group works on developing tools to characterize and understand cells and tissues as soft active matter. Before joining MIT in 2015, Ming obtained his PhD in Applied Physics in 2014 and his MS in Mechanical Engineering in 2012, both at Harvard University. Ming has won numerous awards, including a Sloan Research Fellowship in Physics and the IUPAP Young Scientist Prize in Biological Physics. Ming is an associate editor of the Journal of Biological Physics.
- Oct 17, 3:30 PM. Richard P. Stanley Seminar in Combinatorics
  Speaker: Greta Panova (USC)
  Title: Hook formulas for skew shapes via contour integrals and vertex models
  Abstract: The celebrated hook-length formula (HLF) of Frame-Robinson-Thrall, which gives the dimension of irreducible $S_n$-modules and the number of standard Young tableaux (SYT), has been at the heart of many results in algebraic combinatorics, representation theory, and integrable probability. No such closed formula exists for counting SYTs of skew shapes; the closest analogue, the Naruse hook-length formula (NHLF), emerged through implicit computations in equivariant Schubert calculus and gives a hook-product-weighted sum over so-called excited diagrams (both formulas are written out after this list). Excited diagrams are in bijection with certain lozenge tilings, with flagged semistandard tableaux, and with nonintersecting lattice paths inside a Young diagram, and the NHLF has seen a variety of applications, from weighted lozenge tilings to asymptotics of skew SYTs. We give two self-contained proofs of a multivariate generalization of this formula, which allow us to extend the setup beyond standard Young tableaux and the underlying Schur polynomials. The first proof uses multiple contour integrals. The second interprets excited diagrams as configurations of a six-vertex model at a free-fermion point and derives the formula for the number of standard Young tableaux of a skew shape from the Yang-Baxter equation.
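On the hard-constrained neural networks talk: a minimal sketch of the general idea of appending a differentiable enforcement layer, assuming a simple input-dependent box constraint lo(x) <= f(x) <= hi(x). This is not the authors' HardNet implementation; the class name ConstrainedNet, the bound callables lo/hi, and the sigmoid-based squashing are illustrative choices.

```python
# Sketch only: enforce lo(x) <= output <= hi(x) exactly, for every input,
# while keeping the whole map differentiable for end-to-end training.
import torch
import torch.nn as nn

class ConstrainedNet(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, lo, hi):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, out_dim),
        )
        self.lo, self.hi = lo, hi  # callables giving input-dependent bounds

    def forward(self, x):
        raw = self.body(x)
        lo, hi = self.lo(x), self.hi(x)
        # Squash the unconstrained output into [lo(x), hi(x)]; the constraint
        # holds by construction, in and out of distribution.
        return lo + (hi - lo) * torch.sigmoid(raw)

# Usage with hypothetical bounds that depend on the input x:
net = ConstrainedNet(3, 1,
                     lo=lambda x: -x.norm(dim=-1, keepdim=True),
                     hi=lambda x:  x.norm(dim=-1, keepdim=True))
y = net(torch.randn(8, 3))  # guaranteed row-wise: |y| <= ||x||
```

The point the abstract emphasizes is that, unlike soft penalties, such an enforcement layer satisfies the constraint exactly for every input while the composite network still trains end-to-end by gradient descent.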
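On the SCSB talk's mention of linguistic surprisal: the surprisal of a word is -log p(word | preceding context) under a language model. A minimal per-token computation with an off-the-shelf causal LM is sketched below; the choice of GPT-2 is purely illustrative, not the model used in the study.

```python
# Sketch: per-token surprisal (in nats) under a causal language model.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def token_surprisal(text: str):
    ids = tok(text, return_tensors="pt").input_ids            # (1, T)
    with torch.no_grad():
        logits = lm(ids).logits                               # (1, T, V)
    # Position t predicts token t+1, so align logits[:-1] with ids[1:].
    logprobs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = ids[:, 1:]
    surprisal = -logprobs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return list(zip(tok.convert_ids_to_tokens(targets[0].tolist()),
                    surprisal[0].tolist()))

print(token_surprisal("The cat sat on the mat"))
```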
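On the MechE colloquium's mention of a geometric deep-learning model for single-cell behavior prediction: a generic sketch of such a model is a graph neural network over the cell-neighbor graph with a per-cell classification head. This is not the MultiCell code; the class names, feature sizes, and the three-way event classes below are all assumptions.

```python
# Sketch: message passing over a cell-neighbor graph, then per-cell prediction
# of a near-future event class (e.g. no event / division / neighbor swap).
import torch
import torch.nn as nn

class CellGNNLayer(nn.Module):
    """One round of mean-aggregated message passing over the adjacency matrix."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        self.upd = nn.Linear(2 * dim, dim)

    def forward(self, h, adj):                      # h: (N, dim), adj: (N, N) 0/1
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        neighbors = adj @ self.msg(h) / deg         # mean message from neighbors
        return torch.relu(self.upd(torch.cat([h, neighbors], dim=-1)))

class CellBehaviorPredictor(nn.Module):
    def __init__(self, in_feats=6, dim=64, n_classes=3, n_layers=3):
        super().__init__()
        self.embed = nn.Linear(in_feats, dim)       # per-cell features (position, shape, ...)
        self.layers = nn.ModuleList(CellGNNLayer(dim) for _ in range(n_layers))
        self.head = nn.Linear(dim, n_classes)       # logits for the next time window

    def forward(self, x, adj):
        h = self.embed(x)
        for layer in self.layers:
            h = layer(h, adj)
        return self.head(h)                         # (N, n_classes)

# Usage on a toy snapshot with 50 cells and a random symmetric neighbor graph:
x = torch.randn(50, 6)
adj = (torch.rand(50, 50) < 0.1).float()
adj = ((adj + adj.T) > 0).float().fill_diagonal_(0)
logits = CellBehaviorPredictor()(x, adj)            # (50, 3)
```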
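On the combinatorics seminar: the two formulas the abstract alludes to can be written out as follows, where $h_\lambda(i,j)$ is the hook length of the cell $(i,j)$ in $\lambda$ and $\mathcal{E}(\lambda/\mu)$ is the set of excited diagrams of $\mu$ inside $\lambda$.

```latex
% Classical hook-length formula (Frame-Robinson-Thrall), for a partition \lambda of n:
\[
  f^{\lambda} \;=\; \frac{n!}{\prod_{(i,j)\in\lambda} h_\lambda(i,j)} .
\]
% Naruse hook-length formula (NHLF) for a skew shape \lambda/\mu:
\[
  f^{\lambda/\mu} \;=\; |\lambda/\mu|!\, \sum_{D\in\mathcal{E}(\lambda/\mu)}
  \;\prod_{(i,j)\in\lambda\setminus D} \frac{1}{h_\lambda(i,j)} .
\]
```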