Preprints
An Overview of Large Language Models for Statisticians. [pdf] Wenlong Ji, Weizhe Yuan, Emily Getzen, Kyunghyun Cho, Michael I. Jordan, Song Mei, Jason E. Weston, Weijie J. Su, Jing Xu, and Linjun Zhang.
Preprint, 2025.
How Do LLMs Perform Two-Hop Reasoning in Context? [pdf] Tianyu Guo, Hanlin Zhu, Ruiqi Zhang, Jiantao Jiao, Song Mei, Michael I. Jordan, and Stuart Russell.
Preprint, 2025.
Implicit Bias of Gradient Descent for Non-Homogeneous Deep Networks. [pdf] Yuhang Cai, Kangjie Zhou, Jingfeng Wu, Song Mei, Michael Lindsey, and Peter L. Bartlett.
Preprint, 2025.
A Statistical Theory of Contrastive Pre-training and Multimodal Generative AI. [pdf] Kazusato Oko, Licong Lin, Yuhang Cai, and Song Mei.
Preprint, 2025.
Active-Dormant Attention Heads: Mechanistically Demystifying Extreme-Token Phenomena in LLMs. [pdf] [Slides] Tianyu Guo, Druv Pai, Yu Bai, Jiantao Jiao, Michael I. Jordan, and Song Mei.
Preprint, 2024.
Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning. [pdf] Chongyu Fan, Jiancheng Liu, Licong Lin, Jinghan Jia, Ruiqi Zhang, Song Mei, and Sijia Liu.
Preprint, 2024.
U-Nets as Belief Propagation: Efficient Classification, Denoising, and Diffusion in Generative Hierarchical Models. [pdf] Song Mei.
Preprint, 2024.
Mean-field variational inference with the TAP free energy: Geometric and statistical properties in linear models. [pdf] Michael Celentano, Zhou Fan, Licong Lin, and Song Mei.
Preprint, 2023.
Uncertainty Intervals for Prediction Errors in Time Series Forecasting. [pdf] Hui Xu, Song Mei, Stephen Bates, Jonathan Taylor, and Robert Tibshirani.
Preprint, 2023.
Near-optimal multiple testing in Bayesian linear models with finite-sample FDR control. [pdf] Taejoo Ahn, Licong Lin, and Song Mei.
Preprint, 2022.
A landscape theory for approximate inference in generalized linear mixed models. Song Mei, Iain Johnstone, and Matt P. Wand.
Available on request.
Publications
Deep Networks as Denoising Algorithms: Sample-Efficient Learning of Diffusion Models in High-Dimensional Graphical Models. [pdf] Song Mei and Yuchen Wu.
IEEE Transactions on Information Theory, 2025+.
Unified Algorithms for RL with Decision-Estimation Coefficients: No-Regret, PAC, and Reward-Free Learning. [pdf] Fan Chen, Song Mei, and Yu Bai.
To appear in the Annals of Statistics, 2025+.
Statistical Estimation in the Spiked Tensor Model via the Quantum Approximate Optimization Algorithm. [pdf] Leo Zhou, Joao Basso, and Song Mei.
NeurIPS, 2024. Spotlight.
Large Stepsize Gradient Descent for Non-Homogeneous Two-Layer Networks: Margin Improvement and Fast Optimization. [pdf] Yuhang Cai, Jingfeng Wu, Song Mei, Michael Lindsey, and Peter L. Bartlett.
NeurIPS, 2024.
An Overview of Diffusion Models: Applications, Guided Generation, Statistical Rates and Optimization. [pdf] Minshuo Chen, Song Mei, Jianqing Fan, and Mengdi Wang.
National Science Review, 2024.
Negative Preference Optimization: From Catastrophic Collapse to Effective Unlearning. [pdf] Ruiqi Zhang, Licong Lin, Yu Bai, and Song Mei.
Conference on Language Modeling (COLM), 2024.
Transformers as Decision Makers: Provable In-Context Reinforcement Learning via Supervised Pretraining. [pdf] Licong Lin, Yu Bai, and Song Mei.
International Conference on Learning Representations (ICLR), 2024.
How Do Transformers Learn In-Context Beyond Simple Functions? A Case Study on Learning with Representations. [pdf] Tianyu Guo, Wei Hu, Song Mei, Huan Wang, Caiming Xiong, Silvio Savarese, and Yu Bai.
International Conference on Learning Representations (ICLR), 2024.
Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection. [pdf] [Slides] Yu Bai, Fan Chen, Huan Wang, Caiming Xiong, and Song Mei.
NeurIPS, 2023 (Oral).
What Can a Single Attention Layer Learn? A Study Through the Random Features Lens. [pdf] Hengyu Fu, Tianyu Guo, Yu Bai, and Song Mei.
NeurIPS, 2023.
Lower Bounds for Learning in Revealing POMDPs. [pdf] Fan Chen, Huan Wang, Caiming Xiong, Song Mei, and Yu Bai.
International Conference on Machine Learning (ICML), 2023.
Partially Observable RL with B-Stability: Unified Structural Condition and Sharp Sample-Efficient Algorithms. [pdf] Fan Chen, Yu Bai, and Song Mei.
International Conference on Learning Representations (ICLR), 2023. Notable-top-25% (Spotlight).
Local convexity of the TAP free energy and AMP convergence for Z2-synchronization. [pdf] Michael Celentano, Zhou Fan, and Song Mei.
The Annals of Statistics, 2023.
Efficient Phi-Regret Minimization in Extensive-Form Games via Online Mirror Descent. [pdf] Yu Bai, Chi Jin, Song Mei, Ziang Song, and Tiancheng Yu.
NeurIPS, 2022.
Sample-Efficient Learning of Correlated Equilibria in Extensive-Form Games. [pdf] Ziang Song, Song Mei, and Yu Bai.
NeurIPS, 2022.
Learning with convolution and pooling operations in kernel methods. [pdf] Theodor Misiakiewicz and Song Mei.
NeurIPS, 2022.
Performance and limitations of the QAOA at constant levels on large sparse hypergraphs and spin glass models. [pdf] Joao Basso, David Gamarnik, Song Mei, and Leo Zhou.
IEEE Symposium on Foundations of Computer Science (FOCS), 2022.
Near-Optimal Learning of Extensive-Form Games with Imperfect Information. [pdf] Yu Bai, Chi Jin, Song Mei, and Tiancheng Yu.
International Conference on Machine Learning (ICML), 2022.
Efficient and Differentiable Conformal Prediction with General Function Classes. [pdf] Yu Bai, Song Mei, Huan Wang, Yingbo Zhou, and Caiming Xiong.
International Conference on Learning Representations (ICLR), 2022.
The Three Stages of Learning Dynamics in High-Dimensional Kernel Methods. [pdf] Nikhil Ghosh, Song Mei, and Bin Yu.
International Conference on Learning Representations (ICLR), 2022.
When Can We Learn General-Sum Markov Games with a Large Number of Players Sample-Efficiently? [pdf] Ziang Song, Song Mei, and Yu Bai.
International Conference on Learning Representations (ICLR), 2022.
Generalization error of random features and kernel methods: hypercontractivity and kernel matrix concentration. [pdf] Song Mei, Theodor Misiakiewicz, and Andrea Montanari.
Applied and Computational Harmonic Analysis, 2022.
Understanding the Under-Coverage Bias in Uncertainty Estimation. [pdf] Yu Bai, Song Mei, Huan Wang, and Caiming Xiong.
NeurIPS, 2021. Spotlight.
Don't Just Blame Over-parametrization for Over-confidence: Theoretical Analysis of Calibration in Binary Classification. [pdf] Yu Bai, Song Mei, Huan Wang, and Caiming Xiong.
International Conference on Machine Learning (ICML), 2021.
Exact Gap between Generalization Error and Uniform Convergence in Random Feature Models. [pdf] Zitong Yang, Yu Bai, and Song Mei.
International Conference on Machine Learning (ICML), 2021.
Learning with invariances in random features and kernel models. [pdf] Song Mei, Theodor Misiakiewicz, and Andrea Montanari.
Conference on Learning Theory (COLT), 2021.
The generalization error of random features regression: Precise asymptotics and double descent curve. [pdf] [Slides] Song Mei and Andrea Montanari.
Communications on Pure and Applied Mathematics (CPAM), 2021.
Linearized two-layers neural networks in high dimension. [pdf] [Slides] Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, and Andrea Montanari.
The Annals of Statistics, 2021.
TAP free energy, spin glasses, and variational inference. [pdf] [Slides] Zhou Fan, Song Mei, and Andrea Montanari.
The Annals of Probability, 2021.
When Do Neural Networks Outperform Kernel Methods? [pdf] [Slides] Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, and Andrea Montanari.
Journal of Statistical Mechanics: Theory and Experiment. Conference version: NeurIPS, 2020.
The landscape of the spiked tensor model. [pdf] Gerard Ben Arous, Song Mei, Andrea Montanari, and Mihai Nica.
Communications on Pure and Applied Mathematics (CPAM), 2019.
Limitations of lazy training of two-layers neural networks. [pdf] Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, and Andrea Montanari.
NeurIPS, 2019. Spotlight.
Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit. [pdf] [Slides] Song Mei, Theodor Misiakiewicz, and Andrea Montanari.
Conference on Learning Theory (COLT), 2019.
A mean field view of the landscape of two-layer neural networks. [pdf] [Slides] [Poster] Song Mei, Andrea Montanari, and Phan-Minh Nguyen.
Proceedings of the National Academy of Sciences (PNAS), 2018.
The landscape of empirical risk for non-convex losses. [pdf] [Slides] Song Mei, Yu Bai, and Andrea Montanari.
The Annals of Statistics, 2018.
Solving SDPs for synchronization and MaxCut problems via the Grothendieck inequality. [pdf] [Slides] [Poster] Song Mei, Theodor Misiakiewicz, Andrea Montanari, and Roberto I. Oliveira.
Conference on Learning Theory (COLT), 2017.
Others
Discussion of “Nonparametric Regression using Deep Neural Networks with ReLU Activation Function”. Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, and Andrea Montanari.
The Annals of Statistics, 2020.
On a molecular based Q-tensor model for liquid crystals with density variations. [pdf] Song Mei and Pingwen Zhang.
SIAM Multiscale Modeling and Simulation, 2015. Undergraduate Thesis.
Analysis of sequential quadratic programming through the lens of Riemannian optimization. [pdf] Yu Bai and Song Mei, 2018.
A short note on first-order constrained optimization.
Research Topics
Language models and diffusion models
An Overview of Large Language Models for Statisticians. [pdf] Wenlong Ji, Weizhe Yuan, Emily Getzen, Kyunghyun Cho, Michael I. Jordan, Song Mei, Jason E. Weston, Weijie J. Su, Jing Xu, and Linjun Zhang.
Preprint, 2025.
How Do LLMs Perform Two-Hop Reasoning in Context? [pdf] Tianyu Guo, Hanlin Zhu, Ruiqi Zhang, Jiantao Jiao, Song Mei, Michael I. Jordan, and Stuart Russell.
Preprint, 2025.
A Statistical Theory of Contrastive Pre-training and Multimodal Generative AI. [pdf] Kazusato Oko, Licong Lin, Yuhang Cai, and Song Mei.
Preprint, 2025.
Active-Dormant Attention Heads: Mechanistically Demystifying Extreme-Token Phenomena in LLMs. [pdf] [Slides] Tianyu Guo, Druv Pai, Yu Bai, Jiantao Jiao, Michael I. Jordan, and Song Mei.
Preprint, 2024.
Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning. [pdf] Chongyu Fan, Jiancheng Liu, Licong Lin, Jinghan Jia, Ruiqi Zhang, Song Mei, and Sijia Liu.
Preprint, 2024.
Negative Preference Optimization: From Catastrophic Collapse to Effective Unlearning. [pdf] Ruiqi Zhang, Licong Lin, Yu Bai, and Song Mei.
Conference on Language Modeling (COLM), 2024.
U-Nets as Belief Propagation: Efficient Classification, Denoising, and Diffusion in Generative Hierarchical Models. [pdf] Song Mei.
Preprint, 2024.
An Overview of Diffusion Models: Applications, Guided Generation, Statistical Rates and Optimization. [pdf] Minshuo Chen, Song Mei, Jianqing Fan, and Mengdi Wang.
National Science Review, 2024.
Deep Networks as Denoising Algorithms: Sample-Efficient Learning of Diffusion Models in High-Dimensional Graphical Models. [pdf] Song Mei and Yuchen Wu.
IEEE Transactions on Information Theory, 2025+.
Transformers as Decision Makers: Provable In-Context Reinforcement Learning via Supervised Pretraining. [pdf] Licong Lin, Yu Bai, and Song Mei.
International Conference on Learning Representations (ICLR), 2024.
How Do Transformers Learn In-Context Beyond Simple Functions? A Case Study on Learning with Representations. [pdf] Tianyu Guo, Wei Hu, Song Mei, Huan Wang, Caiming Xiong, Silvio Savarese, and Yu Bai.
International Conference on Learning Representations (ICLR), 2024.
Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection. [pdf] [Slides] Yu Bai, Fan Chen, Huan Wang, Caiming Xiong, and Song Mei.
NeurIPS, 2023 (Oral).
What Can a Single Attention Layer Learn? A Study Through the Random Features Lens. [pdf] Hengyu Fu, Tianyu Guo, Yu Bai, and Song Mei.
NeurIPS, 2023.
Deep learning theory
Implicit Bias of Gradient Descent for Non-Homogeneous Deep Networks. [pdf] Yuhang Cai, Kangjie Zhou, Jingfeng Wu, Song Mei, Michael Lindsey, and Peter L. Bartlett.
Preprint, 2025.
Large Stepsize Gradient Descent for Non-Homogeneous Two-Layer Networks: Margin Improvement and Fast Optimization. [pdf] Yuhang Cai, Jingfeng Wu, Song Mei, Michael Lindsey, and Peter L. Bartlett.
NeurIPS, 2024.
Learning with convolution and pooling operations in kernel methods. [pdf] Theodor Misiakiewicz and Song Mei.
NeurIPS, 2022.
The Three Stages of Learning Dynamics in High-Dimensional Kernel Methods. [pdf] Nikhil Ghosh, Song Mei, and Bin Yu.
International Conference on Learning Representations (ICLR), 2022.
Generalization error of random features and kernel methods: hypercontractivity and kernel matrix concentration. [pdf] Song Mei, Theodor Misiakiewicz, and Andrea Montanari.
Applied and Computational Harmonic Analysis, 2022.
Exact Gap between Generalization Error and Uniform Convergence in Random Feature Models. [pdf] Zitong Yang, Yu Bai, and Song Mei.
International Conference on Machine Learning (ICML), 2021.
Learning with invariances in random features and kernel models. [pdf] Song Mei, Theodor Misiakiewicz, and Andrea Montanari.
Conference on Learning Theory (COLT), 2021.
The generalization error of random features regression: Precise asymptotics and double descent curve. [pdf] [Slides] Song Mei and Andrea Montanari.
Communications on Pure and Applied Mathematics (CPAM), 2021.
Linearized two-layers neural networks in high dimension. [pdf] [Slides] Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, and Andrea Montanari.
The Annals of Statistics, 2021.
When Do Neural Networks Outperform Kernel Methods? [pdf] [Slides] Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, and Andrea Montanari.
Journal of Statistical Mechanics: Theory and Experiment. Conference version: NeurIPS, 2020.
Limitations of lazy training of two-layers neural networks. [pdf] Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, and Andrea Montanari.
NeurIPS, 2019. Spotlight.
Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit. [pdf] [Slides] Song Mei, Theodor Misiakiewicz, and Andrea Montanari.
Conference on Learning Theory (COLT), 2019.
A mean field view of the landscape of two-layer neural networks. [pdf] [Slides] [Poster] Song Mei, Andrea Montanari, and Phan-Minh Nguyen.
Proceedings of the National Academy of Sciences (PNAS), 2018.
Reinforcement learning theory
Unified Algorithms for RL with Decision-Estimation Coefficients: No-Regret, PAC, and Reward-Free Learning. [pdf] Fan Chen, Song Mei, and Yu Bai.
To appear in the Annals of Statistics, 2025+.
Lower Bounds for Learning in Revealing POMDPs. [pdf] Fan Chen, Huan Wang, Caiming Xiong, Song Mei, and Yu Bai.
International Conference on Machine Learning (ICML), 2023.
Partially Observable RL with B-Stability: Unified Structural Condition and Sharp Sample-Efficient Algorithms. [pdf] Fan Chen, Yu Bai, and Song Mei.
International Conference on Learning Representations (ICLR), 2023. Notable-top-25% (Spotlight).
Efficient Phi-Regret Minimization in Extensive-Form Games via Online Mirror Descent. [pdf] Yu Bai, Chi Jin, Song Mei, Ziang Song, and Tiancheng Yu.
NeurIPS, 2022.
Sample-Efficient Learning of Correlated Equilibria in Extensive-Form Games. [pdf] Ziang Song, Song Mei, and Yu Bai.
NeurIPS, 2022.
Near-Optimal Learning of Extensive-Form Games with Imperfect Information. [pdf] Yu Bai, Chi Jin, Song Mei, and Tiancheng Yu.
International Conference on Machine Learning (ICML), 2022.
When Can We Learn General-Sum Markov Games with a Large Number of Players Sample-Efficiently? [pdf] Ziang Song, Song Mei, and Yu Bai.
International Conference on Learning Representations (ICLR), 2022.
High dimensional statistics
Mean-field variational inference with the TAP free energy: Geometric and statistical properties in linear models. [pdf] Michael Celentano, Zhou Fan, Licong Lin, and Song Mei.
Preprint, 2023.
Near-optimal multiple testing in Bayesian linear models with finite-sample FDR control. [pdf] Taejoo Ahn, Licong Lin, and Song Mei.
Preprint, 2022.
Local convexity of the TAP free energy and AMP convergence for Z2-synchronization. [pdf] Michael Celentano, Zhou Fan, and Song Mei.
The Annals of Statistics, 2023.
TAP free energy, spin glasses, and variational inference. [pdf] [Slides] Zhou Fan, Song Mei, and Andrea Montanari.
The Annals of Probability, 2021.
The landscape of the spiked tensor model. [pdf] Gerard Ben Arous, Song Mei, Andrea Montanari, and Mihai Nica.
Communications on Pure and Applied Mathematics (CPAM), 2019.
The landscape of empirical risk for non-convex losses. [pdf] [Slides] Song Mei, Yu Bai, and Andrea Montanari.
The Annals of Statistics, 2018.
Solving SDPs for synchronization and MaxCut problems via the Grothendieck inequality. [pdf] [Slides] [Poster] Song Mei, Theodor Misiakiewicz, Andrea Montanari, and Roberto I. Oliveira.
Conference on Learning Theory (COLT), 2017.
Quantum algorithms
Statistical Estimation in the Spiked Tensor Model via the Quantum Approximate Optimization Algorithm. [pdf] Leo Zhou, Joao Basso, and Song Mei.
NeurIPS, 2024. Spotlight.
Performance and limitations of the QAOA at constant levels on large sparse hypergraphs and spin glass models. [pdf] Joao Basso, David Gamarnik, Song Mei, and Leo Zhou.
IEEE Symposium on Foundations of Computer Science (FOCS), 2022.
Uncertainty quantification
Uncertainty Intervals for Prediction Errors in Time Series Forecasting. [pdf] Hui Xu, Song Mei, Stephen Bates, Jonathan Taylor, and Robert Tibshirani.
Preprint, 2023.
Efficient and Differentiable Conformal Prediction with General Function Classes. [pdf] Yu Bai, Song Mei, Huan Wang, Yingbo Zhou, and Caiming Xiong.
International Conference on Learning Representations (ICLR), 2022.
Understanding the Under-Coverage Bias in Uncertainty Estimation. [pdf] Yu Bai, Song Mei, Huan Wang, and Caiming Xiong.
NeurIPS, 2021. Spotlight.
Don't Just Blame Over-parametrization for Over-confidence: Theoretical Analysis of Calibration in Binary Classification. [pdf] Yu Bai, Song Mei, Huan Wang, and Caiming Xiong.
International Conference on Machine Learning (ICML), 2021.
Selected Publications
A Statistical Theory of Contrastive Pre-training and Multimodal Generative AI. [pdf] Kazusato Oko, Licong Lin, Yuhang Cai, and Song Mei.
Preprint, 2025.
Active-Dormant Attention Heads: Mechanistically Demystifying Extreme-Token Phenomena in LLMs. [pdf] [Slides] Tianyu Guo, Druv Pai, Yu Bai, Jiantao Jiao, Michael I. Jordan, and Song Mei.
Preprint, 2024.
U-Nets as Belief Propagation: Efficient Classification, Denoising, and Diffusion in Generative Hierarchical Models. [pdf] Song Mei.
Preprint, 2024.
Negative Preference Optimization: From Catastrophic Collapse to Effective Unlearning. [pdf] Ruiqi Zhang, Licong Lin, Yu Bai, and Song Mei.
Conference on Language Modeling (COLM), 2024.
Deep Networks as Denoising Algorithms: Sample-Efficient Learning of Diffusion Models in High-Dimensional Graphical Models. [pdf] Song Mei and Yuchen Wu.
IEEE Transactions on Information Theory, 2025+.
Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection. [pdf] [Slides] Yu Bai, Fan Chen, Huan Wang, Caiming Xiong, and Song Mei.
NeurIPS, 2023 (Oral).
Unified Algorithms for RL with Decision-Estimation Coefficients: No-Regret, PAC, and Reward-Free Learning. [pdf] Fan Chen, Song Mei, and Yu Bai.
To appear in the Annals of Statistics, 2025+.
Partially Observable RL with B-Stability: Unified Structural Condition and Sharp Sample-Efficient Algorithms. [pdf] Fan Chen, Yu Bai, and Song Mei.
International Conference on Learning Representations (ICLR), 2023. Notable-top-25% (Spotlight).
Statistical Estimation in the Spiked Tensor Model via the Quantum Approximate Optimization Algorithm. [pdf] Leo Zhou, Joao Basso, and Song Mei.
NeurIPS, 2024. Spotlight.
Performance and limitations of the QAOA at constant levels on large sparse hypergraphs and spin glass models. [pdf] Joao Basso, David Gamarnik, Song Mei, and Leo Zhou.
IEEE Symposium on Foundations of Computer Science (FOCS), 2022.
Mean-field variational inference with the TAP free energy: Geometric and statistical properties in linear models. [pdf] Michael Celentano, Zhou Fan, Licong Lin, and Song Mei.
Preprint, 2023.
Local convexity of the TAP free energy and AMP convergence for Z2-synchronization. [pdf] Michael Celentano, Zhou Fan, and Song Mei.
The Annals of Statistics, 2023.
Generalization error of random features and kernel methods: hypercontractivity and kernel matrix concentration. [pdf] Song Mei, Theodor Misiakiewicz, and Andrea Montanari.
Applied and Computational Harmonic Analysis, 2022.
Learning with invariances in random features and kernel models. [pdf] Song Mei, Theodor Misiakiewicz, and Andrea Montanari.
Conference on Learning Theory (COLT), 2021.
Linearized two-layers neural networks in high dimension. [pdf] [Slides] Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, and Andrea Montanari.
The Annals of Statistics, 2021.
The generalization error of random features regression: Precise asymptotics and double descent curve. [pdf] [Slides] Song Mei and Andrea Montanari.
Communications on Pure and Applied Mathematics (CPAM), 2021.
A mean field view of the landscape of two-layer neural networks. [pdf] [Slides] [Poster] Song Mei, Andrea Montanari, and Phan-Minh Nguyen.
Proceedings of the National Academy of Sciences (PNAS), 2018.
The landscape of empirical risk for non-convex losses. [pdf] [Slides] Song Mei, Yu Bai, and Andrea Montanari.
The Annals of Statistics, 2018.