ES 201/AM 231 is a course in statistical inference and estimation from a signal processing perspective. The course will emphasize the entire pipeline, from writing a model and estimating its parameters to performing inference on real data. The first part of the course will focus on linear and nonlinear probabilistic generative/regression models (e.g., linear, logistic, and Poisson regression), along with algorithms for optimization (ML/MAP estimation) and Bayesian inference in these models. We will pay particular attention to sparsity-inducing regression models because of their relation to artificial neural networks, the topic of the second part of the course.

The second part of the course will introduce students to the nascent and exciting research area of model-based deep learning. At present, we lack a principled way to design artificial neural networks, the workhorses of modern AI systems. Moreover, modern AI systems lack the ability to explain how they reach their decisions. In other words, AI is not yet explainable or interpretable, which raises important societal questions about the responsible use of such technology. Model-based deep learning provides a framework for developing and constraining neural-network architectures in a principled fashion. We will see, for instance, how neural networks with ReLU nonlinearities arise from the sparse probabilistic generative models introduced in the first part of the course. This will form the basis of a rigorous recipe, which we will teach you, for building interpretable deep neural networks from the ground up. We will invite an exciting lineup of speakers. Time permitting, we will provide a model-based perspective on the building blocks of modern language and image generative models.
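To give a flavor of the ReLU connection mentioned above, here is a minimal sketch (not course material, just an illustration): for a sparse linear generative model with a nonnegativity constraint, one step of iterative soft-thresholding (ISTA) for MAP estimation coincides exactly with a ReLU "layer" applied to a linear transform of the data. All names and values below (the dictionary `W`, signal `y`, penalty `lam`) are hypothetical.

```python
import numpy as np

def relu(z):
    # The standard ReLU nonlinearity
    return np.maximum(z, 0.0)

def nonneg_soft_threshold(z, t):
    # Proximal operator of t*||x||_1 restricted to x >= 0
    return np.maximum(z - t, 0.0)

rng = np.random.default_rng(0)
W = rng.standard_normal((20, 50))   # hypothetical dictionary (generative model)
y = W @ rng.standard_normal(50)     # hypothetical observed signal
lam = 0.1                           # sparsity penalty
L = np.linalg.norm(W, 2) ** 2       # Lipschitz constant of the quadratic term

# One ISTA step from x = 0 for: min_x 0.5*||y - W x||^2 + lam*||x||_1, x >= 0
z = (W.T @ y) / L
x_ista = nonneg_soft_threshold(z, lam / L)

# The same step, written as a neural-network layer: ReLU(W^T y / L - bias)
x_relu = relu(z - lam / L)

print(np.allclose(x_ista, x_relu))  # the two computations agree
```

The point of the sketch is that the thresholding step of a sparse-estimation algorithm *is* a ReLU with a bias set by the sparsity penalty, which is the sense in which ReLU architectures can be derived, rather than guessed, from a generative model.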