News

Leslie Valiant wins 2010 ACM A. M. Turing Award

Innovator opened new frontiers in learning theory, computational complexity, and parallel and distributed computing

Computer scientist and expert in computational learning theory Leslie Valiant, winner of the 2010 Turing Award, believes that the "science of learning remains only partially explored and certainly unexploited."

March 9, 2011 - ACM, the Association for Computing Machinery, today named Leslie G. Valiant the winner of the 2010 ACM A. M. Turing Award for his fundamental contributions to the development of computational learning theory and to the broader theory of computer science.

Valiant, the T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics at the Harvard School of Engineering and Applied Sciences (SEAS), brought together machine learning and computational complexity, leading to advances in artificial intelligence as well as computing practices such as natural language processing, handwriting recognition, and computer vision. He also launched several subfields of theoretical computer science and developed models for parallel computing.

The Turing Award, widely considered the “Nobel Prize in Computing,” is named for the British mathematician Alan M. Turing. The award carries a $250,000 prize, with financial support provided by Intel Corporation and Google Inc.

“Leslie Valiant’s accomplishments over the last 30 years have provided the theoretical basis for progress in artificial intelligence and led to extraordinary achievements in machine learning,” said ACM President Alain Chesnais. “His work has produced modeling that offers computationally inspired answers on fundamental questions like how the brain ‘computes.’”

“His profound insights in computer science, mathematics, and cognitive theory have been combined with other techniques to build modern forms of machine learning and communication, like IBM’s ‘Watson’ computing system, that have enabled computing systems to rival a human’s ability to answer questions,” Chesnais said.

Shekhar Borkar, an Intel Fellow and Director of the Microprocessor Technology Lab at Intel Corp., said, “Professor Valiant’s research in the theory of computation has revolutionized machine learning and artificial intelligence, making machines almost think.” He added, “His approach invites comparison with Turing himself – a novel formulation starting from a deep fundamental insight, and Intel is pleased to support this award.”

Valiant’s “Theory of the Learnable,” published in 1984 in Communications of the Association for Computing Machinery, is considered one of the seminal contributions to machine learning. It put machine learning on a sound mathematical footing and laid the foundations for a new research area known as Computational Learning Theory. He provided a general framework as well as concrete computational models, and his approach of “Probably Approximately Correct” (PAC) learning has become a standard model for studying the learning process. His work has led to the development of algorithms that adapt their behavior in response to feedback from the environment. Mainstream research in AI has embraced his viewpoint as a critical tool for designing intelligent systems.
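In the PAC framework, a learner sees randomly drawn labeled examples and must, with high probability, output a hypothesis whose error on new examples is small. The minimal sketch below illustrates that idea for the toy task of learning an unknown threshold on the interval [0, 1]; the concept class, sample size, and error estimate are illustrative choices, not Valiant’s original construction.

```python
# A minimal sketch of the PAC idea: learn an unknown threshold on [0, 1]
# from random labeled examples, then check that the learned hypothesis is
# "approximately correct" with high probability. Illustrative only.
import random

def pac_learn_threshold(num_examples, true_threshold):
    """Learn f(x) = 1 if x >= threshold from examples drawn uniformly on [0, 1]."""
    sample = [(x, x >= true_threshold)
              for x in (random.random() for _ in range(num_examples))]
    # A simple consistent learner: the smallest positive example seen so far.
    positives = [x for x, label in sample if label]
    return min(positives) if positives else 1.0

def estimate_error(hypothesis, true_threshold, trials=100_000):
    """Estimate Pr[hypothesis disagrees with the true concept] under the uniform distribution."""
    wrong = sum((x >= hypothesis) != (x >= true_threshold)
                for x in (random.random() for _ in range(trials)))
    return wrong / trials

if __name__ == "__main__":
    target = 0.37
    h = pac_learn_threshold(num_examples=1000, true_threshold=target)
    print(f"learned threshold {h:.4f}, estimated error {estimate_error(h, target):.4f}")
```

With 1,000 examples the estimated error is typically well below one percent, which is the "probably approximately correct" guarantee in miniature: more data drives the error bound down, with high probability over the random sample.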

“Google joins in recognizing Leslie Valiant for his profound impact on the computer science research landscape,” said Alfred Spector ’76, Vice President of Research and Special Initiatives at Google Inc. “His ingenious concepts and original research have significantly influenced the way computers learn, with applications in medicine, transportation, finance, telecommunications, image processing, game theory, and strategic planning, to name a few. We are pleased to be a sponsor of the ACM Turing Award, which offers opportunities that advance computing and enable innovation worldwide.”

One of Valiant’s key contributions to computational complexity was his work on the complexity of enumeration problems. Its impact was to show the inherent difficulty in counting the number of solutions not just to computationally hard problems, but also to those whose decision complexity is relatively “easy.”

"Les Valiant is known by researchers the world over for his revolutionary contributions to theoretical computer science," said Michael D. Smith, Dean of Harvard's Faculty of Arts and Sciences and the John H. Finley, Jr. Professor of Engineering and Applied Sciences. "His work has reoriented this entire field in recent decades, single-handedly creating or transforming any number of key research areas. Les is clearly deserving of the Turing Award; as a colleague and friend I am delighted by this recognition his myriad contributions to our field."

Valiant's work led to the theory of algebraic computation, which established a framework for understanding which algebraic formulas can be evaluated efficiently. He also introduced the class #P and proved the permanent to be complete for this class. His paper “Completeness Classes in Algebra,” published in 1979, brought algebraic techniques into the toolbox of theoretical computer science, and his work on #P set the stage for the development of interactive proofs for problems beyond NP.
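A concrete illustration of that gap: for a bipartite graph encoded as a 0/1 matrix, deciding whether a perfect matching exists can be done in polynomial time, but counting the matchings amounts to computing the permanent of the matrix, which Valiant proved is #P-complete. The brute-force sketch below sums over all n! permutations and is meant only to show what is being counted; no efficient algorithm for the permanent is known.

```python
# Illustrative contrast between an "easy" decision problem and its #P-hard
# counting version. For a bipartite graph given as an n x n 0/1 matrix,
# the number of perfect matchings equals the permanent of the matrix.
# Existence of a matching is decidable in polynomial time; this brute-force
# count takes n! steps.
from itertools import permutations

def permanent(matrix):
    """Permanent of a square matrix by brute force: sum over all permutations."""
    n = len(matrix)
    total = 0
    for perm in permutations(range(n)):
        product = 1
        for row, col in enumerate(perm):
            product *= matrix[row][col]
        total += product
    return total

# 3x3 adjacency matrix: entry [i][j] = 1 if left vertex i can match right vertex j.
adjacency = [
    [1, 1, 0],
    [0, 1, 1],
    [1, 0, 1],
]
print(permanent(adjacency))  # prints 2: the graph has two perfect matchings
```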

Valiant’s insight into the theory of parallel and distributed computing offers another broad area of important contributions. His 1982 paper “A Scheme for Fast Parallel Communication” described a simple parallel routing scheme that offered a solution to congestion problems, which occur when multiple computers try to communicate over networks with limited capacity. He also introduced the “bulk synchronous parallel” (BSP) computing model, which describes different types of multiprocessor computers based on how efficiently they synchronize and communicate internally. The BSP model explains why the performance of an algorithm may vary between different parallel computers.
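The routing idea is often summarized as two-phase randomized routing: each packet travels first to a randomly chosen intermediate node and only then to its real destination, which spreads traffic and defuses worst-case congestion patterns. The sketch below simulates that on a hypercube with simple bit-fixing routes; the dimension, the adversarial permutation, and the congestion measure are illustrative choices, not the setup of the original paper.

```python
# Minimal sketch of two-phase randomized routing on a hypercube.
# Each packet goes first to a random intermediate node, then to its
# destination, using "bit-fixing" routing (correct address bits one at a time).
import random
from collections import Counter

DIM = 8                      # hypercube dimension: 2**DIM nodes
NODES = 1 << DIM

def bit_fix_path(src, dst):
    """Directed edges visited while fixing address bits from low to high."""
    path, current = [], src
    for bit in range(DIM):
        mask = 1 << bit
        if (current ^ dst) & mask:
            nxt = current ^ mask
            path.append((current, nxt))
            current = nxt
    return path

def max_congestion(routes):
    """Largest number of packets crossing any single directed edge."""
    load = Counter(edge for route in routes for edge in route)
    return max(load.values()) if load else 0

def bit_reverse(node):
    """An adversarial permutation (bit reversal) that hurts deterministic bit-fixing."""
    return int(format(node, f"0{DIM}b")[::-1], 2)

direct = [bit_fix_path(s, bit_reverse(s)) for s in range(NODES)]
two_phase = []
for s in range(NODES):
    mid = random.randrange(NODES)
    two_phase.append(bit_fix_path(s, mid) + bit_fix_path(mid, bit_reverse(s)))

print("direct routing, max edge load:   ", max_congestion(direct))
print("two-phase routing, max edge load:", max_congestion(two_phase))
```

Running the simulation typically shows the direct routes piling many packets onto a few edges, while the two-phase routes keep the maximum edge load small, which is the load-balancing effect the scheme is designed to achieve.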

"From the first inklings of Artificial Intelligence to the early days of the ARPANET to the personal computer revolution and social networking, Harvard students, faculty, and alumni have been a driving force behind innovations in computer science," added Cherry A. Murray, Dean of the Harvard School of Engineering and Applied Sciences and John A. and Elizabeth S. Armstrong Professor of Engineering and Applied Sciences. "Les Valiant is yet another pioneer in this remarkable tradition. On behalf of everyone in the School of Engineering and Applied Sciences, I wish to extend Les our congratulations on this fantastic achievement."

More recently, Valiant has contributed to computational neuroscience, offering a concrete mathematical model of the brain and relating its architecture to complex cognitive functions. In his 1994 book Circuits of the Mind, he details a promising new computational approach to studying the intricate workings of the human brain. The book focuses on the brain's ability to quickly access a massive store of accumulated information during reasoning processes despite the extreme constraints imposed by its finite number of neurons, their limited speed of communication, and their restricted interconnectivity. The book offered a new approach to brain science for students and researchers in computer science, neurobiology, neurosciences, artificial intelligence, and cognitive science.

“Les Valiant stands out for having essentially initiated multiple research areas within computer science,” said Michael Mitzenmacher, Gordon McKay Professor of Computer Science at Harvard. “Learning theory, the complexity of counting solutions, the bulk-synchronous parallel model of computation, and holographic algorithms are just some of his outstanding contributions. He is a truly profound thinker who has never feared bringing the power of theoretical computer science to entirely new areas, and he and his work have taught and inspired a generation of scholars.”

Michael Rabin, Thomas J. Watson, Sr. Professor of Computer Science at SEAS, won the Turing Award in 1976, and four SEAS-affiliated alumni have also received the honor: E. Allen Emerson ’81; Ken Iverson ’51, ’54; Richard M. Karp ’55, ’59; and Frederick P. Brooks, Jr. ’56.

SEAS dean Cherry A. Murray will host a formal reception for Valiant in the coming weeks; more details will follow.

Content adapted from the original ACM press release.



Q&A with Leslie Valiant

What is your reaction to winning the Turing Award?

I am, of course, very pleased to have been given this honor. This connection with the achievements of the previous winners, and of Turing himself, is more than anyone in my field can reasonably expect.

Can you give a brief synopsis of your research?

The tasks we ask computers to do are generally those that are suggested by what humans can do, such as arithmetic or searching through data. A long time ago I started exploring the idea that the most fundamental cognitive capability is that of learning.

I have tried to understand the scope and limitations of learning as a phenomenon, independent of who or what is carrying it out. Integral to learning is the ability to generalize to situations different from those experienced. Understanding how this can be done by a mechanical process is what machine learning and learning theory are about.

I have been particularly involved in the first step of this, namely defining what needs to be achieved by a system if it is to be considered to be learning successfully.

I have also been involved in the field of computational complexity, in which we try to delineate the doable tasks from those that are not doable with feasible resources. Many tasks that we can define exactly, and wish we could get computers to do, cannot be done by computers, as Turing showed in the 1930s.

This field still has several famous unresolved mathematical questions, most notably the “P=?NP” question. My work has focused on tasks with quantitative answers, rather than just yes/no answers, arising, for example, in the calculus or in computing probabilities.

What's next for machine learning?

I believe that the science of learning remains only partially explored and certainly unexploited.

Machine learning is already very successful and widely applied in making useful predictions. However, ways of using it for tasks that involve reasoning on top of learning, as would be needed to approach the broader goals of artificial intelligence, are little understood.

In a related direction, biological systems are highly adaptive. Understanding what exactly they do step by step, and why they succeed, I regard as ideal topics for learning theory and computer science more generally.

I have thought a lot about how anything like the brain can perform its spectacular feats of learning and memory given its physical limitations. In a similar spirit one can formulate Darwinian evolution as a computational process and try to understand its scope and limitations.

Topics: Computer Science, Awards, AI / Machine Learning