
Richard Ernest Bellman (August 26, 1920 – March 19, 1984) was an American applied mathematician and the inventor of dynamic programming, which he developed in 1953 while affiliated with the RAND Corporation. Dynamic programming is a mathematical theory devoted to the study of multistage processes: sequences of operations in which the outcomes of preceding operations may be used to guide the course of future ones ^{[1]}. An application in computer chess and games is the use of transposition tables inside an iterative deepening framework ^{[2]}. He worked and published on Markov decision processes, and in 1958 he published his first paper on stochastic control processes ^{[3]}, in which he introduced what is today called the Bellman equation, also known as the dynamic programming equation.

During World War II, Bellman worked at Los Alamos on the Manhattan Project. He received his Ph.D. at Princeton under the supervision of Solomon Lefschetz in 1947, worked for many years at RAND from 1949, and was a professor at the University of Southern California from 1965. In the 1950s he was persecuted by McCarthy ^{[4]}^{[5]}^{[6]}.
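The Bellman equation mentioned above writes the value of a state as the best achievable immediate reward plus the discounted value of the successor state. As a minimal illustration, here is a value-iteration sketch in Python; the tiny two-state decision process (its states, rewards, and the 0.9 discount factor) is invented for this sketch and not taken from Bellman's papers:

```python
# Minimal value-iteration sketch of the Bellman equation:
#   V(s) = max_a [ R(s, a) + gamma * V(next(s, a)) ]
# The two-state process below is a made-up example for illustration only.

GAMMA = 0.9  # discount factor (assumed for this sketch)

# transitions[state][action] = (reward, next_state); deterministic for simplicity
transitions = {
    "s0": {"stay": (0.0, "s0"), "go": (1.0, "s1")},
    "s1": {"stay": (2.0, "s1"), "back": (0.0, "s0")},
}

def value_iteration(eps=1e-9):
    """Apply the Bellman update to every state until the values stop changing."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            new_v = max(r + GAMMA * V[s2] for r, s2 in actions.values())
            delta = max(delta, abs(new_v - V[s]))
            V[s] = new_v
        if delta < eps:
            return V

V = value_iteration()
print(V)  # V(s1) = 2 / (1 - 0.9) = 20, V(s0) = 1 + 0.9 * 20 = 19
```

Repeated application of the update converges because the discount factor makes it a contraction; here the fixed point is V(s1) = 2/(1 - 0.9) = 20 and V(s0) = 1 + 0.9 · 20 = 19.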

## Retrograde Analysis

The dynamic programming methodology which defined the field of retrograde endgame analysis was discovered by Bellman in 1965 ^{[7]}. Bellman had considered game theory from a classical perspective as well ^{[8]}^{[9]}, but his work came to fruition in his 1965 paper, where he observed that the entire state space could be stored and that dynamic programming techniques could then be used to compute whether either side could win any position ^{[10]}.
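The scheme Bellman observed, storing the entire state space and then using dynamic programming to label every position as won or lost, can be sketched on a deliberately tiny game. The subtraction game below (take one or two stones; the player with no move loses) is a made-up stand-in for the chess and checkers endgames of his paper:

```python
# Dynamic-programming sketch in the spirit of retrograde analysis: store the
# entire (tiny) state space and label every position as a win or a loss for
# the player to move. The subtraction game here (take 1 or 2 stones; a player
# with no legal move loses) is an invented stand-in for chess/checkers endgames.

def solve(max_stones):
    # win[n] is True iff the player to move wins with n stones left
    win = [False] * (max_stones + 1)   # win[0] = False: no move, you lose
    for n in range(1, max_stones + 1):
        # a position is won iff some move reaches a position lost for the opponent
        win[n] = any(not win[n - take] for take in (1, 2) if take <= n)
    return win

table = solve(10)
print([n for n in range(11) if not table[n]])  # losing positions: [0, 3, 6, 9]
```

Real endgame tablebase construction works in the same spirit, but iterates backwards from terminal positions (mates and stalemates) rather than over a single integer parameter.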

## Selected Publications

^{[11]}^{[12]}

## 1947

Richard E. Bellman (1947). On the Boundedness of Solutions of Non-Linear Differential and Difference Equations. Ph.D. thesis, pdf

## 1950 ...

Richard E. Bellman (1953). An Introduction to the Theory of Dynamic Programming. R-245, RAND Corporation
Richard E. Bellman (1954). The Theory of Dynamic Programming. P-550, RAND Corporation, pdf
Richard E. Bellman (1954). On a new Iterative Algorithm for Finding the Solutions of Games and Linear Programming Problems. Technical Report P-473, RAND Corporation, Santa Monica, CA
Richard E. Bellman (1957). Dynamic Programming. Princeton University Press
Richard E. Bellman (1957). The Theory of Games. Technical Report P-1062, RAND Corporation, Santa Monica, CA
Richard E. Bellman (1957). Markovian decision processes. Journal of Mathematics and Mechanics 38, 716–719
Richard E. Bellman (1958). Dynamic Programming and Stochastic Control Processes. RAND Corporation, Santa Monica, CA, Information and Control 1, pp. 228–239

## 1960 ...

Richard E. Bellman (1960). Sequential Machines, Ambiguity, and Dynamic Programming. Journal of the ACM, Vol. 7, No. 1
Richard E. Bellman, Stuart E. Dreyfus (1962). Applied Dynamic Programming. RAND Corporation, Princeton University Press, pdf
Richard E. Bellman (1962). Dynamic Programming Treatment of the Travelling Salesman Problem. Journal of the ACM, Vol. 9, No. 1, 1961 pdf preprint
Richard E. Bellman (1965). On the Application of Dynamic Programming to the Determination of Optimal Play in Chess and Checkers. PNAS
Richard E. Bellman (1969). Information science: A function is a mapping-plus a class of algorithms. Information Sciences, Vol. 1, No. 3

## 1970 ...

Richard E. Bellman (1973). On the Analytic Formalism of the Theory of Fuzzy Sets. Information Sciences, Vol. 5 ^{[13]}

## 1980 ...

Richard E. Bellman (1984). Eye of the Hurricane: an Autobiography. World Scientific Publishing

## 2000 ...

Stuart E. Dreyfus (2002). Richard Bellman on the Birth of Dynamic Programming. Operations Research, Vol. 50, No. 1, pdf
Richard E. Bellman (2003). Dynamic Programming. Dover Publications

## 2010 ...

Richard E. Bellman (2010). Dynamic Programming. With a new introduction by Stuart E. Dreyfus, Princeton University Press
Richard E. Bellman, Stuart E. Dreyfus (2015). Applied Dynamic Programming. Princeton University Press

## External Links

Bellman Equation Documentary Preview, YouTube Video

## References

Richard E. Bellman (1953). An Introduction to the Theory of Dynamic Programming. R-245, RAND Corporation
Richard E. Bellman (1958). Dynamic Programming and Stochastic Control Processes. RAND Corporation, Santa Monica, CA, Information and Control 1, pp. 228–239
Richard E. Bellman (1965). On the Application of Dynamic Programming to the Determination of Optimal Play in Chess and Checkers. Proceedings of the National Academy of Sciences of the United States of America
Richard E. Bellman (1954). On a new Iterative Algorithm for Finding the Solutions of Games and Linear Programming Problems. Technical Report P-473, RAND Corporation, U. S. Air Force Project RAND, Santa Monica, CA
Richard E. Bellman (1957). The Theory of Games. Technical Report P-1062, RAND Corporation, Santa Monica, CA
Lewis Stiller (1996). Multilinear Algebra and Chess Endgames. Games of No Chance, edited by Richard J. Nowakowski, pdf
