
Dynamic Programming (DP),
a mathematical and algorithmic optimization method of recursively nesting overlapping subproblems with optimal substructure inside larger decision problems. The term DP was coined by Richard E. Bellman in the 1950s, not as programming in the sense of producing computer code, but as mathematical programming, that is planning or optimization similar to linear programming, devoted to the study of multistage processes. These processes are composed of sequences of operations in which the outcome of those preceding may be used to guide the course of future ones [1].
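
As a minimal illustration of overlapping subproblems that are solved once and then reused, consider the textbook Fibonacci recurrence with memoization. This is a generic sketch, not taken from the cited sources; the names fib and memo are purely illustrative.

  #include <cstdint>
  #include <cstdio>
  #include <unordered_map>

  // Illustrative only: each subproblem fib(k) is solved once, stored, and
  // reused whenever the recursion reaches it again.
  std::unordered_map<int, std::uint64_t> memo;

  std::uint64_t fib(int n) {
      if (n < 2) return static_cast<std::uint64_t>(n);
      auto it = memo.find(n);
      if (it != memo.end()) return it->second;   // overlapping subproblem: reuse stored value
      std::uint64_t value = fib(n - 1) + fib(n - 2);
      memo[n] = value;
      return value;
  }

  int main() {
      std::printf("fib(50) = %llu\n", static_cast<unsigned long long>(fib(50)));
      return 0;
  }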

DP in Computer Chess

In computer chess, dynamic programming is applied in depth-first search with memoization, that is, using a transposition table and/or other hash tables while traversing a tree of overlapping subproblems (child positions after making a move by one side) in a top-down manner. Such a search gains from stored results of sibling subtrees due to transpositions and/or common aspects of positions, which is particularly effective inside an iterative deepening framework. Another dynamic programming approach in computer chess and computer games is retrograde analysis, which solves a problem by solving its subproblems in a bottom-up manner, starting from the terminal nodes [2].
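
A minimal, self-contained sketch of both approaches, assuming a toy subtraction game (one pile of stones, each player removes 1, 2 or 3, the player who cannot move loses) instead of chess, since a real position, move generator and Zobrist-keyed transposition table would not fit here; the names winsToMove, solveRetrograde and tt are illustrative only.

  #include <cstdio>
  #include <unordered_map>
  #include <vector>

  // Top-down: depth-first negamax-style search with memoization, a stand-in
  // for a transposition table keyed by the position. Different move orders
  // reach the same pile size, so subproblems overlap.
  bool winsToMove(int pile, std::unordered_map<int, bool>& tt) {
      if (pile == 0) return false;                  // terminal node: side to move cannot move and loses
      auto it = tt.find(pile);
      if (it != tt.end()) return it->second;        // transposition hit: reuse the stored result
      bool win = false;
      for (int take = 1; take <= 3 && take <= pile; ++take)
          if (!winsToMove(pile - take, tt)) { win = true; break; }
      tt[pile] = win;                               // store the solved subproblem
      return win;
  }

  // Bottom-up, retrograde-analysis style: solve every position starting from
  // the terminal one and working backwards to larger piles.
  std::vector<bool> solveRetrograde(int maxPile) {
      std::vector<bool> win(maxPile + 1, false);    // win[0] = false: no legal move, loss
      for (int pile = 1; pile <= maxPile; ++pile)
          for (int take = 1; take <= 3 && take <= pile; ++take)
              if (!win[pile - take]) { win[pile] = true; break; }
      return win;
  }

  int main() {
      std::unordered_map<int, bool> tt;
      std::vector<bool> table = solveRetrograde(20);
      for (int pile = 0; pile <= 20; ++pile)
          std::printf("pile %2d: top-down %d, bottom-up %d\n",
                      pile, winsToMove(pile, tt) ? 1 : 0, table[pile] ? 1 : 0);
      return 0;
  }

Both functions compute the same game-theoretic values: the top-down search only visits positions reachable from the root and reuses transpositions, while the bottom-up pass enumerates the whole state space from the terminal positions, which is the trade-off between a memoized search and retrograde analysis.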

Selected Publications

1953 ...

1960 ...

1970 ...

1990 ...

2000 ...

2010 ...


References

  1. Richard E. Bellman (1953). An Introduction to the Theory of Dynamic Programming. R-245, RAND Corporation
  2. Richard E. Bellman (1965). On the Application of Dynamic Programming to the Determination of Optimal Play in Chess and Checkers. PNAS
  3. Markov chain from Wikipedia
  4. David Blackwell and Dynamic Programming (pdf) by William Sudderth
