Eric Huang Wefald († August 31, 1989) ^{[1]}
was an American computer scientist and AI researcher at the University of California, Berkeley, who co-authored multiple papers with Stuart Russell on search control, machine learning, and metareasoning, that is, the process of reasoning about reasoning itself. Wefald obtained an M.A. in philosophy from Princeton in 1985 and was completing a doctorate in artificial intelligence at U.C. Berkeley when he died in a car accident near Bordeaux, France, on August 31, 1989.

Decision-Theoretic Search Control

Abstract from Stuart Russell and Eric Wefald (1988). Decision-Theoretic Search Control: General Theory and an Application to Game-Playing.^{[2]}:

In this paper we outline a general approach to the study of problem-solving, in which search steps are considered decisions in the same sense as actions in the world. Unlike other metrics in the literature, the value of a search step is defined as a real utility rather than as a quasi-utility, and can therefore be computed directly from a model of the base-level problem-solver. We develop a formula for the value of a search step in a game-playing context using the single-step assumption, namely that a computation step can be evaluated as if it were the last to be taken. We prove some meta-level theorems that enable the development of a low-overhead algorithm, MGSS*, that chooses search steps in order of highest estimated utility. Although we show that the single-step assumption is untenable in general, a program implemented for the game of Othello appears to rival an alpha-beta search given equal node or time allocations. Its pruning and search-termination criteria subsume or improve on many other algorithms. Single-agent search, as in the A* algorithm, yields a simpler analysis, and we are currently investigating applications of the algorithm developed for this case.

Optimal Game-Tree Search

Abstract from Stuart Russell and Eric Wefald (1989). On optimal game-tree search using rational metareasoning^{[3]}:

In this paper we outline a general approach to the study of problem-solving, in which search steps are considered decisions in the same sense as actions in the world. Unlike other metrics in the literature, the value of a search step is defined as a real utility rather than as a quasi-utility, and can therefore be computed directly from a model of the base-level problem-solver. We develop a formula for the expected value of a search step in a game-playing context using the single-step assumption, namely that a computation step can be evaluated as if it were the last to be taken. We prove some meta-level theorems that enable the development of a low-overhead algorithm, MGSS*, that chooses search steps in order of highest estimated utility. Although we show that the single-step assumption is untenable in general, a program implemented for the game of Othello soundly beats an alpha-beta search while expanding significantly fewer nodes, even though both programs use the same evaluation function.
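The metareasoning scheme described in these abstracts can be illustrated with a toy example. The following Python sketch is not the authors' implementation: the Gaussian error model, the cost constant, and all names are assumptions made for illustration. It estimates the single-step utility of refining each root move's value (the expected improvement in the move finally chosen, minus the cost of the computation), takes search steps in order of highest estimated utility, and stops as soon as no computation is worth its time cost:

```python
import random

TIME_COST = 0.02   # assumed cost of one refinement step (in value units)
SAMPLES = 20000    # Monte-Carlo samples used to estimate the expectation

def net_value_of_computation(moves, name):
    """Single-step utility of refining `name`'s estimate: the expected
    improvement in the value of the move finally chosen, minus the cost
    of the computation (evaluated as if it were the last step taken)."""
    best = max(moves, key=lambda m: moves[m]["value"])
    second = max((m for m in moves if m != best),
                 key=lambda m: moves[m]["value"])
    est = moves[name]
    gain = 0.0
    for _ in range(SAMPLES):
        refined = random.gauss(est["value"], est["sigma"])
        if name == best:
            # Refining the current best helps only if it turns out to be
            # worse than the runner-up (the search avoids a blunder).
            gain += max(moves[second]["value"] - refined, 0.0)
        else:
            # Refining a competitor helps only if it overtakes the best.
            gain += max(refined - moves[best]["value"], 0.0)
    return gain / SAMPLES - TIME_COST

if __name__ == "__main__":
    random.seed(0)
    # Hypothetical root moves: a current value estimate plus the standard
    # deviation of the error a further search step would resolve.
    moves = {
        "a": {"value": 0.50, "sigma": 0.05},
        "b": {"value": 0.45, "sigma": 0.30},
        "c": {"value": 0.10, "sigma": 0.10},
    }
    # Take search steps in order of highest estimated utility; stop as
    # soon as no computation is worth its time cost, then commit to a move.
    while True:
        name, util = max(((m, net_value_of_computation(moves, m))
                          for m in moves), key=lambda p: p[1])
        if util <= 0:
            break
        est = moves[name]
        # Toy "search step": resample the estimate and halve its error.
        est["value"] = random.gauss(est["value"], est["sigma"])
        est["sigma"] /= 2
    print("chosen move:", max(moves, key=lambda m: moves[m]["value"]))
```

The design point mirrors the idea behind MGSS*: a computation has value only insofar as it might change which move is finally chosen, so refining a clearly dominated move, or an already precise front-runner, quickly ceases to pay for its time cost.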

Selected Publications

^{[4]}

Stuart Russell and Eric Wefald (1988). Decision-Theoretic Search Control: General Theory and an Application to Game-Playing. Computer Science Division Technical Report 88/435, University of California, Berkeley, CA.

Stuart Russell and Eric Wefald (1988). Multi-Level Decision-Theoretic Search. In AAAI Symposium on Computer Game-Playing, Stanford.

Stuart Russell and Eric Wefald (1989). On optimal game-tree search using rational metareasoning. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, Detroit, MI: Morgan Kaufmann.

Stuart Russell and Eric Wefald (1989). Adaptive Learning of Decision-Theoretic Search Control Knowledge. In Proceedings of the Sixth International Workshop on Machine Learning, Ithaca, NY: Morgan Kaufmann.

Stuart Russell and Eric Wefald (1991). Principles of Metareasoning. Artificial Intelligence, Vol. 49.

Stuart Russell and Eric Wefald (1991). Do the Right Thing: Studies in Limited Rationality. The MIT Press.

References

^Stuart Russell and Eric Wefald (1988). Decision-Theoretic Search Control: General Theory and an Application to Game-Playing. Computer Science Division Technical Report 88/435, University of California, Berkeley, CA.

^Stuart Russell and Eric Wefald (1989). On optimal game-tree search using rational metareasoning. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, Detroit, MI: Morgan Kaufmann.
