Application of stochastic control theory to resource allocation under uncertainty

1974 
The subject of this paper is the application of stochastic control theory to resource allocation under uncertainty. In these problems it is assumed that the results of a given allocation of resources are not known with certainty, but that a limited number of experiments can be performed to reduce the uncertainty. The problem is to develop a policy for performing experiments and allocating resources on the basis of the outcome of the experiments such that a performance index is optimized. The problem is first analyzed using the basic stochastic dynamic programming approach. A computationally practical algorithm for obtaining an approximate solution is then developed. This algorithm preserves the "closed-loop" feature of the dynamic programming solution in that the resulting decision policy depends both on the results of past experiments and on the statistics of the outcomes of future experiments. In other words, the present decision takes into account the value of future information. The concepts are discussed in the context of the general problem of allocating resources to repair machines where it is possible to perform a limited number of diagnostic experiments to learn more about potential failures. Illustrative numerical results are given.
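The machine-repair setting described above can be illustrated with a minimal two-stage sketch: a machine has failed with some prior probability, one noisy diagnostic experiment may be purchased, and the repair decision is then made on the posterior. Comparing the expected cost with and without the test exhibits the "value of future information" that the closed-loop policy captures. All numbers, names, and the single-test structure below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical machine-repair example: a machine is failed with prior
# probability P_FAIL; one noisy diagnostic test may be run before deciding
# whether to repair.  All parameter values are assumed for illustration.
P_FAIL = 0.3        # prior probability the machine has failed
C_REPAIR = 4.0      # cost of performing a repair
C_BREAK = 10.0      # cost if a failed machine is left unrepaired
C_TEST = 0.5        # cost of one diagnostic experiment
ACC = 0.9           # P(test says "bad" | failed) = P(test says "good" | ok)

def act_cost(p_fail):
    """Optimal terminal decision: repair iff expected breakdown cost
    exceeds the repair cost; returns the resulting expected cost."""
    return min(C_REPAIR, p_fail * C_BREAK)

def posterior(p_fail, says_bad):
    """Bayes update of the failure probability after one test outcome."""
    if says_bad:
        num = ACC * p_fail
        den = ACC * p_fail + (1 - ACC) * (1 - p_fail)
    else:
        num = (1 - ACC) * p_fail
        den = (1 - ACC) * p_fail + ACC * (1 - p_fail)
    return num / den

# Open-loop: decide now, ignoring the option to experiment.
open_loop = act_cost(P_FAIL)

# Closed-loop: pay for the test, then act optimally on the posterior.
# The expectation is over the test outcome, whose statistics depend on
# the prior -- this is the dynamic-programming backup over experiments.
p_bad = ACC * P_FAIL + (1 - ACC) * (1 - P_FAIL)   # P(test says "bad")
closed_loop = C_TEST + (
    p_bad * act_cost(posterior(P_FAIL, True))
    + (1 - p_bad) * act_cost(posterior(P_FAIL, False))
)

value_of_information = open_loop - closed_loop
print(f"open-loop cost:   {open_loop:.3f}")    # 3.000
print(f"closed-loop cost: {closed_loop:.3f}")  # 2.160
print(f"value of testing: {value_of_information:.3f}")
```

With these numbers the experiment is worth performing (expected cost drops from 3.00 to 2.16), so the optimal policy tests first and repairs only when the posterior failure probability is high enough. The paper's algorithm extends this one-step backup to sequences of experiments and allocations while keeping the decision rule a function of observed outcomes.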