## Abstract

Consider the following nonlinear programming (NLP) problem:

$$
\min_{x} \; g_0(x) = \min_{x} \int \psi_0(x, y)\, f_Y(y, x)\, dy = \min_{x} \; E[\psi_0(x, Y)] \tag{1}
$$

$$
\text{s.t.} \quad g_j(x) = \int \psi_j(x, y)\, f_Y(y, x)\, dy = E[\psi_j(x, Y)] \le 0, \quad j = 1, \dots, M,
$$

where $x \in X \subset \mathbb{R}^n$, $y \in D \subset \mathbb{R}^m$, the $\psi_j(x, y)$, $j = 0, 1, \dots, M$, are given functions, and $f_Y(y, x)$ is a probability density function (pdf) depending on a vector of parameters $x$. We assume that the pdf $f_Y(y, x)$ is unknown, but a sample $Y_1, \dots, Y_N$ from it is available. Since $f_Y(y, x)$ is unknown, the exact solution of this NLP problem is not available; to approximate it we use the sample $Y_1, \dots, Y_N$ directly in an adaptive procedure called stochastic approximation, in which the optimal solution $x^*$ of (1) is approximated iteratively, i.e., step by step. We consider several stochastic optimization models that fit the framework of the NLP problem (1) and present adaptive stochastic approximation procedures to approximate the optimal solution $x^*$.
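To illustrate the kind of iterative procedure the abstract refers to, the following is a minimal sketch of a Robbins-Monro stochastic approximation recursion for the unconstrained case $M = 0$. It makes two simplifying assumptions not in the paper: the pdf of $Y$ does not depend on $x$, and $\psi_0(x, y) = (x - y)^2$, so that $g_0(x) = E[(x - Y)^2]$ is minimized at $x^* = E[Y]$. The function name and step-size choice are illustrative, not the paper's.

```python
import random

def robbins_monro(sample, x0=0.0):
    """Approximate x* = argmin E[psi_0(x, Y)] for psi_0(x, y) = (x - y)^2,
    using one point of the sample Y_1, ..., Y_N per iteration."""
    x = x0
    for k, y_k in enumerate(sample, start=1):
        a_k = 1.0 / (2.0 * k)        # step sizes: sum a_k = inf, sum a_k^2 < inf
        grad_est = 2.0 * (x - y_k)   # unbiased estimate of g_0'(x) = 2(x - E[Y])
        x = x - a_k * grad_est       # Robbins-Monro update (no projection since M = 0)
    return x

random.seed(1)
sample = [random.gauss(3.0, 1.0) for _ in range(20000)]
x_star = robbins_monro(sample)       # converges toward E[Y] = 3.0
```

In the constrained case ($M \ge 1$), each update would additionally be projected back onto the feasible set, and when $f_Y(y, x)$ itself depends on $x$ the gradient estimate must account for that dependence; those refinements are what the adaptive procedures in the paper address.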

Original language | English
---|---
Pages (from-to) | 169-188
Number of pages | 20
Journal | Mathematics and Computers in Simulation
Volume | 28
Issue number | 3
State | Published - 1 Jan 1986
Externally published | Yes