In mathematics, a Markov decision process (MDP) is a discrete-time stochastic control process. It provides a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision maker. MDPs are useful for studying optimization problems solved via dynamic programming. MDPs were known at least as early as the 1950s; a core body of research on Markov decision processes resulted from Ronald Howard's 1960 book, Dynamic Programming and Markov Processes. They are used in many disciplines, including robotics, automatic control, economics and manufacturing. The name of MDPs comes from the Russian mathematician Andrey Markov as they are an extension of Markov chains.
At each time step, the process is in some state s, and the decision maker may choose any action a that is available in state s. The process responds at the next time step by randomly moving into a new state s′ and giving the decision maker a corresponding reward R_a(s, s′).
The probability that the process moves into its new state s′ is influenced by the chosen action: it is given by the state transition function P_a(s, s′). Thus, the next state s′ depends on the current state s and the decision maker's action a, but, given s and a, it is conditionally independent of all previous states and actions; in other words, the state transitions satisfy the Markov property.
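The loop described above — observe the state, choose an action, sample a next state from P_a(s, s′), and receive R_a(s, s′) — can be sketched as a small simulation. The two-state MDP below (its states, actions, probabilities, and rewards) is an illustrative assumption, not an example from the text.

```python
import random

# Illustrative toy MDP (states/actions/values are made up for this sketch).
# P[action][state] -> list of (next_state, probability), i.e. P_a(s, s')
P = {
    "wait": {"low": [("low", 0.9), ("high", 0.1)],
             "high": [("low", 0.2), ("high", 0.8)]},
    "act":  {"low": [("low", 0.4), ("high", 0.6)],
             "high": [("low", 0.7), ("high", 0.3)]},
}

# R[action][(state, next_state)] -> reward R_a(s, s')
R = {
    "wait": {("low", "low"): 0.0, ("low", "high"): 1.0,
             ("high", "low"): 0.0, ("high", "high"): 2.0},
    "act":  {("low", "low"): -1.0, ("low", "high"): 3.0,
             ("high", "low"): -1.0, ("high", "high"): 1.0},
}

def step(state, action, rng=random):
    """Sample s' ~ P_a(s, .) and return (s', R_a(s, s'))."""
    next_states, probs = zip(*P[action][state])
    s_next = rng.choices(next_states, weights=probs)[0]
    return s_next, R[action][(state, s_next)]
```

Note that the decision maker influences, but does not determine, the next state: `step` samples s′ from the distribution P_a(s, ·) selected by the action, and the reward depends on both the transition taken and the action chosen.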