A short way of saying this is that misspecification fears are all ‘just in the minds’ of the firms: both firms fear that the baseline specification of the state transition dynamics is incorrect, and each player $i$ is concerned about model misspecification, yet the baseline model is what actually generates outcomes. The solution computed by the routine used below consists of the $F_i$ and $P_i$ of the associated pair of robust optimal linear regulators, corresponding to the MPE equations. Its matrix arguments have conformable shapes ($A$ of size (n, n), $C$ of size (n, c), where c is the size of $w$), and it accepts beta (scalar float, optional, default 1.0), tol (scalar float, optional, default 1e-8, the tolerance level for convergence) and max_iter (scalar int, optional, default 1000, the maximum number of iterations allowed). It returns F1 (shape (k_1, n)), F2 (shape (k_2, n)), and P1 and P2 (each of shape (n, n)), the steady-state solutions of the associated discrete Riccati equations. Internally the routine unloads parameters and makes sure everything is a matrix, multiplies $A$, $B_1$, $B_2$ by $\sqrt{\beta}$ to enforce discounting, and may fail to form the inverses INV1 or INV2 if the relevant matrices are singular. With the computed rules in hand, we can compare RMPE output and price under the firms' heterogeneous beliefs: total output under the RMPE from player 1's beliefs and from player 2's beliefs. To estimate such a model, a researcher has to be able to compute the stationary Markov perfect equilibrium at the estimated primitives. A Markov perfect equilibrium of a dynamic stochastic game must satisfy the conditions for Nash equilibrium of a certain family of reduced one-shot games; it is a game-theoretic equilibrium concept for competition in situations where there are just a few competitors who watch each other, and it can be computed by backward recursion on two sets of equations.
In addition to what's in Anaconda, this lecture will need the following libraries. This lecture describes a Markov perfect equilibrium with robust agents. The concept is used to study settings where multiple decision makers interact non-cooperatively over time, each seeking to pursue its own objective; in the Markov perfect equilibrium of this game, each agent is assumed to ignore the influence that his choice exerts on the other agent's choice. Markov perfect equilibrium is a key notion for analyzing economic problems involving dynamic strategic interaction, and a cornerstone of applied game theory. The Markov Perfect Equilibrium (MPE) concept is a drastic refinement of subgame perfect equilibrium (SPE), developed as a reaction to the multiplicity of equilibria in dynamic problems. The term appeared in publications starting about 1988 in the economics work of Jean Tirole and Eric Maskin [1], and it has been used in the economic analysis of industrial organization, e.g. of big companies dividing a market oligopolistically. A Markov perfect equilibrium with robust agents can be computed from a pair of equations that express linear decision rules for each agent as functions of that agent's continuation value function as well as parameters of preferences and state transition matrices; for example, player 1's value recursion (7) involves the term $\beta \Lambda_{1t}' {\mathcal D}_1(P_{1t+1}) \Lambda_{1t}$. The term $\theta_i v_{it}' v_{it}$ is a time $t$ contribution to an entropy penalty that an (imaginary) loss-maximizing agent inside player $i$'s mind incurs for distorting the state dynamics; by responding to such misspecifications of the state dynamics, a Markov perfect equilibrium with robust agents can be computed. Now we activate robustness concerns of both firms. ([HS08a] discuss how this property of robust decision rules is connected to the concept of admissibility in Bayesian statistical decision theory.) Such a model can also be estimated from observations on partial trajectories, permitting discussion of the impacts of firm conduct on consumers and rival firms.
These specifications simplify calculations and allow us to give a simple example that illustrates basic forces. The analysis rests on the assumption that behavior is consistent with Markov perfect equilibrium; this, in turn, requires that an equilibrium exists. For multiperiod games in which the action spaces are finite in any period, an MPE exists if the number of periods is finite or (with suitable continuity at infinity) infinite; the methods extend to further examples, including stochastic games with endogenous shocks and a stochastic dynamic oligopoly model. Taking the other player's rule as given, each player faces an LQ robust dynamic programming problem of the type studied in the Robustness lecture. The recursion for player 2's value matrix is

$$
P_{2t} = \Pi_{2t} - (\beta B_2' {\mathcal D}_2 ( P_{2t+1}) \Lambda_{2t} + \Gamma_{2t})' (Q_2 + \beta B_2' {\mathcal D}_2 ( P_{2t+1}) B_2)^{-1} (\beta B_2' {\mathcal D}_2 ( P_{2t+1}) \Lambda_{2t} + \Gamma_{2t}) + \beta \Lambda_{2t}' {\mathcal D}_2 ( P_{2t+1}) \Lambda_{2t} \tag{9}
$$

Because the two firms' decision rules are best responses to each other, we need to solve these $k_1 + k_2$ equations simultaneously. After these equations have been solved, we can take $F_{it}$ and solve for $P_{it}$ in (7) and (9). To find the worst-case beliefs, we compute the following three “closed-loop” transition matrices. Evidently, firm 1's output path is substantially lower when firms are robust firms, while firm 2's output path is virtually the same as in an ordinary Markov perfect equilibrium with no robust firms.
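The simultaneous solution can be organized as iteration to convergence on the two players' coupled Riccati updates. The following is a minimal sketch of that fixed-point logic for the *non-robust* benchmark (robustness and cross-payoff terms are omitted, and all names here are mine), not the lecture's actual routine:

```python
import numpy as np

def solve_mpe(A, B1, B2, R1, R2, Q1, Q2, beta=1.0, tol=1e-8, max_iter=1000):
    """Iterate on the players' coupled Riccati updates until the decision
    rules F_i and value matrices P_i converge (sketch: no robustness,
    no cross-payoff terms)."""
    n = A.shape[0]
    k1, k2 = B1.shape[1], B2.shape[1]
    P1, P2 = np.zeros((n, n)), np.zeros((n, n))
    F1, F2 = np.zeros((k1, n)), np.zeros((k2, n))
    for _ in range(max_iter):
        # Each player's best response to the other's current rule
        L1 = A - B2 @ F2                       # dynamics faced by player 1
        F1_new = np.linalg.solve(Q1 + beta * B1.T @ P1 @ B1,
                                 beta * B1.T @ P1 @ L1)
        L2 = A - B1 @ F1_new                   # dynamics faced by player 2
        F2_new = np.linalg.solve(Q2 + beta * B2.T @ P2 @ B2,
                                 beta * B2.T @ P2 @ L2)
        # Value updates under the implied closed-loop transition
        AO = A - B1 @ F1_new - B2 @ F2_new
        P1_new = R1 + F1_new.T @ Q1 @ F1_new + beta * AO.T @ P1 @ AO
        P2_new = R2 + F2_new.T @ Q2 @ F2_new + beta * AO.T @ P2 @ AO
        diff = max(np.abs(F1_new - F1).max(), np.abs(F2_new - F2).max(),
                   np.abs(P1_new - P1).max(), np.abs(P2_new - P2).max())
        F1, F2, P1, P2 = F1_new, F2_new, P1_new, P2_new
        if diff < tol:
            break
    return F1, F2, P1, P2
```

At a fixed point, each $F_i$ is a best response to $F_j$ given its own continuation value, which is exactly the simultaneity the text describes.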
Two firms are the only producers of a good, the demand for which is governed by a linear inverse demand function. In game theory, a subgame perfect equilibrium (or subgame perfect Nash equilibrium) is a refinement of a Nash equilibrium used in dynamic games: a strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game. The routine used below computes the limit of a Nash linear quadratic dynamic game in which player $i$'s one-period loss is quadratic in the state and in both players' controls (terms of the form $u_{it}' q_i u_{it} + u_{jt}' s_i u_{jt} + 2 u_{jt}' m_i u_{it}$, and so on), subject to $x_{t+1} = A x_t + b_1 u_{1t} + b_2 u_{2t} + C w_{it+1}$ and a perceived control law $u_j(t) = - f_j x_t$ for the other player; we then display the computed policies for firm 1 and firm 2. As in the non-robust problems, we again define the state and controls analogously. The robust rules are the unique optimal rules (or best responses) to the indicated worst-case transition dynamics, and $\{F_{2t}, K_{2t}\}$ solves player 2's robust decision problem, taking $\{F_{1t}\}$ as given. Firm $i$'s one-period profit is

$$\pi_i(q_i, q_{-i}, \hat q_i) = a_0 q_i - a_1 q_i^2 - a_1 q_i q_{-i} - \gamma (\hat q_i - q_i)^2 , \tag{12}$$

where $q_{-i}$ denotes the output of the firm other than $i$. The law of motion for the state $x_t$ is $x_{t+1} = A x_t + B_1 u_{1t} + B_2 u_{2t}$.
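One way to cast profit (12) as a quadratic loss in a state vector is sketched below. The state $x_t = (1, q_{1t}, q_{2t})'$, the control $u_{it} = \hat q_i - q_i$, and the parameter values are assumptions I introduce for illustration here (they follow the conventions of the non-robust duopoly setting):

```python
import numpy as np

# Hypothetical parameter values, for illustration only
a0, a1, gamma = 10.0, 2.0, 12.0

# Assumed state x_t = (1, q_{1t}, q_{2t})', control u_{it} = qhat_i - q_i
A = np.eye(3)
B1 = np.array([[0.0], [1.0], [0.0]])
B2 = np.array([[0.0], [0.0], [1.0]])

# x' R1 x reproduces minus firm 1's revenue p * q1 from (12);
# the adjustment cost gamma * u1^2 enters through Q1
R1 = np.array([[0.0,     -a0 / 2, 0.0],
               [-a0 / 2,  a1,     a1 / 2],
               [0.0,      a1 / 2, 0.0]])
Q1 = np.array([[gamma]])
```

Writing the loss this way lets a linear-quadratic solver handle the duopoly directly, since minimizing $x'R_1x + u_1'Q_1u_1$ is equivalent to maximizing (12).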
Here:

- $x_t$ is an $n \times 1$ state vector,
- $u_{it}$ is a $k_i \times 1$ vector of controls for player $i$, and
- $v_{it}$ is an $h \times 1$ vector of distortions to the state dynamics that concern player $i$,

while $\theta_i \in [\underline \theta_i, +\infty]$ is a scalar multiplier parameter of player $i$. If $\theta_i = + \infty$, player $i$ completely trusts the baseline model. The imaginary loss-maximizing agent helps the loss-minimizing agent by helping him construct bounds on the behavior of his decision rule over a large set of alternative state transition dynamics. Player $i$ employs linear decision rules $u_{it} = - F_{it} x_t$, where $F_{it}$ is a $k_i \times n$ matrix; here in all cases $t = t_0, \ldots, t_1 - 1$ and the terminal conditions are $P_{it_1} = 0$. The objective of the firm is to maximize $\sum_{t=0}^\infty \beta^t \pi_{it}$, and each firm recognizes that its output affects total output and therefore the market price. As we saw in Markov perfect equilibrium, the study of Markov perfect equilibria in dynamic games with two players leads us to an interrelated pair of Bellman equations. Comparing industry output and price under the baseline model with those under the robust decision rules within the robust MPE, both are computed under the transition dynamics associated with the baseline model; only the decision rules $F_i$ differ across the two equilibria. This is the approach we adopt in the next section. In this lecture, we teach Markov perfect equilibrium by example: we consider a general linear quadratic regulator game with two players, each of whom fears model misspecifications. (Relatedly, Bhaskar and Vega-Redondo (2002) show that any subgame perfect equilibrium of an alternating-move game in which players' memory is bounded and their payoffs reflect the costs of strategic complexity must coincide with an MPE.)
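Given a value matrix $P$, a closed-loop transition, and the multiplier $\theta$, the loss-maximizing agent's distortion has a closed form. The sketch below illustrates that first-order condition; the function name is mine, and I assume discounting has already been absorbed into the matrices (as the routine above does via $\sqrt{\beta}$):

```python
import numpy as np

def worst_case_K(theta, P, C, Acl):
    """Worst-case distortion rule v_t = K x_t implied by value matrix P,
    volatility loading C and closed-loop transition Acl (illustrative
    sketch; conventions may differ from the lecture's routine)."""
    h = C.shape[1]
    # FOC of max_v [ (Acl + C v)' P (Acl + C v) - theta v'v ] in v
    return np.linalg.solve(theta * np.eye(h) - C.T @ P @ C, C.T @ P @ Acl)
```

As $\theta \to +\infty$ the penalty dominates and $K \to 0$, matching the statement that player $i$ then completely trusts the baseline model.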
Firm 2's output path is virtually the same as it would be in an ordinary Markov perfect equilibrium with no robust firms, so it is something of a coincidence that its output is almost the same in the two equilibria. We see from the graph above that under robustness concerns, firm 1 and firm 2 choose different output paths than they otherwise would. The worst-case beliefs justify (or rationalize) the Markov perfect equilibrium robust decision rules; the baseline robust transition matrix is $A^o$. As described in Markov perfect equilibrium, when decision-makers have no concerns about the robustness of their decision rules, the equilibrium reduces to the ordinary one. These equilibrium conditions can be used to derive a nonlinear system of equations, $f(\sigma) = 0$, that must be satisfied by any Markov perfect equilibrium $\sigma$; we say that the equilibrium $\sigma$ is regular if the Jacobian matrix $\partial f / \partial \sigma (\sigma)$ has full rank. If the players' cost functions are quadratic, then under certain conditions a unique common-information-based Markov perfect equilibrium exists. (This lecture, Linear Markov Perfect Equilibria with Robust Agents, is available under a Creative Commons Attribution-ShareAlike 4.0 International license.)
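Output-path comparisons like the one discussed above come from iterating a closed-loop law of motion $x_{t+1} = A^o x_t$. A minimal sketch (the function name and the illustrative matrix are mine, assuming a state of the form $(1, q_1, q_2)'$):

```python
import numpy as np

def simulate_path(AO, x0, T):
    """Iterate x_{t+1} = AO @ x_t for T periods (deterministic sketch)."""
    path = np.empty((T + 1, len(x0)))
    path[0] = x0
    for t in range(T):
        path[t + 1] = AO @ path[t]
    return path

# Hypothetical closed-loop matrix for a state (1, q1, q2)'
AO = np.array([[1.0, 0.0, 0.0],
               [0.3, 0.5, 0.0],
               [0.3, 0.0, 0.5]])
path = simulate_path(AO, np.array([1.0, 1.0, 2.0]), T=50)
total_output = path[:, 1] + path[:, 2]   # q1 + q2 along the path
```

Running the same simulator with the robust and non-robust closed-loop matrices is what produces the two output paths being compared.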
By ex post we mean after extremization of each firm's intertemporal objective. A profile of Markov strategies, in which each player's strategy depends only on the current state, is called a Markov perfect equilibrium; it is a refinement of Nash equilibrium for dynamic games. Without concerns for robustness, the model is identical to the duopoly model analyzed earlier. With two players, each of whom fears model misspecifications, the equilibrium can be computed as the fixed points of a finite sequence of low-dimensional contraction mappings, and similar computational procedures apply when we impute concerns about robustness to both decision-makers. The resulting equilibrium with robust agents is characterized by a pair of Bellman equations, one for each agent, with a tractable mathematical structure.
Below, we construct a robust-firms version of the classic duopoly model with adjustment costs analyzed in Markov perfect equilibrium, where we computed the infinite-horizon MPE without robustness. Decisions of the two agents affect the motion of a state vector that appears as an argument of the payoff functions of both agents. When such models are estimated, the first step estimates the policy functions and the law of motion for the state variables; a second-step estimator, common practice in the literature, then recovers the one-period payoffs (11) for the two firms from the optimality conditions for equilibrium, and a nested fixed-point procedure extends Rust's (1987) approach to account for the computation of equilibrium. Unfortunately, existence of a stationary Markov perfect equilibrium cannot be guaranteed under the conditions in Ericson and Pakes (1995). Because the robust rules are the unique optimal rules (or best responses) to the indicated worst-case transition dynamics, we can deduce the worst-case beliefs once the equilibrium decision rules are in hand.
Computing equilibrium: we formulate a linear robust Markov perfect equilibrium using the optimality conditions for equilibrium. After these equations have been solved, we can also deduce the associated sequences of worst-case shocks $v_{it}$. The three “closed-loop” transition matrices are the baseline transition $A^o$ under both firms' robust decision rules, and the two worst-case transitions, one under each firm's worst-case beliefs. Markov perfect equilibrium has been used in analyses of industrial organization and macroeconomics this century, in settings including stochastic games with endogenous shocks and stochastic dynamic oligopoly models.
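As a sketch, the three transition matrices can be assembled directly from the equilibrium objects (the function name and the example values are mine; $K_i$ denotes firm $i$'s worst-case distortion rule $v_{it} = K_i x_t$):

```python
import numpy as np

def closed_loop_matrices(A, B1, B2, C, F1, F2, K1, K2):
    """The three 'closed-loop' transition matrices described above:
    the baseline transition under the robust rules, and the transitions
    under each firm's worst-case beliefs (illustrative sketch)."""
    AO = A - B1 @ F1 - B2 @ F2      # baseline model, robust rules
    A1 = AO + C @ K1                # firm 1's worst-case beliefs
    A2 = AO + C @ K2                # firm 2's worst-case beliefs
    return AO, A1, A2
```

Simulating the state under $A^o$ versus $A^o + C K_i$ is what reveals how each firm's worst-case beliefs distort the baseline dynamics.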
But now one or more agents doubt that the baseline model is correctly specified: if $\theta_i < +\infty$, player $i$ suspects that some other, unspecified model actually governs the transition dynamics. Player $i$'s worst-case shock obeys $v_{it} = K_{it} x_t$, where $K_{it}$ is an $h \times n$ matrix, and the baseline transition under the firms' robust decision rules is denoted $A^o$. Consider the duopoly model with $C = \begin{pmatrix} 0 \\ 0.01 \\ 0.01 \end{pmatrix}$. Simulating both equilibria under the baseline transition dynamics, the trajectories of output starting from $t = 0$ differ between the two sets of decision rules, even though both are evaluated under the same baseline model.

© 2020, Thomas J. Sargent and John Stachurski