An MPEC Estimator for the Sequential Search Model
Abstract
This paper proposes a constrained maximum likelihood estimator for sequential search models, using the MPEC (Mathematical Programming with Equilibrium Constraints) approach. This method enhances numerical accuracy while avoiding ad hoc components and errors related to equilibrium conditions. Monte Carlo simulations show that the estimator attains lower bias and root-mean-squared error than the benchmark in small samples, though it performs less well in large samples. Despite these mixed results, the MPEC approach remains valuable for identifying candidate parameters comparable to the benchmark without relying on an ad hoc look-up table, as it generates the table through the solved equilibrium constraints.
Keywords: Sequential search model, Search cost, Demand estimation, MPEC
JEL code: C50, L81, D83, M31
1 Introduction
The consumer search process, through which individuals gather information about choices, is essential to understanding decision-making behavior. This process has become increasingly observable to researchers through browsing data, which reveals the options considered before a final choice. The availability of such data has allowed for the estimation of structural models of consumer behavior, as noted by Ursu et al. (2023). The current benchmark model solves implicit functions related to reservation prices using a fixed-point approach, which is computationally demanding.
To address this, we propose an estimator based on the Mathematical Program with Equilibrium Constraints (MPEC) approach (Su and Judd 2012). MPEC recasts estimation as a constrained optimization problem subject to equilibrium conditions, which avoids iterative solutions to the fixed-point problem and removes approximation and estimation errors, a key issue in demand estimation (Dubé et al. 2012), dynamic programming (Su and Judd 2012), and misclassification models (Lu et al. 2014).
Monte Carlo simulations show that, relative to the common estimation method using an ad hoc look-up table, MPEC performs comparably: it achieves lower bias and root-mean-squared error (RMSE) in small samples but higher bias and RMSE in larger samples. The MPEC approach is particularly valuable for identifying parameters comparable to the benchmark, eliminating the need for the look-up table: MPEC effectively constructs this table by requiring the equilibrium constraints to be satisfied during estimation. We conclude that MPEC is a useful alternative for obtaining benchmark estimates before advancing to more complex models, especially when the approximation and estimation errors of an ad hoc look-up table are unknown to researchers.
2 Weitzman’s sequential search model
2.1 Framework
We construct the sequential search model based on Weitzman (1979). A decision maker faces a set of boxes; box $j$ contains a potential reward $x_j$ independently drawn from a known distribution $F_j$, and opening box $j$ costs $c_j$. An outside option, denoted $j = 0$, with a known reward $x_0$ is available at no cost. The decision maker opens boxes through sequential search steps, and her goal is to maximize her expected reward net of total search costs.
Suppose that the decision maker has opened a set $S$ of boxes, which revealed a maximum reward value of $y = \max_{j \in S \cup \{0\}} x_j$, and the boxes in the set $\bar{S}$ can still be opened. Her dynamic programming problem of choosing whether to stop opening boxes and receive the payoff $y$, or to continue opening boxes, is described by the following Bellman equation:
$$W(y, \bar{S}) = \max\Big\{ y,\; \max_{j \in \bar{S}} W_j(y, \bar{S}) \Big\},$$
where $W_j(y, \bar{S})$ is the expected value of continuing by opening box $j$ and is defined as
$$W_j(y, \bar{S}) = -c_j + \mathbb{E}_{x_j}\Big[ W\big(\max\{y, x_j\},\, \bar{S} \setminus \{j\}\big) \Big].$$
The reservation utility $z_j$ of a product $j$ is the utility level at which the decision maker is indifferent between stopping and opening box $j$; it is defined implicitly by
$$c_j = \mathbb{E}\big[\max\{x_j - z_j,\, 0\}\big] = \int_{z_j}^{\infty} (x_j - z_j)\, dF_j(x_j).$$
A set of optimal decision rules, developed by Weitzman (1979), characterizes consumers’ optimal search and choice strategies. The rules rely on the following assumptions:

1. Consumers know the true distribution(s) $F_{ij}$.
2. Search fully reveals the utility $u_{ij}$ associated with product $j$.
3. For each consumer $i$, $u_{ij}$ is independently (across $j$) drawn from $F_{ij}$.
Then, the optimal search and choice decision rules are expressed as follows:

1. Selection Rule: The consumer searches in decreasing order of reservation utilities.
2. Stopping Rule: Search terminates when the maximum observed utility exceeds the reservation utility of any unsearched product.
3. Choice Rule: Once the consumer stops searching, she chooses the product with the highest observed utility among all searched options.
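These three rules amount to a simple algorithm once utilities and reservation utilities are in hand. The following Julia sketch is our illustration, not part of the original framework; the function name and the convention of coding the outside option as choice 0 are ours.

```julia
# Minimal illustration of Weitzman's selection, stopping, and choice rules
# for one consumer. `u` holds realized (post-search) utilities, `z` the
# reservation utilities, and `u0` the known utility of the free outside option.
function weitzman_search(u::Vector{Float64}, z::Vector{Float64}, u0::Float64)
    order = sortperm(z; rev = true)   # Selection rule: search in decreasing order of z
    searched = Int[]
    best = u0                         # best utility observed so far (outside option is free)
    for j in order
        best >= z[j] && break         # Stopping rule: stop once the best observed utility
        push!(searched, j)            # exceeds the highest remaining reservation utility
        best = max(best, u[j])        # open box j and observe its utility
    end
    # Choice rule: pick the highest observed utility among searched options
    # and the outside option (coded as 0)
    choice = (isempty(searched) || u0 >= maximum(u[searched])) ? 0 :
             searched[argmax(u[searched])]
    return searched, choice
end

# Example with hypothetical numbers: three products, outside option utility 0
searched, choice = weitzman_search([1.2, 0.4, 2.0], [1.5, 0.9, 1.1], 0.0)
```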
2.2 Parametrizations
Empirical economists often assume that consumer $i$’s utility from product $j$ is defined as
$$u_{ij} = \delta_{ij} + \epsilon_{ij},$$
where $\delta_{ij}$ is utility which is known by the consumer prior to search (“pre-search utility”) and $\epsilon_{ij}$ is utility that is only known by the consumer after search (“post-search taste shock”). We assume that the pre-search utility consists of a component $X_{ij}\beta$ that can be observed by the researcher and a pre-search taste shock $\eta_{ij}$ that cannot be observed by the researcher, i.e., $\delta_{ij} = X_{ij}\beta + \eta_{ij}$. Following Ursu et al. (2023), we further normalize the scale by setting the variance of the post-search taste shock to one, $\sigma_\epsilon = 1$.
Under the assumption of normally distributed post-search taste shocks, $\epsilon_{ij} \sim N(0, 1)$, we can derive the following expression for the reservation utility:
$$z_{ij} = \delta_{ij} + \zeta_{ij},$$
where $\zeta_{ij}$ is the implicit function of the search cost $c_{ij}$ that solves the following equation (see Kim et al. (2010)):
$$c_{ij} = \phi(\zeta_{ij}) + \zeta_{ij}\big[\Phi(\zeta_{ij}) - 1\big], \qquad (1)$$
with $\phi$ and $\Phi$ denoting the standard normal pdf and cdf, respectively. Weitzman (1979) shows the existence and uniqueness of the solution of (1).
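For completeness, equation (1) can be obtained from the implicit definition of the reservation utility in Section 2.1. Substituting $u_{ij} = \delta_{ij} + \epsilon_{ij}$ and $z_{ij} = \delta_{ij} + \zeta_{ij}$ into $c_{ij} = \mathbb{E}\big[\max\{u_{ij} - z_{ij},\, 0\}\big]$ gives
$$c_{ij} = \int_{\zeta_{ij}}^{\infty} (\epsilon - \zeta_{ij})\,\phi(\epsilon)\, d\epsilon
        = \phi(\zeta_{ij}) - \zeta_{ij}\big[1 - \Phi(\zeta_{ij})\big]
        = \phi(\zeta_{ij}) + \zeta_{ij}\big[\Phi(\zeta_{ij}) - 1\big],$$
using $\int_{a}^{\infty} \epsilon\, \phi(\epsilon)\, d\epsilon = \phi(a)$, which follows from $\phi'(\epsilon) = -\epsilon\, \phi(\epsilon)$.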
There are four primary methods to solve (1). The first method, proposed by Kim et al. (2010), involves pre-computing the mapping between $c_{ij}$ and $\zeta_{ij}$ and storing it in a look-up table. The second method, suggested by Jiang et al. (2021), employs Newton’s method to compute reservation utilities by iteratively improving approximations to the root of the function
$$g(\zeta_{ij}) = \phi(\zeta_{ij}) + \zeta_{ij}\big[\Phi(\zeta_{ij}) - 1\big] - c_{ij}.$$
The third approach, introduced by Elberg et al. (2019), iterates a contraction mapping whose fixed point solves (1). The fourth method, proposed by Morozov (2023), directly estimates $\zeta_{ij}$ instead of solving (1).
Ursu et al. (2023) highlight limitations in each method: (1) the first method introduces errors due to linear interpolation for search cost values that do not align with grid points; (2) the second and third methods avoid interpolation errors but require iterative computation of $\zeta_{ij}$ and a convergence threshold, which can cause numerical errors if the threshold is too loose; and (3) the fourth method involves estimation errors for $\zeta_{ij}$. In practice, the second and third methods typically converge quickly, allowing for tight convergence thresholds that minimize numerical issues (Ursu et al. 2023). Similar challenges in demand estimation are addressed by the MPEC approach.
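To illustrate the first two methods, the following Julia snippet solves (1) by Newton’s method and pre-computes a look-up table on a grid of search costs. This is our sketch, not the paper’s replication code; the grid range and tolerances are arbitrary illustrative choices.

```julia
using Distributions  # standard normal pdf and cdf

# Residual of equation (1): g(ζ) = φ(ζ) + ζ[Φ(ζ) − 1] − c, with g′(ζ) = Φ(ζ) − 1.
g(ζ, c) = pdf(Normal(), ζ) + ζ * (cdf(Normal(), ζ) - 1) - c
dg(ζ)   = cdf(Normal(), ζ) - 1

# Method 2: Newton's method for the root of g(·, c).
function zeta_newton(c; ζ0 = 0.0, tol = 1e-12, maxit = 100)
    ζ = ζ0
    for _ in 1:maxit
        step = g(ζ, c) / dg(ζ)
        ζ -= step
        abs(step) < tol && return ζ
    end
    return ζ
end

# Method 1: pre-computed look-up table on a cost grid; linear interpolation
# between grid points is the source of the approximation error discussed above.
cost_grid  = 0.001:0.001:4.0
zeta_table = [zeta_newton(c) for c in cost_grid]
```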
3 An MPEC estimator for the sequential search model
As a fifth method, we propose a straightforward estimator for the sequential search model, utilizing the Mathematical Programming with Equilibrium Constraints (MPEC) approach introduced by Su and Judd (2012). The MPEC estimator bypasses the need for iterative computations to find the fixed point by treating the equilibrium equations as constraints.
Let $\theta$ represent the set of parameters. The MPEC estimator solves the following constrained optimization problem:
$$\max_{\theta,\, \zeta}\ \sum_{i} \log L_i(\theta, \zeta)
\quad \text{s.t.} \quad
c_{ij} = \phi(\zeta_{ij}) + \zeta_{ij}\big[\Phi(\zeta_{ij}) - 1\big] \ \ \text{for all } i, j, \qquad (2)$$
where the individual likelihood $L_i(\theta, \zeta)$ is the joint probability that consumer $i$’s observed search order, stopping decision, and purchase are consistent with the optimal decision rules,
$$L_i(\theta, \zeta) = \Pr\big(\text{selection, stopping, and choice rules hold for consumer } i \mid \theta, \zeta\big). \qquad (3)$$
A remarkable advantage of MPEC, beyond not having to solve the fixed-point problem iteratively, is that it needs no ad hoc look-up table, whose approximation error is unknown to researchers, and it introduces no approximation or estimation error into the equilibrium constraints (1). The detailed construction of the likelihood is provided in the Appendix.
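To fix ideas, the following Julia sketch shows how problem (2) can be set up for a nonlinear solver. It is our illustration rather than the paper’s replication code: the function names are ours, and the likelihood is left as a placeholder that would implement the simulated (kernel-smoothed) likelihood (3) described in the Appendix.

```julia
# Illustrative MPEC setup for problem (2): the standardized reservation values ζ
# are treated as decision variables alongside the structural parameters, and
# equation (1) enters as an equality constraint instead of being solved
# iteratively inside the likelihood.
using Distributions

# Placeholder: in the actual estimator this would be the simulated
# (kernel-smoothed) log-likelihood (3) built from the data and error draws.
loglik(β, logc, ζ) = error("implement the simulated log-likelihood here")

# Equality constraints: one residual of (1) per reservation value.
# With a homogeneous search cost, a single ζ (and residual) suffices.
function constraint_residuals(logc, ζ)
    c = exp(logc)
    return [pdf(Normal(), ζj) + ζj * (cdf(Normal(), ζj) - 1) - c for ζj in ζ]
end

# Objective over the stacked decision vector x = [β; logc; ζ], with J brand intercepts.
function objective(x, J)
    β, logc, ζ = x[1:J], x[J+1], x[J+2:end]
    return loglik(β, logc, ζ)
end

# Any NLP solver (for example Ipopt, called through JuMP) can now maximize
# `objective` subject to `constraint_residuals(logc, ζ) .== 0`.
```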
4 Simulation
For comparison, we follow the parameter settings for $\beta$ and $c$ in (2) as described in Appendix B of Ursu et al. (2023). We evaluate the MPEC approach against their kernel-smoothed frequency estimator (the benchmark), which employs a look-up table, as is common in empirical research. The same ad hoc table from Ursu et al. (2023), with a grid fineness of 0.001, is used. We generate 50 simulated datasets of consumers who make sequential search and purchase decisions across four brands and an outside option (with the mean utility of the outside option normalized to zero). The utility function includes only brand intercepts. The log of the search cost is fixed at a constant value, and simulated draws are used for the error terms. All estimations start from an initial vector of zeros. The replication code, written in Julia for a fair comparison, is available on the authors’ GitHub.
Table 1: Bias and RMSE of the estimated coefficients

(a) Small sample

| Parameter | MPEC Bias | MPEC RMSE | Look-up table Bias | Look-up table RMSE |
|---|---|---|---|---|
| Brand intercept 1 | -0.179 | 0.250 | -0.212 | 0.273 |
| Brand intercept 2 | -0.156 | 0.216 | -0.168 | 0.237 |
| Brand intercept 3 | -0.044 | 0.071 | -0.109 | 0.200 |
| Brand intercept 4 | -0.062 | 0.182 | -0.060 | 0.192 |
| Log search cost | 0.214 | 0.250 | 0.248 | 0.304 |
| Time | 67.957 | | 12.691 | |

(b) Large sample

| Parameter | MPEC Bias | MPEC RMSE | Look-up table Bias | Look-up table RMSE |
|---|---|---|---|---|
| Brand intercept 1 | -0.272 | 0.361 | -0.203 | 0.251 |
| Brand intercept 2 | -0.199 | 0.332 | -0.114 | 0.174 |
| Brand intercept 3 | -0.173 | 0.255 | -0.088 | 0.157 |
| Brand intercept 4 | -0.112 | 0.258 | -0.036 | 0.159 |
| Log search cost | 0.161 | 0.255 | 0.265 | 0.290 |
| Time | 182.293 | | 40.294 | |
Note: The benchmark results closely replicate Column (4) in Table B1: Monte Carlo Simulation Results from Ursu et al. (2023). We calculate the average finish time for locally solved cases.
Table 1 presents the bias and RMSE of the estimated coefficients. Panel (a) shows that the MPEC estimator, though still biased, achieves a smaller bias and RMSE than the benchmark in small samples, in line with the findings for the MPEC misclassification estimator of Lu et al. (2014). However, Panel (b) reveals worse performance in both bias and RMSE for larger samples, except for the search cost. MPEC also requires over four times the computational time and has more difficulty reaching a local optimum.
Despite these seemingly discouraging results, we argue that the MPEC approach remains useful for identifying candidate parameters comparable to those from the benchmark method, which relies on an ad hoc look-up table. Furthermore, MPEC can construct this table dynamically by solving the equilibrium constraints during the estimation process.
5 Conclusion
The optimal sequential search model, based on Weitzman (1979), has been widely used in empirical research (Ursu et al. 2023). However, caution is needed regarding estimation and approximation accuracy, as the commonly used approach relies on an ad hoc look-up table.
To address these issues, we propose an MPEC estimator that avoids approximation and estimation errors in the equilibrium constraints. Despite certain limitations, the MPEC approach proves useful for identifying parameters comparable to the benchmark while dynamically generating the look-up table during the estimation process.
Acknowledgments
This work was supported by JST ERATO Grant Number JPMJER2301, Japan.
References
- Dubé et al. (2012) Dubé, Jean-Pierre, Jeremy T Fox, and Che-Lin Su, “Improving the numerical performance of static and dynamic aggregate discrete choice random coefficients demand estimation,” Econometrica, 2012, 80 (5), 2231–2267.
- Elberg et al. (2019) Elberg, Andrés, Pedro M Gardete, Rosario Macera, and Carlos Noton, “Dynamic effects of price promotions: Field evidence, consumer search, and supply-side implications,” Quantitative Marketing and Economics, 2019, 17, 1–58.
- Jiang et al. (2021) Jiang, Zhenling, Tat Chan, Hai Che, and Youwei Wang, “Consumer search and purchase: An empirical investigation of retargeting based on consumer online behaviors,” Marketing Science, 2021, 40 (2), 219–240.
- Kim et al. (2010) Kim, Jun B, Paulo Albuquerque, and Bart J Bronnenberg, “Online demand under limited consumer search,” Marketing Science, 2010, 29 (6), 1001–1023.
- Lu et al. (2014) Lu, Ruichang, Yao Luo, and Ruli Xiao, “An MPEC estimator for misclassification models,” Economics Letters, 2014, 125 (2), 195–199.
- Morozov (2023) Morozov, Ilya, “Measuring benefits from new products in markets with information frictions,” Management Science, 2023, 69 (11), 6988–7008.
- Su and Judd (2012) Su, Che-Lin and Kenneth L Judd, “Constrained optimization approaches to estimation of structural models,” Econometrica, 2012, 80 (5), 2213–2230.
- Ursu et al. (2023) Ursu, Raluca, Stephan Seiler, and Elisabeth Honka, “The Sequential Search Model: A Framework for Empirical Research,” Available at SSRN 4236738, 2023.
- Weitzman (1979) Weitzman, Martin L, “Optimal Search for the Best Alternative,” Econometrica, 1979, 47 (3), 641–654.
Appendix A Appendix (for online publication)
A.1 Crude estimator
We first introduce a crude estimator for the likelihood expression (3) as the simplest approach. For each consumer and each set of simulated error draws, define the conditions (4)–(7), which check whether the simulated utilities and reservation utilities implied by the candidate parameters are consistent with the observed search order, the decision to continue searching, the decision to stop, and the observed purchase, i.e., with the selection, stopping, and choice rules.
Then, the estimation procedure is described as follows.
1. Take sets of draws of $\eta_{ij}$ and $\epsilon_{ij}$ (each set of draws contains one draw of $\eta_{ij}$ and one draw of $\epsilon_{ij}$) for each consumer-product combination.
2. For a given guess of the parameters $\theta$, compute $u_{ij}$ and $z_{ij}$ for each set of draws and each consumer-product combination.
3. For each set of draws, evaluate whether conditions (4)–(7) hold at the observed search and purchase decisions.
4. Compute the likelihood contribution $L_i$ for each consumer as the frequency with which the conditions hold across the sets of draws.
5. Compute the log-likelihood and solve the constrained problem (2).
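A schematic Julia sketch of the per-consumer computation (steps 3 and 4) is given below; `rules_hold`, `sim_u`, and `sim_z` are hypothetical placeholders for, respectively, the conditions (4)–(7) and the draw-specific utilities and reservation utilities computed in step 2.

```julia
# Crude (frequency) likelihood contribution for one consumer: the share of
# simulated draw-sets under which the observed search and purchase decisions
# satisfy conditions (4)–(7). `rules_hold(u, z, obs)` returns true/false for
# one draw-set; `sim_u` and `sim_z` collect the draw-specific utilities and
# reservation utilities.
crude_L_i(sim_u, sim_z, obs_i, rules_hold) =
    sum(rules_hold(u, z, obs_i) for (u, z) in zip(sim_u, sim_z)) / length(sim_u)

# Consumer-level contributions are then combined into the log-likelihood
# entering the constrained problem (2):
loglik_from_contributions(Ls) = sum(log, Ls)
```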
A.2 Kernel estimator
To improve upon the crude estimator, the kernel estimator applies a smooth kernel function to obtain the log-likelihood. Specifically, we use a multivariate scaled logistic cumulative distribution function as the kernel, so that, for each set of draws, the indicator that conditions (4)–(7) hold is replaced by
$$\Big(1 + \sum_{k} \exp(-s_k\, d_k)\Big)^{-1},$$
where $d_k$ is the (signed) value of condition $k$, positive when the condition is satisfied, and $s_k$ is a scaling parameter for each condition, to be determined by the researcher. The estimation procedure is otherwise the same as for the crude estimator. In our simulation, the scaling parameter is set to 10 for some conditions and 20 for the others, for both approaches. Further methods requiring fine-tuning are discussed in Ursu et al. (2023).
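As an illustration (with our own function names, mirroring the crude sketch in Section A.1), the smoothed counterpart replaces the 0/1 indicator with the scaled logistic kernel:

```julia
# Kernel-smoothed counterpart of the crude frequency contribution: the 0/1
# indicator that conditions (4)–(7) hold is replaced by a scaled multivariate
# logistic CDF, keeping the simulated likelihood smooth in the parameters.
# `conds(u, z, obs)` is a hypothetical placeholder returning the vector of
# signed condition values (positive when satisfied); `s` holds the scaling
# parameters.
smooth_kernel(d, s) = 1 / (1 + sum(exp.(-s .* d)))

kernel_L_i(sim_u, sim_z, obs_i, conds, s) =
    sum(smooth_kernel(conds(u, z, obs_i), s) for (u, z) in zip(sim_u, sim_z)) /
    length(sim_u)
```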