We study the time-consistency of optimization problems, where an optimization problem is called time-consistent if its optimal solution, or the optimal policy for choosing actions, does not depend on when the problem is solved. Time-consistency is a minimal requirement on an optimization problem for the decisions based on its solution to be rational. We show that the return gained by taking the "optimal" actions selected by solving a time-inconsistent optimization problem can be surely dominated by the return gained by taking "suboptimal" actions. We establish sufficient conditions on the objective function and on the constraints for an optimization problem to be time-consistent, and we also show when these sufficient conditions are necessary. Our results are particularly relevant in stochastic settings where the objective function is a risk measure other than expectation or where there is a constraint on a risk measure.
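As a toy numerical illustration of the notion of time-inconsistency (not taken from the paper), consider minimizing a hypothetical mean-CVaR objective rho(L) = 0.5*E[L] + 0.5*CVaR_0.25(L) over losses on a tiny two-stage scenario tree. The plan that is optimal ex ante is no longer optimal once the first-stage outcome is observed and the problem is re-solved; all numbers, scenario values, and function names below are invented for the sketch.

```python
import numpy as np

def cvar(losses, probs, alpha):
    """Expected loss over the worst `alpha` fraction of probability mass."""
    order = np.argsort(losses)[::-1]                 # largest losses first
    losses, probs = np.asarray(losses, float)[order], np.asarray(probs, float)[order]
    remaining, total = alpha, 0.0
    for loss, p in zip(losses, probs):
        take = min(p, remaining)
        total += take * loss
        remaining -= take
        if remaining <= 1e-12:
            break
    return total / alpha

def rho(losses, probs, alpha=0.25):
    """Hypothetical mean-CVaR objective: 0.5*E[L] + 0.5*CVaR_alpha(L)."""
    losses, probs = np.asarray(losses, float), np.asarray(probs, float)
    return 0.5 * losses @ probs + 0.5 * cvar(losses, probs, alpha)

# Two-stage tree: the first stage goes "up" or "down" with probability 0.5 each.
# A decision (A or B) is made only at the "up" node; the "down" node always
# yields a loss of 12.  Losses at "up" depend on a second coin flip.
plans = {
    "A at up": [0.0, 10.0, 12.0, 12.0],   # scenarios: (up,h) (up,t) (down,h) (down,t)
    "B at up": [6.0,  6.0, 12.0, 12.0],
}
probs = [0.25, 0.25, 0.25, 0.25]

print("ex ante (time 0):")
for name, losses in plans.items():
    print(f"  {name}: rho = {rho(losses, probs):.2f}")

print("re-solved at the 'up' node (time 1):")
for name, losses in plans.items():
    print(f"  {name}: rho = {rho(losses[:2], [0.5, 0.5]):.2f}")

# Ex ante, "A at up" wins (10.25 < 10.50), but after observing "up",
# re-solving prefers B (6.00 < 7.50): the optimal action depends on when
# the problem is solved, so this objective is time-inconsistent.
```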
Minimum Bayes-risk (MBR) decoding has recently regained attention in text generation. MBR decoding treats texts sampled from a model as pseudo-references and selects the text with the highest similarity to the others. Sampling is therefore one of the key elements of MBR decoding, and previous studies have reported that performance varies with the sampling method. From a theoretical standpoint, this performance variation is likely tied to how closely the samples approximate the true distribution of references; however, this approximation has not been studied in depth. In this study, we propose using anomaly detection to measure the degree of approximation. We first closely examine the performance variation and then show that previous hypotheses about the samples do not correlate well with the variation, whereas the anomaly scores we introduce do. These results are the first to empirically support the link between the performance and the core assumption of MBR decoding.
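A minimal sketch of the MBR selection step described above. The token-overlap utility and the helper names (`token_f1`, `mbr_select`) are placeholders for illustration; a real system would plug in a proper utility such as an n-gram or neural similarity metric.

```python
def token_f1(a: str, b: str) -> float:
    """Crude token-overlap similarity used as a placeholder utility."""
    ta, tb = set(a.split()), set(b.split())
    if not ta or not tb:
        return 0.0
    return 2 * len(ta & tb) / (len(ta) + len(tb))

def mbr_select(samples: list[str], utility=token_f1) -> str:
    """Return the sample with the highest average utility against the
    other samples, which act as pseudo-references."""
    best, best_score = None, float("-inf")
    for i, hyp in enumerate(samples):
        refs = samples[:i] + samples[i + 1:]
        score = sum(utility(hyp, r) for r in refs) / max(len(refs), 1)
        if score > best_score:
            best, best_score = hyp, score
    return best

# Toy usage: candidates sampled from a model (here hard-coded).
candidates = [
    "the cat sat on the mat",
    "a cat sat on the mat",
    "the dog ran away",
]
print(mbr_select(candidates))   # -> "the cat sat on the mat"
```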
Modeling the attractiveness of a product or service as a function of its own attributes (e.g., price and quality) is one of the foundations of econometric forecasting, which traditionally assumes that each human rationally holds a consistent preference order over choice decisions. Yet the preference orders of real humans are irrationally reversed when the set of available options is manipulated. To accurately predict choice decisions involving such preference reversals, which existing econometric methods fail to capture, the authors introduce a new cognitive choice model whose parameters are efficiently fitted with a global convex optimization algorithm. The proposed model describes each human as a Bayesian decision maker facing a mental conflict between objective evaluation samples and a subjective prior, where the underlying objective evaluation function is rationally independent of context while the subjective prior is irrationally determined by each choice set. As the key idea for analytically handling this irrationality and yielding the convex optimization, the Bayesian decision mechanism is implemented as closed-form Gaussian process regression using the similarities among the available options in each context. By explaining the irrational decisions as a consequence of uncertainty aversion, the proposed model outperformed the existing econometric models in predicting the irrational choice decisions recorded in real-world datasets.
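A schematic of the kind of closed-form Gaussian-process machinery described above (an illustrative sketch, not the authors' exact model): for each option, a GP posterior over its value is computed from the other options in the same choice set, and an uncertainty-averse score (posterior mean penalized by posterior variance) drives the choice probabilities. The kernel, the helper names, and the `risk_aversion`/`temp` parameters are all assumptions made for the sketch.

```python
import numpy as np

def rbf_kernel(X1, X2, length=1.0):
    """Squared-exponential similarity between option attribute vectors."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length ** 2))

def gp_posterior(X_ctx, y_ctx, X_query, noise=0.1, length=1.0):
    """Closed-form GP regression: posterior mean and variance at X_query,
    given (X_ctx, y_ctx) as the 'objective evaluation samples'."""
    K = rbf_kernel(X_ctx, X_ctx, length) + noise * np.eye(len(X_ctx))
    K_star = rbf_kernel(X_query, X_ctx, length)
    K_inv = np.linalg.inv(K)
    mean = K_star @ K_inv @ y_ctx
    var = 1.0 - np.einsum("ij,jk,ik->i", K_star, K_inv, K_star)
    return mean, var

def choice_probabilities(X_options, y_values, risk_aversion=1.0, temp=1.0):
    """Score each option by the GP posterior mean of its value given the
    *other* options in the choice set, minus a variance penalty
    (uncertainty aversion), then map scores to choice probabilities."""
    n = len(X_options)
    scores = np.empty(n)
    for i in range(n):
        ctx = np.delete(np.arange(n), i)
        mean, var = gp_posterior(X_options[ctx], y_values[ctx], X_options[i:i + 1])
        scores[i] = mean[0] - risk_aversion * var[0]
    z = np.exp((scores - scores.max()) / temp)
    return z / z.sum()

# Toy usage: three options described by (price, quality) attributes.
X = np.array([[1.0, 0.2], [0.5, 0.5], [0.2, 1.0]])
y = np.array([0.3, 0.5, 0.7])    # hypothetical evaluation samples
print(choice_probabilities(X, y))
```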
Due to rapid urbanization, large cities in developing countries suffer from heavy traffic congestion. International aid is being provided to construct modern traffic signal infrastructure, but such infrastructure often does not work well because of high operating and maintenance costs and the limited expertise of local engineers. In this paper, we propose a frugal signal control framework that uses image analysis to estimate traffic flows. It requires only low-cost Web cameras to support a signal control strategy based on the current traffic volume: we estimate the traffic volumes of the roads near the traffic signals from a few observed points and then adjust the signal control accordingly. Through numerical experiments, we confirmed that the proposed framework can reduce the average travel time by 20.6% compared with fixed-time signal control, even though the Web cameras are located 500 m away from the intersections.
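A hedged sketch of the kind of adjustment such a framework could perform: given traffic volumes estimated from camera counts at a few observation points, split the green time of a fixed cycle in proportion to the competing approach volumes. This simple proportional rule and all parameter names are assumptions for illustration, not necessarily the control strategy used in the paper.

```python
def green_split(estimated_volumes, cycle_s=90.0, lost_time_s=10.0, min_green_s=7.0):
    """Allocate the green time of one cycle across signal phases in
    proportion to the estimated traffic volume served by each phase."""
    usable = cycle_s - lost_time_s
    total = sum(estimated_volumes)
    if total == 0:
        return [usable / len(estimated_volumes)] * len(estimated_volumes)
    greens = [max(min_green_s, usable * v / total) for v in estimated_volumes]
    # Rescale so the greens still fit inside the usable cycle time.
    scale = usable / sum(greens)
    return [g * scale for g in greens]

# Toy usage: volumes (veh/h) estimated from Web-camera counts upstream.
print(green_split([600.0, 300.0, 150.0]))
```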
Agent-based simulations are indisputably effective for analyzing complex processes such as traffic patterns and social systems. However, human experts often face the challenge of repeating a simulation many times to evaluate a large variety of scenarios. To reduce the computational burden, we propose an approach for inferring the end results in the middle of a simulation. For each simulated scenario, we design a feature that compactly aggregates the agents' states over time. Given a sufficient number of such features, we show how to accurately predict the end results without fully running the simulations. Our experiments with traffic simulations confirmed that our approach achieves higher accuracy than existing simulation metamodeling approaches that use only the inputs and outputs of the simulations. Our results imply that one can quickly evaluate all scenarios by performing full simulations on only a fraction of them and partial simulations on the rest.
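A minimal sketch of the idea (illustrative; the features, the synthetic data, and the choice of regressor are assumptions, not the paper's exact design): summarize the agents' states observed up to the midpoint of each simulated scenario into a fixed-length feature vector, then fit an off-the-shelf regressor that maps these features to the end result.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def midrun_features(agent_states):
    """Aggregate a (time, agent) array of states observed so far into a
    compact, fixed-length feature vector (level, dispersion, trend)."""
    mean_over_agents = agent_states.mean(axis=1)
    return np.array([
        agent_states.mean(),                          # overall level
        agent_states.std(),                           # overall dispersion
        mean_over_agents[-1],                         # latest snapshot
        mean_over_agents[-1] - mean_over_agents[0],   # trend so far
    ])

# Toy usage with synthetic "simulations": each scenario produces a
# (time steps, agents) state matrix; the end result here is the final mean.
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(200):
    drift = rng.normal(0, 0.1)
    states = np.cumsum(rng.normal(drift, 1.0, size=(100, 50)), axis=0)
    X.append(midrun_features(states[:50]))   # features from the first half only
    y.append(states[-1].mean())              # "end result" of the full run
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(model.score(X[:20], y[:20]))           # in-sample sanity check
```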
Every trajectory is generated with an origin and a destination. Origin-destination (OD) generation for trips plays an important role in many applications such as trajectory mining, traffic simulation, and marketing. In previous work on traffic pattern recognition, microscopic ODs for limited areas have been estimated from probe-car data, while macroscopic ODs for broad areas are usually generated from road-traffic-census data. In this paper, we propose a microscopic OD determination method for broad areas that uses the same probe-car data together with landmark information and is based on L1-regularized Poisson regression. We demonstrate performance improvements over baseline methods in numerical experiments with a massive data set from Tokyo.
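A minimal sketch of L1-regularized Poisson regression fitted by proximal gradient descent, the model family named above (illustrative only; how features are built from probe-car trips and landmark information is application-specific and not shown, and the toy data below are synthetic).

```python
import numpy as np

def fit_l1_poisson(X, y, lam=0.1, lr=0.05, n_iter=5000):
    """Minimize mean_i(exp(x_i.w) - y_i * x_i.w) + lam * ||w||_1 by gradient
    steps on the Poisson negative log-likelihood followed by
    soft-thresholding (the proximal operator of the L1 penalty)."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    n = len(y)
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ w)                 # predicted counts
        grad = X.T @ (mu - y) / n          # gradient of the mean NLL
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w

# Toy usage: counts generated from a sparse coefficient vector, mimicking
# OD counts driven by a few landmark-related features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
w_true = np.array([1.0, -0.5, 0, 0, 0, 0, 0, 0.8, 0, 0])
y = rng.poisson(np.exp(X @ w_true))
print(np.round(fit_l1_poisson(X, y), 2))   # irrelevant coefficients shrink toward zero
```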