Lunch Talk: Multiple Observations and Goodness of Fit in Generalized Inverse Optimization

The University of Toronto Operations Research Group (UTORG) is hosting a lunch talk by Rafid Mahmood. The talk is entitled “Multiple Observations and Goodness of Fit in Generalized Inverse Optimization”. Lunch and coffee will be provided. Hope to see you there!

When: Wednesday, July 4th @ 12:00pm – 1:00pm

Where: MB101

Bio-sketch: Rafid Mahmood received his B.A.Sc. and M.A.Sc. degrees in Electrical and Computer Engineering from the University of Toronto in 2013 and 2015, respectively. He is currently pursuing his Ph.D. in Mechanical & Industrial Engineering at the University of Toronto. His research interests lie at the intersection of information theory, optimization, and deep learning, with applications in multimedia streaming, health care, and sports analytics.

Abstract:  Inverse optimization is the practice of using observed decisions to model a latent optimization problem. This work develops a generalized inverse linear optimization framework for imputing objective function parameters given a data set containing both feasible and infeasible points. We devise assumption-free, exact solution methods to solve the inverse problem; under mild assumptions, we show that these methods can be made more efficient. We extend a goodness-of-fit metric previously introduced for the problem with a single observed decision to this new setting, proving and numerically illustrating several important properties.
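For readers unfamiliar with inverse optimization, the following is a minimal sketch of the basic idea, not the generalized framework of the talk (which also handles infeasible observations). Given feasible observed decisions and a known feasible region {x : Ax ≥ b}, it imputes a cost vector c by minimizing the total duality gap; the toy data, the gap-based loss, and the normalization sum(y) = 1 are illustrative assumptions.

```python
# Minimal inverse-LP sketch (illustrative only, not the paper's method):
# impute a cost vector c for the forward problem min c'x s.t. Ax >= b
# so that the observed decisions are as close to optimal as possible.
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # toy feasible set Ax >= b
b = np.array([0.0, 0.0, 1.0])
X_obs = np.array([[0.6, 0.5], [0.2, 0.9]])            # observed (feasible) decisions, one per row

m, n = A.shape
c = cp.Variable(n)             # imputed objective vector
y = cp.Variable(m)             # dual variables of the forward LP
eps = cp.Variable(len(X_obs))  # per-observation optimality gap

constraints = [A.T @ y == c, y >= 0, cp.sum(y) == 1, eps >= 0]
for i, x in enumerate(X_obs):
    # weak duality: b'y lower-bounds the forward optimal value whenever y is dual feasible,
    # so x'c - b'y upper-bounds the suboptimality of observation x
    constraints.append(x @ c - b @ y <= eps[i])

prob = cp.Problem(cp.Minimize(cp.sum(eps)), constraints)
prob.solve()
print("imputed c:", c.value, "total gap:", prob.value)
```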

Meet with Professor John N. Tsitsiklis

The University of Toronto Operations Research Group (UTORG) is hosting a meet and greet with Professor John N. Tsitsiklis, a renowned operations research expert from MIT. Refreshments will be served. We hope to see you there!

Who: Professor John N. Tsitsiklis


When: Wednesday, June 27th @ 11:00am – 12:00pm

Where: MB101

Bio-Sketch: John N. Tsitsiklis was born in Thessaloniki, Greece, in 1958. He received the B.S. degree in Mathematics (1980), and the B.S. (1980), M.S. (1981), and Ph.D. (1984) degrees in Electrical Engineering, all from the Massachusetts Institute of Technology, Cambridge, Massachusetts, U.S.A.

During the academic year 1983-84, he was an acting assistant professor of Electrical Engineering at Stanford University, Stanford, California. Since 1984, he has been with the Department of Electrical Engineering and Computer Science (EECS) at the Massachusetts Institute of Technology (MIT), where he is currently the Clarence J. Lebel Professor of Electrical Engineering.

After serving as acting co-director (Spring 1996 and 1997) and co-associate director (2008-2013), he is now the director of the Laboratory for Information and Decision Systems (LIDS). He has also served as a co-director of the Operations Research Center (ORC) (2002-2005), and as a member of the National Council on Research and Technology in Greece (2005-2007) and the associated Sectoral Research Council on Informatics (2011-2013). Finally, he served (2013-2016) as Chair of the Council of Harokopio University in Greece.

His research interests are in the fields of systems, optimization, control, and operations research. He is a coauthor of Parallel and Distributed Computation: Numerical Methods (1989, with D. Bertsekas), Neuro-Dynamic Programming (1996, with D. Bertsekas), Introduction to Linear Optimization (1997, with D. Bertsimas), and Introduction to Probability (1st ed. 2002, 2nd. ed. 2008, with D. Bertsekas). He is also a coinventor in seven awarded U.S. patents.

He has been a recipient of an IBM Faculty Development Award (1983), an NSF Presidential Young Investigator Award (1986), an Outstanding Paper Award by the IEEE Control Systems Society (1986), the M.I.T. Edgerton Faculty Achievement Award (1989), the Bodossaki Foundation Prize (1995), the MIT/EECS Louis D. Smullin Award for Teaching Excellence (2015), a co-recipient of two INFORMS Computing Society prizes (1997, 2012), a co-recipient of an ACM Sigmetrics Best Paper Award (2013), and a recipient of the ACM Sigmetrics Achievement Award (2016). He is a Fellow of the IEEE (1999) and of INFORMS (2007). In 2007, he was elected to the National Academy of Engineering. In 2008, he was conferred the title of Doctor honoris causa from the Université catholique de Louvain (Belgium).

Lunch Talk: On the symbiosis between operations research and process mining: the story of automated model simplification

Who: Arik Senderovich, Lyon Sachs postdoctoral fellow, University of Toronto


When: Thursday, June 21st @ 12:00pm – 1:00pm

Where: BA8256

Abstract: Process mining is a rapidly evolving research field that aims to discover process models, such as queueing networks and stochastic Petri nets, from transactional data. On the one hand, process mining creates models that can be used for operational analysis (e.g., staffing and wait-time estimation). On the other hand, operations research methods can be used to improve process discovery.

This talk will focus mainly on the interrelations between process mining and operations research through the story of automated model simplification. We will demonstrate how queueing theory and combinatorial optimization can be applied to improve the quality of discovered process models.
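As a toy illustration of the queueing-theory side (our example, not the speaker's material), the sketch below estimates arrival and service rates from a tiny event log and then estimates the mean wait of an M/M/c queue via the Erlang-C formula; the log format and the stationarity/Markovian assumptions are ours.

```python
# Illustrative sketch: from a toy transactional log to an M/M/c wait estimate.
import math

# each record: (arrival_time, service_start, service_end), in minutes
log = [(0.0, 0.0, 4.0), (1.5, 1.5, 6.0), (3.0, 4.0, 9.0),
       (5.0, 6.0, 10.0), (8.0, 9.0, 13.0)]

# rough rate estimates over the observation window
horizon = max(end for _, _, end in log) - min(arr for arr, _, _ in log)
lam = len(log) / horizon                                   # arrivals per minute
mu = len(log) / sum(end - start for _, start, end in log)  # services per minute per server

def erlang_c_wait(lam, mu, servers):
    """Mean queueing delay E[Wq] for an M/M/c queue (requires lam < servers * mu)."""
    a = lam / mu  # offered load
    base = sum(a**k / math.factorial(k) for k in range(servers))
    tail = a**servers / math.factorial(servers) * servers / (servers - a)
    p_wait = tail / (base + tail)          # probability an arrival must wait
    return p_wait / (servers * mu - lam)   # mean wait in queue

for c in (1, 2, 3):
    if lam < c * mu:
        print(f"{c} servers -> estimated mean wait {erlang_c_wait(lam, mu, c):.2f} min")
```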


Lunch Talk: Optimal Dynamic Portfolio Liquidation with Lower Partial Moments

Who: Hassan Anis, M.A.Sc Candidate, University of Toronto

When: Wednesday, December 6th @ 12:00pm – 1:00pm

Where: RS207

Abstract: One of the most important problems faced by stock traders is how to execute large block orders of security shares. When liquidating a large position, the trader faces the following dilemma: trading slowly risks prices drifting away from the current quote, while trading quickly pushes quotes away from the current level, leading to a large market impact. We propose a novel quasi-multi-period model for optimal position liquidation in the presence of both temporary and permanent market impact. Four features distinguish the proposed approach from alternatives. First, instead of the common stylized approach of modelling the problem as a dynamic program with static trading rates, we frame the problem as a stochastic second-order cone program (SOCP) that uses a collection of sample paths to represent possible future realizations of the state variables; this, in turn, is used to construct trading strategies whose decisions depend on the observed market conditions. Second, our trading horizon is a single day divided into multiple intraday periods, allowing us to exploit seasonal intraday patterns in the optimization. This paper is the first to apply Engle's multiplicative component GARCH to estimate and update intraday volatilities in a trading strategy. Third, we implement a shrinking-horizon framework that updates intraday parameters with new incoming information while maintaining standard non-anticipativity constraints. We construct a model in which the trader uses observations of intraday price evolution to continuously update the size of future trade orders, and is thus able to adapt trading decisions to changing market conditions. Finally, we use asymmetric measures of risk which, unlike symmetric measures such as variance, capture the fact that investors are usually not averse to deviations from the expected target when those deviations are to their advantage.
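To make the last point concrete, here is a small illustrative sketch (ours, not the speaker's model) of a lower partial moment computed over simulated liquidation proceeds; unlike variance, it penalizes only shortfalls below a chosen target.

```python
# Lower partial moment vs. variance on simulated liquidation proceeds (toy example).
import numpy as np

def lower_partial_moment(outcomes, target=0.0, order=2):
    """LPM_order(target) = E[ max(target - X, 0) ** order ]: penalizes only downside."""
    shortfall = np.maximum(target - np.asarray(outcomes), 0.0)
    return np.mean(shortfall ** order)

rng = np.random.default_rng(0)
proceeds = rng.normal(loc=1.0, scale=0.05, size=10_000)  # simulated per-share proceeds

print("variance           :", np.var(proceeds))                              # symmetric risk
print("LPM2 (target = 1.0):", lower_partial_moment(proceeds, 1.0, order=2))  # downside-only risk
```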

Lunch Talk: Nonlinear Hybrid Planning with Deep Net Learned Transition Models and Mixed-Integer Linear Programming

Who: Buser Say, Ph.D. Candidate, University of Toronto

When: Wednesday, November 22nd @ 12:00pm – 1:00pm

Where: BA3008

Abstract: In many real-world hybrid (mixed discrete-continuous) planning problems such as Reservoir Control, Heating, Ventilation and Air Conditioning (HVAC), and Navigation, it is difficult to obtain a model of the complex nonlinear dynamics that govern state evolution. However, the ubiquity of modern sensors allows us to collect large quantities of data from each of these complex systems and build accurate, nonlinear deep network models of their state transitions. But one major problem remains for the task of control: how can we plan with deep network learned transition models without resorting to Monte Carlo Tree Search and other black-box transition model techniques that ignore model structure and do not easily extend to mixed discrete and continuous domains? In this paper, we make the critical observation that the popular Rectified Linear Unit (ReLU) transfer function for deep networks not only allows accurate nonlinear deep net model learning, but also permits a direct compilation of the deep network transition model to a Mixed-Integer Linear Program (MILP) encoding in a planner we call Hybrid Deep MILP Planning (HD-MILP-PLAN). We identify deep-net-specific optimizations and a simple sparsification method for HD-MILP-PLAN that improve performance over a naive encoding, and show that we are able to plan optimally with respect to the learned deep network.
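As background for the compilation step, the sketch below shows the standard big-M MILP encoding of a single ReLU unit y = max(0, w'x + b) using PuLP; HD-MILP-PLAN composes such encodings over an entire learned network and planning horizon, but the toy weights, bounds, and objective here are our own illustration.

```python
# Big-M MILP encoding of one ReLU unit y = max(0, w'x + b) -- illustrative sketch only.
import pulp

w, b, M = [1.0, -2.0], 0.5, 100.0        # toy weights and a valid big-M bound on |w'x + b|

prob = pulp.LpProblem("relu_unit", pulp.LpMinimize)
x = [pulp.LpVariable(f"x{i}", lowBound=-10, upBound=10) for i in range(2)]
y = pulp.LpVariable("y", lowBound=0)      # post-activation output, y >= 0
z = pulp.LpVariable("z", cat="Binary")    # z = 1 iff the unit is active (w'x + b >= 0)

pre = pulp.lpSum(w[i] * x[i] for i in range(2)) + b   # pre-activation w'x + b
prob += y >= pre                  # y is at least the pre-activation
prob += y <= pre + M * (1 - z)    # when z = 1, y is forced down to the pre-activation
prob += y <= M * z                # when z = 0, y is forced to 0

prob += pulp.lpSum([y])           # toy objective: minimize the activation value
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("x =", [v.value() for v in x], "y =", y.value(), "z =", z.value())
```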