Prophet inequalities provide an idealized setting in which to study online decision-making problems under stochastic input. The basic idea is to design good stopping rules for a given problem whose reward is, in expectation, close to what a prophet, who can foresee the whole instance of the problem in advance, can get. The area of prophet inequalities was very active in the probability theory community in the 70's and 80's. It then took a two-decade nap, until about 10 years ago, when interest was rekindled by its connections to posted price mechanisms and other problems. In this talk I will review some of these recent results and will discuss new variants of the problem in which the distributional information is replaced by the availability of samples taken from the input.
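To make the setting concrete, here is a minimal simulation sketch (a toy example of my own, not part of the talk) of the classic single-threshold stopping rule: accepting the first value that exceeds half the expected maximum guarantees, in expectation, at least half of the prophet's reward. The distributions below are arbitrary choices for illustration.

```python
import random

# Toy prophet-inequality simulation: n values drawn independently from known
# distributions; the gambler stops at the first value exceeding T = E[max]/2,
# which is known to earn at least half of the prophet's expected reward.
random.seed(0)
n, trials = 20, 50_000
scales = [1.0 + 0.5 * i for i in range(n)]           # hypothetical exponential means

def draw():
    return [random.expovariate(1.0 / s) for s in scales]

# Estimate E[max] by Monte Carlo, then set the threshold.
T = sum(max(draw()) for _ in range(trials)) / trials / 2

gambler = prophet = 0.0
for _ in range(trials):
    xs = draw()
    prophet += max(xs)
    gambler += next((x for x in xs if x >= T), 0.0)   # stop at first value >= T

print(f"gambler/prophet ratio ~ {gambler / prophet:.3f} (guaranteed >= 0.5)")
```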
The term Digital Humanities (DH for short), coined just about a dozen years ago, is used to describe efforts aimed at supporting research (and teaching) in the humanities (and the social sciences) by employing tools and methods from information technology, computer science and mathematics. DH is developing in many directions, but despite some hype and significant media coverage, DH is still in its infancy. The current DH situation resembles the early days of Operations Research (OR), also a field at the interface of many other disciplines, where – even after 30 years of theory development since the 1950s – it was still unclear whether OR could ever be utilized successfully in industry and society.
Standard statistical analysis has a long tradition in various branches of the humanities, and the usual IT tools are frequently employed as well. In my lecture I will address the question of how graph theory, optimization and other OR tools can support DH, i.e., which modeling approaches and solution techniques of Operations Research can be successfully employed in Digital Humanities. I will describe, in particular, some of the (still rudimentary) efforts made in this direction at the Berlin-Brandenburg Academy of Sciences and Humanities, with a special focus on linguistics, archeology, and the edition of manuscripts (of famous scientists such as Leibniz, Alexander von Humboldt, Kant, ...).
The Fifties are the "official" birth years of Combinatorial Optimization. Our tour starts with the first modern polynomial-time algorithm for the assignment problem, invented by Kuhn in 1955. It was christened the "Hungarian method" to acknowledge its origin in older results obtained by two Hungarian mathematicians: Dénes Kőnig in 1916 and Jenő Egerváry in 1931. We next show that a number of other famous algorithms of the Fifties (for spanning trees, shortest paths, maximum flows, ...) were independent re-discoveries of results obtained by central European mathematicians in the Thirties. We discuss the results and the complicated lives of Kőnig and Egerváry, as well as an old related result by Jacob Jacobi. We conclude our tour with a combinatorial optimization problem, independently defined in satellite communication and in scheduling theory, for which various modern authors discovered and re-discovered the same polynomial-time algorithm. We show that, again, this algorithm directly implements another result from the Thirties, which also implies the famous Birkhoff-von Neumann theorem.
A fundamental class of combinatorial optimization problems involves packing and covering. Examples include maximum independent set, dominating set, set cover, hitting set, etc. Motivated by applications in sensor networks and robotics, natural instances of such problems arise in geometric situations, such as covering points with a small number of disks, viewing "art galleries" with a small number of guards or with a mobile guard on a short route, packing disks or other shapes within a bounded domain, etc. Almost all of these problems are NP-hard even in simple two-dimensional settings. The problems get even harder when we take into account uncertain data, time constraints for scheduled coverage, and routing/connectivity requirements in combination with coverage constraints. We discuss several of these geometric optimization problems from the perspective of approximation algorithms and describe some techniques that have led to new or improved bounds or running times for selected packing, covering, and routing/coverage problems.
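As a baseline illustration of the covering framework (not one of the geometric techniques discussed in the talk), the sketch below runs the classical greedy heuristic for set cover, which repeatedly picks the set covering the most uncovered elements and gives an O(log n) approximation; the instance is made up.

```python
# Classical greedy heuristic for set cover on a tiny hypothetical instance.
def greedy_set_cover(universe, sets):
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # pick the set covering the largest number of still-uncovered elements
        best = max(sets, key=lambda s: len(uncovered & s))
        if not (uncovered & best):
            raise ValueError("infeasible instance: some element cannot be covered")
        chosen.append(best)
        uncovered -= best
    return chosen

universe = range(1, 10)
sets = [{1, 2, 3, 4}, {4, 5, 6}, {6, 7, 8, 9}, {1, 5, 9}, {2, 6, 7}]
print(greedy_set_cover(universe, sets))   # covers {1,...,9} with three sets
```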
The current standard practice for a data scientist confronted with a machine learning task on relational data is to issue a feature extraction query that extracts the (carefully curated) data from a relational database by joining together multiple tables to materialize a design matrix, and then to import this design matrix into some machine learning tool to train the model. This standard practice is wasteful because computing relational joins is computationally expensive. In particular, the resulting design matrix will likely contain much redundant information and be significantly larger than the original tables (exponentially larger in the worst case). Thus, conceptually, this standard practice seems unnecessarily inefficient.
I will discuss our nascent efforts to develop "relational algorithms" for common machine learning problems. Informally, a "relational algorithm" is one that works directly on the relational data, without forming the design matrix, and that is much faster than standard practice. Further, I will discuss our current understanding of which algorithmic problems can be efficiently solved when the input is in relational form.
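To give a flavor of the idea (a toy example of my own, not the algorithms of the talk), the sketch below computes simple aggregates over the join of two hypothetical tables R and S without ever materializing the join, by pushing the aggregation through the join key.

```python
from collections import Counter

# Toy "relational" computation: count the rows of R ⋈ S and sum a feature
# column over the join using only per-key statistics, in time O(|R| + |S|),
# instead of first materializing the (possibly much larger) join.
R = [("a1", "k1"), ("a2", "k1"), ("a3", "k2")]            # rows (feature_a, key)
S = [("k1", 1.0), ("k1", 2.0), ("k2", 5.0), ("k3", 9.0)]  # rows (key, feature_c)

count_S = Counter(k for k, _ in S)    # how many S-rows per join key
sum_c_S = Counter()                   # sum of feature_c per join key
for k, c in S:
    sum_c_S[k] += c

join_size = sum(count_S[k] for _, k in R)
sum_c_over_join = sum(sum_c_S[k] for _, k in R)

print(join_size)         # 5  (the materialized join would have 5 rows)
print(sum_c_over_join)   # 11.0
```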
The diameter of a polytope P is the maximum length of a shortest path between a pair of vertices on the 1-skeleton of P, which is the graph whose vertices correspond to the 0-dimensional faces of P and whose edges are given by the 1-dimensional faces of P. Despite decades of study, it is still not known whether the diameter of a d-dimensional polytope with n facets can be bounded by a polynomial function of n and d. This is a fundamental open question in discrete mathematics, motivated by the (still unknown) existence of a polynomial pivot rule for the Simplex method for solving Linear Programs. In this talk, I will discuss algorithmic and complexity results related to the diameter of polytopes, highlighting some important open questions.
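As a quick sanity check of the definition (not part of the talk): the 1-skeleton of the d-dimensional cube is the hypercube graph, whose diameter is exactly d; since the d-cube has n = 2d facets, this matches the Hirsch-type bound n - d. A short networkx snippet confirms it for d = 3.

```python
import networkx as nx

# The 1-skeleton of the 3-cube is the hypercube graph Q_3; its diameter is 3,
# which equals n - d = 6 - 3 for this polytope (n = number of facets).
d = 3
G = nx.hypercube_graph(d)
print(nx.diameter(G))   # 3
```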
Let F be a family of subsets of some universal set U, and let h be an integer ≥ 1. Say that F is h-intersecting when every h sets of F intersect. The core of F is the intersection of all sets of F, denoted core(F). The family F is h-Helly when every h-intersecting subfamily F' of it satisfies core(F') ≠ ∅. Finally, the Helly number of the family F is the least integer h such that F is h-Helly. The Helly number was introduced about 100 years ago. It has been studied for various families of subsets, with focus on its distinct aspects: algebraic, geometric, and algorithmic. The Helly number has applications to problems originating from several different areas. In this talk, we will describe results related to the determination of this parameter. We concentrate on families of vertices or edges of a graph. Special emphasis will be given to families of paths on a grid.
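As a toy companion to these definitions (not part of the talk), the brute-force sketch below computes the Helly number of a small family of integer intervals, for which Helly's classical theorem on the line predicts the value 2; here "every h sets" is read as "every at most h sets".

```python
from itertools import combinations

# Exponential-time brute-force check of the definitions, only meant to make
# the notions of h-intersecting, core and h-Helly concrete on a tiny family.
def core(family):
    it = iter(family)
    c = set(next(it))
    for s in it:
        c &= s
    return c

def is_h_helly(F, h):
    # every h-intersecting subfamily F' must have a non-empty core
    for r in range(1, len(F) + 1):
        for Fp in combinations(F, r):
            h_intersecting = all(core(sub) for sub in combinations(Fp, min(h, r)))
            if h_intersecting and not core(Fp):
                return False
    return True

def helly_number(F):
    return next(h for h in range(1, len(F) + 1) if is_h_helly(F, h))

F = [set(range(a, b + 1)) for a, b in [(0, 4), (2, 6), (3, 8), (5, 9)]]
print(helly_number(F))   # 2, as predicted by Helly's theorem in dimension 1
```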
The Vehicle Routing Problem (VRP) is among the most widely studied problems in the fields of operations research and combinatorial optimization. Its relevance stems from its direct application in the real-world systems that distribute goods and provide services, which are vital to modern economies. Reflecting the large variety of conditions present in those systems, the VRP literature is spread over dozens of variants. For example, there are variants that consider time windows, multiple depots, mixed vehicle fleets, split deliveries, pickups and deliveries, precedences, etc. The currently best exact VRP algorithms are based on the combination of column generation and cut separation, in so-called Branch-Cut-and-Price (BCP) algorithms. This talk surveys significant recent contributions by several authors. In particular, it presents the concept of cuts with limited memory (Pecin et al. 2014), a technique that represented a breakthrough on some of the most classical VRP variants, allowing the optimal solution of instances with up to a few hundred points. The talk also presents VRPSolver, a generic exact VRP framework that obtains state-of-the-art performance on dozens of variants.
The class of even-hole-free graphs (i.e., graphs that do not contain a chordless cycle of even length as an induced subgraph) has been studied since the 1990's, initially motivated by their structural similarity to perfect graphs. It is known, for example, that they can be decomposed by star cutsets and 2-joins into algorithmically well-understood subclasses, which has led to their polynomial-time recognition. Nevertheless, the complexity of a number of classical computational problems remains open for this class, such as the coloring and stable set problems.
In this talk we show that even-hole-free graphs with bounded degree have bounded treewidth (a result obtained in joint work with Abrishami and Chudnovsky), which implies that many algorithmic problems can be solved in polynomial time for this class.
Treewidth is a parameter that emerged from the study of minor-closed classes of graphs (i.e., classes closed under vertex and edge deletion, and edge contraction). In some sense it describes the global structure of a graph. Roughly, a graph has treewidth k if it can be decomposed by a sequence of noncrossing cutsets of size at most k into pieces of size at most k+1. The study of hereditary graph classes (i.e., those closed under vertex deletion only) reveals a different picture, where cutsets that are not necessarily bounded in size (such as star cutsets, 2-joins and their generalizations) are required to decompose the graph into simpler pieces that are structured but not necessarily bounded in size. A number of such decomposition theorems are known for complex hereditary graph classes, including even-hole-free graphs, perfect graphs and others. They do not describe the global structure in the way that a tree decomposition does, since the cutsets guaranteed by these decomposition theorems are far from being noncrossing. In the case of even-hole-free graphs of bounded degree we show how these cutsets can be partitioned into a bounded number of well-behaved collections, which allows us to bound the treewidth of such graphs.
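For readers who want to experiment with the parameter itself (this only illustrates treewidth and tree decompositions, not the proof technique of the talk), networkx ships an approximation heuristic that returns an upper bound on the treewidth together with a decomposition into bags.

```python
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

# Min-degree heuristic: returns (upper bound on treewidth, tree decomposition
# whose nodes are the bags). Cycles have treewidth 2 and K_n has treewidth n-1.
for G, name in [(nx.cycle_graph(6), "C_6"),
                (nx.complete_graph(5), "K_5"),
                (nx.grid_2d_graph(4, 4), "4x4 grid")]:
    width, decomposition = treewidth_min_degree(G)
    print(name, "treewidth upper bound:", width,
          "bags:", decomposition.number_of_nodes())
```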
The Four Color Theorem states that planar graphs are 4-colorable. Planar graphs are precisely the graphs that contain no subdivision of K5 or K3,3. Are graphs containing no subdivision of K5 also 4-colorable? This was conjectured by Hajós in the 1950s. We will discuss progress on this conjecture, as well as related problems about graph structure. We will also discuss problems about cycles in graphs that are partially motivated by the Four Color Theorem.
This minicourse is aimed at grad students in graph theory and/or combinatorial optimization, with little to no previous exposure to semidefinite programming. I intend to cover some basics of SDP formulations for discrete problems, e.g., the vector chromatic number.
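As a preview of the kind of formulation involved, here is a small cvxpy sketch (one possible formulation, not necessarily the one the course will use) of the Karger-Motwani-Sudan vector chromatic number on the 5-cycle: the smallest k for which there are unit vectors v_1, ..., v_n with <v_i, v_j> ≤ -1/(k-1) for every edge ij, written as an SDP over the Gram matrix of the vectors.

```python
import cvxpy as cp

# Vector chromatic number as an SDP: minimize alpha subject to M PSD,
# M_ii = 1 (unit vectors) and M_ij <= alpha for every edge ij; the vector
# chromatic number is then 1 - 1/alpha.
n = 5
edges = [(i, (i + 1) % n) for i in range(n)]   # the 5-cycle C_5 as a toy instance

M = cp.Variable((n, n), PSD=True)    # Gram matrix of the vectors v_i
alpha = cp.Variable()
constraints = [cp.diag(M) == 1]
constraints += [M[i, j] <= alpha for i, j in edges]

cp.Problem(cp.Minimize(alpha), constraints).solve()
print("vector chromatic number of C_5 ~", 1 - 1 / alpha.value)   # about sqrt(5) ~ 2.24
```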