This Tuesday, the 16th, at 2 p.m., in room 2077 of the Institute of Exact Sciences (ICEx) at UFMG, Professor Jakob Nordström, of the Universities of Copenhagen and Lund, will give the talk “Combinatorial solving with provably correct results”. The talk is open to everyone interested, and no prior registration is required. Below, in English, are the abstract of the talk and the professor’s biography.
ABSTRACT
Since the turn of the millennium there have been dramatic improvements in algorithms for combinatorial solving and optimization. The flip side of this is that as solvers get more and more sophisticated, it becomes harder to avoid bugs sneaking in during algorithm design and implementation, and it is well documented that even the most mature tools sometimes return incorrect results. Software testing, while important, has not been sufficient to resolve this problem, and formal verification methods are far from being able to scale to the level of complexity of modern combinatorial solvers. During the last ten years, the Boolean satisfiability (SAT) community has instead successfully introduced proof logging, meaning that algorithms have to output, along with the answer to a problem, a machine-verifiable proof that this answer is correct. This success, however, has not been replicated in stronger combinatorial optimization paradigms.
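[Editors’ note: to make the idea of proof logging concrete, here is a minimal sketch in the SAT setting; it is our illustration, not part of the speaker’s abstract. A solver refuting the unsatisfiable formula (x1 OR x2) AND (x1 OR NOT x2) AND (NOT x1 OR x2) AND (NOT x1 OR NOT x2), given below in the standard DIMACS CNF encoding, can emit a refutation in the widely used DRAT proof format, which an independent checker such as drat-trim can then verify by unit propagation.]

    c input formula in DIMACS CNF: 2 variables, 4 clauses;
    c positive numbers are variables, negative their negations,
    c and 0 ends each clause
    p cnf 2 4
    1 2 0
    1 -2 0
    -1 2 0
    -1 -2 0

    c DRAT proof: each line is a clause whose correctness the
    c checker confirms by unit propagation; the final empty
    c clause "0" certifies unsatisfiability
    1 0
    0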
In the last few years, this has started to change with the introduction of so-called pseudo-Boolean proof logging using 0-1 integer linear constraints. In this talk, we will illustrate how this approach seems to hit a sweet spot between simplicity and ease of verification on the one hand and expressive power on the other. We will try to provide a glimpse of how pseudo-Boolean reasoning has enabled proof logging not only for previously unsupported advanced SAT techniques (including symmetry breaking) but also for SAT-based optimization (MaxSAT solving), subgraph solving, constraint programming, and, most recently, presolving techniques in 0-1 integer linear programming (ILP). Perhaps the most striking feature of this method is that proofs can be written in a simple proof format, referred to as “cutting planes with strengthening rules”, that knows nothing about symmetries, groups, graphs, integer-valued variables, or global constraints.
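[Editors’ note: as a small illustration of the pseudo-Boolean formalism, again our example rather than one taken from the abstract: constraints are 0-1 integer linear inequalities over literals, where ~x denotes the negation 1 - x, and the cutting planes rules include adding two constraints and dividing a constraint by a positive integer while rounding up. For instance, a solver that fixes x1 to true could justify this with the derivation below.]

    x1 +  x2 >= 1             (premise)
    x1 + ~x2 >= 1             (premise)
    2 x1 + x2 + ~x2 >= 2      (addition of the two premises)
    2 x1 >= 1                 (simplify, using x2 + ~x2 = 1)
    x1 >= 1                   (divide by 2 and round up)

The last step is sound precisely because x1 only takes the integer values 0 or 1, so 2 x1 >= 1 already forces x1 = 1.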
BIO
I am a professor at the Department of Computer Science at the University of Copenhagen, Denmark, and I also have a part-time affiliation with the Department of Computer Science at Lund University.
Computers are everywhere today—at work, in our cars, in our living rooms, and even in our pockets—and have changed the world beyond our wildest imagination. Yet these marvellous devices are, at their core, amazingly simple and stupid: all they can do is mechanically shuffle around zeros and ones. What is the true potential of such automated computational devices? And what are the limits of what can be done by mindless calculations? Finding answers to these kinds of questions is ultimately what my research is about.
Computational complexity theory gives these deep and fascinating philosophical questions a crisp mathematical meaning. A computational problem is any task that is in principle amenable to being solved by a computer—i.e., it can be worked out by the mechanical application of mathematical steps. By constructing general, abstract models of computers, we can study how to design efficient methods, or algorithms, for solving different tasks, but also prove mathematical theorems showing that some computational problems simply cannot be solved efficiently, for inherent reasons (meaning that it is impossible to design algorithms for them that are as efficient as we would like).
I am particularly interested in understanding combinatorial optimization problems, which are of fundamental mathematical importance but also have wide-ranging applications in industry. My goal is, on the one hand, to prove formally that many such problems are beyond the reach of current algorithmic techniques, and, on the other hand, to develop new algorithms with the potential to go significantly beyond the current state of the art. In the last few years, I have also been doing research on how complexity theory can be harnessed to produce certificates that algorithms are actually computing correct results. It is an open secret in combinatorial optimization that even the most mature optimization tools in academia and industry sometimes produce wrong answers, but there has been no truly principled way of addressing this problem. Our work has started to change this state of affairs.