In parameterized algorithmics, a kernelization is a polynomial-time algorithm that transforms an instance of a given problem into an equivalent instance whose size is bounded by a function of the parameter. Since this smaller instance can afterwards be solved to answer the original question, kernelization is often presented as a form of preprocessing. A natural generalization allows a number of smaller instances to be produced, whose answers, possibly after negation, are combined into an answer to the original problem. This generalization is called Turing kernelization. Questions of equivalence immediately arise: when is one form possible but not the other? These have been long-standing open problems in parameterized complexity. In the present paper, we answer many of them. In particular, we show that Turing kernelization differs not only from regular kernelization, but also from intermediate forms such as truth-table kernelization. We obtain absolute results by diagonalization, as well as results on natural problems that depend on widely accepted complexity-theoretic assumptions. In particular, under these assumptions we improve on known lower bounds for the kernel size of compositional problems.
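
For concreteness, the notions in the abstract can be sketched formally as follows; this follows the usual parameterized-complexity conventions and is not taken verbatim from the paper (the alphabet $\Sigma$ and size bound $f$ are the standard assumptions). A kernelization of size $f$ for a parameterized problem $L \subseteq \Sigma^* \times \mathbb{N}$ is a polynomial-time computable map
\[
  K(x, k) = (x', k')
  \quad \text{with} \quad
  (x, k) \in L \iff (x', k') \in L
  \quad \text{and} \quad
  |x'| + k' \le f(k).
\]
A Turing kernelization of size $f$ is instead a polynomial-time algorithm that decides whether $(x, k) \in L$ given oracle access to $L$ restricted to instances $(x'', k'')$ with $|x''| + k'' \le f(k)$; a truth-table kernelization is the intermediate form in which all oracle queries must be produced non-adaptively, before any answers are received.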

Witteveen, J., Bottesch, R., & Torenvliet, L. (2019). A Hierarchy of Polynomial Kernels. In SOFSEM 2019: 45th International Conference on Current Trends in Theory and Practice of Computer Science, Lecture Notes in Computer Science. Springer. doi:10.1007/978-3-030-10801-4