Article ID: 471132
Journal: Computer Science Review
Published Year: 2011
Pages: 39 Pages
File Type: PDF
Abstract

One approach to confronting computational hardness is to try to understand the contribution of various parameters to the running time of algorithms and the complexity of computational tasks. Almost no computational tasks in real life are specified by their size alone. It is not hard to imagine that some parameters contribute more intractability than others, and it seems reasonable to develop a theory of computational complexity which seeks to exploit this fact. Such a theory should be able to address the needs of practitioners in algorithmics. The last twenty years have seen the development of such a theory. This theory has had a large number of successes in terms of a rich collection of algorithmic techniques, both practical and theoretical, and a fine-grained intractability theory. Whilst the theory has been widely used in a number of application areas, including computational biology, linguistics, VLSI design, learning theory and many others, knowledge of the area is highly varied. We hope that this article will present the basic theory and point to the wide array of techniques available. Naturally the treatment is condensed, and the reader who wants more should go to the texts of Downey and Fellows (1999) [2], Flum and Grohe (2006) [59], Niedermeier (2006) [28], and the upcoming undergraduate text of Downey and Fellows (2012) [278].

Related Topics
Physical Sciences and Engineering › Computer Science › Computer Science (General)
Authors