Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
419483 | Discrete Applied Mathematics | 2011 | 13 Pages | |
Abstract
We give an algorithm that with high probability properly learns random monotone DNF with t(n) terms of length ≈ log t(n) under the uniform distribution on the Boolean cube {0,1}^n. For any function t(n) ≤ poly(n), the algorithm runs in time poly(n, 1/ϵ) and with high probability outputs an ϵ-accurate monotone DNF hypothesis. This is the first algorithm that can learn monotone DNF of arbitrary polynomial size in a reasonable average-case model of learning from random examples only. Our approach relies on the discovery and application of new Fourier properties of monotone functions, which may be of independent interest.
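To make the learning model concrete, here is a minimal Python sketch, purely illustrative and not the paper's algorithm: it samples a random monotone DNF with t terms of length ≈ log t (the target class described above) and estimates the disagreement probability Pr[f(x) ≠ h(x)] under the uniform distribution, the quantity an ϵ-accurate hypothesis must keep at most ϵ. The helper names (random_monotone_dnf, eval_dnf, empirical_error) and the specific sampling choices are assumptions made for illustration.

```python
import math
import random

def random_monotone_dnf(n, t, term_len):
    """Sample a monotone DNF over x_1..x_n: t terms, each a conjunction
    of term_len distinct variables chosen uniformly at random (no negations)."""
    return [random.sample(range(n), term_len) for _ in range(t)]

def eval_dnf(dnf, x):
    """Evaluate a monotone DNF (list of terms, each a list of variable
    indices) on a Boolean assignment x in {0,1}^n."""
    return any(all(x[i] for i in term) for term in dnf)

def empirical_error(f_dnf, h_dnf, n, samples=10_000):
    """Estimate Pr_{x uniform on {0,1}^n}[f(x) != h(x)]; an eps-accurate
    hypothesis must keep this probability at most eps."""
    mismatches = 0
    for _ in range(samples):
        x = [random.getrandbits(1) for _ in range(n)]
        mismatches += eval_dnf(f_dnf, x) != eval_dnf(h_dnf, x)
    return mismatches / samples

if __name__ == "__main__":
    n, t = 100, 1000                  # t(n) terms, polynomial in n
    term_len = round(math.log2(t))    # term length ~ log t(n)
    target = random_monotone_dnf(n, t, term_len)
    # A trivial constant-1 hypothesis (one empty term), just to show how
    # accuracy is measured; the paper's learner outputs a genuinely
    # accurate monotone DNF hypothesis instead.
    hypothesis = [[]]
    print(empirical_error(target, hypothesis, n))
```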
Related Topics
Physical Sciences and Engineering
Computer Science
Computational Theory and Mathematics
Authors
Jeffrey C. Jackson, Homin K. Lee, Rocco A. Servedio, Andrew Wan