# Learning monotone Boolean functions

Avrim Blum, Carl Burch, and John Langford

- 1-page nontechnical abstract (gzipped PostScript), CMU Immigration Course Symposium, 1998
- extended abstract (PostScript, gzipped PostScript, PDF), FOCS '98, pp. 408-415, © 1998 IEEE
- talk slides (gzipped PostScript), FOCS '98

### Abstract

We consider the problem of learning monotone Boolean functions over
{0,1}^{n} under the uniform distribution.
Specifically, given a polynomial number of uniformly random labeled examples of an
unknown monotone Boolean function `f`, and given polynomial
computing time, we would like to approximate `f` as well as
possible. We describe a simple algorithm that we prove achieves error
at most 1/2 - Omega(1/sqrt(`n`)), improving on the previous
best bound of 1/2 - Omega((log^{2} `n`)/`n`).
We also prove that no algorithm, given a polynomial number of samples,
can guarantee error 1/2 -
omega((log `n`)/sqrt(`n`)), improving on the previous
best hardness bound of O(1/sqrt(`n`)). These lower bounds hold
even if the learning algorithm is allowed membership queries. Thus this
paper settles to an O(log `n`) factor the question of the best
achievable error bound for learning the class of monotone Boolean
functions with respect to the uniform distribution.
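The abstract does not spell out the algorithm itself, but the general shape of such a weak learner can be sketched: among a small set of trivially simple hypotheses, output the one with the best empirical agreement on the uniform samples. The candidate set below (the two constant functions and the majority function) is chosen for illustration and is an assumption, not taken from this page.

```python
import random

def majority(x):
    # Predict 1 iff more than half of the input bits are set.
    return 1 if sum(x) * 2 > len(x) else 0

def learn_monotone(samples):
    """Return the (name, hypothesis) pair with the best empirical
    agreement on the given (x, y) samples.

    Candidate set is illustrative only: constant 0, constant 1,
    and the majority function."""
    candidates = [
        ("const0", lambda x: 0),
        ("const1", lambda x: 1),
        ("majority", majority),
    ]
    def agreement(h):
        return sum(h(x) == y for x, y in samples)
    return max(candidates, key=lambda c: agreement(c[1]))

# Toy target: the monotone "dictator" function f(x) = x[0].
n = 15
rng = random.Random(0)
samples = [
    (x, x[0])
    for x in ([rng.randint(0, 1) for _ in range(n)] for _ in range(2000))
]
name, h = learn_monotone(samples)
errors = sum(h(x) != y for x, y in samples)
```

Note that the two constant hypotheses together already guarantee empirical error at most 1/2 (their agreements sum to the sample count), so the selected hypothesis can never do worse than that; any extra advantage comes from the majority candidate.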
