Submodular optimization has been receiving a lot of attention in machine learning recently. It's a nice way to generalize convex optimization to sets, and has the nice property that a very simple greedy algorithm is: 1) pretty close to optimal and 2) pretty much the best one can do.

If $\Omega$ is a finite set, a set function $f : 2^\Omega \to \mathbb{R}$ is submodular if it satisfies one of the following equivalent conditions:

1. If $A, B \subseteq \Omega$, then $f(A) + f(B) \ge f(A \cup B) + f(A \cap B)$.

2. If $A \subseteq B \subseteq \Omega$ and $x \in \Omega \setminus B$, then $f(A \cup \{x\}) - f(A) \ge f(B \cup \{x\}) - f(B)$.

These conditions capture the intuition of diminishing returns: adding an element to a smaller set helps at least as much as adding it to a larger one.

We call a submodular function $f$ monotone if $f(A) \le f(B)$ whenever $A \subseteq B$. Monotone submodular optimization includes problems such as Max Cover, which says: given a collection of sets $S_1, \dots, S_m$ and an integer $k$, find the $k$ sets whose union is largest. Max Cover is NP-hard. What's very nice is that any monotone submodular maximization under a cardinality constraint can be approximated to within $1 - 1/e$ by a simple greedy algorithm.
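To make the Max Cover connection concrete, here's a small sketch (the instance data and the name `coverage` are my own, just for illustration): the coverage function $f(S) = |\bigcup_{i \in S} S_i|$ is the classic monotone submodular function, and we can brute-force-check the diminishing-returns condition on a toy instance.

```python
import itertools

def coverage(collection, indices):
    """f(S) = size of the union of the chosen sets."""
    covered = set()
    for i in indices:
        covered |= collection[i]
    return len(covered)

# A toy instance (hypothetical data, just for illustration).
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
ground = range(len(sets))

def subsets(elems):
    """All subsets of `elems`, as sets."""
    elems = sorted(elems)
    return (set(c) for r in range(len(elems) + 1)
            for c in itertools.combinations(elems, r))

# Check condition 2: for A ⊆ B and x ∉ B,
# f(A ∪ {x}) - f(A) >= f(B ∪ {x}) - f(B).
for B in subsets(ground):
    for A in subsets(B):
        for x in set(ground) - B:
            gain_A = coverage(sets, A | {x}) - coverage(sets, A)
            gain_B = coverage(sets, B | {x}) - coverage(sets, B)
            assert gain_A >= gain_B
print("coverage is submodular on this instance")
```

(Of course, coverage functions are submodular on every instance; the brute-force check is just to make the inequality tangible.)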

The Greedy algorithm works in iterations. In the first round, find the single element $x_1$ to maximize $f(\{x_1\})$. Next, given $S_{i-1} = \{x_1, \dots, x_{i-1}\}$, find $x_i$ to maximize $f(S_{i-1} \cup \{x_i\})$, and so on for $k$ rounds. It's actually pretty easy to show by induction that this gives a $1 - (1 - 1/k)^k \ge 1 - 1/e$ approximation to the optimal solution for any fixed $k$.
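The procedure above is short enough to write out. Here's a minimal sketch (the function name `greedy_max` and the instance data are my own), run on a toy Max Cover instance with the coverage objective as $f$:

```python
def greedy_max(f, ground, k):
    """Greedily pick k elements: each round, add the element with the
    largest marginal gain f(S ∪ {x}) - f(S)."""
    S = set()
    for _ in range(k):
        candidates = [x for x in ground if x not in S]
        if not candidates:
            break
        best = max(candidates, key=lambda x: f(S | {x}) - f(S))
        S.add(best)
    return S

# Max Cover: pick k sets whose union is largest.
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
# f(S) = size of the union of the chosen sets; union() with no
# arguments returns the empty set, so f(∅) = 0 works out.
f = lambda S: len(set().union(*(sets[i] for i in S)))

chosen = greedy_max(f, range(len(sets)), k=2)
print(chosen, f(chosen))
```

Note each round re-evaluates $f$ on every candidate, so this is $O(k \cdot |\Omega|)$ calls to $f$; lazy evaluation tricks can cut this down in practice, but the approximation guarantee needs nothing beyond the plain version above.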