Introduction

[Figure: random walk]

Greedy and Lagrangian-relaxation algorithms are derandomized randomized-rounding algorithms.

This blog explores sample-and-increment randomized-rounding schemes: how they can give better approximate solutions than standard randomized-rounding schemes do, and how derandomizing them (via the method of conditional probabilities) yields greedy and Lagrangian-relaxation algorithms in a systematic way.



Sample-and-increment randomized-rounding schemes

A typical sample-and-increment randomized-rounding scheme has the form below:

  input: Problem instance {\cal I}
  output: Approximate solution for {\cal I} (with positive probability)
1. Solve an LP to get a fractional solution x^*.
2. Initialize each \tilde x_j = 0.
3. For t = 1, 2, \ldots until some condition is met:
4.     Increment \tilde x_j, where j is randomly sampled from the distribution x^*/|x^*|.
5. Return \tilde x.

(Above, |x^*| = \sum_j x^*_j is the 1-norm of x^*.)
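
To make this concrete, here is a minimal Python sketch of the scheme, instantiated for set cover (the post's running example, discussed below). The dict-based instance encoding and the names sample_and_increment, sets, x_star, and universe are my own illustration, not from the post.

import random

def sample_and_increment(sets, x_star, universe):
    """Round a fractional set cover x_star to an integer cover.

    sets: dict mapping set name -> iterable of elements
    x_star: dict mapping set name -> fractional value x^*_j >= 0
    universe: the elements that must be covered
    """
    names = list(x_star)
    weights = [x_star[j] for j in names]   # sample j with probability x^*_j/|x^*|
    x_tilde = {j: 0 for j in names}        # step 2: initialize x~ = 0
    uncovered = set(universe)
    while uncovered:                       # step 3: stop once everything is covered
        j = random.choices(names, weights=weights)[0]
        x_tilde[j] += 1                    # step 4: increment x~_j
        uncovered -= set(sets[j])
    return x_tilde                         # step 5

Here the stopping condition of step 3 is instantiated as "until every element is covered"; for a feasible fractional cover x^*, the loop terminates with probability 1.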

We are interested in two properties of sample-and-increment schemes:

  1. They can give a better quality of approximation than conventional rounding schemes.

    For example, for weighted set cover, a sample-and-increment scheme yields a cover of expected cost at most {\rm H}(d) times the cost of the fractional cover, where d is the largest set size and {\rm H}(d) = 1 + 1/2 + \cdots + 1/d is the d-th harmonic number, matching the guarantee of Chvátal’s greedy algorithm.

  2. Derandomizing them using the standard method of conditional probabilities yields greedy and Lagrangian-relaxation algorithms.

Regarding the second point, to derive an algorithm, one first analyzes the rounding scheme to show a desired performance guarantee. A typical guarantee might have the following form:

Lemma.

For any instance {\cal I}, with positive probability, the rounding scheme returns a c-approximate solution \tilde x\ldots

The proof will use standard probabilistic methods (linearity of expectation, naive union bounds, Chernoff bounds, etc.).

Next, one applies the standard method of conditional probabilities to the existence proof. The resulting deterministic algorithm is guaranteed to reach a successful outcome, that is, to match the performance guarantee of the rounding scheme (but with probability 1):

Theorem (algorithm).

For any instance {\cal I}, the algorithm returns a c-approximate solution \tilde x\ldots
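
In outline (my paraphrase of the standard method, not the post's wording), the derandomization rests on a pessimistic estimator: a function \phi of the partial rounding \tilde x that upper-bounds the conditional probability of failure and satisfies three conditions:

  (i) initially, \phi < 1;
  (ii) at each step, some available choice of j keeps \phi from increasing;
  (iii) if the returned \tilde x is not a c-approximate solution, then \phi \ge 1.

The deterministic algorithm makes, at each step, the choice promised by (ii). Chaining (i)-(iii), the final \phi is below 1, so the returned solution must be c-approximate.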

Deriving a greedy or Lagrangian-relaxation algorithm

The goal in this setting is to obtain an algorithm of the following form:

  input: Problem instance {\cal I}
  output: Approximate solution (guaranteed)
1. Initialize each \tilde x_j = 0.
2. For t = 1, 2, \ldots until some condition is met:
3.     Increment \tilde x_j, where j is chosen so that \phi(\tilde x) does not increase.
4. Return \tilde x.

This form is atypical in that it skips the usual first step of solving the linear program. The trick is to find a pessimistic estimator \phi such that the choice of j in line 3 can be made without knowing the fractional solution x^*. Starting with a rounding scheme based on random sampling makes this possible. I call this kind of randomized rounding oblivious randomized rounding.
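
As an illustration, here is a Python sketch for weighted set cover, reusing the encoding from the earlier sketch plus a hypothetical costs dict. For the standard pessimistic estimator for this problem (not derived here), the choice in line 3 works out to Chvátal's greedy rule: pick the set covering the most uncovered elements per unit cost. Note that x^* appears nowhere.

def greedy_cover(sets, costs, universe):
    """Derandomized (oblivious) rounding for weighted set cover."""
    x_tilde = {j: 0 for j in sets}
    uncovered = set(universe)
    while uncovered:
        # greedy choice: most uncovered elements covered per unit cost
        j = max(sets, key=lambda s: len(uncovered & set(sets[s])) / costs[s])
        x_tilde[j] += 1
        uncovered -= set(sets[j])
    return x_tilde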

Many existing greedy/Lagrangian-relaxation algorithms can be derived this way. Prime examples are the greedy set-cover algorithm by Johnson [4], Lovász [5], and Chvátal [2], and the Lagrangian-relaxation algorithms of Plotkin, Shmoys, and Tardos [6] (comparable to algorithms by Grigoriadis and Khachiyan [3]). The latter compute (1+\varepsilon )-approximate solutions to fractional packing/covering LPs.

From this viewpoint, each algorithm follows from an existence proof “in a systematic way” [7] by applying the method of conditional probabilities. Standard probabilistic methods supply, and abstract away, the arguments needed to prove the algorithms’ performance guarantees (approximate primal-dual pairs, invariants, smooth penalty functions, etc.).
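
To give the flavor of the smooth-penalty ingredient, here is a generic multiplicative-weights sketch for approximately solving a matrix game with payoffs A[i][j] in [0,1]. It is my own minimal illustration, not the Plotkin-Shmoys-Tardos algorithm verbatim; the names mwu_game, eps, and T are assumptions. The row player (the minimizer) weights each row by an exponential penalty of its accumulated loss, while the column player best-responds each round:

import math

def mwu_game(A, eps, T):
    """Return the column player's empirical mixed strategy after T rounds."""
    m, n = len(A), len(A[0])
    w = [1.0] * m                 # row weights: the smooth (exponential) penalty
    counts = [0] * n
    for _ in range(T):
        total = sum(w)
        p = [wi / total for wi in w]          # row player's current mixed strategy
        # oracle step: column player's best response to p
        j = max(range(n), key=lambda c: sum(p[i] * A[i][c] for i in range(m)))
        counts[j] += 1
        # downweight rows that suffered a large loss against column j
        w = [w[i] * math.exp(-eps * A[i][j]) for i in range(m)]
    return [c / T for c in counts]

With eps of order \sqrt{(\ln m)/T}, standard regret bounds for this update imply the returned strategy is an additive O(\sqrt{(\ln m)/T})-approximate max-min strategy.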

In short, the “probabilistic lens” [1] helps to organize and standardize the technical ideas underlying the design and analyses of these algorithms.

The blog presents the underlying ideas, illustrated by examples. The appendices contain various probabilistic bounds, some of which will be new to most readers, and mini-tutorials on background concepts.

Neal Young

p.s. Blogging in LaTeX

I write these notes in LaTeX, then use plasTeX to translate them to HTML and MathJax. More on that here.


Comments

  1. Hi Neal,
    Very interesting work! Happy I got to read your paper. I have a quick question, though, about your generalized algorithm when applied to approximating the optimal solution of a game with sparse strategies. It is said that at each iteration the best-response pure strategy is added to S. However, the algorithm shows x, which is supposed to be a mixed strategy. Does the \(\epsilon\)-approximation still hold when pure strategies are selected at each iteration instead of mixed strategies? What happens when the oracle returns a pure strategy that is within \(\alpha\) of the best-response pure strategy?
    Thanks.

    - NL
    1. If I understand your question, the best response in each iteration should be a pure strategy, so in each iteration a pure strategy should be selected. If an \(\alpha\)-approximate strategy is selected as the response in each iteration (instead of the true best response), then the algorithm should ultimately generate a \((1+\epsilon)\alpha\)-approximate solution. Not sure if that answers your question.

      - Neal Post author
      1. Thanks for the quick answer. Actually it was mostly about selecting the best pure strategy vs the best mixed strategy at each iteration. I was wondering if that makes any difference… But then when I think about it, if x* is a best response mixed strategy for the opponent’s strategy y, then each pure strategy i in the support of x* is a pure best response with v(x*,y)=v(i,y). So the \(\epsilon\)-approximation holds for both.

        - NL
  2. Hi Neal,

    Your SODA’95 paper is a superb paper. I’ve always felt that it deserves a journal version because it is jam-packed with ideas, but it is not very easy to read. This is not a criticism of your writing skill; I usually think your writing and talks are extremely clear.

    Anyways, I’m excited to have discovered this blog and I hope that it can serve as a substitute for the journal version of your paper.

    - Anonymous
    1. Well, thanks! The ideas have developed a fair amount since the paper. This blog doesn’t contain much yet, but this winter quarter I should have time to add more.

      - Neal Post author
  3. I’d love to hear more on MathJax-enabled plasTeX. Thanks for your offer to share.

    - Juergen Stockburger
