## Bacon and eggs: task scheduling into batches

The time to cook bacon and eggs is less than
the time to cook bacon plus the time to cook eggs.
This suggests the following abstract model for tasks that are better done in batches
than separately.

- Tasks arrive at an "inbox" at the times of a Poisson process of rate 1.
- Each task has a random "type", uniform on the type-space \([0,1]\).
- The (non-random) time needed to process a set B of tasks is a given function \(S(B)\).
- The function \(S(B)\) is assumed to be monotone and subadditive: for disjoint sets
A and B,

\( 0 < S(A) < S(A \cup B) < S(A) + S(B) \) .
- After a batch completes processing, the scheduler looks at the current set A of
items in the inbox and uses an algorithm to choose a subset \(B \subseteq A\) to process
as the next batch.
- An item of type \(a\) incurs cost \(c(a)\) per unit time, from its arrival time until
the time at which the batch containing it is finished.
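To make the subadditivity assumption concrete, here is a minimal sketch (in Python) of one hypothetical processing-time function of the "setup cost plus per-item cost" form; the constants are illustrative and not part of the problem statement.

```python
def S(batch):
    # Hypothetical processing-time function: a fixed setup cost of 1
    # plus 0.1 per item.  Batching saves the repeated setup, which is
    # exactly what makes S monotone and subadditive.
    return 0.0 if not batch else 1.0 + 0.1 * len(batch)

# Check the required inequalities on one pair of disjoint sets of types.
A, B = {0.1, 0.4}, {0.7}
assert 0 < S(A) < S(A | B) < S(A) + S(B)
```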

For any online algorithm there is a number

W = long-run average cost per unit time

arising from the waiting-time costs.
Let \(W_*\) be the minimum, over all online algorithms, of W, and assume \(W_*\) is finite.
The value of \(W_*\) and the optimal algorithm depend in some complicated way on the
functions \(S(\cdot )\) and \(c(\cdot )\).
**Problem:** Is there some explicit algorithm (depending of course on \(S(\cdot)\) and \(c(\cdot)\)) and a constant \(C\) such that the \(W\) for this algorithm
satisfies
\( W/W_* < C \) for all \(S(\cdot)\) and \(c(\cdot)\)?

Conceptually, I guess that at both ends (light traffic or heavy traffic) some simple
algorithm would work; the issue is the intermediate regime.
This paper shows that in the \(c = 1\) context, the greedy algorithm (put everything in
one batch) has finite W whenever any
algorithm does.
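As a rough illustration (not part of the argument), one can simulate the greedy algorithm and estimate its \(W\) in the \(c = 1\) case. The processing-time function below is a hypothetical setup-plus-per-item choice with arbitrary constants; any monotone subadditive \(S\) could be substituted.

```python
import random

def S(batch):
    # Hypothetical subadditive processing time: setup cost 1 plus 0.1 per item.
    return 0.0 if not batch else 1.0 + 0.1 * len(batch)

def greedy_W(horizon=50_000.0, seed=0):
    """Estimate W for the greedy policy: whenever the server is free and
    the inbox is nonempty, process the entire inbox as one batch (c = 1)."""
    rng = random.Random(seed)
    t = 0.0                               # current time (server free at t)
    inbox = []                            # arrival times of waiting tasks
    next_arrival = rng.expovariate(1.0)   # rate-1 Poisson arrivals
    total_cost = 0.0
    while t < horizon:
        # Move arrivals that occurred by time t into the inbox.
        while next_arrival <= t:
            inbox.append(next_arrival)
            next_arrival += rng.expovariate(1.0)
        if not inbox:                     # idle until the next arrival
            t = next_arrival
            continue
        batch, inbox = inbox, []
        finish = t + S(batch)
        # Each item pays cost 1 per unit time from arrival to batch completion.
        total_cost += sum(finish - a for a in batch)
        t = finish
    return total_cost / t
```

Running `greedy_W()` on this particular \(S\) returns a finite estimate, consistent with the claim that greedy has finite \(W\) whenever any algorithm does.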

**History**.
Problem posted March 2009.