*** 8,13 ****
--- 8,15 ----
  Of course, that's probably not really reasonable; it implies we're only processing a handful of terms per pass.
  Instead, we could have a setting describing how much effort per insert we're willing to spend doing bookkeeping.
  Then we could split a big merge across a couple dozen inserts (or a couple hundred when doing the 64k->1M merge).
  Or it might be reasonable to do it in something like sqrt(N) steps, so the merge to a 64k-doc segment takes 256 steps, and the merge to a 1M-doc segment takes 1024 steps.
+ NOTE: Given inline interior nodes, it might be reasonable to make the basic incremental merge increment be a single interior node's worth of leaves, in which case the incremental merge state is merely a stack of partial interior nodes. This always makes forward progress so long as single-document segments never require a full interior node.
+ 
  ----
  *Background merges*
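
Purely as an illustration of the NOTE above (the names IncrementalMerge, InteriorNode, and INTERIOR_FANOUT are hypothetical and not taken from the actual code), here is a rough Python sketch of an incremental merge whose unit of work is one interior node's worth of leaves, with the only persistent state being a stack of partial interior nodes, one per level:

    # Hypothetical sketch, not the real implementation.
    INTERIOR_FANOUT = 256  # assumed maximum downlinks per interior node

    class InteriorNode:
        def __init__(self):
            self.children = []              # downlinks to leaves or lower interior nodes

        def full(self):
            return len(self.children) >= INTERIOR_FANOUT

    class IncrementalMerge:
        def __init__(self, sorted_leaf_stream):
            self.leaves = sorted_leaf_stream  # iterator over already-merged leaf pages
            self.levels = [InteriorNode()]    # levels[0] = partial node just above the leaves

        def step(self):
            """One increment: consume one interior node's worth of leaves."""
            node = self.levels[0]
            while not node.full():
                leaf = next(self.leaves, None)
                if leaf is None:
                    return False              # merge finished
                node.children.append(leaf)
            self._attach(0)
            return True                       # forward progress made

        def _attach(self, level):
            # The full node at this level becomes a downlink in the level above,
            # cascading upward when that node fills too; a fresh empty node
            # replaces it at this level.
            full_node = self.levels[level]
            if level + 1 == len(self.levels):
                self.levels.append(InteriorNode())
            parent = self.levels[level + 1]
            parent.children.append(full_node)
            self.levels[level] = InteriorNode()
            if parent.full():
                self._attach(level + 1)

Each call to step() either appends leaves to the current partial node (and pushes it up once full) or reports that the input is exhausted, so the merge always moves forward; the per-insert cost is bounded by the fanout, which is the point of making a single interior node the increment.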