How does deleteRecordsBulk handle large datasets?

I have a table with around 25,000 records that are no longer relevant, and I need to delete them.
Right now, I could do it with something like:

let result = await doo.table.deleteRecordsBulk(
    "exchangeRates",
    25000,
    `baseCurrency(notempty)`
);
console.log("Finished");
console.log(result);

I’d like to make this process as fast and light on the app as possible, and I’m wondering:

  1. Does deleteRecordsBulk automatically handle large numbers of records by processing them internally in batches/chunks, or do I need to implement batching logic on the client side?
  2. If I send multiple deleteRecordsBulk calls in parallel with the same filter, will they target the same records (causing overlap), or does the API handle this to avoid collisions?
  3. Is there a recommended, more efficient approach for deleting large datasets without risking duplicate delete attempts or hitting API throttling limits?

If you have any recommendations or alternative approaches, I’d love to hear them.

deleteRecordsBulk(table, limit, filter) deletes up to limit records that match the filter in a single call; it won't work through all ~25k rows on its own. Loop on the client side, one batch per call, until a batch deletes 0 records.

Don’t fire parallel calls with the same filter: they can race over the same pool of matching records and waste quota. If you really need parallelism, shard the work into non-overlapping filters (e.g., by year) and run one worker per shard; there’s a sketch of that approach at the end of this answer.

Keep batches modest (e.g., 500–2000) to be gentle on throttling. If your plan has rate limits, a tiny delay between batches helps.

const TABLE = 'exchangeRates';
const FILTER = 'baseCurrency(notempty)';
const CHUNK = 1000; // batch size; tune between ~500 and 2000

let total = 0;

while (true) {
  // Each call deletes at most CHUNK records matching the filter.
  const res = await doo.table.deleteRecordsBulk(TABLE, CHUNK, FILTER);
  const deleted = res?.bulk?.successCount ?? 0;
  total += deleted;
  if (deleted === 0) break;                   // nothing left to delete
  await new Promise(r => setTimeout(r, 150)); // brief pause to stay under rate limits
}

doo.toast.success(`Deleted ${total} records`);
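
If you do go the sharded route, here is a minimal sketch under some assumptions: the shard filters below (and the `AND`/`year(...)` syntax used to combine them) are hypothetical placeholders you would replace with real, non-overlapping filters for your table, and drainShard is just a local helper, not part of the API.

// Hypothetical, non-overlapping shard filters; the exact filter syntax
// depends on your table's fields and the platform's filter language.
const SHARDS = [
  'baseCurrency(notempty) AND year(2022)',
  'baseCurrency(notempty) AND year(2023)',
  'baseCurrency(notempty) AND year(2024)',
];
const CHUNK = 1000;

// One worker per shard: each loops deleteRecordsBulk over its own filter only.
async function drainShard(filter) {
  let total = 0;
  while (true) {
    const res = await doo.table.deleteRecordsBulk('exchangeRates', CHUNK, filter);
    const deleted = res?.bulk?.successCount ?? 0;
    total += deleted;
    if (deleted === 0) return total;            // this shard is empty
    await new Promise(r => setTimeout(r, 150)); // keep each worker gentle on rate limits
  }
}

// Because the filters are disjoint, no two workers ever target the same records.
const counts = await Promise.all(SHARDS.map(drainShard));
doo.toast.success(`Deleted ${counts.reduce((a, b) => a + b, 0)} records`);

Keep the shard count small (two to four): each worker adds its own request stream, so your total load against any rate limit grows with the number of shards.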