FOR UPDATE locks the returned rows (up to 200) while each is deleted,
which adds latency to the runtime route. Instead of locking the rows,
add a filter when deleting each row: if the row no longer matches the
filter used when it was SELECTed, do not delete it.
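
A rough sketch of the shape of the change, using an assumed cache
table and expiration column (neither is the actual schema here):

  # DB is an assumed Sequel::Database with a hypothetical cache table.

  # Before: lock the selected rows, then delete each by primary key.
  DB.transaction do
    ds = DB[:cache].where{expires_at < Time.now}.limit(200)
    ds.for_update.each do |row|
      DB[:cache].where(id: row[:id]).delete
    end
  end

  # After: no FOR UPDATE; re-apply the filter so a row that stopped
  # matching after the SELECT is left alone.
  ds = DB[:cache].where{expires_at < Time.now}.limit(200)
  ds.each do |row|
    DB[:cache].where(id: row[:id]).where{expires_at < Time.now}.delete
  end
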
This uses Sequel's instance_filters plugin to allow for additional
conditions in the DELETE statement (beyond the primary key filter).
It then adds destroy_where dataset and instance methods, which use
their arguments to set up the instance filter.
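
A minimal sketch of what the new methods could look like, assuming a
hypothetical CacheEntry model; the bodies below are illustrative, not
the exact code added by this commit:

  require 'sequel'

  DB = Sequel.sqlite # in-memory database, just for the sketch
  DB.create_table(:cache_entries) do
    primary_key :id
    Time :expires_at
  end

  class CacheEntry < Sequel::Model
    plugin :instance_filters

    dataset_module do
      # Dataset method: load the matching rows without FOR UPDATE,
      # then destroy each one with the same conditions re-applied.
      def destroy_where(*cond, &block)
        where(*cond, &block).all do |row|
          row.destroy_where(*cond, &block)
        end
      end
    end

    # Instance method: add the conditions to this row's DELETE via
    # instance_filter, and run the DELETE inside a savepoint so a row
    # that no longer matches can be skipped without aborting the
    # surrounding transaction.
    def destroy_where(*cond, &block)
      instance_filter(*cond, &block)
      db.transaction(savepoint: true){destroy}
    rescue Sequel::NoExistingObject
      # The DELETE matched no rows; leave the row in place.
      nil
    end
  end
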
This requires transactions/savepoints around each DELETE statement,
so the DELETE process is now slower. However, the #cleanup_cache
method is not sensitive to latency (unlike the runtime route),
so this shouldn't cause a problem.
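
As a hypothetical example (names assumed, matching the earlier
sketch), #cleanup_cache could then drop FOR UPDATE and use the new
dataset method directly:

  # Delete up to 200 expired rows, skipping any row that was
  # refreshed between the SELECT and its DELETE.
  def cleanup_cache
    CacheEntry.limit(200).destroy_where{expires_at < Time.now}
  end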