With these caching extensions, Sequel no longer needs to run
multiple queries for every model to get the schema and indexes.
The schema and indexes are queried once in a Rake task, and
stored in marshal format in the cache directory. When models
are loaded, if entries for them are already present in the
cache, Sequel uses the cached values instead of querying the
database.
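At application boot, loading the caches looks roughly like the
sketch below. The extensions are Sequel's schema_caching and
index_caching; the cache paths shown are assumptions, not
necessarily the ones used here.

```ruby
# Minimal sketch of loading the caches at boot; cache paths are
# illustrative.
DB.extension :schema_caching, :index_caching

# Use the cached copies when present; on a fresh checkout without the
# cache files, Sequel falls back to querying the database as before.
DB.load_schema_cache("cache/schema.cache") if File.exist?("cache/schema.cache")
DB.load_index_cache("cache/index.cache") if File.exist?("cache/index.cache")
```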
On my laptop, this shaves off about 0.2 seconds when loading,
for about a 10% reduction in load time.
Using these caches reduces the number of database queries when
loading the application and all models from 125 to 6. The 6
remaining queries are:
* 3 used for all new PostgreSQL connections
* 1 to get the PostgreSQL server version
* 2 to check whether the accounts table exists (probably in Rodauth)
This adds some risk: if the caches do not match the database,
the models will not have the correct schema or indexes. To
reduce that risk, this change modifies the test_up and
test_down rake tasks to refresh the Sequel caches (sketched
below). However, developers must remember to check in the
updated caches when committing migrations.
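As a rough sketch of the task relationship (hypothetical task
bodies, migration directory, and cache paths; only the
test_up/_test_up names come from this change):

```ruby
# Rakefile sketch: _test_up migrates the test database without
# touching the caches.
task :_test_up do
  Sequel.extension :migration
  Sequel::Migrator.run(DB, "migrate")  # DB is the app's Sequel::Database
end

# test_up wraps _test_up and refreshes the checked-in caches;
# test_down would dump the caches the same way after migrating down.
task test_up: :_test_up do
  DB.extension :schema_caching, :index_caching
  DB.dump_schema_cache("cache/schema.cache")
  DB.dump_index_cache("cache/index.cache")
end
```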
Given that the performance gain appears modest on modern
hardware, the additional risk may make this approach not worth
merging. As Ubicloud grows in terms of the number of models,
the benefit of this approach increases, but computer
performance also improves over time, and the faster the
hardware, the smaller the benefit of caching.
Note that the pg_auto_constraint_validations plugin, if we plan
on using it, has a similar issue to a much greater degree than
the queries for the model schema and indexes. It also supports
caching, and there we would definitely want to use it, as the
performance difference would be more significant. Since we
would be doing caching anyway at that point, we would want
index and schema caching as well.
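For comparison, that plugin's caching would look roughly like
this (not part of this change; the cache path is an
assumption):

```ruby
# Sketch only: enable the plugin with a cache file so constraint
# metadata is loaded from the cache instead of queried per model.
Sequel::Model.plugin :pg_auto_constraint_validations,
  cache_file: "cache/pg_auto_constraint_validations.cache"

# The same Rake task that refreshes the schema/index caches would
# also dump this cache after running migrations.
Sequel::Model.dump_pg_auto_constraint_validations_cache
```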
This also switches the GitHub CI workflow to use _test_up
instead of test_up: we do not want the CI workflow refreshing
the caches; we want it to test using the existing, checked-in
caches.