With these caching extensions, Sequel no longer needs to run multiple queries per model to get the schema and indexes. The schema and indexes are queried once in a Rake task and stored in Marshal format in the cache directory. When models are loaded, if entries for them are already present in the cache, Sequel uses the cached values instead of querying the database. On my laptop, this shaves off about 0.2 seconds when loading, for about a 10% reduction in load time.

Using this cache reduces the number of database queries when loading the application and all models from 125 to 6. The 6 remaining queries are:

* 3 used for all new PostgreSQL connections
* 1 to get the PostgreSQL server version
* 2 to check whether the accounts table exists (probably in Rodauth)

This adds risk: if the caches do not match the database, the models will not have the correct schema or indexes. To try to avoid this, this modifies the test_up and test_down rake tasks to refresh the Sequel caches (a sketch of what that refresh could look like follows below). However, developers must remember to check in the caches when committing migrations. Given that the performance increase appears to be modest on modern computers, maybe the additional risk makes this approach not worth merging. As Ubicloud grows in number of models, the benefits of this approach increase, but computer performance also increases over time, and the faster the computer, the lower the benefit of caching.

Note that if we plan on using the pg_auto_constraint_validations plugin, it has a similar issue to a much greater degree than the queries for the model schema and indexes. It also supports caching, and we would definitely want caching there, as the performance difference would be more significant. Since we would be doing caching anyway at that point, we would want to do index and schema caching as well.

This also switches the GitHub CI workflow to use _test_up instead of test_up. We do not want the CI workflow refreshing the caches; we want it to test using the existing caches.
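For illustration, a minimal sketch of what the cache-refresh step could look like. The task name and loader entry point here are hypothetical; the only Sequel APIs assumed are dump_schema_cache and dump_index_cache, the dump-side counterparts provided by the schema_caching and index_caching extensions:

# Hypothetical sketch; the actual task name and file layout may differ.
desc "Refresh Sequel schema and index caches"
task :refresh_sequel_caches do
  # Remove stale caches first so loading the models below queries the
  # database instead of reusing the old cached entries.
  require "fileutils"
  FileUtils.rm_f(["cache/schema.cache", "cache/index.cache"])

  require_relative "loader" # assumed entry point that connects DB and loads all models

  # Persist whatever schema/index information was queried while loading,
  # so the next boot reads it from disk instead of the database.
  DB.dump_schema_cache("cache/schema.cache")
  DB.dump_index_cache("cache/index.cache")
end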
# frozen_string_literal: true

require "netaddr"
require "sequel/core"
require_relative "config"
require_relative "lib/util"

db_ca_bundle_filename = File.join(Dir.pwd, "var", "ca_bundles", "db_ca_bundle.crt")
Util.safe_write_to_file(db_ca_bundle_filename, Config.clover_database_root_certs)
DB = Sequel.connect(Config.clover_database_url, max_connections: Config.db_pool - 1, pool_timeout: Config.database_timeout).tap do |db|
  # Replace dangerous (for cidrs) Ruby IPAddr type that is otherwise
  # used by sequel_pg. Has come up more than once in the bug tracker:
  #
  # https://github.com/jeremyevans/sequel_pg/issues?q=inet
  # https://github.com/jeremyevans/sequel_pg/issues/30
  # https://github.com/jeremyevans/sequel_pg/pull/37
  db.add_conversion_proc(650, NetAddr.method(:parse_net))
  db.add_conversion_proc(869, NetAddr.method(:parse_ip))
end

postgres_monitor_db_ca_bundle_filename = File.join(Dir.pwd, "var", "ca_bundles", "postgres_monitor_db.crt")
Util.safe_write_to_file(postgres_monitor_db_ca_bundle_filename, Config.postgres_monitor_database_root_certs)
begin
  POSTGRES_MONITOR_DB = Sequel.connect(Config.postgres_monitor_database_url, max_connections: Config.db_pool, pool_timeout: Config.database_timeout) if Config.postgres_monitor_database_url
rescue Sequel::DatabaseConnectionError => ex
  Clog.emit("Failed to connect to Postgres Monitor database") { {database_connection_failed: {exception: Util.exception_to_hash(ex)}} }
end

# Load Sequel Database/Global extensions here
# DB.extension :date_arithmetic
DB.extension :pg_json, :pg_auto_parameterize, :pg_timestamptz, :pg_range, :pg_array
Sequel.extension :pg_range_ops

DB.extension :schema_caching
DB.extension :index_caching
# The ?-suffixed load methods skip loading (instead of raising) when the
# cache file does not exist, so a checkout without committed caches still
# boots and simply falls back to querying the database.
DB.load_schema_cache?("cache/schema.cache")
DB.load_index_cache?("cache/index.cache")
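The test_up/_test_up split mentioned in the description could then be wired as below; a hedged sketch, with the migration body elided and refresh_sequel_caches being the hypothetical task sketched earlier:

# Hypothetical Rakefile sketch; the real migration logic is elided.
task :_test_up do
  # migrate the test database up (details as in the existing Rakefile)
end

# The public task refreshes the caches after migrating; CI invokes
# _test_up directly so it tests against the committed caches instead
# of regenerating them.
task test_up: [:_test_up, :refresh_sequel_caches]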