When a cache entry is destroyed, the corresponding blob storage object must be deleted as well, so we clean it up in the `after_destroy` hook. If the entry was never committed, its multipart upload may still be in progress, so we abort the upload first; if the upload has already been aborted, there is nothing to do. Likewise, if the object has already been deleted, we don't need to do anything.
# frozen_string_literal: true

require_relative "../../model"

require "aws-sdk-s3"

class GithubCacheEntry < Sequel::Model
  many_to_one :repository, key: :repository_id, class: :GithubRepository

  include ResourceMethods

  def self.ubid_type
    UBID::TYPE_ETC
  end

  def blob_key
    "cache/#{ubid}"
  end

  def after_destroy
    super

    # If the entry was never committed, its multipart upload may still be
    # open; abort it so the uploaded parts don't linger in blob storage.
    if committed_at.nil?
      begin
        repository.blob_storage_client.abort_multipart_upload(bucket: repository.bucket_name, key: blob_key, upload_id: upload_id)
      rescue Aws::S3::Errors::NoSuchUpload
        # The upload was already aborted or completed; nothing to do.
      end
    end

    begin
      repository.blob_storage_client.delete_object(bucket: repository.bucket_name, key: blob_key)
    rescue Aws::S3::Errors::NoSuchKey
      # The object was already deleted; nothing to do.
    end
  end
end