Updating CDN-cached files in GCS without changing filenames

Need help with GCS file updates behind CDN

I’m working on a project where we have files stored in Google Cloud Storage. These files are served through a CDN using signed URLs. We’ve been using query strings with version numbers to update content (like file.ext?v=100).
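To make the scheme concrete, the versioned URLs are built roughly like this (a minimal stdlib sketch; the helper name is made up):

```python
from urllib.parse import urlencode, urlsplit, urlunsplit, parse_qsl

def versioned_url(url: str, version: int) -> str:
    """Append or replace a v=<version> query parameter (the cache-busting scheme)."""
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))
    query["v"] = str(version)
    return urlunsplit(parts._replace(query=urlencode(query)))

print(versioned_url("https://cdn.example.com/assets/file.ext", 100))
# https://cdn.example.com/assets/file.ext?v=100
```

Bumping the version produces a URL the CDN has never cached, which is exactly what stops working if the CDN strips query strings from the cache key.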

This worked great for a while but now we’re running into issues. The docs say Cloud CDN doesn’t include query strings in the cache key when the origin is a backend bucket, so our ?v= trick no longer forces a fresh fetch. This seems to be a recent change, or at least we only just hit it.

Does anyone know a way around this? We need to push file updates quickly without changing filenames or issuing cache invalidations, and we can’t drop the CDN because of latency concerns.

Any ideas on how to keep our setup working smoothly? Thanks for any help!

hey, have you tried using custom headers instead of query strings? you could set a unique header for each file version and configure your CDN to include it in the cache key. that way, you can update files without changing names or messing with query params. might be worth a shot!
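if it helps, the header value can just be a short content hash, so each version of the file gets a distinct cache key automatically. a minimal stdlib sketch (the X-Content-Version header name is made up; check your CDN’s docs for which request headers can go in the cache key):

```python
import hashlib

def content_version(data: bytes) -> str:
    """Short content hash to send as a hypothetical X-Content-Version header.

    When the file bytes change, the hash changes, so a CDN keyed on this
    header treats the new content as a separate cache entry.
    """
    return hashlib.sha256(data).hexdigest()[:16]

# example: attach the version header to a request for the current file bytes
headers = {"X-Content-Version": content_version(b"file contents v2")}
```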

I’ve encountered a similar challenge in my work with GCS and CDNs. One effective approach we implemented was using Cloud Functions to handle file updates. Essentially, we set up a function triggered by file uploads to GCS. This function would generate a unique hash based on the file content and update the object metadata with this hash.

Then, we configured our CDN to include this metadata in the cache key. This way, when the file content changes, the CDN sees it as a new object without altering the filename. It’s a bit more complex to set up initially, but it provides a robust solution for frequent updates without relying on query strings or invalidations. Have you considered something along these lines?
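A rough sketch of the finalize-triggered function described above. The function name, the metadata key, and the trigger wiring are assumptions, and the google-cloud-storage calls reflect that client’s API as I know it, so treat this as a starting point rather than a drop-in implementation:

```python
import hashlib

def content_hash(data: bytes) -> str:
    """Deterministic hash of the object contents, used as a version tag."""
    return hashlib.sha256(data).hexdigest()

def on_finalize(event, context):
    """Background Cloud Function triggered on a GCS object finalize event.

    Recomputes the content hash and stores it in the object's custom
    metadata; the CDN is configured separately to fold that value into
    its cache key.
    """
    from google.cloud import storage  # deferred: keeps the helper stdlib-only

    client = storage.Client()
    bucket = client.bucket(event["bucket"])
    blob = bucket.blob(event["name"])
    blob.metadata = {"content-hash": content_hash(blob.download_as_bytes())}
    blob.patch()  # persist the metadata update
```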

hmm, that’s a tricky situation! have you considered using object versioning in GCS? it lets you keep multiple versions of a file without changing the name. maybe you could update the CDN to serve the latest version? just brainstorming here - what other options have you explored so far?
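for example, something like this could find the newest generation of an object to point the CDN at (sketch only; assumes versioning is already enabled on the bucket and google-cloud-storage is installed, and the helper names are made up):

```python
def pick_latest(generations):
    """Given generation numbers for one object, return the newest."""
    return max(generations)

def latest_generation(bucket_name: str, object_name: str) -> int:
    """List live + noncurrent versions of one object, return its newest generation."""
    from google.cloud import storage  # deferred: module stays importable without the SDK

    client = storage.Client()
    blobs = client.list_blobs(bucket_name, prefix=object_name, versions=True)
    return pick_latest(b.generation for b in blobs if b.name == object_name)
```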