At Rewind, we have a requirement to remove data from AWS S3 based on an external time criterion. We settled on using S3 batch and some tooling around it to handle the removal of the data "automagically" using tagging and S3 lifecycle rules.

S3 batch is an AWS service that can operate on large numbers of objects stored in S3 using background (batch) jobs. The idea is that you provide S3 batch with a manifest of objects and ask it to perform an operation on every object in the manifest. Batch then does its thing, reports back with a success or failure message, and produces a report of which objects succeeded or failed. At the time of writing, S3 batch supports a handful of actions, including copying objects, replacing object tags, and invoking a Lambda function.

Conspicuously missing from the list of actions is delete. Batch can invoke a Lambda function which could handle deleting the object, but that adds extra cost and complexity.

So, how do we handle deletes? Tagging is the answer. S3 bucket lifecycle rules can be configured to match on object attributes such as key prefix and tags. The tag filter is exactly what we need when combined with the S3 batch action that adds tags.

The first step is to create a lifecycle rule on your bucket that matches based on the tag to be used. We can now plug this all together to create the final solution, still using Fargate spot containers to distribute the work of creating many S3 batch jobs.
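A tag-filtered lifecycle rule along these lines can be sketched as a plain request body for boto3's `put_bucket_lifecycle_configuration` call. This is a minimal sketch, not the post's actual configuration; the rule ID, tag key, and tag value are placeholder names chosen for illustration:

```python
def build_expiry_rule(tag_key: str, tag_value: str, days: int = 1) -> dict:
    """Build one S3 lifecycle rule that expires objects carrying a given tag.

    The returned dict is the shape expected inside the "Rules" list of
    put_bucket_lifecycle_configuration's LifecycleConfiguration argument.
    """
    return {
        "ID": "expire-tagged-objects",  # hypothetical rule name
        "Status": "Enabled",
        # Only objects with this exact tag key/value are matched.
        "Filter": {"Tag": {"Key": tag_key, "Value": tag_value}},
        # Objects expire this many days after creation; 1 is the minimum.
        "Expiration": {"Days": days},
    }


# Example: a rule that expires anything tagged delete-me=true after one day.
rule = build_expiry_rule("delete-me", "true")
```

To apply it for real you would pass `{"Rules": [rule]}` as the `LifecycleConfiguration` argument of `s3.put_bucket_lifecycle_configuration(Bucket=...)` using a boto3 S3 client; note that S3 sweeps expired objects asynchronously, so deletion is not instantaneous once the tag is applied.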
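The tagging half of the scheme can be sketched the same way: each worker builds an S3 Batch Operations job whose operation is `S3PutObjectTagging`, applying the tag the lifecycle rule matches on. The helper below assembles the keyword arguments for boto3's `s3control.create_job`; the tag key/value, priority, and report scope here are illustrative assumptions, not values from the original post:

```python
def build_tagging_job(
    account_id: str,
    role_arn: str,
    manifest_arn: str,
    manifest_etag: str,
    report_bucket_arn: str,
    tag_key: str,
    tag_value: str,
) -> dict:
    """Build the request for an S3 Batch job that tags every manifest object."""
    return {
        "AccountId": account_id,
        "ConfirmationRequired": False,  # start without manual confirmation
        # The operation: replace each object's tag set with this single tag,
        # which the lifecycle rule's tag filter will then match.
        "Operation": {
            "S3PutObjectTagging": {
                "TagSet": [{"Key": tag_key, "Value": tag_value}],
            }
        },
        # A CSV manifest of bucket,key pairs listing the objects to process.
        "Manifest": {
            "Spec": {
                "Format": "S3BatchOperations_CSV_20180820",
                "Fields": ["Bucket", "Key"],
            },
            "Location": {"ObjectArn": manifest_arn, "ETag": manifest_etag},
        },
        # Batch writes a completion report; here, failed tasks only.
        "Report": {
            "Bucket": report_bucket_arn,
            "Format": "Report_CSV_20180820",
            "Enabled": True,
            "ReportScope": "FailedTasksOnly",
        },
        "Priority": 10,
        "RoleArn": role_arn,  # IAM role Batch assumes to tag the objects
    }
```

Each Fargate container would call `boto3.client("s3control").create_job(**build_tagging_job(...))` for its slice of the work; once the jobs finish tagging, the lifecycle rule expires the tagged objects on its own.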