S3 Lifecycle Rules: Using Bucket Lifecycle Configurations (2022)

As one of the most popular cloud storage services, Amazon S3 is used by enterprise organizations around the world to store their valuable data.

Managing this data effectively based on its lifecycle is a key requirement for cost efficiency. An Amazon S3 Lifecycle configuration is a simple set of rules that manages the lifecycle of data on S3 in an automated fashion.

This article takes a look at AWS S3 versioning and Amazon S3 lifecycle configurations, then shows how to leverage lifecycle configurations to expire and delete objects in order to save cloud storage costs.

Use the links below to jump down to the sections on:

  • What Is Amazon S3 Object Versioning?
  • Amazon S3 Lifecycle Configuration
    • What Are S3 Transition Actions?
    • What Are S3 Expiration Actions?
  • Leveraging S3 Lifecycle Management Configurations to Delete S3 Objects
  • How to Set Up an S3 Lifecycle Policy to Delete Objects
  • Do Even More with AWS S3 Object Storage

What Is Amazon S3 Object Versioning?

In order to fully understand all the different applications of the Amazon S3 lifecycle configurations, it is important to understand the concept of S3 bucket version control first.

The Amazon S3 versioning feature allows users to keep multiple versions of the same object in an S3 bucket for rollback or recovery purposes.

When versioning is enabled on an Amazon S3 bucket, each object in the bucket is assigned a unique version ID every time it is created or overwritten. When an object is overwritten, the new object becomes the current version, while the previous object is retained under its older version ID.

Older versions can be leveraged by users to restore objects that are deleted or overwritten as needed. When an S3 object is deleted by the end user/application, Amazon S3 will insert a delete marker. This delete marker in effect becomes the object’s current version.
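As a sketch of how this is enabled programmatically, the payload below is the shape accepted by boto3's put_bucket_versioning; the bucket name is hypothetical and the API call itself is left commented out, since it requires AWS credentials:

```python
# Request payload accepted by boto3's put_bucket_versioning.
versioning_config = {"Status": "Enabled"}

# Hypothetical usage against a bucket named "example-bucket":
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_versioning(
#     Bucket="example-bucket",
#     VersioningConfiguration=versioning_config,
# )
# Once enabled, list_object_versions returns every version ID for each
# key, including any delete markers inserted by DELETE requests.
```

Setting "Status" to "Suspended" stops new versions from being created but leaves existing versions in place.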

While S3 versioning is handy for various recovery purposes, it does mean that once enabled, older objects which were deleted or overwritten continue to consume the underlying storage capacity, incurring consumption costs.

Amazon S3 Lifecycle Configuration

An AWS S3 lifecycle configuration is a collection of rules that define lifecycle actions to be applied automatically to a group of Amazon S3 objects. These actions are either transition actions (which move S3 objects between various S3 storage classes) or expiration actions (which define when an S3 object expires).

What Are S3 Transition Actions?

Transition actions are typically a set of rules that can be configured to make current versions of the S3 objects move/tier between various storage classes upon reaching a specific lifetime (in number of days).

For example, a transition lifecycle rule action can be set to automatically move Amazon S3 objects from the default S3 standard tier to Standard-IA (Infrequent Access) 30 days after they were created in order to reduce S3 storage costs. The same rule can also be configured to archive the same objects after another three months by automatically moving them from S3 Standard-IA to Glacier or Glacier Deep Archive to further reduce storage costs.
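As a sketch, that two-step transition can be expressed as the rule payload boto3's put_bucket_lifecycle_configuration accepts. The rule ID is hypothetical; the day counts follow the example above (30 days to Standard-IA, then Glacier roughly three months later, i.e. 120 days after creation):

```python
# Lifecycle rule moving objects to Standard-IA after 30 days and to
# Glacier at 120 days after creation. Rule ID is illustrative.
transition_rule = {
    "ID": "tier-down",
    "Filter": {"Prefix": ""},   # empty prefix = applies to the whole bucket
    "Status": "Enabled",
    "Transitions": [
        {"Days": 30, "StorageClass": "STANDARD_IA"},
        {"Days": 120, "StorageClass": "GLACIER"},
    ],
}
lifecycle_config = {"Rules": [transition_rule]}

# Hypothetical application:
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-bucket", LifecycleConfiguration=lifecycle_config)
```

Each entry in "Transitions" counts days from object creation, which is why the Glacier step is written as 120 rather than 90.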

Amazon S3 storage tier transitions can follow the waterfall model as illustrated below:

[Figure: Amazon S3 storage class transition waterfall model. Source: AWS]

When the lifecycle patterns of data are known clearly, customers can select specific storage classes for the data to transition to. When the lifecycle patterns of data are not clearly known, data can be transitioned to the S3 Intelligent-Tiering class instead, where Amazon S3 will manage the tiering behind the scenes.

This ability for S3 objects to be tiered into different storage classes with different cost structures allows organizations to potentially reduce their cloud storage costs throughout the data's lifecycle.

Consider an example of an expense claim system that frequently uploads images of receipts to Amazon S3. These images are frequently accessed during the first 30-60 days of their existence for validating and processing the expense claim. After this initial period, once the claim has been processed and paid out, the receipt images are no longer frequently accessed. They can now safely be moved to the less expensive S3 Infrequent Access tier to enable cost savings.

Once the infrequent access period also elapses (for example, once a back-end quarterly accounting cycle has completed), the images of these receipts will no longer need to be accessed. They can now be automatically moved off to cheaper archive tiers such as Glacier for long-term storage.

What Are S3 Expiration Actions?

Similar to transition actions, expiration actions enable customers to define when the current version of an S3 object expires (which automatically removes it from the S3 bucket). Expiration actions within the lifecycle policy also allow users to permanently delete noncurrent versions of S3 objects or previously expired objects, freeing up storage capacity and reducing ongoing cloud storage costs.

It is important to note that the actual removal of an expired Amazon S3 object is an asynchronous process: there can be a delay between when an object is marked for expiry and when it is actually deleted. However, users are not charged for the storage of expired objects.

Leveraging S3 Lifecycle Management Configurations to Delete S3 Objects

Amazon S3 objects, including older versions of objects, continue to incur storage costs unless they are deleted promptly. Many organizations using Amazon S3 with versioning find that their storage costs climb steadily because versioning keeps deleted and overwritten objects on the underlying storage platform. These costs compound over time as new S3 objects are added and existing objects are overwritten.

The AWS SDKs, the AWS Command Line Interface, and the Amazon S3 console all provide ways to delete S3 objects or expired objects, either manually or programmatically. However, the process can be cumbersome and can require additional code or administrative effort at scale.

S3 lifecycle configurations let users address this issue conveniently instead. Expiration actions within the Amazon S3 lifecycle configuration can be set to automatically delete previous versions as well as expired versions of S3 objects. This happens with no user involvement, allowing enterprise organizations to reduce their storage footprint, and the associated consumption costs, at scale without additional administrative overhead.

There are three specific Amazon S3 lifecycle expiration actions that can be leveraged by customers:

  1. Expiring current version of the object: This configuration allows users to automatically expire the current version of the Amazon S3 objects stored within the bucket after a specified number of days.
    In the expense claim example referenced above, all receipt images older than 'X' days (where X is the retention period required by compliance) can be automatically expired from Glacier archival storage using this method, stopping further S3 storage charges from that point onwards.
  2. Permanently delete noncurrent version of the objects: This enables users to permanently remove the older versions of the S3 objects inside a bucket automatically after a certain period of time (days), with no user involvement.
  3. Delete expired object delete markers and failed multipart uploads: This configuration allows users to remove expired object delete markers and to stop and remove any multipart uploads that are not completed within a specified period (days), which saves storage costs.
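The three actions above can be sketched as a single lifecycle configuration in the payload form boto3's put_bucket_lifecycle_configuration accepts. Rule IDs and day counts are illustrative, not recommendations:

```python
# One rule per expiration action described above.
lifecycle_config = {"Rules": [
    {   # 1. Expire the current version 365 days after creation.
        "ID": "expire-current",
        "Filter": {"Prefix": ""},
        "Status": "Enabled",
        "Expiration": {"Days": 365},
    },
    {   # 2. Permanently delete versions 30 days after they become noncurrent.
        "ID": "purge-noncurrent",
        "Filter": {"Prefix": ""},
        "Status": "Enabled",
        "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
    },
    {   # 3. Remove expired object delete markers and abort multipart
        #    uploads not completed within 7 days of initiation.
        "ID": "housekeeping",
        "Filter": {"Prefix": ""},
        "Status": "Enabled",
        "Expiration": {"ExpiredObjectDeleteMarker": True},
        "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
    },
]}
```

Note that S3 does not allow ExpiredObjectDeleteMarker to be combined with a Days-based expiration in the same rule, which is why the actions are split across separate rules here.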

How to Set Up an S3 Lifecycle Policy to Delete Objects

An S3 object expiration lifecycle configuration can be created using a number of different tools: the AWS CLI, the AWS SDKs, the Amazon S3 console, or REST API calls. Please refer to the Amazon S3 lifecycle user guide for detailed step-by-step information.

The screenshot below shows how the Amazon S3 console UI (accessed via the “Management” tab within the S3 bucket) can be used to configure an S3 lifecycle rule that expires the current version of S3 objects. In this example, expiration kicks in 90 days after object creation for the whole bucket.

[Screenshot: Creating a lifecycle rule in the Amazon S3 console to expire current object versions after 90 days]
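The same 90-day whole-bucket rule could also be applied programmatically; the sketch below shows it as a boto3 payload (the bucket name is hypothetical, and the API call is commented out since it requires AWS credentials):

```python
# Whole-bucket rule expiring current object versions 90 days after creation.
rule_90_days = {
    "ID": "expire-after-90-days",
    "Filter": {"Prefix": ""},   # empty prefix = the whole bucket
    "Status": "Enabled",
    "Expiration": {"Days": 90},
}

# Hypothetical application:
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-bucket",
#     LifecycleConfiguration={"Rules": [rule_90_days]},
# )
```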

For removing older versions of S3 objects, the same AWS console can be easily leveraged to create the lifecycle rule, as illustrated below.

[Screenshot: Creating a lifecycle rule in the Amazon S3 console to remove noncurrent object versions]

Do Even More with AWS S3 Object Storage

When it comes to Amazon S3 lifecycle rules, transitioning actions and expiration actions both provide customers with convenient, scalable, policy-driven approaches to reduce storage consumption and overhead costs.

Customers planning on leveraging these configurations are advised to reference the AWS S3 documentation here.

If you are concerned about storage efficiency, you might also be interested in NetApp Cloud Volumes ONTAP for AWS. Also available for Azure and Google Cloud, Cloud Volumes ONTAP gives users data management features that aren’t available in the public cloud, including space- and cost-efficient storage snapshots and instant, zero-capacity data cloning.

FAQs

What are the actions that can be configured in the S3 object lifecycle? ›

Amazon S3 Lifecycle Configuration

These actions are either transition actions (which move the current version of S3 objects between various S3 storage classes) or expiration actions (which define when an S3 object expires).

What is S3 lifecycle configuration? ›

An S3 Lifecycle configuration is an XML file that consists of a set of rules with predefined actions that you want Amazon S3 to perform on objects during their lifetime. You can also configure the lifecycle by using the Amazon S3 console, REST API, AWS SDKs, and the AWS Command Line Interface (AWS CLI).

How do I check my S3 bucket life cycle policy? ›

Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/ . In the Buckets list, choose the name of the bucket that you want to create a lifecycle rule for. Choose the Management tab, and choose Create lifecycle rule. In Lifecycle rule name, enter a name for your rule.

How long does it take for S3 lifecycle policy to take effect? ›

A rule of thumb that seems to hold is to expect the policy to take effect within 48 hours.

How often do S3 lifecycle rules run? ›

Amazon S3 Lifecycle configuration rules run once a day. Check and confirm whether the rule has transitioned the storage class. The total number of objects in the bucket affects the time it takes for you to see the change in storage class.

Does S3 lifecycle apply to existing objects? ›

Lifecycle policies apply to both existing and new S3 objects, ensuring that you can optimize storage and maximize cost savings for all current data and any new data placed in S3 without time-consuming manual data review and migration.

What is lifecycle configuration? ›

A lifecycle configuration is a set of rules that are applied to the objects in specific S3 buckets. Each rule specifies which objects are affected and when those objects will expire (on a specific date or after some number of days). StorageGRID supports up to 1,000 lifecycle rules in a lifecycle configuration.

What is life cycle policy? ›

Lifecycle policies allow you to automatically review objects within your S3 Buckets and have them moved to Glacier or have the objects deleted from S3. You may want to do this for security, legislative compliance, internal policy compliance, or general housekeeping.

What is prefix in S3 lifecycle? ›

In this S3 Lifecycle configuration rule, the filter specifies a key prefix (tax/). Therefore, the rule applies to objects with the key name prefix tax/, such as tax/doc1.txt and tax/doc2.txt.
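As a sketch, such a prefix-filtered rule looks like this in the payload form boto3 accepts; the rule ID and ten-year retention are illustrative. A small helper shows how prefix matching works on object keys:

```python
# Rule scoped to keys beginning with "tax/". ID and retention period
# are illustrative.
tax_rule = {
    "ID": "tax-docs",
    "Filter": {"Prefix": "tax/"},
    "Status": "Enabled",
    "Expiration": {"Days": 3650},
}

def rule_applies(rule, key):
    """Check whether a rule's prefix filter matches an object key."""
    return key.startswith(rule["Filter"]["Prefix"])

print(rule_applies(tax_rule, "tax/doc1.txt"))     # True
print(rule_applies(tax_rule, "images/doc1.txt"))  # False
```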

Which S3 feature should you use if you want to make sure that a policy will no longer be changed? ›

S3 Glacier Vault Lock allows you to easily deploy and enforce compliance controls for individual S3 Glacier vaults with a vault lock policy.

What can be used as a storage class for an S3 object lifecycle policy? ›

There are five storage classes in the S3 that you can configure, which are S3 Standard Storage, S3 Reduced Redundancy Storage(S3 RRS), S3 Infrequently Accessed (S3 IA), S3 Glacier, and S3 Intelligent-Tiering.

What is S3 bucket policy? ›

You can use the Amazon S3 console to add a new bucket policy or edit an existing bucket policy. A bucket policy is a resource-based AWS Identity and Access Management (IAM) policy. You add a bucket policy to a bucket to grant other AWS accounts or IAM users access permissions for the bucket and the objects in it.

What is S3 transfer acceleration? ›

S3 Transfer Acceleration (S3TA) reduces the variability in Internet routing, congestion and speeds that can affect transfers, and logically shortens the distance to S3 for remote applications.

What is replication in S3 bucket? ›

Amazon Simple Storage Service (S3) Replication is an elastic, fully managed, low cost feature that replicates objects between buckets. S3 Replication offers the most flexibility and functionality in cloud storage, giving you the controls you need to meet your data sovereignty and other business needs.

How do you limit access to an S3 bucket by source IP? ›

To allow users to perform S3 actions on the bucket from the VPC endpoints or IP addresses, you must explicitly allow the user-level permissions. You can explicitly allow user-level permissions on either an AWS Identity and Access Management (IAM) policy or another statement in the bucket policy.

How often do lifecycle rules run? ›

Lifecycle rules run once a day at midnight Universal Coordinated Time (UTC).

How can S3 remove all objects within the bucket when a specified date or time period in the object's lifetime is reached? ›

On the specified date, Amazon S3 expires all the objects in the bucket. S3 also continues to expire any new objects you create in the bucket. To stop the Lifecycle action, you must remove the action from the Lifecycle configuration, disable the rule, or delete the rule from the Lifecycle configuration.

Can S3 objects be transitioned to Glacier using S3 life cycles? ›

Using S3 Lifecycle configuration, you can transition objects to the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage classes for archiving.

What is a prerequisite for S3 object lock? ›

S3 Object Lock requires the use of versioned buckets. Retention settings apply to individual object versions. An object version can have both a retain-until-date and a legal hold setting, one but not the other, or neither.

What is non current version in S3? ›

NoncurrentDays. Specifies the number of days an object is noncurrent before Amazon S3 can perform the associated action. The value must be a non-zero positive integer. For information about the noncurrent days calculations, see How Amazon S3 Calculates When an Object Became Noncurrent in the Amazon S3 User Guide.

What is S3 object lock? ›

Object Lock is an Amazon S3 feature that blocks object version deletion during a user-defined retention period, to enforce retention policies as an additional layer of data protection and/or for strict regulatory compliance.

Which of the following AWS services offer lifecycle management for cost optimal storage? ›

Lower your storage costs without sacrificing performance with Amazon S3. Amazon S3 lets you take control of costs and continuously optimize your spend, while building modern, scalable applications.

What can you use to define actions to move S3 objects between different storage classes? ›

Lifecycle Management

With this action, you can choose to move objects to another storage class. With this, you can configure S3 to move your data between various storage classes on a defined schedule.

Which method can be used to grant anonymous access to an object in S3? ›

If an object is uploaded to a bucket through an unauthenticated request, the anonymous user owns the object. The default object ACL grants FULL_CONTROL to the anonymous user as the object's owner. Therefore, Amazon S3 allows unauthenticated requests to retrieve the object or modify its ACL.

What is versioning in AWS S3? ›

Versioning in Amazon S3 is a means of keeping multiple variants of an object in the same bucket. You can use the S3 Versioning feature to preserve, retrieve, and restore every version of every object stored in your buckets.

What is the difference between lifecycle policies and intelligent tiering? ›

The main thing to note here is that the life cycle management supports all the S3 Storage classes, unlike the Intelligent Tiering which only supports Standard and Standard-IA.

What are S3 prefixes? ›

A key prefix is a string of characters that can be the complete path in front of the object name (including the bucket name). For example, if an object (123.txt) is stored as BucketName/Project/WordFiles/123.txt, the prefix might be “BucketName/Project/WordFiles/123.

How many tags can an S3 object have assigned to it? ›

You can associate up to 10 tags with an object. Tags associated with an object must have unique tag keys.

What is the maximum amount of data that can be stored in S3 in a single AWS account? ›

The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 TB.

How many buckets can be created in S3? ›

By default, you can create up to 100 buckets in each of your AWS accounts. If you need additional buckets, you can increase your account bucket limit to a maximum of 1,000 buckets by submitting a service limit increase.

When creating an Amazon S3 bucket to store data What factors should be considered? ›

Seven factors that affect Amazon S3 pricing
  1. The region where you store your data.
  2. The volume of data you store.
  3. The level of redundancy.
  4. The storage class.
  5. Data requests.
  6. Data transfers.
  7. Data retrievals (Glacier only)

How do I create a S3 bucket Lifecycle policy? ›

To create a lifecycle rule

Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/ . In the Buckets list, choose the name of the bucket that you want to create a lifecycle rule for. Choose the Management tab, and choose Create lifecycle rule.

How many storage classes are there in S3? ›

S3 has six storage classes purpose-built for varying access needs to help you optimize costs. With the S3 Storage Classes, S3 Storage Class Analysis, and S3 Lifecycle policies, you can enable storage cost efficiencies without impacting availability or performance.

What is difference between S3 bucket policies and IAM policies? ›

S3 bucket policies (as the name would imply) only control access to S3 resources, whereas IAM policies can specify nearly any AWS action.

Which Amazon S3 bucket policy can limit access to a specific object? ›

You can use the NotPrincipal element of an IAM or S3 bucket policy to limit resource access to a specific set of users. This element allows you to block all users who are not defined in its value array, even if they have an Allow in their own IAM user policies.

What is S3 transfer protocol? ›

S3 is accessed using web-based protocols that use standard HTTP(S) and a REST-based application programming interface (API). Representational state transfer (REST) is a protocol that implements a simple, scalable and reliable way of talking to web-based applications.

What is S3 data transfer? ›

S3 transfer out means you will be charged for the amount of data transferred out from S3 to the internet. When you make an object public, S3 makes that object available to the internet, so every request for that object results in a transfer of that object out to the internet.

Can S3 transfer acceleration be used for downloads? ›

Users can enable data transfer acceleration for a single S3 bucket. This option is available via standard AWS console. After that the following two modes become available for this bucket: accelerated uploading and downloading.

Does S3 replication copy existing objects? ›

S3 Batch Replication can replicate existing objects, including objects that were already replicated, to new destinations. It can also replicate replicas of objects that were created by a replication rule, whereas live S3 Replication only creates replicas of newly written objects in destination buckets.

Can S3 buckets be duplicated? ›

Replication enables automatic, asynchronous copying of objects across Amazon S3 buckets. Buckets that are configured for object replication can be owned by the same AWS account or by different accounts. You can replicate objects to a single destination bucket or to multiple destination buckets.

How do I protect my S3 bucket from unauthorized usage? ›

The easiest way to secure your bucket is by using the AWS Management Console. First select a bucket and click the Properties option within the Actions drop down box. Now select the Permissions tab of the Properties panel. Verify that there is no grant for Everyone or Authenticated Users.

Do S3 buckets have IP addresses? ›

S3 IP addresses are consumed from an AWS-owned network range that differs based on geographical location. Your subnet IPs won't be affected by your S3 endpoints.

What is the maximum amount of data that can be stored in S3? ›

The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 TB. The largest object that can be uploaded in a single PUT is 5 GB.

What is minimum storage duration S3? ›

S3 Standard-IA, and S3 One Zone-IA storage are charged for a minimum storage duration of 30 days, and objects deleted before 30 days incur a pro-rated charge equal to the storage charge for the remaining days.
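As an illustration of that pro-rating (the per-GB rate below is hypothetical, not current AWS pricing): an object deleted after 10 days in Standard-IA is still billed for the full 30-day minimum.

```python
# Pro-rated minimum-duration charge, sketched with a hypothetical rate.
RATE_PER_GB_MONTH = 0.0125   # hypothetical Standard-IA $/GB-month
MINIMUM_DAYS = 30

def storage_charge(size_gb, days_stored):
    """Bill at least the 30-day minimum, pro-rated on a 30-day month."""
    billed_days = max(days_stored, MINIMUM_DAYS)
    return size_gb * RATE_PER_GB_MONTH * billed_days / 30

# 100 GB deleted after 10 days costs the same as 30 days of storage:
print(storage_charge(100, 10))   # 1.25
print(storage_charge(100, 30))   # 1.25
```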

How do you delete objects from S3 bucket with Lifecycle policy? ›

Resolution
  1. Open the Amazon S3 console.
  2. From the list of buckets, choose the bucket that you want to empty.
  3. Choose the Management tab.
  4. Choose Create lifecycle rule.
  5. For Lifecycle rule name, enter a rule name.
  6. For Choose a rule scope, select This rule applies to all objects in the bucket.

How do I delete files older than 7 days from S3? ›

Automatic deletion of data from the entire S3 bucket
  1. Go to Management and click Create lifecycle rule.
  2. We give the name of our rule. ...
  3. If we want the files to be deleted after 30 days, select the “Expire current versions of objects” option and enter the number of days after which the files will be deleted.
