What is Amazon S3?
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. Customers of all sizes and industries can use Amazon S3 to store and protect any amount of data for a range of use cases, such as data lakes, websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics.
The S3 service launched in 2006.
Amazon S3 provides management features so that you can optimize, organize, and configure access to your data to meet your specific business, organizational, and compliance requirements.
Q: How much data can I store in Amazon S3?
The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 TB. The largest object that can be uploaded in a single PUT is 5 GB. For objects larger than 100 MB, customers should consider using the Multipart Upload capability.
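For objects over 100 MB the answer above recommends Multipart Upload, which means a client must pick a part size that respects S3's limits: at most 10,000 parts, each between 5 MB and 5 GB. A minimal sketch of that calculation (the 100 MiB default below is an arbitrary choice for illustration, not an S3 requirement):

```python
# Sketch: choosing a Multipart Upload part size within S3's limits.

MIB = 1024 ** 2
GIB = 1024 ** 3
TIB = 1024 ** 4

MAX_OBJECT = 5 * TIB   # largest S3 object
MAX_PART = 5 * GIB     # largest part (same as the single-PUT limit)
MIN_PART = 5 * MIB     # smallest part (except the final one)
MAX_PARTS = 10_000     # parts allowed per multipart upload

def choose_part_size(object_size: int, preferred: int = 100 * MIB) -> int:
    """Return a part size that keeps the upload within 10,000 parts."""
    if object_size > MAX_OBJECT:
        raise ValueError("S3 objects cannot exceed 5 TB")
    part = max(preferred, MIN_PART)
    while object_size > part * MAX_PARTS:  # too many parts? double the size
        part *= 2
    if part > MAX_PART:
        raise ValueError("cannot fit the object in 10,000 parts of <= 5 GB")
    return part

def part_count(object_size: int, part_size: int) -> int:
    """Number of parts needed (ceiling division)."""
    return -(-object_size // part_size)
```

Even a maximum-size 5 TB object fits comfortably: the loop settles on a part size well under the 5 GB per-part ceiling.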
Amazon S3 Batch Operations
With S3 Batch Operations you can change object metadata and properties, or perform other storage management tasks, such as copying objects between buckets, replacing object tag sets, modifying access controls, and restoring archived objects from S3 Glacier, instead of spending months developing custom applications to perform these tasks.
S3 Batch Operations is a managed solution for performing storage actions like copying and tagging objects at scale, whether for one-time tasks or for recurring batch workloads. It can perform actions across billions of objects and petabytes of data with a single request. To perform work in S3 Batch Operations, you create a job.
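A job is created through the S3 Control API's `CreateJob` call, which bundles an operation, a manifest, and a completion report. Here is a sketch of the request body for a copy job; the account ID, ARNs, and ETag are placeholders, and with boto3 you would pass the resulting dict to `s3control.create_job(**request)`:

```python
# Sketch: building a CreateJob request for an S3 Batch Operations copy job.
# All identifiers below are placeholders, not real resources.

def build_copy_job_request(account_id, manifest_arn, manifest_etag,
                           target_bucket_arn, role_arn, report_bucket_arn):
    """Build a CreateJob request that copies every object in the manifest."""
    return {
        "AccountId": account_id,
        "ConfirmationRequired": True,          # job waits for confirmation
        "Operation": {
            "S3PutObjectCopy": {"TargetResource": target_bucket_arn},
        },
        "Manifest": {
            # CSV manifest listing one bucket,key pair per line
            "Spec": {
                "Format": "S3BatchOperations_CSV_20180820",
                "Fields": ["Bucket", "Key"],
            },
            "Location": {"ObjectArn": manifest_arn, "ETag": manifest_etag},
        },
        "Report": {
            "Bucket": report_bucket_arn,
            "Format": "Report_CSV_20180820",
            "Enabled": True,
            "ReportScope": "FailedTasksOnly",  # only report failures
        },
        "Priority": 10,
        "RoleArn": role_arn,                   # IAM role the job assumes
    }
```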
The short answer is that you can't migrate an S3 bucket from one region to another, but there is a workaround.
Workaround
- Create a new bucket in another region. Note that you cannot use the same bucket name as your current one, because bucket names must be globally unique.
- Copy the contents of the current bucket to the new bucket created in the region you prefer.
- Once copied, delete the old bucket.
- If you prefer to keep the same name in the new region, recreate the bucket under the old name there. Note that you can only do this after the old bucket has been deleted.
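The copy step above can be sketched with boto3-style calls. Here `s3` is assumed to be a boto3 S3 client (`boto3.client("s3")`) and the bucket names are placeholders. Note that `copy_object` is limited to 5 GB per object; larger objects need a multipart copy (boto3's managed `s3.copy` transfer helper handles that):

```python
# Sketch: copying every object from the old bucket to the new one.
# `s3` is an injected boto3-style S3 client; bucket names are placeholders.

def copy_bucket_contents(s3, src_bucket, dst_bucket):
    """Copy all objects from src_bucket into dst_bucket (already created
    in the target region under a different, globally unique name)."""
    copied = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=src_bucket):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            # Server-side copy: the object data never passes through
            # the client machine.
            s3.copy_object(
                CopySource={"Bucket": src_bucket, "Key": key},
                Bucket=dst_bucket,
                Key=key,
            )
            copied.append(key)
    return copied
```

In practice the `aws s3 sync` CLI command does the same job with retries and parallelism built in.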
----------------------------------------------------------------------------------
What is versioning in S3 AWS?
Versioning in Amazon S3 is a means of keeping multiple variants of an object in the same bucket. You can use the S3 Versioning feature to preserve, retrieve, and restore every version of every object stored in your buckets. Versioning-enabled buckets can help you recover objects from accidental deletion or overwrite.
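Enabling versioning is a single API call, and recovering an accidentally deleted object amounts to removing its delete marker. A sketch assuming `s3` is a boto3 S3 client; the bucket/key names and version ID are placeholders. (Once enabled, versioning can later be suspended but never fully removed from a bucket.)

```python
# Sketch: enabling S3 Versioning and undoing an accidental delete.
# `s3` is an injected boto3-style S3 client.

def enable_versioning(s3, bucket):
    """Turn on versioning for the bucket."""
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

def undelete(s3, bucket, key, delete_marker_version_id):
    """Recover an object in a versioned bucket by deleting its delete
    marker; the previous version becomes current again."""
    s3.delete_object(Bucket=bucket, Key=key,
                     VersionId=delete_marker_version_id)
```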
----------------------------------------------------------------------------
Logs:
- Server-level logs (server access logging)
- Object-level logs via CloudTrail data events (incurs additional cost)
--------------------------------------------------
Object Lock works at the object level: it protects objects from unwanted delete operations (write-once-read-many, or WORM, protection).
-------------------------------------------------------
Block public access (bucket level)
Block public access (account level)
------------------------------
Public accessibility
- Whether objects can be accessed by other AWS account holders
- Whether objects are open to public access
------------------------------------------------------------
So what are presigned URLs anyway?
A presigned URL is a URL that you can provide to your users to grant temporary access to a specific S3 object. Using the URL, a user can either read the object or write an object (or update an existing one). The URL contains specific parameters set by your application. A presigned URL uses three parameters to limit access for the user:
- Bucket: The bucket that the object is in (or will be in)
- Key: The name of the object
- Expires: The amount of time that the URL is valid
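To make those three parameters concrete, here is a hand-rolled sketch of the *legacy* query-string signing scheme (Signature Version 2). It is illustrative only: modern S3 requires Signature Version 4, so real applications should call an SDK helper such as boto3's `generate_presigned_url` instead. The credentials below are placeholders.

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import quote

# Illustrative sketch of legacy (SigV2) presigning, showing how the
# bucket, key, and expiry are bound into the signed URL. Do not use in
# production; use your SDK's presign helper (SigV4) instead.

def presign_get(bucket, key, secret_key, access_key, expires_in=3600):
    expires = int(time.time()) + expires_in
    # The signature covers the HTTP method, the expiry, and the resource,
    # so none of them can be altered without invalidating the URL.
    string_to_sign = f"GET\n\n\n{expires}\n/{bucket}/{key}"
    sig = base64.b64encode(
        hmac.new(secret_key.encode(), string_to_sign.encode(),
                 hashlib.sha1).digest()
    ).decode()
    return (f"https://{bucket}.s3.amazonaws.com/{quote(key)}"
            f"?AWSAccessKeyId={access_key}"
            f"&Expires={expires}"
            f"&Signature={quote(sig, safe='')}")
```

Anyone holding the URL can perform exactly one operation (here, GET) on exactly one object until the expiry passes; no AWS credentials are exposed to the user.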
-----------------------------------
S3 bucket properties
1. Static Website Hosting
S3 can be used for static website hosting.
2. Automatic Scaling
Without making any changes to your initial setup, AWS S3 automatically scales up its infrastructure to meet growing demand.
3. High Availability
Amazon offers a 99.99% availability SLA for S3, and the service is designed for 99.999999999% (11 nines) data durability, so there is almost no chance of losing your data. S3 achieves this by redundantly storing data across multiple data centers (Availability Zones).
4. Fast Content Serving with Amazon CloudFront
If you have a globally distributed audience, CloudFront can help you deliver content very efficiently. CloudFront has a global network of data centers, called edge locations. Your website's content is cached at these edge locations, and every visitor is served from the nearest one, decreasing latency and yielding optimal response times.
5. Negligible Cost
Hosting a small to medium-sized static website costs only a few dollars per month. For example, see the sample costing below:
- S3 Standard storage: 1 GB
- PUT and similar requests: 30,000
- GET and similar requests: 30,000
- Data transfer out: 3 GB
- Data transfer in: 3 GB
- Route 53 hosted zones: 1
- Standard queries to Route 53: 1 million per month
- Cost per month: $1.30
S3 Transfer Acceleration
Amazon S3 Transfer Acceleration is a bucket-level feature that enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of the globally distributed edge locations in Amazon CloudFront.
This service is useful when uploading or downloading terabytes of data. Be aware of the cost, however: Transfer Acceleration carries an additional per-GB charge on top of standard transfer pricing, so it is not cheap.
The Amazon S3 notification feature enables you to receive notifications when certain events happen in your bucket. Whenever an action is taken on an S3 object, an event is created. This event can send notifications to SQS, SNS, or AWS Lambda. These events can be used to build event-driven workflows.
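For the Lambda case, the notification configuration is a small dict. A sketch (the function ARN is a placeholder; with boto3 you would pass the result to `put_bucket_notification_configuration`):

```python
# Sketch: notification configuration that invokes a Lambda function
# whenever any object is created in the bucket. The ARN is a placeholder.

def notify_lambda_on_create(function_arn):
    return {
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": function_arn,
            # "s3:ObjectCreated:*" covers Put, Post, Copy, and
            # CompleteMultipartUpload events.
            "Events": ["s3:ObjectCreated:*"],
        }]
    }
```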
Requester Pays
Configuring an Amazon S3 bucket for Requester Pays means that the requester, not the bucket owner, is charged the data transfer costs. Since these costs need to be charged back to an identified AWS account, the objects must be accessed via authenticated requests.
How To Secure S3 Buckets Effectively
Tip 1: Securing Your Data Using S3 Encryption
- Server-Side Encryption: Using this type of encryption, AWS encrypts the raw data you send and stores it on its disks (on data centers). When you try to retrieve your data, AWS reads the data from its disks, decrypts, and sends it back to you.
- Client-Side Encryption: Using this type of encryption, instead of AWS, it’s you who encrypts the data before sending it to AWS. Once you retrieve the data from AWS, you need to decrypt it.
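The server-side option maps to a couple of extra parameters on upload. A sketch showing the two common variants, SSE-S3 (AES-256 with S3-managed keys) and SSE-KMS (a specific KMS key); the key alias is a placeholder, and with boto3 the returned dict would be passed to `put_object(**kwargs)`:

```python
# Sketch: building put_object parameters that request server-side
# encryption. The KMS key identifier is a placeholder.

def put_object_kwargs(bucket, key, body, kms_key_id=None):
    kwargs = {"Bucket": bucket, "Key": key, "Body": body}
    if kms_key_id:
        # SSE-KMS: encrypt with a specific KMS key
        kwargs["ServerSideEncryption"] = "aws:kms"
        kwargs["SSEKMSKeyId"] = kms_key_id
    else:
        # SSE-S3: S3-managed keys (AES-256)
        kwargs["ServerSideEncryption"] = "AES256"
    return kwargs
```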
Tip 2: Managing Access Control
- Limiting IAM User Permissions
- Restricting S3 Access Using Bucket Policies
- Amazon S3 Block Public Access
- Finally, Amazon offers a centralized way to restrict public access to your S3 resources. By using the Amazon S3 Block Public Access setting, you can override any bucket policies and object permissions set before. It should be noted that block public access settings can only be applied to buckets, AWS accounts, and access points. You can learn more about using Amazon S3 Block Public Access in the AWS documentation.
Tip 3: Maximizing S3 Reliability With Replication
- S3 versioning: must be enabled on both the source and destination buckets before you can configure replication.
Tip 4: Enforcing SSL
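Enforcing SSL is usually done with a bucket policy that denies any request where `aws:SecureTransport` is false. A sketch (the bucket name is a placeholder; with boto3 you would `json.dumps` the result into `put_bucket_policy`):

```python
# Sketch: a bucket policy that rejects all non-HTTPS requests.
# The bucket name is a placeholder.

def ssl_only_policy(bucket):
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            # Cover both bucket-level and object-level operations
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
            # Matches requests made over plain HTTP
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }],
    }
```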
Tip 5: Enhancing S3 Security Using Logging
Tip 6: Putting S3 Object Locking To Work
Tip 7: Using Presigned URLs
Ref:
- https://www.youtube.com/watch?v=7M3s_ix9ljE
- https://www.youtube.com/watch?v=L3dYocCSU-E
- https://medium.com/panther-labs/how-to-secure-s3-buckets-effectively-9c1a3a7178bb
- https://www.youtube.com/watch?v=IUdkEuvihOk
- https://aws.amazon.com/blogs/security/iam-policies-and-bucket-policies-and-acls-oh-my-controlling-access-to-s3-resources/