When multiple users and applications share a single bucket, S3 Access Points can be a good fit for channeling the different use cases to the bucket. Although setting up the access points requires some effort, they can protect the bucket efficiently while granting users least-privilege permissions.
1. Problem statement
Imagine that Alice has an S3 bucket with lots of objects in it. Various applications and users need to access the bucket. Some can upload and download objects, while others can only read specific objects. Some applications should only access the bucket from a VPC, while others need access to objects in their own folders.
In this case, multiple identities (users, applications) want to access the same bucket. The clients have different needs and use cases. They might only need access to their folder.
For example, an application (or a real user) uploads data to the bucket, and another application will read and process the objects.
2. Solution
AWS recommends using S3 Access Points for the above scenario.
Access points behave like "gatekeepers" to the bucket and provide the point of contact for identities and applications.
Alice can create one access point for each identity, which means that eventually, she will have multiple access points attached to the same bucket.
Each access point can have a different set of permissions. One will allow the applications to connect to the bucket from a VPC, while another access point will permit users to upload objects to their folders.
Access points have their own ARN, which applications can use instead of the bucket name. They can have policies that define the relevant permissions for the given use case. These permissions can be fine-grained and can even target individual objects in the bucket.
Access points can have either a VPC or an internet origin. When we want a user to upload objects to the bucket, we will create an access point with an internet origin.
Applications can securely connect to the bucket via an access point that has a VPC origin. In this case, traffic will travel over the AWS private network.
3. Example
This example won't describe how to create buckets and access points. I'll leave some links in the References section, which explain everything in detail.
In this example, Bob is the user who wants to upload objects to the folder called Bob, and an application running from a VPC will get and read them.
Let's assume that the name of the bucket is access-point-test. We have already created two access points: bob-access-point-test for Bob and app-vpc-access-point-test for programmatic access.
When we create an access point, we have to specify the bucket it belongs to. The access point will then belong to that bucket and only that bucket.
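For reference, creating the two access points could look something like the sketch below with the SDK V3 (@aws-sdk/client-s3-control). This is only an illustration using the names from this example; the VPC ID is a placeholder.

```javascript
// Sketch: creating the two access points with the AWS SDK for JavaScript V3
import { S3ControlClient, CreateAccessPointCommand } from '@aws-sdk/client-s3-control'

const client = new S3ControlClient({ region: 'us-east-1' })
const accountId = '123456789012'

// Internet-origin access point for Bob
await client.send(new CreateAccessPointCommand({
  AccountId: accountId,
  Name: 'bob-access-point-test',
  Bucket: 'access-point-test',
}))

// VPC-restricted access point for the application
await client.send(new CreateAccessPointCommand({
  AccountId: accountId,
  Name: 'app-vpc-access-point-test',
  Bucket: 'access-point-test',
  VpcConfiguration: { VpcId: 'vpc-0123456789abcdef0' }, // placeholder VPC ID
}))
```

The VpcConfiguration property is what gives app-vpc-access-point-test a VPC origin; without it, the access point gets an internet origin.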
4. Delegating control to the access point
First, let's create a bucket policy that allows the access points in the given account to control access to the bucket:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "*",
      "Resource": [
        "arn:aws:s3:::access-point-test",
        "arn:aws:s3:::access-point-test/*"
      ],
      "Condition": {
        "StringEquals": {
          "s3:DataAccessPointAccount": "123456789012"
        }
      }
    }
  ]
}
```
This policy delegates access control to the access points: it allows all access points in the current account to have full access to the bucket.
5. User access (Bob)
The documentation describes how to set up the access point policy and Bob's identity-based policy. It's straightforward to follow, so I won't spend time repeating it here.
If we set up the policies as in the referenced post, Bob will only be able to access the bucket via the access point.
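As a rough sketch (the exact policies are in the referenced post), Bob's identity-based policy could grant object permissions only on the access point ARN with the /object/ prefix, using the region, account, and Bob folder from this example. Because the policy never names the bucket itself, Bob cannot reach it directly:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowObjectAccessViaTheAccessPointOnly",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:us-east-1:123456789012:accesspoint/bob-access-point-test/object/Bob/*"
    },
    {
      "Sid": "AllowDescribingTheAccessPoint",
      "Effect": "Allow",
      "Action": "s3:GetAccessPoint",
      "Resource": "arn:aws:s3:us-east-1:123456789012:accesspoint/bob-access-point-test"
    }
  ]
}
```

The matching access point policy would allow the same object actions with Bob's user ARN as the Principal, similar to the application policy shown later.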
6. Connecting to the access point from a VPC
Let's say we have an application that wants to access the objects other users (for example, Bob) have uploaded to the bucket. We have already created the access point restricted to a VPC, and let's call it app-vpc-access-point-test.
The application can run on a Lambda function provisioned in a private subnet in a VPC, or it could run from an ECS container or an EC2 instance. What matters is that the route table of the subnet should not have any routes pointing to an internet gateway (i.e., the subnet should be private).
6.1. Adding GetAccessPoint permission to the application role
The application won't connect to the bucket directly. Instead, it will go through its dedicated access point.
This way, our application will need GetAccessPoint permission, and we don't have to define any bucket-related actions.
The application role should contain the following policy:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Resource": "*",
      "Action": "s3:GetAccessPoint",
      "Condition": {
        "StringLike": {
          "s3:DataAccessPointArn": "arn:aws:s3:us-east-1:123456789012:accesspoint/app-vpc-access-point-test"
        }
      }
    }
  ]
}
```
With this permission in its role, the application will be able to connect to the VPC-restricted access point.
6.2. Creating a VPC endpoint
The application in the private subnet will connect to the access point via a Gateway endpoint for S3.
The VPC endpoint should allow all S3 actions. This is not a problem because the default endpoint policy allows all actions on all resources.
Optionally, we can restrict the permissions further by denying every action that doesn't target the access point, as described in this article.
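One possible way to express that restriction (a sketch, not the exact policy from the referenced article) is an endpoint policy with an explicit deny for any S3 request that doesn't go through the dedicated access point:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAll",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "*",
      "Resource": "*"
    },
    {
      "Sid": "DenyRequestsNotUsingTheAccessPoint",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "*",
      "Condition": {
        "ArnNotLikeIfExists": {
          "s3:DataAccessPointArn": "arn:aws:s3:us-east-1:123456789012:accesspoint/app-vpc-access-point-test"
        }
      }
    }
  ]
}
```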
6.3. Allowing the app to access the bucket
We now have to create an access point policy that allows the GetObject action on every object in the app folder:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Allow app to access the bucket",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/service-role/access-point-app-role"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:us-east-1:123456789012:accesspoint/app-vpc-access-point-test/object/app/*"
    }
  ]
}
```
The name of the role that the Lambda function assumes is access-point-app-role-vonqn241, and the resource is the dedicated access point (app-vpc-access-point-test).
6.4. Replacing the bucket name with the access point in the code
Lastly, we should replace the bucket name with the ARN of the access point in the application code. The good thing about access points is that they hide the bucket, so the application doesn't have to know the bucket's name or its ARN.
The application uses the getObject method of the SDK V3 to access the objects from the bucket. The method minimally accepts the bucket name and the object key.
But because the application gets the object via the access point and not directly from the bucket, we will have to replace the name of the bucket in the Bucket property with the ARN of the access point:
```javascript
const s3Object = await getObject({
  // originally, this should be the name of the bucket
  Bucket: 'arn:aws:s3:us-east-1:123456789012:accesspoint/app-vpc-access-point-test',
  Key: 'app/NAME_OF_THE_OBJECT',
})
```
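Depending on how the client is set up, getObject can be the method of the aggregated S3 client or a small wrapper. A minimal sketch of such a wrapper with @aws-sdk/client-s3 (assuming the transformToString helper of recent SDK V3 versions) could look like this:

```javascript
// Sketch of a possible getObject wrapper around the SDK V3 S3 client
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3'

const s3Client = new S3Client({ region: 'us-east-1' })

const getObject = async (params) => {
  // params.Bucket can be a bucket name or, as in our case, an access point ARN
  const response = await s3Client.send(new GetObjectCommand(params))

  // The Body is a stream; transformToString is available in recent SDK V3 versions
  return { ...response, Body: await response.Body.transformToString() }
}
```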
The application from the private subnet should now have access to the bucket via the access point and run without any issues.
7. Considerations
Let me share some (opinionated) notes on access points with you.
7.1. Not necessarily easier
Using access points will not make the permission setup and application development easier.
Bob and the application will still need GetAccessPoint permission in their identity-based policies because they connect to the bucket via the access point.
Having multiple access points will add complexity to the architecture. We should create as many access points as there are use cases. Every access point will need its own access point policy that allows access to the relevant user or role.
7.2. Not always testable in the Console
While user Bob can see the public access points and the objects in the bucket, VPC-restricted access points are not visible and therefore are not testable in the Console.
7.3. Direct access to the bucket
Identities can still directly access the bucket if their policies contain the relevant permissions. Administrators should ensure that the identity-based policies only provide access to the access points and not to the bucket.
7.4. No complex bucket policy
One clear advantage of using access points is that we don't have to create complicated bucket policies.
Without access points, when multiple users and applications access the bucket with different use cases, we would have to create either a complex bucket policy or individual identity-based policies (for same-account access). As mentioned previously, we will still need identity-based policies that allow the clients to access the access points. So at this point, access points provide no clear advantage in terms of work and effort over direct access to the bucket.
But when it comes to restricting the bucket to VPC traffic, the situation will get more complicated. If the bucket has to be shared, we can't lock it down to the VPC endpoint because other users also need it.
7.5. Separation of concerns
This is the best selling point of access points, as they provide a great solution to the "shared bucket with multiple users and applications" scenario.
Access points can be the right choice when it comes to separation of concerns. They move the access control up by one level and provide a security layer in front of the bucket.
8. Summary
S3 Access Points can be a great solution when we need a shared bucket with multiple users and applications. An access point always belongs to exactly one bucket. We can create an access point for each use case, so eventually, we can have multiple access points for the same bucket.
AWS recommends delegating control to the access points, so we should create access point policies.
Access points can be either internet-facing, where users can upload and download objects from the bucket via the Console, or VPC-restricted, where they only accept traffic from the specified VPC.
9. References
S3 Access Points - Introduction to Access Points
Creating a bucket - How to create S3 buckets?
Creating access points - How to create access points?
Access point compatibility with S3 operations - Which S3 API operations can be used with access points