I need to upload a zip file to an S3 bucket after its creation. I'm aware of the s3_deployment package, but it doesn't fit my use case because I need the file to be uploaded only once, on stack creation. The s3_deployment package would upload the zip on every update.
I have the following custom resource defined, however I'm not sure how to pass the body of the file to the custom resource. I've tried opening the file in binary mode, but that returns an error.
app_data_bootstrap = AwsCustomResource(self, "BootstrapData",
    on_create={
        "service": "S3",
        "action": "putObject",
        "parameters": {
            "Body": open('app_data.zip', 'rb'),
            "Bucket": f"my-app-data",
            "Key": "app_data.zip",
        },
        "physical_resource_id": PhysicalResourceId.of("BootstrapDataBucket")
    },
    policy=AwsCustomResourcePolicy.from_sdk_calls(resources=AwsCustomResourcePolicy.ANY_RESOURCE)
)
I don't believe that's possible unless you write a custom script that runs before your cdk deploy to upload your local files to an intermediary S3 bucket. Then you can write a custom resource that, on its on_create event, copies the content of the intermediary bucket into the bucket that was created via CDK.
Read this paragraph from the s3_deployment page in the CDK docs:
This is what happens under the hood:
1. When this stack is deployed (either via cdk deploy or via CI/CD), the contents of the local website-dist directory will be archived and uploaded to an intermediary assets bucket. If there is more than one source, they will be individually uploaded.
2. The BucketDeployment construct synthesizes a custom CloudFormation resource of type Custom::CDKBucketDeployment into the template. The source bucket/key is set to point to the assets bucket.
3. The custom resource downloads the .zip archive, extracts it and issues aws s3 sync --delete against the destination bucket (in this case websiteBucket). If there is more than one source, the sources will be downloaded and merged pre-deployment at this step.
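For context, the construct the docs are describing is used roughly like this (a sketch in CDK Python; the bucket and the website-dist directory are the placeholders from the docs, not your stack):
from aws_cdk import aws_s3 as s3, aws_s3_deployment as s3deploy

website_bucket = s3.Bucket(self, "WebsiteBucket")

# Archives ./website-dist, uploads it to the CDK assets bucket, and syncs it
# into website_bucket through the Custom::CDKBucketDeployment resource.
s3deploy.BucketDeployment(self, "DeployWebsite",
    sources=[s3deploy.Source.asset("./website-dist")],
    destination_bucket=website_bucket,
)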
So in order for you to replicate step 1, you have to write a small script that creates an intermediary bucket and uploads your local files to it. A sample of that script can look like this:
#!/bin/sh
aws s3 mb s3://<intermediary_bucket> --region <region_name>
aws s3 sync <local_files_dir> s3://<intermediary_bucket>
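If you would rather keep that pre-deploy step in Python, a minimal boto3 sketch could look like this (the bucket name, region and file path are placeholders you would replace with your own values):
import boto3

INTERMEDIARY_BUCKET = "my-intermediary-bucket"  # placeholder bucket name
REGION = "us-east-1"                            # placeholder region

s3 = boto3.client("s3", region_name=REGION)

# Create the intermediary bucket.
s3.create_bucket(
    Bucket=INTERMEDIARY_BUCKET,
    # CreateBucketConfiguration={"LocationConstraint": REGION},  # required outside us-east-1
)

# Upload the local archive that the custom resource will later copy.
s3.upload_file("app_data.zip", INTERMEDIARY_BUCKET, "app_data.zip")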
Then your custom resource can be something like this:
*Note that this will work for copying one object; you can change the code to copy multiple objects.
import json
import boto3
import cfnresponse


def lambda_handler(event, context):
    print('Received request:\n%s' % json.dumps(event, indent=4))
    resource_properties = event['ResourceProperties']  # could carry bucket/key names from the template
    if event['RequestType'] == 'Create':  # What happens when the resource is created
        try:
            s3 = boto3.resource('s3')
            copy_source = {
                'Bucket': 'intermediary_bucket',       # bucket the pre-deploy script uploaded to
                'Key': 'path/to/filename.extension'    # object key in the intermediary bucket
            }
            destination = s3.Bucket('destination_bucket')  # bucket created by your CDK stack
            destination.Object('path/to/filename.extension').copy(copy_source)
        except Exception:
            cfnresponse.send(event, context, cfnresponse.FAILED, {})
            raise
        else:
            cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    elif event['RequestType'] == 'Delete':  # What happens when the resource is deleted
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    else:  # 'Update' — nothing to copy, but CloudFormation still expects a response
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
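To wire that handler up, you deploy it as a Lambda function and point a custom resource at it. A rough CDK (Python) sketch, assuming the handler above sits in a local lambda/ directory and that the bucket names are the same placeholders as above:
from aws_cdk import core, aws_iam as iam, aws_lambda as _lambda

copy_fn = _lambda.Function(self, "CopyAppDataFn",
    runtime=_lambda.Runtime.PYTHON_3_8,
    handler="index.lambda_handler",
    code=_lambda.Code.from_asset("lambda"),   # directory containing the handler above
    timeout=core.Duration.minutes(5),
)

# Allow the function to read from the intermediary bucket and write to the destination bucket.
copy_fn.add_to_role_policy(iam.PolicyStatement(
    actions=["s3:GetObject", "s3:PutObject"],
    resources=[
        "arn:aws:s3:::intermediary_bucket/*",
        "arn:aws:s3:::destination_bucket/*",
    ],
))

# CloudFormation invokes the function directly on create/update/delete;
# the handler replies via cfnresponse.
core.CustomResource(self, "BootstrapData", service_token=copy_fn.function_arn)
Note that the cfnresponse module is only bundled automatically when the Lambda code is supplied inline (CloudFormation's ZipFile); if you package the handler from an asset as above, include a copy of cfnresponse alongside it.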
An alternative to all of this is to open an issue in AWS CDK's GitHub repo and ask them to support your use case.