We now offer Amazon Web Services (AWS) as an option for Bring Your Own Cloud (BYOC) deployments. AWS is a top-tier cloud platform recognized for its flexibility, broad suite of tools, and extensive global infrastructure. It allows teams to develop and operate applications with strong reliability, robust security, and efficient performance.
Through this integration, you can use your existing AWS account to host and manage Zephyr deployments within your own cloud environment, giving you greater control and flexibility over your infrastructure.
In this guide, we’ll walk you through how to set up the integration and start running Zephyr on your own AWS infrastructure.
Prerequisites
INFO
A registered AWS account with permission to create the resources used by the integration (IAM, Lambda, CloudFront, S3, DynamoDB, Secrets Manager, and CloudWatch)
A registered Zephyr account
A registered domain
Log in to the Zephyr Dashboard. After signing in, select your organization.
Locate AWS under Deployment Integration
Select Settings from the top navigation tabs.
In the left sidebar, select Deployment Integration.
Choose Available to find AWS, then click Add integration.
Before proceeding, you will need some data from AWS:
1. Credentials
Go to IAM -> User groups in the menu and click the Create group button
Enter a group name
Choose the following permissions:
AmazonCloudWatchEvidentlyFullAccess
AmazonDynamoDBFullAccess
AmazonS3FullAccess
AWSLambda_FullAccess
CloudFrontFullAccess
CloudWatchLogsFullAccess
IAMFullAccess
SecretsManagerReadWrite
Click the Create user group button
Go to IAM -> Users in the menu and click the Create user button
Enter a user name and click Next
Choose Add user to group, select your group, and click Next
Click the Create user button
Click on your user name
Go to the Security credentials tab
Click the Create access key button
Choose Command Line Interface (CLI) and click Next
Click Create access key
Download your credentials and add [default] as a header, so the file will look like:
Secure Your Credentials
Keep your credentials secure. Never commit credentials to version control or share them publicly.
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_ACCESS_KEY_SECRET
AWS API Token
The contents of this file are what Zephyr treats as your AWS API Token.
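If you already have administrator credentials configured locally, the same group, user, and access key can be created with the AWS CLI instead of the console. This is a minimal sketch under that assumption; the group and user names (zephyr-byoc-group, zephyr-byoc-user) are placeholders, and the policy list mirrors the permissions above.

```bash
# Placeholder names -- use whatever fits your naming convention
GROUP_NAME=zephyr-byoc-group
USER_NAME=zephyr-byoc-user

# Create the group and attach the managed policies listed above
aws iam create-group --group-name "$GROUP_NAME"
for policy in AmazonCloudWatchEvidentlyFullAccess AmazonDynamoDBFullAccess \
  AmazonS3FullAccess AWSLambda_FullAccess CloudFrontFullAccess \
  CloudWatchLogsFullAccess IAMFullAccess SecretsManagerReadWrite; do
  aws iam attach-group-policy --group-name "$GROUP_NAME" \
    --policy-arn "arn:aws:iam::aws:policy/$policy"
done

# Create the user, add it to the group, and issue an access key for the CLI
aws iam create-user --user-name "$USER_NAME"
aws iam add-user-to-group --user-name "$USER_NAME" --group-name "$GROUP_NAME"
aws iam create-access-key --user-name "$USER_NAME" \
  --query 'AccessKey.[AccessKeyId,SecretAccessKey]' --output text
```

Paste the returned key ID and secret into the [default] block shown above.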
2. Certificate ARN
Choose the us-east-1 region
Go to Certificate Manager in the menu and click the Request button
Choose Request a public certificate and click Next
Add ze.yourdomain.com and *.ze.yourdomain.com as Fully qualified domain names, and keep the defaults for the remaining inputs (Disable export, DNS validation - recommended, RSA 2048)
Validate ownership with the DNS CNAME records
Wait until the domain is validated
Copy the ARN value
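The certificate can also be requested from the CLI. A minimal sketch, assuming yourdomain.com stands in for your real domain; the console flow above produces the same ARN.

```bash
# Request a public certificate in us-east-1 (the region CloudFront requires)
CERT_ARN=$(aws acm request-certificate \
  --region us-east-1 \
  --domain-name ze.yourdomain.com \
  --subject-alternative-names "*.ze.yourdomain.com" \
  --validation-method DNS \
  --query CertificateArn --output text)

# Print the CNAME records to add to your DNS zone for validation
aws acm describe-certificate --region us-east-1 --certificate-arn "$CERT_ARN" \
  --query 'Certificate.DomainValidationOptions[].ResourceRecord'

# Once the CNAMEs are in place, wait for the certificate to be issued
aws acm wait certificate-validated --region us-east-1 --certificate-arn "$CERT_ARN"
echo "$CERT_ARN"   # the Certificate ARN to paste into the Zephyr Dashboard
```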
Back on the Zephyr Dashboard, after clicking Add Integration under AWS, fill in the following fields:

| Field | Description |
| --- | --- |
| Integration Name | A unique name within your organization, used as a slug. |
| Integration Display Name | The name of the integration shown on the dashboard. |
| API Token | Obtained from AWS. See the instructions above for creating your API token. |
| Domain | Your domain. |
| Certificate ARN | The certificate ARN for your domain (ze.your.domain). |
| Set Integration as Default | When set as default, all Zephyr deployments use this integration until a new default integration is set. Deployments using the integration won't work until the AWS worker (Lambda@Edge function) and CloudFront distribution become active. |
Integration deployment process
After you create the integration, it can take up to 10 minutes for the deployment to complete.
Validate domain and set up DNS
Go to CloudFront -> Distributions
Find your domain
Copy the value from the Domain name (standard) column and add the DNS records described in the table below
| Subdomain | Type | Value |
| --- | --- | --- |
| ze.yourdomain.com | CNAME | < Domain name value > |
| *.ze.yourdomain.com | CNAME | < Domain name value > |
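Once the records have propagated, you can verify them from a terminal. A quick check with dig (any DNS lookup tool works); the hostnames below are placeholders for your actual domain.

```bash
# Both lookups should return the CloudFront "Domain name (standard)" value
dig +short CNAME ze.yourdomain.com
dig +short CNAME test-app.ze.yourdomain.com   # any label exercises the *.ze.yourdomain.com wildcard
```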
What Will Be Created on Your AWS Account?
When AWS is added as your provider on Zephyr, the following resources will be created in your AWS account:
1. IAM roles and policies
ze-yourdomain-com_lambda_edge_policy - Lambda@Edge policy
ze-yourdomain-com_lambda_role_name - Lambda execution role
ze-yourdomain-com-store-access-secrets - secret store access policy
2. DynamoDB tables
ze-yourdomain-com_envs
ze-yourdomain-com_snapshots
3. Secret store
4. S3 buckets
ze-yourdomain-com-bucket
additional buckets will be created as you deploy: one bucket per application
5. Lambda@Edge function
ze-yourdomain-com (for uploading and serving assets)
6. CloudFront function
ze-yourdomain-com-viewer-request
7. CloudFront distribution
8. CloudWatch log groups
/aws/lambda/us-east-1.ze-yourdomain-com
/aws/lambda/ze-yourdomain-com
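If you want to confirm these resources once the deployment finishes, the checks below are a minimal sketch with the AWS CLI; ze-yourdomain-com is the placeholder prefix used above and should be replaced with the prefix derived from your own domain.

```bash
PREFIX=ze-yourdomain-com   # placeholder prefix derived from your domain

aws dynamodb describe-table --table-name "${PREFIX}_envs" --query 'Table.TableStatus'
aws dynamodb describe-table --table-name "${PREFIX}_snapshots" --query 'Table.TableStatus'
aws s3api head-bucket --bucket "${PREFIX}-bucket" && echo "bucket exists"
aws lambda get-function --function-name "$PREFIX" --region us-east-1 \
  --query 'Configuration.State'
aws cloudfront list-distributions \
  --query 'DistributionList.Items[].{Id:Id,Domain:DomainName,Aliases:Aliases.Items}'
```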
Clean Uninstall and Reset
Warning
Zephyr Cloud does not manage deletion of API tokens or any AWS account properties.
Assets and information on your AWS account are immutable by default. During a clean uninstall, previously deployed assets and information are unrecoverable.
To delete an existing AWS integration, follow these steps:
Ensure you have the AWS CLI installed and authenticated
Check the AWS CLI documentation for more info
Ensure you have jq installed
Check the jq documentation for more info
See Validate domain and set up DNS above.
Save the two scripts below as cleanup-stage1.sh and cleanup-stage2.sh (the file names used in the commands at the end of this section)
#!/usr/bin/env bash
set -euo pipefail
export AWS_PAGER=""
export AWS_DEFAULT_OUTPUT=json
# ================== SETTINGS (FILL IN) ==================
AWS_REGION_MAIN="us-east-1"
AWS_REGION_EDGE="us-east-1"
CF_DIST_ID="${CF_DIST_ID:-DISTRIBUTION_ID}"
CF_FUNC_NAME="${CF_FUNC_NAME:-}"
# Normalize DDB_TABLES into an array (supports spaces/commas/newlines)
declare -a DDB_TABLES_ARR=()
if [[ -n "${DDB_TABLES:-}" ]]; then
_tmp="${DDB_TABLES//$'\n'/ }"
_tmp="${_tmp//,/ }"
read -r -a DDB_TABLES_ARR <<< "$_tmp"
fi
SECRET_NAME="${SECRET_NAME:-your/secret/name}"
S3_BUCKET="${S3_BUCKET:-your-bucket-name}"
# Normalize LOG_GROUPS into an array (supports spaces/commas/newlines)
declare -a LOG_GROUPS_ARR=()
if [[ -n "${LOG_GROUPS:-}" ]]; then
_lg_tmp="${LOG_GROUPS//$'\n'/ }"
_lg_tmp="${_lg_tmp//,/ }"
read -r -a LOG_GROUPS_ARR <<< "$_lg_tmp"
fi
IAM_POLICY_SM_ARN="${IAM_POLICY_SM_ARN:-}"
# =========================================================
log () {
echo "[$( date '+%F %T')] $*"
}
command -v aws > /dev/null || { log "aws cli not found" ; exit 1 ; }
command -v jq > /dev/null || { log "jq not found" ; exit 1 ; }
export AWS_DEFAULT_REGION="$AWS_REGION_MAIN"
log "==> PHASE 1 started"
# --- 1) CloudFront: detach functions, disable, delete, wait until gone ---
if [[ -n "$CF_DIST_ID" && "$CF_DIST_ID" != "DISTRIBUTION_ID" ]]; then
log "Detaching functions, disabling and deleting CloudFront Distribution: $CF_DIST_ID"
DIST_CONF_JSON="$( aws cloudfront get-distribution-config --id "$CF_DIST_ID" 2> /dev/null || true )"
if [[ -n "$DIST_CONF_JSON" ]]; then
ETAG=$( echo "$DIST_CONF_JSON" | jq -r '.ETag' )
ORIG_CONF=$( echo "$DIST_CONF_JSON" | jq -r '.DistributionConfig' )
NEW_CONF=$( echo "$ORIG_CONF" | jq '
.DefaultCacheBehavior.LambdaFunctionAssociations |= (if . then .Quantity=0 | .Items=[] else . end) |
.DefaultCacheBehavior.FunctionAssociations |= (if . then .Quantity=0 | .Items=[] else . end) |
(if .CacheBehaviors and .CacheBehaviors.Quantity>0 then
.CacheBehaviors.Items |= map(
.LambdaFunctionAssociations |= (if . then .Quantity=0 | .Items=[] else . end) |
.FunctionAssociations |= (if . then .Quantity=0 | .Items=[] else . end)
)
else . end) |
.Enabled=false
' )
aws cloudfront update-distribution \
--id "$CF_DIST_ID" \
--if-match "$ETAG" \
--distribution-config "$NEW_CONF" > /dev/null
log "Waiting for Distribution status=Deployed (after disable)"
for i in {1..90}; do
STATUS=$( aws cloudfront get-distribution --id "$CF_DIST_ID" 2> /dev/null | jq -r '.Distribution.Status' || echo "unknown" )
log " attempt $i: status=$STATUS"
[[ "$STATUS" == "Deployed" ]] && break
sleep 10
done
# Refresh ETag after disable
NEW_ETAG=$( aws cloudfront get-distribution-config --id "$CF_DIST_ID" | jq -r '.ETag' )
log "Deleting CloudFront Distribution: $CF_DIST_ID"
if aws cloudfront delete-distribution --id "$CF_DIST_ID" --if-match "$NEW_ETAG" > /dev/null 2>&1 ; then
log "Delete request accepted. Waiting until distribution is fully removed…"
# wait up to ~30 minutes (180 * 10s)
for i in {1..180}; do
# If get-distribution returns error containing NoSuchDistribution — it's gone
if OUT=$( aws cloudfront get-distribution --id "$CF_DIST_ID" 2>&1 ); then
# still exists; log status
STATUS=$( echo "$OUT" | jq -r '.Distribution.Status' 2> /dev/null || echo "unknown" )
log " attempt $i: still present (status=$STATUS)"
else
if echo "$OUT" | grep -qi 'NoSuchDistribution' ; then
log "Distribution $CF_DIST_ID no longer exists."
break
fi
# Some other transient error
log " attempt $i: transient error: $( echo "$OUT" | head -n1 )"
fi
sleep 10
done
else
log "Delete request failed. See previous error."
fi
else
log "Could not get distribution config — maybe already deleted. Skipping."
fi
else
log "CF_DIST_ID empty — skipping CloudFront Distribution step."
fi
# --- 2) Delete CloudFront Function (if exists) ---
if [[ -n "$CF_FUNC_NAME" ]]; then
log "Deleting CloudFront Function: $CF_FUNC_NAME"
DESC="$( aws cloudfront describe-function --name "$CF_FUNC_NAME" 2> /dev/null || true )"
if [[ -n "$DESC" ]]; then
FETAG=$( echo "$DESC" | jq -r '.ETag' )
aws cloudfront delete-function --name "$CF_FUNC_NAME" --if-match "$FETAG" > /dev/null || true
else
log "CloudFront Function not found — skipping."
fi
fi
# --- 3) DynamoDB: delete tables (reliable, with wait) ---
log "DynamoDB tables candidates: ${DDB_TABLES_ARR[*]:-(none)}"
for tbl in "${DDB_TABLES_ARR[@]:-}"; do
[[ -z "$tbl" ]] && continue
log "Deleting DynamoDB table: $tbl"
if aws dynamodb describe-table --table-name "$tbl" > /dev/null 2>&1 ; then
if aws dynamodb delete-table --table-name "$tbl" > /dev/null 2>&1 ; then
:
else
log " delete-table returned non-zero (maybe already DELETING). Will still wait."
fi
log " waiting until it is deleted..."
deleted=0
for i in {1..5}; do
if aws dynamodb wait table-not-exists --table-name "$tbl" > /dev/null 2>&1 ; then
deleted=1
log " deleted: $tbl"
break
fi
log " still deleting ($i/5)..."
sleep 3
done
if [[ $deleted -ne 1 ]]; then
log " WARNING: table $tbl not confirmed deleted after retries. Check AWS Console/CLI."
fi
else
log " table not found — skipping."
fi
done
# --- 4) Secrets Manager: force delete without recovery ---
if [[ -n "$SECRET_NAME" ]]; then
log "Force deleting secret: $SECRET_NAME"
aws secretsmanager delete-secret \
--secret-id "$SECRET_NAME" \
--force-delete-without-recovery > /dev/null || true
else
log "SECRET_NAME empty — skipping secret deletion."
fi
# --- 5) S3: empty and delete bucket (including versioned objects) ---
if [[ -n "$S3_BUCKET" ]]; then
log "Completely cleaning and deleting S3 bucket: s3://$S3_BUCKET"
if ! aws s3api head-bucket --bucket "$S3_BUCKET" > /dev/null 2>&1 ; then
log " bucket not found — skipping."
else
if aws s3api get-bucket-versioning --bucket "$S3_BUCKET" | jq -e '.Status=="Enabled"' > /dev/null 2>&1 ; then
log " Bucket is versioned — cleaning versions:"
while : ; do
VERS="$( aws s3api list-object-versions --bucket "$S3_BUCKET" --max-items 1000 )"
CNT=$( echo "$VERS" | jq '[.Versions[]?, .DeleteMarkers[]?] | length' )
[[ "$CNT" -eq 0 ]] && break
IDS=$( echo "$VERS" | jq -c '{Objects: ([.Versions[]? , .DeleteMarkers[]?] | map({Key:.Key, VersionId:.VersionId})), Quiet:true}' )
echo "$IDS" | aws s3api delete-objects --bucket "$S3_BUCKET" --delete file:///dev/stdin > /dev/null || true
done
fi
aws s3 rm "s3://$S3_BUCKET" --recursive > /dev/null || true
aws s3api delete-bucket-policy --bucket "$S3_BUCKET" 2> /dev/null || true
aws s3api delete-bucket-lifecycle --bucket "$S3_BUCKET" 2> /dev/null || true
aws s3api put-public-access-block --bucket "$S3_BUCKET" --public-access-block-configuration '{
"BlockPublicAcls": true, "IgnorePublicAcls": true, "BlockPublicPolicy": true, "RestrictPublicBuckets": true
}' 2> /dev/null || true
aws s3api delete-bucket --bucket "$S3_BUCKET" > /dev/null || true
fi
else
log "S3_BUCKET empty — skipping."
fi
# --- 6) CloudWatch Logs: delete log groups ---
log "CloudWatch log groups candidates: ${LOG_GROUPS_ARR[*]:-(none)}"
for lg in "${LOG_GROUPS_ARR[@]:-}"; do
[[ -z "$lg" ]] && continue
log "Deleting log group: $lg"
aws logs delete-log-group --log-group-name "$lg" 2> /dev/null || true
done
# --- 7) IAM policy (Secrets Manager): detach from roles and delete (if custom) ---
if [[ -n "$IAM_POLICY_SM_ARN" ]]; then
log "Processing IAM policy: $IAM_POLICY_SM_ARN"
aws iam list-entities-for-policy --policy-arn "$IAM_POLICY_SM_ARN" --entity-filter Role \
--query 'PolicyRoles[].RoleName' --output text 2> /dev/null | tr '\t' '\n' | while read -r R ; do
[[ -n "$R" ]] && aws iam detach-role-policy --role-name "$R" --policy-arn "$IAM_POLICY_SM_ARN" 2> /dev/null || true
done
if [[ "$IAM_POLICY_SM_ARN" == arn:aws:iam::aws:policy/* ]]; then
log " This is an AWS-managed policy; deletion skipped (detach performed)."
else
aws iam list-policy-versions --policy-arn "$IAM_POLICY_SM_ARN" \
--query 'Versions[?IsDefaultVersion==`false`].VersionId' --output text 2> /dev/null \
| tr '\t' '\n' | while read -r VID ; do
[[ -n "$VID" ]] && aws iam delete-policy-version --policy-arn "$IAM_POLICY_SM_ARN" --version-id "$VID" 2> /dev/null || true
done
aws iam delete-policy --policy-arn "$IAM_POLICY_SM_ARN" 2> /dev/null || true
fi
else
log "IAM_POLICY_SM_ARN empty — skipping policy step."
fi
# --- 8) Delete CloudFront Distribution itself (after disable) ---
if [[ -n "$CF_DIST_ID" && "$CF_DIST_ID" != "DISTRIBUTION_ID" ]]; then
log "Deleting CloudFront Distribution: $CF_DIST_ID"
GET=$( aws cloudfront get-distribution-config --id "$CF_DIST_ID" 2> /dev/null || true )
if [[ -n "$GET" ]]; then
ETAG=$( echo "$GET" | jq -r '.ETag' )
aws cloudfront delete-distribution --id "$CF_DIST_ID" --if-match "$ETAG" > /dev/null || true
else
log " config not found — probably already deleted."
fi
fi
log "==> PHASE 1 completed"
#!/usr/bin/env bash
set -euo pipefail
# --- convenience ---
export AWS_PAGER=""
export AWS_DEFAULT_OUTPUT=json
# ============== SETTINGS VIA ENV VARIABLES ==============
AWS_REGION_EDGE="us-east-1"        # Lambda@Edge region (always us-east-1, but can be overridden)
LAMBDA_NAME="${LAMBDA_NAME:-}"     # Lambda function name or ARN (required)
POLICY_ARN="${POLICY_ARN:-}"       # Custom policy ARN to delete (optional)
IAM_ROLE_ARN="${IAM_ROLE_ARN:-}"   # Lambda role ARN (optional, will be fetched from config if empty)
# ==============================================================
command -v aws > /dev/null || { echo "aws cli not found" ; exit 1 ; }
command -v jq > /dev/null || { echo "jq not found" ; exit 1 ; }
[[ -n "$LAMBDA_NAME" ]] || { echo "LAMBDA_NAME is empty — specify function name/ARN" ; exit 1 ; }
export AWS_DEFAULT_REGION="$AWS_REGION_EDGE"
# --- helpers ---
log () {
echo "[$( date '+%F %T')] $*"
}
retry () {
local tries="$1" ; shift
local sleep_s="$1" ; shift
local i
for ((i = 1; i <= tries; i++)); do
if "$@" ; then return 0 ; fi
log " retry $i/$tries..."
sleep "$sleep_s"
done
return 1
}
fn_name_from_any () {
local in="$1"
if [[ "$in" == arn:aws:lambda:*:*:function:* ]]; then
local base="${in##*:function:}"
echo "${base%%:*}"
else
echo "$in"
fi
}
role_name_from_arn () {
local arn="$1"
[[ -z "$arn" ]] && return 0
echo "${arn##*/}"
}
is_aws_managed_policy () {
[[ "$1" == arn:aws:iam::aws:policy/* ]]
}
# Check how many CloudFront distributions still reference this Lambda *version* ARN
cf_refs_for_lambda_version () {
local version_arn="$1"
aws cloudfront list-distributions-by-lambda-function \
--lambda-function-arn "$version_arn" \
--query 'DistributionList.Items | length(@)' \
--output text 2> /dev/null || echo 0
}
# Wait until Lambda@Edge version can be deleted (no CF references + replicas gone), then delete it.
delete_lambda_version_with_wait () {
local version_arn="$1"
local max_tries="${2:-120}" # up to ~20–30 min total with 10–15 sec interval
local sleep_s="${3:-15}"
log " delete version: $version_arn"
for ((i = 1; i <= max_tries; i++)); do
# 1) Make sure CloudFront no longer references this version
local refs
refs="$( cf_refs_for_lambda_version "$version_arn")" || refs=0
if [[ "$refs" != "0" ]]; then
log " still referenced by $refs CloudFront distribution(s). Waiting..."
sleep "$sleep_s"
continue
fi
# 2) Try to delete. If it returns a replicated error — wait and retry.
if aws lambda delete-function --function-name "$version_arn" > /dev/null 2>&1 ; then
log " deleted: $version_arn"
return 0
else
log " still replicated or in-flight; waiting..."
sleep "$sleep_s"
fi
done
log " WARNING: version not deleted after $max_tries tries: $version_arn"
return 1
}
delete_policy_fully () {
local policy_arn="$1"
[[ -z "$policy_arn" ]] && return 0
if is_aws_managed_policy "$policy_arn" ; then
log " [policy] AWS-managed: $policy_arn — cannot delete, only detach."
return 0
fi
log " [policy] Detaching from all roles/groups/users…"
aws iam list-entities-for-policy --policy-arn "$policy_arn" --entity-filter Role \
--query 'PolicyRoles[].RoleName' --output text 2> /dev/null | tr '\t' '\n' | \
while read -r R ; do [[ -n "$R" ]] && aws iam detach-role-policy --role-name "$R" --policy-arn "$policy_arn" 2> /dev/null || true ; done
aws iam list-entities-for-policy --policy-arn "$policy_arn" --entity-filter Group \
--query 'PolicyGroups[].GroupName' --output text 2> /dev/null | tr '\t' '\n' | \
while read -r G ; do [[ -n "$G" ]] && aws iam detach-group-policy --group-name "$G" --policy-arn "$policy_arn" 2> /dev/null || true ; done
aws iam list-entities-for-policy --policy-arn "$policy_arn" --entity-filter User \
--query 'PolicyUsers[].UserName' --output text 2> /dev/null | tr '\t' '\n' | \
while read -r U ; do [[ -n "$U" ]] && aws iam detach-user-policy --user-name "$U" --policy-arn "$policy_arn" 2> /dev/null || true ; done
log " [policy] Deleting non-default versions…"
aws iam list-policy-versions --policy-arn "$policy_arn" \
--query 'Versions[?IsDefaultVersion==`false`].VersionId' --output text 2> /dev/null | tr '\t' '\n' | \
while read -r VID ; do [[ -n "$VID" ]] && aws iam delete-policy-version --policy-arn "$policy_arn" --version-id "$VID" 2> /dev/null || true ; done
log " [policy] Deleting policy itself"
aws iam delete-policy --policy-arn "$policy_arn" 2> /dev/null || true
}
log "==> PHASE 2 start (Lambda@Edge + IAM)"
FN_NAME="$( fn_name_from_any "$LAMBDA_NAME")"
ROLE_FROM_CFG=""
# --- Lambda: cleanup URLs, aliases, versions, base function ---
if aws lambda get-function --function-name "$FN_NAME" > /dev/null 2>&1 ; then
CFG="$( aws lambda get-function --function-name "$FN_NAME")"
BASE_ARN="$( echo "$CFG" | jq -r '.Configuration.FunctionArn')"
ROLE_FROM_CFG="$( echo "$CFG" | jq -r '.Configuration.Role')"
log "Function: $FN_NAME"
log " ARN: $BASE_ARN"
log " Role (from config): ${ROLE_FROM_CFG:-<none>}"
# Function URLs
log "Cleaning function URL configs"
URLS_JSON="$( aws lambda list-function-url-configs --function-name "$FN_NAME" 2> /dev/null || echo '{}')"
echo "$URLS_JSON" | jq -r '.FunctionUrlConfigs[]? | .Qualifier // ""' | while read -r Q ; do
if [[ -z "$Q" ]]; then
aws lambda delete-function-url-config --function-name "$FN_NAME" 2> /dev/null || true
else
aws lambda delete-function-url-config --function-name "$FN_NAME" --qualifier "$Q" 2> /dev/null || true
fi
done
# Aliases
log "Cleaning aliases"
aws lambda list-aliases --function-name "$FN_NAME" \
| jq -r '.Aliases[].Name // empty' | while read -r ALIAS ; do
[[ -z "$ALIAS" ]] && continue
aws lambda delete-alias --function-name "$FN_NAME" --name "$ALIAS" 2> /dev/null || true
done
# Versions (except $LATEST) — delete with strong waiting until replicas are gone
log 'Deleting versions (except $LATEST)'
aws lambda list-versions-by-function --function-name "$FN_NAME" \
| jq -r '.Versions[].Version' \
| grep -v '^\$LATEST$' \
| while read -r V ; do
[[ -z "$V" ]] && continue
V_ARN="${BASE_ARN}:${V}"
delete_lambda_version_with_wait "$V_ARN" 120 15 || true
done
# Finally delete base function (may need retries too)
log "Deleting base function"
retry 120 15 aws lambda delete-function --function-name "$FN_NAME" 2> /dev/null || {
log " WARN: base function not deleted yet (replication/retention). Try later:"
log " aws lambda delete-function --function-name \"$FN_NAME\""
}
else
log "Function not found — skipping Lambda deletion."
fi
# --- Policy: detach & delete ---
if [[ -n "$POLICY_ARN" ]]; then
log "Processing policy: $POLICY_ARN"
delete_policy_fully "$POLICY_ARN"
else
log "POLICY_ARN not set — skipping policy step."
fi
# --- Role: detach & delete ---
ROLE_ARN="${IAM_ROLE_ARN:-$ROLE_FROM_CFG}"
ROLE_NAME="$( role_name_from_arn "$ROLE_ARN")"
if [[ -n "$ROLE_NAME" ]]; then
log "Cleaning and deleting role: $ROLE_NAME"
# detach all managed policies
aws iam list-attached-role-policies --role-name "$ROLE_NAME" \
--query 'AttachedPolicies[].PolicyArn' --output text 2> /dev/null | tr '\t' '\n' | \
while read -r PARM ; do
[[ -z "$PARM" ]] && continue
aws iam detach-role-policy --role-name "$ROLE_NAME" --policy-arn "$PARM" 2> /dev/null || true
done
# delete inline policies
aws iam list-role-policies --role-name "$ROLE_NAME" \
--query 'PolicyNames[]' --output text 2> /dev/null | tr '\t' '\n' | \
while read -r PNAME ; do
[[ -z "$PNAME" ]] && continue
aws iam delete-role-policy --role-name "$ROLE_NAME" --policy-name "$PNAME" 2> /dev/null || true
done
# delete instance profiles
aws iam list-instance-profiles-for-role --role-name "$ROLE_NAME" \
--query 'InstanceProfiles[].InstanceProfileName' --output text 2> /dev/null | tr '\t' '\n' | \
while read -r IPN ; do
[[ -n "$IPN" ]] && aws iam remove-role-from-instance-profile --instance-profile-name "$IPN" --role-name "$ROLE_NAME" 2> /dev/null || true
done
# delete role with retries (Edge sometimes keeps it for a while)
retry 40 15 aws iam delete-role --role-name "$ROLE_NAME" 2> /dev/null || {
log " WARN: role not deleted yet (replication/retention). Retry manually later:"
log " aws iam delete-role --role-name \"$ROLE_NAME\""
}
else
log "Role not set and not found in config — skipping role deletion."
fi
log "==> PHASE 2 completed."
Collect necessary data
| Variable | Where to find in the AWS dashboard | Description |
| --- | --- | --- |
| DDB_TABLES | DynamoDB -> Tables | envs and snapshots table names |
| LOG_GROUPS | CloudWatch -> Log groups | CloudWatch log groups of the Lambda@Edge function |
| CF_DIST_ID | CloudFront -> Distributions | CloudFront distribution ID |
| CF_FUNC_NAME | CloudFront -> Functions | CloudFront function name |
| SECRET_NAME | Secrets Manager -> Secrets | Secret store name |
| S3_BUCKET | S3 -> General purpose buckets | Name of the bucket created when the integration was added |
| LAMBDA_NAME | Lambda -> Functions | Lambda@Edge function name |
| IAM_POLICY_SM_ARN | IAM -> Policies | Secrets Manager policy ARN |
| POLICY_ARN | IAM -> Policies | Lambda@Edge policy ARN |
| IAM_ROLE_ARN | IAM -> Roles | Lambda@Edge role ARN |
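Most of these values can also be collected with the AWS CLI instead of clicking through the dashboard. A hedged sketch; it simply filters on the ze-yourdomain-com naming pattern described earlier, so adjust the pattern to your own domain.

```bash
PATTERN=ze-yourdomain-com   # placeholder prefix derived from your domain

aws dynamodb list-tables --query 'TableNames' --output text | tr '\t' '\n' | grep "$PATTERN"                    # DDB_TABLES
aws logs describe-log-groups --query 'logGroups[].logGroupName' --output text | tr '\t' '\n' | grep "$PATTERN"  # LOG_GROUPS
aws cloudfront list-distributions --query 'DistributionList.Items[].{Id:Id,Aliases:Aliases.Items}'              # CF_DIST_ID
aws cloudfront list-functions --query 'FunctionList.Items[].Name' --output text                                 # CF_FUNC_NAME
aws secretsmanager list-secrets --query 'SecretList[].Name' --output text                                       # SECRET_NAME
aws s3api list-buckets --query 'Buckets[].Name' --output text | tr '\t' '\n' | grep "$PATTERN"                  # S3_BUCKET
aws lambda list-functions --region us-east-1 --query 'Functions[].FunctionName' --output text                   # LAMBDA_NAME
aws iam list-policies --scope Local --query 'Policies[].{Name:PolicyName,Arn:Arn}'                              # IAM_POLICY_SM_ARN / POLICY_ARN
aws iam list-roles --query 'Roles[].{Name:RoleName,Arn:Arn}'                                                    # IAM_ROLE_ARN
```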
Run the script for the first stage
export DDB_TABLES="<envs_ddb_table> <snapshots_ddb_table>"
export LOG_GROUPS="<global_log_group> <log_group>"
export CF_DIST_ID="<cloudfront_distribution_id>"
export CF_FUNC_NAME="<cloudfront_function_name>"
export SECRET_NAME="<secret_store_name>"
export S3_BUCKET="<s3_bucket_name>"
export IAM_POLICY_SM_ARN="<secret_manager_arn>"
bash ./cleanup-stage1.sh
Run the script for the second stage
export LAMBDA_NAME="<lambda_name>"
export POLICY_ARN="<policy_arn>"
export IAM_ROLE_ARN="<iam_role_arn>"
bash ./cleanup-stage2.sh
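After both stages finish, you can spot-check that the main resources are gone. A minimal sketch, assuming the variables from the two commands above are still exported in your shell; each call is expected to fail with a not-found style error once cleanup has completed.

```bash
aws cloudfront get-distribution --id "$CF_DIST_ID"                         # expect NoSuchDistribution
aws s3api head-bucket --bucket "$S3_BUCKET"                                # expect 404 Not Found
aws secretsmanager describe-secret --secret-id "$SECRET_NAME"              # expect ResourceNotFoundException
aws lambda get-function --function-name "$LAMBDA_NAME" --region us-east-1  # expect ResourceNotFoundException
aws iam get-role --role-name "$(basename "$IAM_ROLE_ARN")"                 # expect NoSuchEntity
```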