@wknapik
Last active March 5, 2024 09:53
Empty an s3 bucket of all object versions and delete markers in batches of 400
#!/usr/bin/env bash

set -eEo pipefail
shopt -s inherit_errexit >/dev/null 2>&1 || true

if [[ ! "$#" -eq 2 || "$1" != --bucket ]]; then
    echo -e "USAGE: $(basename "$0") --bucket <bucket>"
    exit 2
fi

# $@ := bucket_name
empty_bucket() {
    local -r bucket="${1:?}"
    # A versioned bucket holds both object versions and delete markers;
    # both must be removed before the bucket itself can be deleted.
    for object_type in Versions DeleteMarkers; do
        local opt=() next_token=""
        while [[ "$next_token" != null ]]; do
            # List up to 400 entries per page, plus the pagination token.
            page="$(aws s3api list-object-versions --bucket "$bucket" --output json --max-items 400 "${opt[@]}" \
                --query="[{Objects: ${object_type}[].{Key:Key, VersionId:VersionId}}, NextToken]")"
            objects="$(jq -r '.[0]' <<<"$page")"
            next_token="$(jq -r '.[1]' <<<"$page")"
            case "$(jq -r .Objects <<<"$objects")" in
                '[]'|null) break;; # nothing left of this type
                # Delete the whole page in one call, then resume listing
                # from the pagination token.
                *) opt=(--starting-token "$next_token")
                   aws s3api delete-objects --bucket "$bucket" --delete "$objects";;
            esac
        done
    done
}

empty_bucket "${2#s3://}"
@jobwat commented Dec 1, 2019

Thanks! Worked like a charm 👍

@ghost commented Jan 13, 2020

Hi, I am getting this error:
line 25: /usr/local/bin/aws: Argument list too long
Any suggestions?

@wknapik (Author) commented Jan 19, 2020

Hi @MichaelM88. You can change the 1000 on line 17 (the --max-items value passed to list-object-versions) to a lower value to sort it out. I'm curious what system you've seen this on. Check out https://www.in-ulm.de/~mascheck/various/argmax/ to find out more.
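For example, the list-object-versions line with the batch size lowered (100 here is arbitrary; how low you need to go depends on your system's argument-length limit):

page="$(aws s3api list-object-versions --bucket "$bucket" --output json --max-items 100 "${opt[@]}" \
    --query="[{Objects: ${object_type}[].{Key:Key, VersionId:VersionId}}, NextToken]")"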

@zishanj commented May 25, 2020

Any possibility to delete only files older than a specific number of days? Like deleting all versions older than 7 days. It would be useful for controlling storage space and maintaining backups on a weekly basis.

@bitlifter

Super handy, thanks. After some limited testing, I had to limit max-items to 400, but it works great!

@wknapik (Author) commented Jan 4, 2021

@zishanj I only ever needed to delete them all, to be able to delete the bucket. But you could probably achieve that just by modifying the page query on line 18.
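A rough sketch of such a filter (untested; it computes the cutoff with GNU date and relies on the AWS CLI's JMESPath string comparison, which orders ISO 8601 timestamps correctly):

cutoff="$(date -d '7 days ago' +%Y-%m-%dT%H:%M:%S)" # GNU date; BSD date would use -v-7d
# then swap this in for the --query on line 18:
--query="[{Objects: ${object_type}[?LastModified<'${cutoff}'].{Key:Key, VersionId:VersionId}}, NextToken]"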

@bitlifter thanks for the hint, I lowered the number in the script.

@xavbourdeau commented Mar 5, 2024

When you have files with very long names, it will fail with Argument list too long, even with a very low --max-items value.

So I rewrote your function a bit. Thanks anyway, it helped, so I'm sharing mine. It's very slow since it loops over each file, but at least it works:

function empty_bucket() {
  for object_type in Versions DeleteMarkers; do
    # Dump the full listing to a temp file, then delete one object at a
    # time, so no single aws invocation gets a huge argument list.
    aws s3api list-object-versions \
      --bucket "${BUCKET_NAME}" \
      --output json \
      --query="[{Objects: ${object_type}[].{Key:Key, VersionId:VersionId}}]" > "/tmp/${object_type}.json"
    is_null=$(jq -c '.[0].Objects' "/tmp/${object_type}.json")
    if [ "${is_null}" = "null" ]; then
      echo "Nothing to clean-up for ${object_type}"
    else
      jq -r -c '.[].Objects[] | .VersionId, .Key' "/tmp/${object_type}.json" | while read -r version && read -r key; do
        aws s3api delete-object \
          --bucket "${BUCKET_NAME}" \
          --key "${key}" \
          --version-id "${version}" \
          --no-paginate \
          --no-cli-pager \
          --output text
      done
    fi
  done
}
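Note this version reads the bucket name from a BUCKET_NAME variable instead of taking an argument, so after sourcing the function the usage would be something like (the bucket name here is a placeholder):

BUCKET_NAME="my-bucket"
empty_bucket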
