
aws-sdk-js-v3

You must generate an Access Key before getting started. All examples use the ACCOUNT_ID, ACCESS_KEY_ID, and SECRET_ACCESS_KEY variables, which represent your Cloudflare account ID and the Access Key ID and Secret Access Key values you generated.


JavaScript and TypeScript users can continue to use the @aws-sdk/client-s3 npm package as they normally would. You must pass in your R2 endpoint and credentials when instantiating your S3 service client:

import {
  S3Client,
  ListBucketsCommand,
  ListObjectsV2Command,
  GetObjectCommand,
  PutObjectCommand,
} from "@aws-sdk/client-s3";

const S3 = new S3Client({
  region: "auto",
  endpoint: `https://${ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: ACCESS_KEY_ID,
    secretAccessKey: SECRET_ACCESS_KEY,
  },
});
console.log(await S3.send(new ListBucketsCommand({})));
// {
//   '$metadata': {
//     httpStatusCode: 200,
//     requestId: undefined,
//     extendedRequestId: undefined,
//     cfId: undefined,
//     attempts: 1,
//     totalRetryDelay: 0
//   },
//   Buckets: [
//     { Name: 'user-uploads', CreationDate: 2022-04-13T21:23:47.102Z },
//     { Name: 'my-bucket-name', CreationDate: 2022-05-07T02:46:49.218Z }
//   ],
//   Owner: {
//     DisplayName: '...',
//     ID: '...'
//   }
// }
console.log(
  await S3.send(new ListObjectsV2Command({ Bucket: "my-bucket-name" })),
);
// {
//   '$metadata': {
//     httpStatusCode: 200,
//     requestId: undefined,
//     extendedRequestId: undefined,
//     cfId: undefined,
//     attempts: 1,
//     totalRetryDelay: 0
//   },
//   CommonPrefixes: undefined,
//   Contents: [
//     {
//       Key: 'cat.png',
//       LastModified: 2022-05-07T02:50:45.616Z,
//       ETag: '"c4da329b38467509049e615c11b0c48a"',
//       ChecksumAlgorithm: undefined,
//       Size: 751832,
//       StorageClass: 'STANDARD',
//       Owner: undefined
//     },
//     {
//       Key: 'todos.txt',
//       LastModified: 2022-05-07T21:37:17.150Z,
//       ETag: '"29d911f495d1ba7cb3a4d7d15e63236a"',
//       ChecksumAlgorithm: undefined,
//       Size: 279,
//       StorageClass: 'STANDARD',
//       Owner: undefined
//     }
//   ],
//   ContinuationToken: undefined,
//   Delimiter: undefined,
//   EncodingType: undefined,
//   IsTruncated: false,
//   KeyCount: 8,
//   MaxKeys: 1000,
//   Name: 'my-bucket-name',
//   NextContinuationToken: undefined,
//   Prefix: undefined,
//   StartAfter: undefined
// }

Use SHA-1/SHA-256 checksum algorithms

You can also use SHA-1 and SHA-256 algorithms for checksum.

import {
  S3Client,
  ListBucketsCommand,
  ListObjectsV2Command,
  GetObjectCommand,
  PutObjectCommand,
} from "@aws-sdk/client-s3";
import { createHash } from "node:crypto";

const S3 = new S3Client({
  region: "auto",
  endpoint: `https://${ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: ACCESS_KEY_ID,
    secretAccessKey: SECRET_ACCESS_KEY,
  },
});
// Create an array buffer from the object to upload
const arrayBuffer = await OBJECT.arrayBuffer();
// Create a base64-encoded SHA-1 hash of the object to upload
const ChecksumSHA1 = createHash("sha1")
  .update(Buffer.from(arrayBuffer))
  .digest("base64");
// Upload the object along with its SHA-1 checksum
console.log(
  await S3.send(
    new PutObjectCommand({
      Bucket: BUCKET_NAME,
      Key: OBJECT_KEY,
      Body: OBJECT,
      ChecksumSHA1,
    }),
  ),
);
// {
//   '$metadata': {
//     httpStatusCode: 200,
//     requestId: undefined,
//     extendedRequestId: undefined,
//     cfId: undefined,
//     attempts: 1,
//     totalRetryDelay: 0
//   },
//   ETag: '"355801ab7ccb9ffddcb4c47e8cd61584"',
//   ChecksumSHA1: '6MMnUIGMVR/u6AO3uCoUcSRnmzQ=',
//   VersionId: '7e6b8ae6e2198a8c1acf76598af339ef'
// }
// Create a base64-encoded SHA-256 hash of the object to upload
const ChecksumSHA256 = createHash("sha256")
  .update(Buffer.from(arrayBuffer))
  .digest("base64");
// Upload the object along with its SHA-256 checksum
console.log(
  await S3.send(
    new PutObjectCommand({
      Bucket: BUCKET_NAME,
      Key: OBJECT_KEY,
      Body: OBJECT,
      ChecksumSHA256,
    }),
  ),
);
// {
//   '$metadata': {
//     httpStatusCode: 200,
//     requestId: undefined,
//     extendedRequestId: undefined,
//     cfId: undefined,
//     attempts: 1,
//     totalRetryDelay: 0
//   },
//   ETag: '"f0d8680d5c596202dd81afa17428c65f"',
//   ChecksumSHA256: 'jSIKqrDnDlJg3pSnXflg9QJyzGiexsvIa3tCCRfb3DA=',
//   VersionId: '7e6b8ae42793fb4a693f020ff58ef8d0'
// }
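
To confirm that the stored object matches what was uploaded, one option is to download it again and recompute the digest locally. This is a minimal sketch, not part of the original example; it reuses the S3 client, BUCKET_NAME, OBJECT_KEY, and ChecksumSHA256 values from above and the transformToByteArray() helper on the response Body.

// Download the object and recompute its SHA-256 locally
const downloaded = await S3.send(
  new GetObjectCommand({ Bucket: BUCKET_NAME, Key: OBJECT_KEY }),
);
const downloadedBytes = await downloaded.Body.transformToByteArray();
const localSHA256 = createHash("sha256")
  .update(downloadedBytes)
  .digest("base64");
console.log(localSHA256 === ChecksumSHA256);
// true when the stored bytes match the uploaded object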

Generate presigned URLs

You can also generate presigned links that can be used to share public read or write access to a bucket temporarily.

import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

// Use the expiresIn property to determine how long the presigned link is valid.
console.log(
  await getSignedUrl(
    S3,
    new GetObjectCommand({ Bucket: "my-bucket-name", Key: "dog.png" }),
    { expiresIn: 3600 },
  ),
);
// https://my-bucket-name.<accountid>.r2.cloudflarestorage.com/dog.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=<credential>&X-Amz-Date=<timestamp>&X-Amz-Expires=3600&X-Amz-Signature=<signature>&X-Amz-SignedHeaders=host&x-id=GetObject
// You can also create links for operations such as putObject to allow temporary write access to a specific key.
console.log(
  await getSignedUrl(
    S3,
    new PutObjectCommand({ Bucket: "my-bucket-name", Key: "dog.png" }),
    { expiresIn: 3600 },
  ),
);

You can use the link generated by the putObject example to upload to the specified bucket and key, until the presigned link expires.

Terminal window
curl -X PUT "https://my-bucket-name.<accountid>.r2.cloudflarestorage.com/dog.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=<credential>&X-Amz-Date=<timestamp>&X-Amz-Expires=3600&X-Amz-Signature=<signature>&X-Amz-SignedHeaders=host&x-id=PutObject" --upload-file dog.png
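
If you prefer to upload from JavaScript rather than curl, the presigned URL can also be used with fetch. The sketch below is not part of the original example and assumes Node.js 18+ (for the global fetch) and a local dog.png file; it reuses the S3 client, PutObjectCommand, and getSignedUrl shown above.

import { readFile } from "node:fs/promises";

// Generate a presigned PUT URL as above, then upload the file with fetch
const presignedPutUrl = await getSignedUrl(
  S3,
  new PutObjectCommand({ Bucket: "my-bucket-name", Key: "dog.png" }),
  { expiresIn: 3600 },
);
const response = await fetch(presignedPutUrl, {
  method: "PUT",
  body: await readFile("./dog.png"),
});
console.log(response.status);
// 200 on success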