Cloudflare produces data for all traffic seen across your zones, along with audit logs of actions taken in the Cloudflare console.
Chronicle Data Types
- This integration requires a Cloudflare Enterprise subscription.
- It is recommended that either an AWS S3 or Google Cloud Storage bucket be set up for use with Cloudflare's Logpush. Depending on which is chosen, follow either the AWS S3 Bucket or GCP GCS Bucket instructions for ingesting Cloudflare logs.
Enable Logpush to Amazon S3
Cloudflare Logpush supports pushing logs directly to Amazon S3 via the Cloudflare dashboard or via the API. Customers that use AWS GovCloud locations should use an S3-compatible endpoint rather than the Amazon S3 endpoint.
Manage via Cloudflare Dashboard
Enable Logpush to Amazon S3 via the dashboard.
To enable the Cloudflare Logpush service:
- Log in to the Cloudflare dashboard.
- Select the Enterprise account or domain you want to use with Logpush.
- Go to Analytics & Logs > Logs.
- Click Connect a service. A modal window opens where you will need to complete several steps.
- Select the dataset you want to push to a storage service.
- Select the data fields to include in your logs. Add or remove fields later by modifying your settings in Logs > Logpush.
- Select Amazon S3.
- Enter or select the following destination information:
- Bucket path
- Daily subfolders
- Bucket region
- Encryption constraint in bucket policy
- For Grant Cloudflare access to upload files to your bucket, make sure your bucket has a policy (if you did not add it already):
- Copy the JSON policy, then go to your bucket in the Amazon S3 console and paste the policy in Permissions > Bucket Policy and click Save.
- Click Validate access.
- Enter the Ownership token (included in a file or log Cloudflare sends to your provider) and click Prove ownership. To find the ownership token, click the Open button in the Overview tab of the ownership challenge file.
- Click Save and Start Pushing to finish enabling Logpush.
Once connected, Cloudflare lists Amazon S3 as a connected service under Logs > Logpush. Edit or remove connected services from here.
Manage via API
Cloudflare uses AWS Identity and Access Management (IAM) to gain access to your S3 bucket. The Cloudflare IAM user needs PutObject permission for the bucket.
Logs are written into that bucket as gzipped objects using the S3 Access Control List (ACL) permission bucket-owner-full-control.
Only roles with Cloudflare Log Share edit permissions can read and configure Logpush jobs, because job configurations may contain sensitive information. Ensure Log Share permissions are enabled before attempting to read or configure a Logpush job.
For illustrative purposes, imagine that you want to store logs in the bucket burritobot, in the logs directory. The S3 URL would then be s3://burritobot/logs.
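With a destination URL in that form, a Logpush job can be created through the Cloudflare v4 API. The sketch below only builds the request body; the job name, dataset, and region are illustrative assumptions, and an actual call additionally requires your zone ID and an API token with Log Share edit permission:

```python
import json

def build_logpush_job(bucket_path: str, region: str,
                      dataset: str = "http_requests") -> dict:
    """Build a request body for POST /client/v4/zones/{zone_id}/logpush/jobs.

    The job name and dataset here are illustrative; pick the dataset you
    selected when planning your Logpush configuration.
    """
    return {
        "name": "s3-logpush-job",  # hypothetical job name
        "dataset": dataset,
        # destination_conf encodes the bucket path and region in one URL
        "destination_conf": f"s3://{bucket_path}?region={region}",
        "enabled": True,
    }

job = build_logpush_job("burritobot/logs", "us-west-2")
print(json.dumps(job, indent=2))

# Submitting it would look roughly like this (requires real credentials):
# import urllib.request
# req = urllib.request.Request(
#     f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/logpush/jobs",
#     data=json.dumps(job).encode(),
#     headers={"Authorization": f"Bearer {API_TOKEN}",
#              "Content-Type": "application/json"},
#     method="POST",
# )
```

The destination_conf string is the piece that ties the job to the bucket and directory described above.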
To enable Logpush to Amazon S3:
- Create an S3 bucket.
Note: Buckets in China regions (cn-northwest-1) are currently not supported.
- Edit and paste the policy below into S3 > Bucket > Permissions > Bucket Policy, replacing the Resource value with your own bucket path. The AWS Principal is owned by Cloudflare and shouldn’t be changed.
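The policy follows the general shape below. The Principal value here is a placeholder: copy the exact policy JSON, including Cloudflare's Principal, from the Cloudflare dashboard, and replace the Resource with your own bucket path (the burritobot example is shown):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "<CLOUDFLARE_PRINCIPAL_FROM_DASHBOARD>"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::burritobot/logs/*"
    }
  ]
}
```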
Logpush uses multipart upload for S3. Aborted uploads result in incomplete files remaining in your bucket. To minimize your storage costs, Amazon recommends configuring a lifecycle rule using the AbortIncompleteMultipartUpload action to automatically delete incomplete multipart uploads.
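Such a lifecycle rule can be expressed along these lines (the rule ID and one-day threshold are illustrative choices):

```json
{
  "Rules": [
    {
      "ID": "abort-incomplete-multipart",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 1 }
    }
  ]
}
```

Saved as lifecycle.json, it can be applied with the AWS CLI: aws s3api put-bucket-lifecycle-configuration --bucket burritobot --lifecycle-configuration file://lifecycle.json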