
Error "amazon_s3_exception: header does not match what was computed. (Service: Amazon S3; Status Code: 400; Error Code: XAmzContentSHA256Mismatch; Request ID: null; S3 Extended Request ID: null; Proxy: null)" #20

@alsarins

Description


Hello.

I'm trying to set up s3proxy between Elasticsearch and Ceph S3 storage for client-side encryption, and I'm getting the error "header does not match what was computed". If Elasticsearch uses the Ceph S3 storage directly (without s3proxy), everything is fine and there are no errors. But when I try to register s3proxy as an S3 repository in Elasticsearch, it fails with this error:

curl -k -X PUT "https://elastic:password_here@localhost:9200/_snapshot/ceph?pretty" -H 'Content-Type: application/json' -d'
 {
   "type": "s3",
   "settings": {
     "bucket": "es-dba",
     "endpoint": "127.0.0.1:80",
     "protocol": "http",
         "client": "default",
         "path_style_access": "true"
   }
 }'
{
  "error" : {
    "root_cause" : [
      {
        "type" : "repository_verification_exception",
        "reason" : "[ceph] path  is not accessible on master node"
      }
    ],
    "type" : "repository_verification_exception",
    "reason" : "[ceph] path  is not accessible on master node",
    "caused_by" : {
      "type" : "i_o_exception",
      "reason" : "Unable to upload object [tests-0SKi0skPRH2vXfWJsGVIAA/master.dat] using a single upload",
      "caused_by" : {
        "type" : "amazon_s3_exception",
        "reason" : "amazon_s3_exception: header does not match what was computed. (Service: Amazon S3; Status Code: 400; Error Code: XAmzContentSHA256Mismatch; Request ID: null; S3 Extended Request ID: null; Proxy: null)"
      }
    }
  },
  "status" : 500
}
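For context on what `XAmzContentSHA256Mismatch` means: with AWS Signature Version 4, the client sends a SHA-256 digest of the request body in the `x-amz-content-sha256` header, and S3 rejects the request if the body it receives hashes to something else. My guess (an assumption, not confirmed from the s3proxy code) is that the proxy encrypts the payload but forwards the client's original digest header. A minimal sketch of the mismatch:

```python
import hashlib

# The client (Elasticsearch's AWS SDK) hashes the request body and sends the
# digest in the x-amz-content-sha256 header as part of SigV4 signing.
original_body = b"master.dat test payload"  # hypothetical payload for illustration
header_sha256 = hashlib.sha256(original_body).hexdigest()

# If the proxy encrypts the body but forwards the original header unchanged,
# the hash S3 computes over the received body no longer matches the header.
encrypted_body = b"\x00" * 16 + original_body  # stand-in for the encrypted payload
computed_sha256 = hashlib.sha256(encrypted_body).hexdigest()

print(header_sha256 == computed_sha256)  # prints False: the XAmzContentSHA256Mismatch case
```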

My setup is the following:

  • elasticsearch: Version: 8.16.0, Build: deb/12ff76a92922609df4aba61a368e7adf65589749/2024-11-08T10:05:56.292914697Z, JVM: 23
  • s3proxy 1.4.0 (latest, docker image)
  • ceph: I don't know the exact version; it runs behind an nginx load balancer with TLS support.
  • s3proxy runs in Docker on every Elasticsearch node, with our local trusted-certificate directories mounted into the container on every machine; 127.0.0.1:80 is exposed for plain HTTP locally on every node:
sudo docker run -d --restart unless-stopped -p 127.0.0.1:80:4433 -e AWS_ACCESS_KEY_ID="access_key_here" -e AWS_SECRET_ACCESS_KEY="secret_key_here" -e S3PROXY_ENCRYPT_KEY="SOME_TEXT_HERE" -e S3PROXY_HOST="s3db.ceph.local.domain" -e S3PROXY_DEKTAG_NAME="es-test" --name s3proxy  -v /usr/local/share/ca-certificates:/usr/local/share/ca-certificates -v /etc/ssl:/etc/ssl ghcr.io/intrinsec/s3proxy

s3proxy is up and running; the logs only show a warning:

time="2025-02-06T18:40:40Z" level=info msg=listening ip=0.0.0.0 port=4433 region=eu-west-1
time="2025-02-06T18:40:40Z" level=warning msg="TLS is disabled"

What is the reason for this error and how do I fix it? Do I need to set `s3.client.default.protocol: http` in the Elasticsearch configuration, or how can we enable and use TLS with s3proxy?
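For reference, since the proxy listens on plain HTTP locally, the Elasticsearch S3 client can also be pointed at it via client settings in `elasticsearch.yml` instead of repository settings. A sketch, assuming the `default` client name and the endpoint from this setup (setting names are from the Elasticsearch S3 repository documentation; credentials go in the Elasticsearch keystore, not this file):

```yaml
# elasticsearch.yml, on every node
s3.client.default.endpoint: "127.0.0.1:80"
s3.client.default.protocol: http
s3.client.default.path_style_access: true
```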
