"type": "repository_exception",
"reason": "[s3_repository_demo] Could not determine repository generation from root blobs"
"type": "repository_exception",
"reason": "[s3_repository_demo] Could not determine repository generation from root blobs",
"caused_by": {
"type": "i_o_exception",
"reason": "Exception when listing blobs by prefix [index-]",
"caused_by": {
"type": "sdk_client_exception",
"reason": "Unable to execute HTTP request: s3-cn-northwest-1.amazonaws.com",
"caused_by": {
"type": "unknown_host_exception",
"reason": "s3-cn-northwest-1.amazonaws.com"
"status": 500
This is not a valid S3 endpoint. It should be s3.cn-northwest-1.amazonaws.com according to the S3 documentation.

But why did you change it? I see no reason to; your original error was a permissions error, so you need to check the permissions of the access key and secret key.
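For reference, a custom endpoint like this is normally set through the S3 client settings in elasticsearch.yml. A minimal sketch, assuming the repository uses the default client name:

```yaml
# elasticsearch.yml -- hypothetical snippet; "default" is the assumed client name
s3.client.default.endpoint: "s3.cn-northwest-1.amazonaws.com"
```

If no endpoint is set at all, the SDK derives it from the region, which avoids typos like the dashed hostname above.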
"type": "repository_exception",
"reason": "[s3_repository_demo] Could not determine repository generation from root blobs"
"type": "repository_exception",
"reason": "[s3_repository_demo] Could not determine repository generation from root blobs",
"caused_by": {
"type": "i_o_exception",
"reason": "Exception when listing blobs by prefix [index-]",
"caused_by": {
"type": "sdk_client_exception",
"reason": "Unable to execute HTTP request: s3.cn-northwest-1.amazonaws.com",
"caused_by": {
"type": "unknown_host_exception",
"reason": "s3.cn-northwest-1.amazonaws.com"
"status": 500
Permissions work fine inside the ELK server with the same key. Found no issues with the keys while accessing S3.
sraman:
Permissions work fine inside the ELK server with the same key. Found no issues with the keys while accessing S3.
Not sure what the issue would be then. It was clearly saying that the permission was wrong before, and now it is an error with the endpoint you are using.
I would remove all the S3 configuration from the elasticsearch-keystore and start again.

How are you adding the access_key and secret_key to the keystore? Are you pasting the values in the terminal?

If you are pasting the values, I would recommend that you add them using the --stdin option: put the access_key in a file, cat that file, and pipe the result to the elasticsearch-keystore tool, as explained in the documentation.
For example, create a file named s3-access-key.txt and put only the access_key in it:

your_access_key

Then run this command inside /usr/share/elasticsearch/:

cat s3-access-key.txt | bin/elasticsearch-keystore add --stdin s3.client.default.access_key

Do the same for the secret_key and test again.
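The point of the file-based approach is to keep stray whitespace out of the stored value; a pasted key can silently pick up a trailing newline or space. A small sketch of preparing the file (the key value is a placeholder, and the keystore path is the one from the steps above):

```shell
# Write the access key to a file with NO trailing newline;
# 'AKIAXXEXAMPLEXX' is a placeholder, not a real key.
printf '%s' 'AKIAXXEXAMPLEXX' > s3-access-key.txt

# 'echo' (or a sloppy paste) would append a newline -- compare byte counts:
printf '%s\n' 'AKIAXXEXAMPLEXX' > with-newline.txt
wc -c < s3-access-key.txt    # 15 bytes: just the key
wc -c < with-newline.txt     # 16 bytes: key plus a stray newline

# Then, inside /usr/share/elasticsearch, pipe the clean file into the keystore:
# cat s3-access-key.txt | bin/elasticsearch-keystore add --stdin s3.client.default.access_key
```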
No luck Leandro. I have removed the S3 configuration from the elasticsearch-keystore and added the keys using the --stdin option as suggested. Now it is a permission error, as before:
"name": "ResponseError",
"meta": {
"body": {
"error": {
"root_cause": [
"type": "repository_verification_exception",
"reason": "[s3-elk-repository] path is not accessible on master node"
"type": "repository_verification_exception",
"reason": "[s3-elk-repository] path is not accessible on master node",
"caused_by": {
"type": "i_o_exception",
"reason": "Unable to upload object [tests-khuYNvqJQti4KIPe36s-fA/master.dat] using a single upload",
"caused_by": {
"type": "amazon_s3_exception",
"reason": "The AWS Access Key Id you provided does not exist in our records. (Service: Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId; Request ID: 7SFDS66X6ZPGMMC3; S3 Extended Request ID: HooZ1crC3nNu0zJs1e3CuV0lj2Ncp7aK6uooq8PGR1/72xWy0HauTA8EPkGu1LyE2Sy8BhO2fnc=; Proxy: null)"
"status": 500
"statusCode": 500,
"headers": {
"x-opaque-id": "374c5f54-e53e-4808-8293-971a2c752d3e;kibana:application:management:",
"x-elastic-product": "Elasticsearch",
"content-type": "application/json;charset=utf-8",
"content-length": "739"
"meta": {
"context": null,
"request": {
"params": {
"method": "POST",
"path": "/_snapshot/s3-elk-repository/_verify",
"querystring": "",
"headers": {
"user-agent": "Kibana/8.6.2",
"x-elastic-product-origin": "kibana",
"authorization": "Basic ZWxhc3RpYzo5QnArX0FmV3ZNc05rTngwNFVKcQ==",
"x-opaque-id": "374c5f54-e53e-4808-8293-971a2c752d3e;kibana:application:management:",
"x-elastic-client-meta": "es=8.4.0p,js=16.18.1,t=8.2.0,hc=16.18.1",
"accept": "application/vnd.elasticsearch+json; compatible-with=8,text/plain"
"options": {
"opaqueId": "374c5f54-e53e-4808-8293-971a2c752d3e;kibana:application:management:",
"headers": {
"x-elastic-product-origin": "kibana",
"user-agent": "Kibana/8.6.2",
"authorization": "Basic ZWxhc3RpYzo5QnArX0FmV3ZNc05rTngwNFVKcQ==",
"x-opaque-id": "374c5f54-e53e-4808-8293-971a2c752d3e",
"x-elastic-client-meta": "es=8.4.0p,js=16.18.1,t=8.2.0,hc=16.18.1"
"id": 1
How did you create the repository in Elasticsearch? Also, you created the bucket in S3, right?

For example, if you use the following request to create a repository:

PUT _snapshot/my_s3_repository
{
  "type": "s3",
  "settings": {
    "bucket": "my-bucket"
  }
}

you need to create the bucket my-bucket in AWS first.

Also, check if your access_key has these permissions.
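For reference, the permissions in question are roughly the IAM policy from the Elasticsearch S3 repository documentation; a sketch, with my-bucket standing in for the real bucket name:

```json
{
  "Statement": [
    {
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation",
        "s3:ListBucketMultipartUploads",
        "s3:ListBucketVersions"
      ],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::my-bucket"]
    },
    {
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::my-bucket/*"]
    }
  ]
}
```

Note that InvalidAccessKeyId (as in the error above) means AWS does not recognize the key at all, which is different from a policy problem; it usually points at a wrong, expired, or mistyped key rather than missing permissions.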
I tried both ways, via Dev Tools and the Snapshot and Restore console.

We created the S3 bucket first and then the repository.

The access key has all the permissions.