oerp@oerp:~/src/timefordev-ias$ terraform init
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Error loading state: AccessDenied: Access Denied
status code: 403, request id: somerequestid, host id: someid
I also tried with a profile. Same thing.
And when I try this:
oerp@oerp:~/src/timefordev-ias$ terraform workspace list
AccessDenied: Access Denied
status code: 403, request id: aaaa, host id: bbb
Expected Behavior
Actual Behavior
Steps to Reproduce
Additional Context
The user trying to access S3 has these policies attached:
AmazonRDSFullAccess
AmazonEC2FullAccess
AmazonS3FullAccess
I also tried adding AdministratorAccess, but it did not change anything.
References
#13589
I am encountering this same issue, except I am using the default profile without a shared credentials file.
REDACTED
Initializing the backend...
Backend configuration changed!
Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.
Error inspecting states in the "s3" backend:
AccessDenied: Access Denied
status code: 403, request id: REDACTED
Prior to changing backends, Terraform inspects the source and destination
states to determine what kind of migration steps need to be taken, if any.
Terraform failed to load the states. The data in both the source and the
destination remain unmodified. Please resolve the above error and try again.
@oerp-odoo if you have aws credentials set as environment variables, those will override whatever is set in your terraform configuration (including the credentials file) - is that what you meant?
Another common confusion I've seen is when the AWS credentials used for the backend (the s3 bucket) are not the same credentials used for the AWS provider.
What I want is simply to use S3 for the backend. I'm using the same credentials for the
AWS provider and the backend. I get access for the provider, but not for the backend,
so I don't get why S3 denies access if it's the same credentials.
Regarding environment variables, I probably don't want that, but I'm looking for a workaround so it would at least let me use remote state somehow.
My situation is similar. My AWS environment uses SAML authentication with assumed roles. For CLI we get a token, and set environment variables. This has been working as recently as yesterday in account A. However, when I switched to account B, Terraform is no longer able to connect to the remote S3 state. This is using an administrator (i.e. full permissions) role.
TF_LOG=DEBUG terraform init -backend-config=env/backend-prod.tfvars
2018/09/20 10:26:55 [INFO] Terraform version: 0.11.8
2018/09/20 10:26:55 [INFO] Go runtime version: go1.10.3
2018/09/20 10:26:55 [INFO] CLI args: []string{"/usr/local/Cellar/terraform/0.11.8/bin/terraform", "init", "-backend-config=env/backend-prod.tfvars"}
2018/09/20 10:26:55 [INFO] CLI command args: []string{"init", "-backend-config=env/backend-prod.tfvars"}
2018/09/20 10:26:55 [DEBUG] command: loading backend config file: /Users/cgwong/workspace/terraform/superset-reporting-service
Initializing modules...
- module.asg
2018/09/20 10:26:55 [DEBUG] found local version "2.8.0" for module terraform-aws-modules/autoscaling/aws
2018/09/20 10:26:55 [DEBUG] matched "terraform-aws-modules/autoscaling/aws" version 2.8.0 for
- module.rds
2018/09/20 10:26:55 [DEBUG] found local version "1.21.0" for module terraform-aws-modules/rds/aws
2018/09/20 10:26:55 [DEBUG] matched "terraform-aws-modules/rds/aws" version 1.21.0 for
- module.rds.db_subnet_group
- module.rds.db_parameter_group
- module.rds.db_option_group
- module.rds.db_instance
2018/09/20 10:26:55 [DEBUG] command: adding extra backend config from CLI
Initializing the backend...
2018/09/20 10:26:55 [WARN] command: backend config change! saved: 9345827190033900985, new: 1347110252068987742
Backend configuration changed!
Terraform has detected that the configuration specified for the backend
2018/09/20 10:26:55 [INFO] Building AWS region structure
2018/09/20 10:26:55 [INFO] Building AWS auth structure
has changed. Terraform will now check for existing state in the backends.
2018/09/20 10:26:55 [INFO] Setting AWS metadata API timeout to 100ms
2018/09/20 10:26:56 [INFO] Ignoring AWS metadata API endpoint at default location as it doesn't return any instance-id
2018/09/20 10:26:56 [INFO] AWS Auth provider used: "EnvProvider"
2018/09/20 10:26:56 [INFO] Initializing DeviceFarm SDK connection
2018/09/20 10:27:01 [DEBUG] [aws-sdk-go] <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>BCCC2ADD33902C31</RequestId><HostId>pXrkrZ+7Ui33C5IN4SyV/UjqQEeY4W27zBVpIXxwRUAIcIlaSKWCqvfDo7+fzfkJan3iCkDTb94=</HostId></Error>
2018/09/20 10:27:01 [DEBUG] [aws-sdk-go] DEBUG: Validate Response s3/ListObjects failed, not retrying, error AccessDenied: Access Denied
status code: 403, request id: BCCC2ADD33902C31, host id: pXrkrZ+7Ui33C5IN4SyV/UjqQEeY4W27zBVpIXxwRUAIcIlaSKWCqvfDo7+fzfkJan3iCkDTb94=
2018/09/20 10:27:01 [DEBUG] plugin: waiting for all plugin processes to complete...
Error inspecting states in the "s3" backend:
AccessDenied: Access Denied
status code: 403, request id: BCCC2ADD33902C31, host id: pXrkrZ+7Ui33C5IN4SyV/UjqQEeY4W27zBVpIXxwRUAIcIlaSKWCqvfDo7+fzfkJan3iCkDTb94=
Prior to changing backends, Terraform inspects the source and destination
states to determine what kind of migration steps need to be taken, if any.
Terraform failed to load the states. The data in both the source and the
destination remain unmodified. Please resolve the above error and try again.
Really strange since the role has full access, is working in account A, and was working in account B when last I checked. I am still digging into this and will be using an EC2 instance as a test/workaround.
Using an EC2 instance (admin access), it fails more silently, though the error is different:
-----------------------------------------------------
2018/09/20 16:24:34 [DEBUG] [aws-sdk-go] <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NoSuchKey</Code><Message>The specified key does not exist.</Message><Key>loc360-superset/prod/terraform.tfstate</Key><RequestId>3EDB0ACE5E60CAED</RequestId><HostId>9URnCiboGfgOwd44LwqU3ZcDItf2ZZS3vctjdaVSIval2lUSHHLbBiTvXy0hXqoEM9FnAvNhrCA=</HostId></Error>
2018/09/20 16:24:34 [DEBUG] [aws-sdk-go] DEBUG: Validate Response s3/GetObject failed, not retrying, error NoSuchKey: The specified key does not exist.
Even after I create the folder/key it gives the same error, though again, unless I enable DEBUG mode it says it was successful.
I started getting the same issue. I have the provider set up in the main script as below:
provider "aws" {}
But I guess Terraform won't be referring to the main script while running init.
I am calling Terraform by exporting the current profile as the AWS_PROFILE value and subsequently running
terraform init
It works well for one user but not for another. I changed the other user's profile to have admin-level access to both DynamoDB and the S3 bucket. Still no luck.
I also disabled encrypt in the Terraform config; still no luck. I was wondering if the state file gets encrypted specific to the profile's user.
@oerp-odoo
Try running aws sts get-caller-identity and aws sts get-caller-identity --profile=desiredProfile, and check which profile is being used for each call.
Check the environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN; if present, set them to empty or unset them.
Check that the AWS_PROFILE environment variable is configured correctly.
With the above points addressed, you could simply remove shared_credentials_file from both main.tf and terraform.tf. Just provide the profile in the provider block, either hardcoded or as ${var.profile} (you need to declare the variable profile in variables.tf and set the env var TF_VAR_profile to the desired profile name).
See if that works.
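For illustration, a minimal sketch of that suggested setup, assuming the variable is simply named profile and the region shown is just an example:

variable "profile" {
  description = "AWS profile to use; set via the TF_VAR_profile environment variable"
}

provider "aws" {
  region  = "us-east-1"        # example region
  profile = "${var.profile}"
}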
EDIT: Follow-up
My issue turned out to have nothing to do with Terraform. The instance profile lacked several key S3 permissions in its IAM policy. I'm not sure Terraform even gets useful info back from AWS on those errors to produce meaningful error messages. Regardless, if you're seeing this, try looking over your S3 policies to make sure you can ListBucket, GetObject, PutObject, and DeleteObject.
Has anyone had additional luck with this issue? I have a scenario where my S3 backend receives the following error:
Failed to save state: failed to upload state: AccessDenied: Access Denied
status code: 403, request id: REDACTED, host id: REDACTED
Error: Failed to persist state to backend.
My provider and backend config:

provider "aws" {
  region = "${var.aws_region}"
}

terraform {
  backend "s3" {
    bucket         = "REDACTED"
    key            = "terraform.tfstate"
    region         = "us-east-2"
    dynamodb_table = "REDACTED"
    encrypt        = true
  }
}
My init and plan run just fine. I've seen this work when I'm using AWS Access key / Secret Key, but in this case my worker node is using an assumed role. I can run any aws cli commands from the command-line just fine. Perhaps Terraform has to be specially configured when running from a worker node that infers its permissions from an instance profile?
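If the backend itself needs to assume a role explicitly (rather than relying only on the instance profile), the s3 backend also accepts a role_arn argument. A rough sketch, with a placeholder role ARN and the same values as the config above:

terraform {
  backend "s3" {
    bucket         = "REDACTED"
    key            = "terraform.tfstate"
    region         = "us-east-2"
    dynamodb_table = "REDACTED"
    encrypt        = true
    role_arn       = "arn:aws:iam::123456789012:role/terraform-backend"  # placeholder role ARN
  }
}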
I have a similar issue, but only on an AWS EC2 instance. I use a shared credentials file.
ubuntu@ip-172-17-2-175:~/test$ terraform init -reconfigure
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Error loading state: AccessDenied: Access Denied
status code: 403, request id: 5CCE9150B36AA433, host id: vkguLMArsd3MdiP4JKx1AUnFddaceg+v1UfAacFpJbjzRZ9hM7oTD6iu2QRpoppajhbTdHGfRFM=
I tested on different cloud providers and it works well. I think this issue only happens on AWS EC2 instances.
Terraform config file:

terraform {
  backend "s3" {
    bucket                  = "bucket_name"
    key                     = ".../terraform.tfstate"
    region                  = "us-east-1"
    encrypt                 = true
    profile                 = "default"
    dynamodb_table          = "terraform-table"
    shared_credentials_file = "$HOME/.aws/credentials"
  }
}
Terraform version: 0.11.10
@mildwonkey
A good start is to run terraform init with DEBUG set:
TF_LOG=DEBUG terraform init
If you see that the 403 comes from a ListObjects action over a bucket in a cross-account operation, then you may just need to get the ACLs sorted on the destination bucket (the one holding the tfstate). So go to that bucket, Permissions, ACLs, and under Other Accounts (or similar), add the canonical ID of the calling account (the one configured in your .aws config). Then terraform init (cross-account, remember) won't error.
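For illustration, the cross-account grant can also be expressed as a bucket policy on the state bucket rather than ACLs; a rough sketch, where the account ID and bucket name are placeholders:

resource "aws_s3_bucket_policy" "tfstate_cross_account" {
  bucket = "my-tfstate-bucket"   # placeholder state bucket

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-tfstate-bucket"
    },
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-tfstate-bucket/*"
    }
  ]
}
POLICY
}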
I have a similar issue when I run terraform init -backend-config="profile=myProfileAws", and I receive the following error:
Error inspecting states in the "s3" backend:
AccessDenied: Access Denied
status code: 403, request id: 752DEFCBA5D4DB53
P.S.: I used that profile to create my S3 bucket.
That command gives me the following:
2019/03/21 06:06:44 [DEBUG] [aws-sdk-go] DEBUG: Response s3/ListObjects Details:
---[ RESPONSE ]--------------------------------------
HTTP/1.1 403 Forbidden
Connection: close
Transfer-Encoding: chunked
Content-Type: application/xml
Date: Thu, 21 Mar 2019 11:06:43 GMT
Server: AmazonS3
X-Amz-Bucket-Region: us-east-1
X-Amz-Id-2: Hj/K+QYv3Vw0B2NwV2Z+EGBOr3tnJtdDyZaegTSMdfd842GY8iPOPSvHDOmctRPXHMeFbyXCWhk=
X-Amz-Request-Id: A6FCDB6D99F1D56C
2019/03/21 06:06:44 [DEBUG] [aws-sdk-go] <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>A6FCDB6D99F1D56C</RequestId><HostId>Hj/K+QYv3Vw0B2NwV2Z+EGBOr3tnJtdDyZaegTSMdfd842GY8iPOPSvHDOmctRPXHMeFbyXCWhk=</HostId></Error>
2019/03/21 06:06:44 [DEBUG] [aws-sdk-go] DEBUG: Validate Response s3/ListObjects failed, not retrying, error AccessDenied: Access Denied
status code: 403, request id: A6FCDB6D99F1D56C, host id: Hj/K+QYv3Vw0B2NwV2Z+EGBOr3tnJtdDyZaegTSMdfd842GY8iPOPSvHDOmctRPXHMeFbyXCWhk=
2019/03/21 06:06:44 [DEBUG] plugin: waiting for all plugin processes to complete...
Error inspecting states in the "s3" backend:
AccessDenied: Access Denied
status code: 403, request id: A6FCDB6D99F1D56C, host id: Hj/K+QYv3Vw0B2NwV2Z+EGBOr3tnJtdDyZaegTSMdfd842GY8iPOPSvHDOmctRPXHMeFbyXCWhk=
My profile has the following configuration:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}
Error Details Below:
Successfully configured the backend "s3"! Terraform will automatically use this backend unless the backend configuration changes. Error loading state: AccessDenied: Access Denied
Workaround
cat ~/.aws/credentials
export AWS_PROFILE=matching-credential-profile-name
terraform init works now! 🔥 🔥 🔥
I managed to work around this problem by replacing the [default] section of my ~/.aws/credentials, but I think this is still a terragrunt bug, since terraform init works with the same configuration.
Some extra detail:
My procedure for setting up the ~/.aws/credentials file was simply to run aws configure and enter my login data.
Once I did that, terraform 0.11.13 runs terraform init just fine without any aws provider block.
Once I move the backend configuration into the terragrunt block of terraform.tfvars, terragrunt init fails with the 403 error.
Then, copying my personal profile from the [username-redacted] section of the ~/.aws/credentials file into the [default] section of that file allows terragrunt init to run.
I'm not actually sure where aws configure sets what it and terraform appear to agree is the default profile. Maybe in the [profile username-redacted] section of ~/.aws/config? I don't have any AWS_* environment variables set. Regardless, terragrunt and terraform behave differently here and I think terraform 0.11.13 has it right.
I was getting this error:
Error inspecting states in the "s3" backend:
AccessDenied: Access Denied
status code: 403
Finally, I figured out that the state files were written with different Terraform versions:
Terraform doesn't allow running any operations against a state
that was written by a future Terraform version. The state is
reporting it is written by Terraform '0.11.13'
Please run at least that version of Terraform to continue.
I updated the version and everything works well.
I experienced this today with Terraform v0.12.1 + provider.aws v2.14.0. I created a new S3 bucket, created an IAM policy to hold the ListBucket, GetObject, and PutObject permissions (with the appropriate resource ARNs), then attached that to my user.
That user's key/secret are in a named profile in my ~/.aws/credentials file. Here's my entire main.tf in a clean directory:

terraform {
  backend "s3" {
    region = "us-east-1"
    bucket = "BUCKET_NAME_HERE"
    key    = "KEY_NAME_HERE"
  }

  required_providers {
    aws = ">= 2.14.0"
  }
}

provider "aws" {
  region                  = "us-east-1"
  shared_credentials_file = "CREDS_FILE_PATH_HERE"
  profile                 = "PROFILE_NAME_HERE"
}
When I run TF_LOG=DEBUG terraform init, the sts identity section of the output shows that it is using the creds from the default section of my credentials file. If I add -backend-config="profile=PROFILE_NAME_HERE" to that call, it still uses the default profile creds.
Amending it to AWS_PROFILE=PROFILE_NAME_HERE TF_LOG=DEBUG terraform init finally got it to use the actual profile. Seems like a legit bug, given that both methods of passing the profile name failed (the provider block's profile arg, and a CLI argument), but overriding via env var worked.
So if you're experiencing this failure, and your bucket and IAM policy are correct, it appears the workaround is to either export AWS_PROFILE, or to rename the [default] section of your credentials file to something else.
Edit: I also tried adding shared_credentials_file and profile arguments to the terraform.backend block directly, just for kicks (I have no idea if those are even valid args in that block); it didn't change anything.
From what I found, I had to add the profile again in the backend, as otherwise it was using the default profile instead of the profile for the env.

provider "aws" {
  region  = "${var.region}"
  profile = "myprofile"
  version = "~> 1.50"
}

terraform {
  backend "s3" {
    bucket  = "REDACTED"
    key     = "sysops/servername.tfstate"
    region  = "us-east-1"
    encrypt = true
    profile = "myprofile"
  }
}

data "terraform_remote_state" "sysops" {
  backend = "s3"
  config {
    bucket  = "REDACTED"
    key     = "REDACTED"
    region  = "us-east-1"
    profile = "myprofile"
  }
}
I ran into this as well, and my problem was different from everything here. I'd recently built out a "dev" stack of configuration directories: VPC, security groups, etc. I then copied all those over to a "prod" stack and proceeded to update the backend stanza with the prod bucket. BUT, I didn't delete the .terraform directory from each resource directory before running init. So even though the backend was correct in my .tf file and my credentials were correct, it was failing to list the dev bucket (where it thought there was an existing state file). Dumb mistake in the end, but the root cause didn't jump out at me.
I can confirm seeing the same problem as @nballenger, i.e. the profile config sometimes doesn't work for the following versions: Terraform v0.11.13 + provider.aws v2.20.0; only updating the env variable AWS_PROFILE seems to solve the issue.
Note: I'm saying sometimes because in another test just now, initializing the profile through the profile config worked fine; I will add more details if I run into this problem again.
Something worth mentioning is that after configuring it successfully with the env variable, the s3 backend configuration is persisted in .terraform/terraform.tfstate, so this may be a good way to confirm whether the backend is using the correct profile.
It seems like a lot of issues are producing the same error. I'd be happy to contribute a fix if we can successfully determine a source for the issue where the code requires a change 😄
For anyone still having issues, I had to include these actions as well as the ones mentioned earlier (ListBucket, GetObject, PutObject, DeleteObject): s3:GetBucketObjectLockConfiguration, s3:GetEncryptionConfiguration, s3:GetLifecycleConfiguration, s3:GetReplicationConfiguration
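For reference, a rough sketch of a policy covering both sets of actions, expressed as an aws_iam_policy_document (the bucket name is a placeholder for your state bucket):

data "aws_iam_policy_document" "tf_state" {
  # Bucket-level actions, including the extra configuration reads mentioned above
  statement {
    actions = [
      "s3:ListBucket",
      "s3:GetBucketObjectLockConfiguration",
      "s3:GetEncryptionConfiguration",
      "s3:GetLifecycleConfiguration",
      "s3:GetReplicationConfiguration",
    ]
    resources = ["arn:aws:s3:::my-tfstate-bucket"]
  }

  # Object-level actions on the state objects themselves
  statement {
    actions   = ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"]
    resources = ["arn:aws:s3:::my-tfstate-bucket/*"]
  }
}

resource "aws_iam_policy" "tf_state" {
  name   = "terraform-state-access"
  policy = "${data.aws_iam_policy_document.tf_state.json}"
}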
Sorry this is tripping you up; trying to handle multiple accounts in the backend can be confusing.
The access denied is because when you run init and change the backend config, terraform's default behavior is to migrate the state from the previous backend to the new backend. So your new configuration may be correct, but you probably don't have the credentials loaded to access the previous state.
This is what the -reconfigure flag was added to support, which ignores the previous configuration altogether.
I have 2 AWS accounts.
I copied a module implementation from one repository to another (one repo per account). When you run terraform init, Terraform tries to get information from the old bucket (the other account).
Solution:
Remove the .terraform directory inside the module.
So today I learnt that s3 remote states resolve their credentials separately from the backend and provider...
So if you're passing a role around for auth, ensure that it's in your remote state block as well, or you'll get this error when the remote state resolves the credential chain.
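A rough sketch of what that looks like, using the 0.11-style syntax seen elsewhere in this thread (the bucket, key, and role ARN are placeholders):

data "terraform_remote_state" "network" {
  backend = "s3"

  config {
    bucket   = "my-tfstate-bucket"                               # placeholder
    key      = "network/terraform.tfstate"                       # placeholder
    region   = "us-east-1"
    role_arn = "arn:aws:iam::123456789012:role/terraform-state"  # same role you pass to the backend/provider
  }
}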
I ran into Error copying state from the previous "local" backend to the newly configured "s3" backend: failed to upload state: AccessDenied: Access Denied
For me, I had default encryption enabled on the backend bucket but was required to specify -backend-config=encrypt=true to init when migrating from local state to S3.
I just put profile in the config block like:

terraform {
  backend "s3" {
    bucket  = "mybucket"
    key     = "path/to/my/key"
    region  = "us-east-1"
    profile = "your_profile"
  }
}

and everything works well.
In my case, I was trying to create a bucket for static website hosting; however, I had previously changed the account setting to block all public access. The message was access denied (it probably should have been another one) for the Administrator account. Changing it to the setting where buckets with public access can exist and running terraform apply again resolved everything.
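If that account-level setting is managed in Terraform rather than in the console, it is exposed as a resource; a rough sketch (the four flags mirror the console toggles, loosened here purely as an illustration):

resource "aws_s3_account_public_access_block" "this" {
  block_public_acls       = false
  block_public_policy     = false
  ignore_public_acls      = false
  restrict_public_buckets = false
}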
Got this same error and I want to provide the workaround that worked for me:
"I deleted the .terraform folder and ran the init again - and it worked" ..... Chukwunonso Agbo
This guy is a genius
If you are switching to S3 as the backend in the middle of the work, you may need to run terraform init -reconfigure instead of terraform init.
Most probably, the -reconfigure option is what you are looking for.
I'm getting this error:
Error: error configuring S3 Backend: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: 0bda7445-fc52-43fa-a1e9-1a6040db165b
Here is how my backend is configured:

terraform {
  backend "s3" {
    bucket     = "mybucket"
    region     = "us-east-1"
    key        = "mybucket/terraform.tfstate"
    access_key = "mykey"
    secret_key = "secretkey"
  }
}
While the policy I created and attached to the user is:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::mybucket"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::mybucket/*"
    }
  ]
}
but I am still getting the error mentioned above.
Just an observation: this situation can happen when using the same repository with different AWS accounts. Then you can delete the .terraform directory and initialize again:
rm -rf .terraform
terraform init
I was so dumb as to forget to set the region in the backend configuration to the correct region.
I changed the profile in the ~/.aws/credentials file to "default", corrected the aws provider and backend configuration to the same region, and that fixed the issue.
Might have some insight into why @bzamecnik's solution works. If you do not remove the .terraform folder, it tries to use the old backend configuration, even if the reason for the re-init is that you just changed the backend config. In my case, that meant it was trying to use the wrong bucket name. An rm of the entire .terraform folder and it works as expected.
I encountered this problem when switching between AWS accounts and AWS profiles while working in the same repo/workspace. You can delete the local .terraform folder and rerun terraform init to fix the issue. If your state is actually remote and not local, this shouldn't be an issue.
The .terraform/terraform.tfstate file clearly showed that it was pointing to an S3 bucket in the wrong account, which the currently applied AWS credentials couldn't read from.
I was on TF 0.12.29, and it feels like a bug that TF fails and displays this error instead of an error saying that you can't run a version older than the latest TF version that has written to the remote state. Update your local Terraform version, clear local state, and try again:
$ cat .terraform-version
latest:^0.12
$ tfenv install 0.12.30
$ rm -rf .terraform
$ terraform init
Hi All,
Thanks for the follow-ups here which mention init -reconfigure. This appears to be the solution to the original issue, so I'm going to close this out.
If anyone is having continuing problems, we use GitHub issues for tracking bugs and enhancements, rather than for questions. While we can sometimes help with certain simple problems here, it's better to use the community forum where there are more people ready to help.
Thanks!
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.