
bitnami/mongodb-sharded 5.0.1

What steps will reproduce the bug?

Failed to create a new MongoDB sharded cluster from scratch using helm install with auth options.

Are you using any custom parameters or values?

helm install mongodb --set global.storageClass=local-path-delete,auth.enabled=true,auth.rootUser=root,auth.rootPassword=admin bitnami/mongodb-sharded

What is the expected behavior?

No response

What do you see instead?

04:56:01.45 INFO ==> Setting node as primary
mongodb 04:56:01.50
mongodb 04:56:01.50 Welcome to the Bitnami mongodb-sharded container
mongodb 04:56:01.50 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb-sharded
mongodb 04:56:01.50 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb-sharded/issues
mongodb 04:56:01.50
mongodb 04:56:01.50 INFO ==> ** Starting MongoDB Sharded setup **
mongodb 04:56:01.55 INFO ==> Validating settings in MONGODB_* env vars...
mongodb 04:56:01.60 INFO ==> Initializing MongoDB Sharded...
mongodb 04:56:01.62 INFO ==> Deploying MongoDB Sharded from scratch...
MongoNetworkError: connect ECONNREFUSED 10.216.3.227:27017

Additional information

No response

Same issue with the default values.

$ helm install mongodb-sharded bitnami/mongodb-sharded
NAME: mongodb-sharded
LAST DEPLOYED: Sun May 15 12:52:31 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: mongodb-sharded
CHART VERSION: 5.0.2
APP VERSION: 5.0.8
** Please be patient while the chart is being deployed **
The MongoDB® Sharded cluster can be accessed via the Mongos instances in port 27017 on the following DNS name from within your cluster:
    mongodb-sharded.default.svc.cluster.local
To get the root password run:
    export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace default mongodb-sharded -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)
To connect to your database run the following command:
    kubectl run --namespace default mongodb-sharded-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mongodb-sharded:5.0.8-debian-10-r5 --command -- mongosh admin --host mongodb-sharded --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD
To connect to your database from outside the cluster execute the following commands:
    kubectl port-forward --namespace default svc/mongodb-sharded 27017:27017 &
    mongosh --host 127.0.0.1 --authenticationDatabase admin -p $MONGODB_ROOT_PASSWORD
$ export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace default mongodb-sharded -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)
$ kubectl run --namespace default mongodb-sharded-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mongodb-sharded:5.0.8-debian-10-r5 --command -- mongosh admin --host mongodb-sharded --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD
If you don't see a command prompt, try pressing enter.
Current Mongosh Log ID:	62807b2715f80020c62dd5f7
Connecting to:		mongodb://mongodb-sharded:27017/admin?directConnection=true&appName=mongosh+1.3.1
MongoNetworkError: connect ECONNREFUSED 10.110.210.222:27017
pod "mongodb-sharded-client" deleted
pod default/mongodb-sharded-client terminated (Error)
$ kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
mongodb-sharded-configsvr-0              0/1     Running   0          81s
mongodb-sharded-mongos-cb6b5c858-d2v4v   0/1     Running   0          81s
mongodb-sharded-shard0-data-0            0/1     Running   0          81s
mongodb-sharded-shard1-data-0            0/1     Running   0          81s
$ kubectl logs -p mongodb-sharded-mongos-cb6b5c858-d2v4v
mongodb 04:01:11.75
mongodb 04:01:11.78 Welcome to the Bitnami mongodb-sharded container
mongodb 04:01:11.80 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb-sharded
mongodb 04:01:11.82 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb-sharded/issues
mongodb 04:01:11.84
mongodb 04:01:11.86 INFO  ==> ** Starting MongoDB Sharded setup **
mongodb 04:01:12.11 INFO  ==> Validating settings in MONGODB_* env vars...

My Kubernetes cluster has 10 nodes.
This is the pod list for 5.0.1:

$ kubectl get pod -o wide
NAME                                             READY   STATUS    RESTARTS   AGE   IP             NODE       NOMINATED NODE   READINESS GATES
mongodb-mongodb-sharded-configsvr-0              0/1     Running   0          7s    10.212.3.13    ntest7   <none>           <none>
mongodb-mongodb-sharded-mongos-cbc7b5cd6-qr9c8   0/1     Running   0          7s    10.212.3.241   ntest8   <none>           <none>
mongodb-mongodb-sharded-shard0-data-0            0/1     Running   0          7s    10.212.3.242   ntest8   <none>           <none>
mongodb-mongodb-sharded-shard1-data-0            0/1     Running   0          7s    10.212.3.11    ntest7   <none>           <none>

4.0.17 works fine with the same options (the --version flag is the only difference).
Please refer to the 4.0.17 logs below.

$ helm install mongodb --set global.storageClass=local-path-delete,auth.enabled=true,auth.rootUser=root,auth.rootPassword=admin bitnami/mongodb-sharded --version 4.0.17
NAME: mongodb
LAST DEPLOYED: Tue May 17 09:58:43 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: mongodb-sharded
CHART VERSION: 4.0.17
APP VERSION: 4.4.13
** Please be patient while the chart is being deployed **
The MongoDB® Sharded cluster can be accessed via the Mongos instances in port 27017 on the following DNS name from within your cluster:
    mongodb-mongodb-sharded.default.svc.cluster.local
To get the root password run:
    export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace default mongodb-mongodb-sharded -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)
To connect to your database run the following command:
    kubectl run --namespace default mongodb-mongodb-sharded-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mongodb-sharded:4.4.13-debian-10-r30 --command -- mongo admin --host mongodb-mongodb-sharded
To connect to your database from outside the cluster execute the following commands:
    kubectl port-forward --namespace default svc/mongodb-mongodb-sharded 27017:27017 &
    mongo --host 127.0.0.1 --authenticationDatabase admin -p $MONGODB_ROOT_PASSWORD
$ kubectl get pods -o wide
NAME                                              READY   STATUS              RESTARTS   AGE   IP             NODE      NOMINATED NODE   READINESS GATES
mongodb-mongodb-sharded-configsvr-0               0/1     ContainerCreating   0          6s    <none>         ntest7   <none>           <none>
mongodb-mongodb-sharded-mongos-544585bbb6-px7zp   0/1     Running             0          6s    10.212.3.238   ntest8   <none>           <none>
mongodb-mongodb-sharded-shard0-data-0             0/1     ContainerCreating   0          6s    <none>         ntest8   <none>           <none>
mongodb-mongodb-sharded-shard1-data-0             0/1     ContainerCreating   0          6s    <none>         ntest7   <none>           <none>
$ kubectl logs mongodb-mongodb-sharded-configsvr-0
 00:58:57.82 INFO  ==> Setting node as primary
mongodb 00:58:57.86
mongodb 00:58:57.87 Welcome to the Bitnami mongodb-sharded container
mongodb 00:58:57.87 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb-sharded
mongodb 00:58:57.87 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb-sharded/issues
mongodb 00:58:57.87
mongodb 00:58:57.87 INFO  ==> ** Starting MongoDB Sharded setup **
mongodb 00:58:57.92 INFO  ==> Validating settings in MONGODB_* env vars...
mongodb 00:58:57.94 INFO  ==> Initializing MongoDB Sharded...
mongodb 00:58:57.96 INFO  ==> Deploying MongoDB Sharded from scratch...
mongodb 00:59:05.09 INFO  ==> Creating users...
mongodb 00:59:05.09 INFO  ==> Creating root user...
MongoDB shell version v4.4.13
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("d8096fce-8fe2-45a6-b0ac-76faf57b583b") }
MongoDB server version: 4.4.13
Successfully added user: {
	"user" : "root",
	"roles" : [
		{
			"role" : "root",
			"db" : "admin"
		}
	]
}
mongodb 00:59:05.28 INFO  ==> Users created
mongodb 00:59:05.29 INFO  ==> Writing keyfile for replica set authentication...
mongodb 00:59:05.31 INFO  ==> Enabling authentication...
mongodb 00:59:05.33 INFO  ==> Configuring MongoDB Sharded replica set...
mongodb 00:59:05.33 INFO  ==> Stopping MongoDB...
mongodb 00:59:07.85 INFO  ==> Configuring MongoDB primary node...: mongodb-mongodb-sharded-configsvr-0.mongodb-mongodb-sharded-headless.default.svc.cluster.local
mongodb 00:59:07.99 INFO  ==> Stopping MongoDB...
mongodb 00:59:09.01 INFO  ==> ** MongoDB Sharded setup finished! **
mongodb 00:59:09.07 INFO  ==> ** Starting MongoDB **
{"t":{"$date":"2022-05-17T00:59:09.098+00:00"},"s":"I",  "c":"CONTROL",  "id":20698,   "ctx":"main","msg":"***** SERVER RESTARTED *****"}
{"t":{"$date":"2022-05-17T00:59:09.101+00:00"},"s":"I",  "c":"CONTROL",  "id":23285,   "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2022-05-17T00:59:09.103+00:00"},"s":"W",  "c":"ASIO",     "id":22601,   "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2022-05-17T00:59:09.103+00:00"},"s":"I",  "c":"NETWORK",  "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2022-05-17T00:59:09.155+00:00"},"s":"I",  "c":"STORAGE",  "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/bitnami/mongodb/data/db","architecture":"64-bit","host":"mongodb-mongodb-sharded-configsvr-0"}}

The difference I found is that 5.0.1 doesn't seem to create the root user. (I'm not sure whether that's what causes the problem, though.)
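
For anyone checking the same thing, here is a quick way to confirm whether the root user was actually created on the config server. This is only a sketch (the pod name comes from the 5.0.1 listing above, and the legacy shell path is the one used later in this thread):

# Hypothetical check: list users in the admin database from inside the configsvr pod.
# While the localhost exception is still open, this should work without credentials.
kubectl exec -it mongodb-mongodb-sharded-configsvr-0 -- \
  /opt/bitnami/mongodb/bin/mongo --quiet admin --eval 'db.getUsers()'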

I couldn't reproduce your issue using version 5.0.5. This is what I did:

$ helm install mg --set auth.enabled=true,auth.rootUser=root,auth.rootPassword=admin bitnami/mongodb-sharded
NAME: mg
LAST DEPLOYED: Wed May 25 12:32:32 2022
NAMESPACE: n
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: mongodb-sharded
CHART VERSION: 5.0.5
APP VERSION: 5.0.8
** Please be patient while the chart is being deployed **
The MongoDB® Sharded cluster can be accessed via the Mongos instances in port 27017 on the following DNS name from within your cluster:
    mg-mongodb-sharded.n.svc.cluster.local
To get the root password run:
    export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace n mg-mongodb-sharded -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)
To connect to your database run the following command:
    kubectl run --namespace n mg-mongodb-sharded-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mongodb-sharded:5.0.8-debian-10-r18 --command -- mongosh admin --host mg-mongodb-sharded --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD
To connect to your database from outside the cluster execute the following commands:
    kubectl port-forward --namespace n svc/mg-mongodb-sharded 27017:27017 &
    mongosh --host 127.0.0.1 --authenticationDatabase admin -p $MONGODB_ROOT_PASSWORD
$ k get po
NAME                                         READY   STATUS    RESTARTS   AGE
mg-mongodb-sharded-configsvr-0               1/1     Running   0          5m33s
mg-mongodb-sharded-mongos-7f44bd5558-svvtk   1/1     Running   0          5m33s
mg-mongodb-sharded-shard0-data-0             1/1     Running   0          5m33s
mg-mongodb-sharded-shard1-data-0             1/1     Running   0          5m33s

Same issue.

https://github.com/bitnami/bitnami-docker-mongodb-sharded
At first, I thought the Docker image itself was built wrong.
So, following the example in the repository above, I put together a docker-compose setup and failed to run MongoDB sharded via docker.io/bitnami/mongodb-sharded:5.0.

mongodb-shard2-secondary_1  | mongodb 01:44:02.12
mongodb-shard2-secondary_1  | mongodb 01:44:02.18 Welcome to the Bitnami mongodb-sharded container
mongodb-shard2-secondary_1  | mongodb 01:44:02.24 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb-sharded
mongodb-shard2-secondary_1  | mongodb 01:44:02.28 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb-sharded/issues
mongodb-shard2-secondary_1  | mongodb 01:44:02.31
mongodb-shard2-secondary_1  | mongodb 01:44:02.36 INFO  ==> ** Starting MongoDB Sharded setup **
mongodb-shard2-secondary_1  | mongodb 01:44:02.91 INFO  ==> Validating settings in MONGODB_* env vars...
mongodb-sharded_1           | mongodb 01:44:01.63
mongodb-sharded_1           | mongodb 01:44:01.71 Welcome to the Bitnami mongodb-sharded container
mongodb-sharded_1           | mongodb 01:44:01.77 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb-sharded
mongodb-sharded_1           | mongodb 01:44:01.82 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb-sharded/issues
mongodb-sharded_1           | mongodb 01:44:01.86
mongodb-sharded_1           | mongodb 01:44:01.94 INFO  ==> ** Starting MongoDB Sharded setup **
mongodb-sharded_1           | mongodb 01:44:02.55 INFO  ==> Validating settings in MONGODB_* env vars...
mongodb-sharded-2_1         | mongodb 01:44:01.44
mongodb-sharded-2_1         | mongodb 01:44:01.50 Welcome to the Bitnami mongodb-sharded container
mongodb-sharded-2_1         | mongodb 01:44:01.54 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb-sharded
mongodb-sharded-2_1         | mongodb 01:44:01.57 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb-sharded/issues
mongodb-sharded-2_1         | mongodb 01:44:01.65
mongodb-sharded-2_1         | mongodb 01:44:01.70 INFO  ==> ** Starting MongoDB Sharded setup **
mongodb-sharded-2_1         | mongodb 01:44:02.33 INFO  ==> Validating settings in MONGODB_* env vars...

No further progress is made past this point.
However, it was possible to configure the cluster normally with the docker.io/bitnami/mongodb-sharded:4.4 image.

docker-library/mongo#509
As a next step, I looked up the docker-library mongo issue above.
According to that issue, from 5.0 onward the config server is required to run as a replica set.

version: '3.1'
services:
  config01a:
    container_name: config01a
    image: mongo:5.0
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: root
    command: mongod --configsvr --replSet config01 --bind_ip_all --port 27017 --dbpath /data/db --auth --keyFile /etc/mongo.key
    volumes:
      - /data/config01a:/data/db
      - ./mongo.key:/etc/mongo.key
    networks:
      - mongo-cluster
    restart: always
  config01b:
    container_name: config01b
    image: mongo:5.0
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: root
    command: mongod --configsvr --replSet config01 --bind_ip_all --port 27017 --dbpath /data/db --auth --keyFile /etc/mongo.key
    volumes:
      - /data/config01b:/data/db
      - ./mongo.key:/etc/mongo.key
    networks:
      - mongo-cluster
    restart: always
  shard01a:
    container_name: shard01a
    image: mongo:5.0
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: root
    command: mongod --shardsvr --replSet shard01 --port 27017 --dbpath /data/db --auth --keyFile /etc/mongo.key
    volumes:
      - /data/shard01a:/data/db
      - ./mongo.key:/etc/mongo.key
    networks:
      - mongo-cluster
    restart: always
  shard01b:
    container_name: shard01b
    image: mongo:5.0
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: root
    command: mongod --shardsvr --replSet shard01 --port 27017 --dbpath /data/db --auth --keyFile /etc/mongo.key
    volumes:
      - /data/shard01b:/data/db
      - ./mongo.key:/etc/mongo.key
    networks:
      - mongo-cluster
    restart: always
  router01a:
    container_name: router01a
    image: mongo:5.0
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: root
    command: mongos --port 27017 --configdb config01/config01a:27017,config01b:27017 --bind_ip_all --keyFile /etc/mongo.key
    volumes:
      - ./mongo.key:/etc/mongo.key
    ports:
      - "27017:27017"
    depends_on:
      - config01a
      - config01b
      - shard01a
      - shard01b
    networks:
      - mongo-cluster
    restart: always
networks:
  mongo-cluster:
    driver: bridge

After configuring the replica sets, bringing up the MongoDB sharded cluster was possible with the docker-compose above.
Is it possible for the Bitnami Helm chart to start the config server as a replica set in the same way?
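
For context, a compose file like the one above still needs a one-time manual initiation of each replica set before the router can do anything. The original commands were not posted, so the following is only a rough sketch under that assumption, reusing the service names from the file (exact auth handling may differ depending on the image's init behavior):

# Hypothetical one-time initiation, run once after the containers are up.
# Initiate the config server replica set:
docker exec config01a mongosh --eval '
  rs.initiate({_id: "config01", configsvr: true, members: [
    {_id: 0, host: "config01a:27017"},
    {_id: 1, host: "config01b:27017"}]})'
# Initiate the shard replica set:
docker exec shard01a mongosh --eval '
  rs.initiate({_id: "shard01", members: [
    {_id: 0, host: "shard01a:27017"},
    {_id: 1, host: "shard01b:27017"}]})'
# Once both replica sets have elected primaries, register the shard via the router:
docker exec router01a mongosh -u root -p root --eval '
  sh.addShard("shard01/shard01a:27017,shard01b:27017")'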

I tried again, on a fresh cluster, and couldn't reproduce it:

$ helm install mg --set auth.enabled=true,auth.rootUser=root,auth.rootPassword=admin bitnami/mongodb-sharded
NAME: mg
LAST DEPLOYED: Tue May 31 09:58:02 2022
NAMESPACE: mongo
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: mongodb-sharded
CHART VERSION: 5.0.6
APP VERSION: 5.0.9
** Please be patient while the chart is being deployed **
$ k get po
NAME                                         READY   STATUS    RESTARTS   AGE
mg-mongodb-sharded-configsvr-0               1/1     Running   0          2m24s
mg-mongodb-sharded-mongos-54d7cfbfbf-mfs6r   1/1     Running   0          2m24s
mg-mongodb-sharded-shard0-data-0             1/1     Running   0          2m24s
mg-mongodb-sharded-shard1-data-0             1/1     Running   0          2m24s

Did you try the latest versions?

Sure, I tried the latest version and got this.

$ helm install mg --set auth.enabled=true,auth.rootUser=root,auth.rootPassword=admin bitnami/mongodb-sharded
NAME: mg
LAST DEPLOYED: Wed Jun  1 10:48:36 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: mongodb-sharded
CHART VERSION: 5.0.6
APP VERSION: 5.0.9
** Please be patient while the chart is being deployed **
$ kubectl get po
NAME                                       READY   STATUS              RESTARTS   AGE
mg-mongodb-sharded-configsvr-0             0/1     ContainerCreating   0          9m45s
mg-mongodb-sharded-mongos-75fccc4c-87s72   0/1     Running             3          9m45s
mg-mongodb-sharded-shard0-data-0           0/1     ContainerCreating   0          9m45s
mg-mongodb-sharded-shard1-data-0           0/1     ContainerCreating   0          9m45s
$ kubectl logs mg-mongodb-sharded-configsvr-0
Error from server (BadRequest): container "mongodb" in pod "mg-mongodb-sharded-configsvr-0" is waiting to start: ContainerCreating

Please let me know what information you need to solve the problem.

I think I found the reason why it works in 4.4.x and not in 5.0.x.

In 4.4.x, mg-mongodb-sharded-configsvr-0 connects to 127.0.0.1. See the logs below:

mongodb 00:58:57.96 INFO ==> Deploying MongoDB Sharded from scratch...
mongodb 00:59:05.09 INFO ==> Creating users...
mongodb 00:59:05.09 INFO ==> Creating root user...
MongoDB shell version v4.4.13
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb

On the other hand, in 5.0.x, it connects to the pod IP.

mongodb 04:56:01.62 INFO ==> Deploying MongoDB Sharded from scratch...
MongoNetworkError: connect ECONNREFUSED 10.216.3.227:27017

In the mg-mongodb-sharded-configsvr-0 pod of 5.0.x,
I tried to connect to both addresses (the pod IP and 127.0.0.1).
Here is the result:

$ /opt/bitnami/mongodb/bin/mongo 10.224.3.227
MongoDB shell version v5.0.9
connecting to: mongodb://10.224.1.46:27017/test?compressors=disabled&gssapiServiceName=mongodb
Error: couldn't connect to server 10.224.1.46:27017, connection attempt failed: SocketException: Error connecting to 10.224.3.227:27017 :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:372:17
@(connect):2:6
exception: connect failed
exiting with code 1
$ /opt/bitnami/mongodb/bin/mongo 127.0.0.1
MongoDB shell version v5.0.9
connecting to: mongodb://127.0.0.1:27017/test?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("2590ee55-d458-4669-bc69-0e78942054e2") }
MongoDB server version: 5.0.9
mg-mongodb-sharded-configsvr:PRIMARY>

I don't know why, but it seems to be impossible to connect via the pod IP.

This error you share indicates things are still being deployed as there are containers pending creation:

Error from server (BadRequest): container "mongodb" in pod "mg-mongodb-sharded-configsvr-0" is waiting to start: ContainerCreating

What happens if you wait long enough for it to finish?

Even if I wait 100 years, nothing happens.

As I mentioned, a MongoDB connection cannot be established via the pod's IP (10.216.3.227).
That is because the bind IP is set like this in /opt/bitnami/mongodb/conf/mongodb.conf:

# network interfaces
bindIpAll: false
bindIp: 127.0.0.1

This means connections are not allowed from anywhere except 127.0.0.1.
I doubt that you actually succeeded in creating the cluster.
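
A quick way to confirm the effective bind setting, using the config path above:

# Sketch: read the network section of the rendered config inside the pod.
kubectl exec mg-mongodb-sharded-configsvr-0 -- \
  grep -B1 -A1 'bindIp' /opt/bitnami/mongodb/conf/mongodb.conf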
Can you share your logs?

So I tried to add the --bind_ip_all option via mongodbExtraFlags, like this:

helm install mg --set auth.rootUser=root,auth.rootPassword=admin,configsvr.mongodbExtraFlags="--bind_ip_all",shardsvr.dataNode.mongodbExtraFlags="--bind_ip_all" bitnami/mongodb-sharded

Then the MongoDB connection was established (though it takes far too long, more than 40 minutes).
But after that, the cluster is still not created; only unfamiliar log lines repeat over and over.
Please refer to the logs below and the attached full logs.

Log of configsvr

 14:46:20.46 INFO  ==> Setting node as primary
mongodb 14:46:20.51
mongodb 14:46:20.52 Welcome to the Bitnami mongodb-sharded container
mongodb 14:46:20.52 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb-sharded
mongodb 14:46:20.52 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb-sharded/issues
mongodb 14:46:20.52
mongodb 14:46:20.52 INFO  ==> ** Starting MongoDB Sharded setup **
mongodb 14:46:20.58 INFO  ==> Validating settings in MONGODB_* env vars...
mongodb 14:46:20.62 INFO  ==> Initializing MongoDB Sharded...
mongodb 14:46:20.65 INFO  ==> Deploying MongoDB Sharded from scratch...
mongodb 14:46:20.67 DEBUG ==> Starting MongoDB in background...
about to fork child process, waiting until server is ready for connections.
forked process: 57
child process started successfully, parent exiting
mongodb 15:03:54.04 DEBUG ==> Validating 127.0.0.1 as primary node...
mongodb 15:12:38.35 DEBUG ==> Starting MongoDB in background...
mongodb 15:12:38.35 INFO  ==> Creating users...
mongodb 15:12:38.36 INFO  ==> Creating root user...
Current Mongosh Log ID:	629f6ae7ff9e421a90612ab4
Connecting to:		mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.4.2
Using MongoDB:		5.0.9
Using Mongosh:		1.4.2
For mongosh info see: https://docs.mongodb.com/mongodb-shell/
------
   The server generated these startup warnings when booting:
   2022-06-07T14:46:20.716+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
------
mg-mongodb-sharded-configsvr [direct: primary] test> {
  ok: 1,
  '$gleStats': {
    lastOpTime: { ts: Timestamp({ t: 1654614760, i: 4 }), t: Long("1") },
    electionId: ObjectId("7fffffff0000000000000001")
  },
  lastCommittedOpTime: Timestamp({ t: 1654614760, i: 1 }),
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1654614760, i: 4 }),
    signature: {
      hash: Binary(Buffer.from("0000000000000000000000000000000000000000", "hex"), 0),
      keyId: Long("0")
    }
  },
  operationTime: Timestamp({ t: 1654614760, i: 4 })
}
mongodb 15:21:24.67 INFO  ==> Users created
mongodb 15:21:24.67 INFO  ==> Writing keyfile for replica set authentication...
mongodb 15:21:24.70 INFO  ==> Enabling authentication...
mongodb 15:21:24.71 INFO  ==> Configuring MongoDB Sharded replica set...
mongodb 15:21:24.72 INFO  ==> Stopping MongoDB...
mongodb 15:21:26.73 DEBUG ==> Starting MongoDB in background...
mg-mongodb-sharded-configsvr [direct: primary] test> about to fork child process, waiting until server is ready for connections.
forked process: 1115
child process started successfully, parent exiting
mongodb 15:30:13.06 INFO  ==> Configuring MongoDB primary node...: mg-mongodb-sharded-configsvr-0.mg-mongodb-sharded-headless.mongodb.svc.cluster.local
mongodb 15:38:59.39 INFO  ==> Stopping MongoDB...
mongodb 15:39:00.42 INFO  ==> ** MongoDB Sharded setup finished! **
mongodb 15:39:00.48 INFO  ==> ** Starting MongoDB **
{"t":{"$date":"2022-06-07T15:39:00.515+00:00"},"s":"I",  "c":"CONTROL",  "id":20698,   "ctx":"-","msg":"***** SERVER RESTARTED *****"}
{"t":{"$date":"2022-06-07T15:39:00.518+00:00"},"s":"I",  "c":"NETWORK",  "id":4915701, "ctx":"main","msg":"Initialized wire specification","attr":{"spec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":13},"outgoing":{"minWireVersion":0,"maxWireVersion":13},"isInternalClient":true}}}
{"t":{"$date":"2022-06-07T15:39:00.521+00:00"},"s":"I",  "c":"CONTROL",  "id":23285,   "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2022-06-07T15:39:00.521+00:00"},"s":"W",  "c":"ASIO",     "id":22601,   "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2022-06-07T15:39:00.521+00:00"},"s":"I",  "c":"NETWORK",  "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2022-06-07T15:39:00.577+00:00"},"s":"W",  "c":"ASIO",     "id":22601,   "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2022-06-07T15:39:00.577+00:00"},"s":"I",  "c":"REPL",     "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"ReshardingCoordinatorService","ns":"config.reshardingOperations"}}
{"t":{"$date":"2022-06-07T15:39:00.578+00:00"},"s":"I",  "c":"CONTROL",  "id":5945603, "ctx":"main","msg":"Multi threading initialized"}
{"t":{"$date":"2022-06-07T15:39:00.578+00:00"},"s":"I",  "c":"CONTROL",  "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/bitnami/mongodb/data/db","architecture":"64-bit","host":"mg-mongodb-sharded-configsvr-0"}}
{"t":{"$date":"2022-06-07T15:39:00.578+00:00"},"s":"I",  "c":"CONTROL",  "id":23403,   "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"5.0.9","gitVersion":"6f7dae919422dcd7f4892c10ff20cdc721ad00e6","openSSLVersion":"OpenSSL 1.1.1n  15 Mar 2022","modules":[],"allocator":"tcmalloc","environment":{"distmod":"debian10","distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2022-06-07T15:39:00.578+00:00"},"s":"I",  "c":"CONTROL",  "id":51765,   "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"PRETTY_NAME=\"Debian GNU/Linux 10 (buster)\"","version":"Kernel 4.15.0-156-generic"}}}
{"t":{"$date":"2022-06-07T15:39:00.578+00:00"},"s":"I",  "c":"CONTROL",  "id":21951,   "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"config":"/opt/bitnami/mongodb/conf/mongodb.conf","net":{"bindIp":"*","ipv6":false,"port":27017,"unixDomainSocket":{"enabled":true,"pathPrefix":"/opt/bitnami/mongodb/tmp"}},"processManagement":{"fork":false,"pidFilePath":"/opt/bitnami/mongodb/tmp/mongodb.pid"},"replication":{"enableMajorityReadConcern":true,"replSetName":"mg-mongodb-sharded-configsvr"},"security":{"authorization":"enabled","keyFile":"/opt/bitnami/mongodb/conf/keyfile"},"setParameter":{"enableLocalhostAuthBypass":"false"},"sharding":{"clusterRole":"configsvr"},"storage":{"dbPath":"/bitnami/mongodb/data/db","directoryPerDB":false,"journal":{"enabled":true}},"systemLog":{"destination":"file","logAppend":true,"logRotate":"reopen","path":"/opt/bitnami/mongodb/logs/mongodb.log","quiet":false,"verbosity":0}}}}
{"t":{"$date":"2022-06-07T15:39:00.580+00:00"},"s":"I",  "c":"STORAGE",  "id":22270,   "ctx":"initandlisten","msg":"Storage engine to use detected by data files","attr":{"dbpath":"/bitnami/mongodb/data/db","storageEngine":"wiredTiger"}}
{"t":{"$date":"2022-06-07T15:39:00.580+00:00"},"s":"I",  "c":"STORAGE",  "id":22297,   "ctx":"initandlisten","msg":"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem","tags":["startupWarnings"]}
{"t":{"$date":"2022-06-07T15:39:00.580+00:00"},"s":"I",  "c":"STORAGE",  "id":22315,   "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=3476M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"}}
{"t":{"$date":"2022-06-07T15:39:01.364+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1654616341:364181][1:0x7f4a027f6100], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 2 through 3"}}
{"t":{"$date":"2022-06-07T15:39:01.482+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1654616341:482640][1:0x7f4a027f6100], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 3 through 3"}}
{"t":{"$date":"2022-06-07T15:39:01.583+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1654616341:583596][1:0x7f4a027f6100], txn-recover: [WT_VERB_RECOVERY_ALL] Main recovery loop: starting at 2/457088 to 3/256"}}
{"t":{"$date":"2022-06-07T15:39:01.584+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1654616341:584035][1:0x7f4a027f6100], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 2 through 3"}}
{"t":{"$date":"2022-06-07T15:39:01.648+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1654616341:648715][1:0x7f4a027f6100], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 3 through 3"}}
{"t":{"$date":"2022-06-07T15:39:01.809+00:00"},"s":"I",  "c":"REPL",     "id":21529,   "ctx":"initandlisten","msg":"Initializing rollback ID","attr":{"rbid":1}}
{"t":{"$date":"2022-06-07T15:39:01.809+00:00"},"s":"I",  "c":"REPL",     "id":4280504, "ctx":"initandlisten","msg":"Cleaning up any partially applied oplog batches & reading last op from oplog"}
{"t":{"$date":"2022-06-07T15:39:01.810+00:00"},"s":"I",  "c":"-",        "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.","nextWakeupMillis":200}}
{"t":{"$date":"2022-06-07T15:39:01.810+00:00"},"s":"I",  "c":"REPL",     "id":21544,   "ctx":"initandlisten","msg":"Recovering from stable timestamp","attr":{"stableTimestamp":{"$timestamp":{"t":1654616338,"i":1}},"topOfOplog":{"ts":{"$timestamp":{"t":1654616338,"i":1}},"t":2},"appliedThrough":{"ts":{"$timestamp":{"t":0,"i":0}},"t":-1}}}
{"t":{"$date":"2022-06-07T15:39:01.811+00:00"},"s":"I",  "c":"REPL",     "id":21545,   "ctx":"initandlisten","msg":"Starting recovery oplog application at the stable timestamp","attr":{"stableTimestamp":{"$timestamp":{"t":1654616338,"i":1}}}}
{"t":{"$date":"2022-06-07T15:39:01.884+00:00"},"s":"I",  "c":"REPL",     "id":21392,   "ctx":"OplogApplier-0","msg":"New replica set config in use","attr":{"config":{"_id":"mg-mongodb-sharded-configsvr","version":2,"term":3,"members":[{"_id":0,"host":"mg-mongodb-sharded-configsvr-0.mg-mongodb-sharded-headless.mongodb.svc.cluster.local:27017","arbiterOnly":false,"buildIndexes":true,"hidden":false,"priority":5,"tags":{},"secondaryDelaySecs":0,"votes":1}],"configsvr":true,"protocolVersion":1,"writeConcernMajorityJournalDefault":true,"settings":{"chainingAllowed":true,"heartbeatIntervalMillis":2000,"heartbeatTimeoutSecs":10,"electionTimeoutMillis":10000,"catchUpTimeoutMillis":-1,"catchUpTakeoverDelayMillis":30000,"getLastErrorModes":{},"getLastErrorDefaults":{"w":1,"wtimeout":0},"replicaSetId":{"$oid":"629f66cd98a55fb33856f073"}}}}}
{"t":{"$date":"2022-06-07T15:39:01.884+00:00"},"s":"I",  "c":"REPL",     "id":21393,   "ctx":"OplogApplier-0","msg":"Found self in config","attr":{"hostAndPort":"mg-mongodb-sharded-configsvr-0.mg-mongodb-sharded-headless.mongodb.svc.cluster.local:27017"}}
{"t":{"$date":"2022-06-07T15:39:01.884+00:00"},"s":"I",  "c":"REPL",     "id":6015310, "ctx":"OplogApplier-0","msg":"Starting to transition to primary."}
{"t":{"$date":"2022-06-07T15:39:01.884+00:00"},"s":"I",  "c":"REPL",     "id":6015309, "ctx":"OplogApplier-0","msg":"Logging transition to primary to oplog on stepup"}
{"t":{"$date":"2022-06-07T15:39:01.887+00:00"},"s":"I",  "c":"-",        "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.","nextWakeupMillis":400}}
{"t":{"$date":"2022-06-07T15:39:01.888+00:00"},"s":"I",  "c":"STORAGE",  "id":20657,   "ctx":"OplogApplier-0","msg":"IndexBuildsCoordinator::onStepUp - this node is stepping up to primary"}
{"t":{"$date":"2022-06-07T15:39:01.889+00:00"},"s":"I",  "c":"SHARDING", "id":22049,   "ctx":"PeriodicShardedIndexConsistencyChecker","msg":"Checking consistency of sharded collection indexes across the cluster"}
{"t":{"$date":"2022-06-07T15:39:01.889+00:00"},"s":"I",  "c":"REPL",     "id":21331,   "ctx":"OplogApplier-0","msg":"Transition to primary complete; database writes are now permitted"}
{"t":{"$date":"2022-06-07T15:39:01.889+00:00"},"s":"I",  "c":"REPL",     "id":6015306, "ctx":"OplogApplier-0","msg":"Applier already left draining state, exiting."}
{"t":{"$date":"2022-06-07T15:39:01.889+00:00"},"s":"I",  "c":"SHARDING", "id":21856,   "ctx":"Balancer","msg":"CSRS balancer is starting"}
{"t":{"$date":"2022-06-07T15:39:01.889+00:00"},"s":"W",  "c":"SHARDING", "id":21876,   "ctx":"Balancer","msg":"Got error while refreshing balancer settings, will retry with a backoff","attr":{"backoffIntervalMillis":10000,"error":{"code":134,"codeName":"ReadConcernMajorityNotAvailableYet","errmsg":"Failed to refresh the balancer settings :: caused by :: Read concern majority reads are currently not possible."}}}
{"t":{"$date":"2022-06-07T15:39:01.924+00:00"},"s":"I",  "c":"TXN",      "id":22452,   "ctx":"TransactionCoordinator","msg":"Need to resume coordinating commit for transactions with an in-progress two-phase commit/abort","attr":{"numPendingTransactions":0}}
{"t":{"$date":"2022-06-07T15:39:01.924+00:00"},"s":"I",  "c":"TXN",      "id":22438,   "ctx":"TransactionCoordinator","msg":"Incoming coordinateCommit requests are now enabled"}
{"t":{"$date":"2022-06-07T15:39:01.925+00:00"},"s":"I",  "c":"REPL",     "id":5123005, "ctx":"ReshardingCoordinatorService-0","msg":"Rebuilding PrimaryOnlyService due to stepUp","attr":{"service":"ReshardingCoordinatorService"}}

Log of shard-data

 14:46:20.07 INFO  ==> Setting node as primary
mongodb 14:46:20.12
mongodb 14:46:20.13 Welcome to the Bitnami mongodb-sharded container
mongodb 14:46:20.13 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb-sharded
mongodb 14:46:20.13 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb-sharded/issues
mongodb 14:46:20.13
mongodb 14:46:20.13 INFO  ==> ** Starting MongoDB Sharded setup **
mongodb 14:46:20.19 INFO  ==> Validating settings in MONGODB_* env vars...
mongodb 14:46:20.24 INFO  ==> Initializing MongoDB Sharded...
mongodb 14:46:20.27 INFO  ==> Deploying MongoDB Sharded from scratch...
mongodb 14:46:20.29 DEBUG ==> Starting MongoDB in background...
about to fork child process, waiting until server is ready for connections.
forked process: 58
child process started successfully, parent exiting
mongodb 15:03:54.13 DEBUG ==> Validating 127.0.0.1 as primary node...
mongodb 15:12:38.42 DEBUG ==> Starting MongoDB in background...
mongodb 15:12:38.43 INFO  ==> Creating users...
mongodb 15:12:38.43 INFO  ==> Creating root user...
Current Mongosh Log ID:	629f6ae76ce7d377038dd6f0
Connecting to:		mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.4.2
Using MongoDB:		5.0.9
Using Mongosh:		1.4.2
For mongosh info see: https://docs.mongodb.com/mongodb-shell/
------
   The server generated these startup warnings when booting:
   2022-06-07T14:46:20.340+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
------
mg-mongodb-sharded-shard-0 [direct: primary] test> {
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1654614760, i: 4 }),
    signature: {
      hash: Binary(Buffer.from("0000000000000000000000000000000000000000", "hex"), 0),
      keyId: Long("0")
    }
  },
  operationTime: Timestamp({ t: 1654614760, i: 4 })
}
mongodb 15:21:24.75 INFO  ==> Users created
mongodb 15:21:24.75 INFO  ==> Writing keyfile for replica set authentication...
mongodb 15:21:24.78 INFO  ==> Enabling authentication...
mongodb 15:21:24.80 INFO  ==> Configuring MongoDB Sharded replica set...
mongodb 15:21:24.81 INFO  ==> Stopping MongoDB...
mongodb 15:21:26.82 DEBUG ==> Starting MongoDB in background...
mg-mongodb-sharded-shard-0 [direct: primary] test> about to fork child process, waiting until server is ready for connections.
forked process: 1084
child process started successfully, parent exiting
mongodb 15:30:13.14 INFO  ==> Configuring MongoDB primary node...: mg-mongodb-sharded-shard0-data-0.mg-mongodb-sharded-headless.mongodb.svc.cluster.local
mongodb 15:38:57.43 INFO  ==> Stopping MongoDB...
mongodb 15:38:59.45 DEBUG ==> Waiting for primary node...
mongodb 15:38:59.45 INFO  ==> Trying to connect to MongoDB server mg-mongodb-sharded...
timeout reached before the port went into state "inuse"
timeout reached before the port went into state "inuse"
timeout reached before the port went into state "inuse"
(the same line repeated 21 times in total)

configsvr_log.txt

After I added the --bind_ip_all option, I think MongoDB is up properly.
However, the pod status never becomes READY.
I guess this is because the readiness probe is failing.
When I ran db.adminCommand('ping') from mongosh inside the pod,
I got this:

> db.adminCommand('ping')
{
	"ok" : 1,
	"$gleStats" : {
		"lastOpTime" : Timestamp(0, 0),
		"electionId" : ObjectId("000000000000000000000000")
	},
	"lastCommittedOpTime" : Timestamp(0, 0)
}

But the readiness probe is still failing.
Isn't this related to mongosh's hanging issue?
I run into the same hang when I use mongosh --eval.
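
The hang is easy to reproduce by running the probe command by hand inside the pod. A sketch (pod name from the listing above; assuming mongosh is on the PATH in the image):

# This is effectively what the readiness probe executes; if mongosh hangs here,
# the probe can never succeed.
kubectl exec -it mg-mongodb-sharded-configsvr-0 -- \
  mongosh --quiet --eval "db.adminCommand('ping')"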

Finally, the config server is up and running with the settings below.

mongodb-sharded:
  image:
    debug: true
  auth:
    enabled: true
    rootUser: root
    rootPassword: admin
  mongos:
    replicaCount: 3
  configsvr:
    mongodbExtraFlags: "--bind_ip_all"
    replicaCount: 3
    readinessProbe:
      enabled: false
    customReadinessProbe:
      exec:
        command:
        - /bin/sh
        - /opt/bitnami/mongodb/bin/mongo --disableImplicitSessions --eval "db.adminCommand('ping')"
      initialDelaySeconds: 5
      periodSeconds: 5
  shards: 1
  shardsvr:
    dataNode:
      mongodbExtraFlags: "--bind_ip_all"

It seems that the readiness probe was blocking the startup.
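
One caveat about the probe above: in a Kubernetes exec probe, each list item is a separate argv entry, so /bin/sh receives the whole mongo invocation as a single argument (a script path) rather than as a command string. If the probe is meant to actually run the ping, an -c form would be closer to the intent. A rough, untested sketch:

# Untested sketch: the same probe expressed with `sh -c` so the string is parsed
# as a shell command instead of being treated as a filename.
cat > probe-values.yaml <<'EOF'
configsvr:
  readinessProbe:
    enabled: false
  customReadinessProbe:
    exec:
      command:
        - /bin/sh
        - -c
        - /opt/bitnami/mongodb/bin/mongo --disableImplicitSessions --eval "db.adminCommand('ping')"
    initialDelaySeconds: 5
    periodSeconds: 5
EOF
helm upgrade mg bitnami/mongodb-sharded --reuse-values -f probe-values.yaml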

$ kubectl get po 
NAME                                        READY   STATUS    RESTARTS   AGE
mg-mongodb-sharded-configsvr-0              1/1     Running   0          6m35s
mg-mongodb-sharded-configsvr-1              1/1     Running   0          5m49s
mg-mongodb-sharded-configsvr-2              1/1     Running   0          71s
mg-mongodb-sharded-mongos-97f74d859-8c6jw   0/1     Running   2          6m38s
mg-mongodb-sharded-mongos-97f74d859-mlwcd   0/1     Running   2          6m38s
mg-mongodb-sharded-mongos-97f74d859-rpgq5   0/1     Running   2          6m38s
mg-mongodb-sharded-shard0-data-0            0/1     Running   0          6m35s

But for the shard data servers, the same customReadinessProbe does not work, as shown below.

$ helm install mg . 
Error: INSTALLATION FAILED: template: managed-mongodb/charts/mongodb-sharded/templates/shard/shard-data-statefulset.yaml:245:23: executing "managed-mongodb/charts/mongodb-sharded/templates/shard/shard-data-statefulset.yaml" at <$.Value.shardsvr.dataNode.customReadinessProbe>: nil pointer evaluating interface {}.shardsvr

I guess there is a syntax error in the template here, isn't there? (The error references $.Value, which looks like a typo for $.Values, Helm's root values object.)

I'm glad it is almost running. Regarding the customReadinessProbe issue, I was not able to reproduce it.

I deployed the Helm chart with the following parameters:

 mongos:
   replicaCount: 3
 configsvr:
   mongodbExtraFlags: "--bind_ip_all"
   replicaCount: 3
   readinessProbe:
     enabled: false
   customReadinessProbe:
     exec:
       command:
       - /bin/sh
       - /opt/bitnami/mongodb/bin/mongo --disableImplicitSessions --eval "db.adminCommand('ping')"
     initialDelaySeconds: 5
     periodSeconds: 5
 shards: 1
 shardsvr:
   dataNode:
     mongodbExtraFlags: "--bind_ip_all"
$ helm template bitnami/mongodb-sharded --generate-name -f myvalues.yaml
# Source: mongodb-sharded/templates/config-server/config-server-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: release-name-mongodb-sharded-configsvr
  namespace: "default"
  labels:
    app.kubernetes.io/name: mongodb-sharded
    helm.sh/chart: mongodb-sharded-5.0.10
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: configsvr
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: mongodb-sharded
      app.kubernetes.io/instance: release-name
      app.kubernetes.io/component: configsvr
  serviceName: release-name-mongodb-sharded-headless
  replicas: 3
  podManagementPolicy: OrderedReady
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mongodb-sharded
        helm.sh/chart: mongodb-sharded-5.0.10
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: configsvr
    spec:
      serviceAccountName: "default"
      affinity:
        podAffinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: mongodb-sharded
                    app.kubernetes.io/instance: release-name
                    app.kubernetes.io/component: configsvr
                namespaces:
                  - "default"
                topologyKey: kubernetes.io/hostname
              weight: 1
        nodeAffinity:
      securityContext:
        fsGroup: 1001
      initContainers:
      containers:
        - name: mongodb
          image: docker.io/bitnami/mongodb-sharded:5.0.9-debian-10-r0
          imagePullPolicy: IfNotPresent
          securityContext:
            readOnlyRootFilesystem: false
            runAsNonRoot: true
            runAsUser: 1001
          ports:
            - containerPort: 27017
              name: mongodb
          env:
            - name: MONGODB_ENABLE_NUMACTL
              value: "no"
            - name: BITNAMI_DEBUG
              value: "false"
            - name: MONGODB_SYSTEM_LOG_VERBOSITY
              value: "0"
            - name: MONGODB_DISABLE_SYSTEM_LOG
              value: "no"
            - name: MONGODB_MAX_TIMEOUT
              value: "120"
            - name: MONGODB_SHARDING_MODE
              value: "configsvr"
            - name: MONGODB_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MONGODB_PORT_NUMBER
              value: "27017"
            - name: MONGODB_INITIAL_PRIMARY_HOST
              value: release-name-mongodb-sharded-configsvr-0.release-name-mongodb-sharded-headless.default.svc.cluster.local
            - name: MONGODB_REPLICA_SET_NAME
              value: release-name-mongodb-sharded-configsvr
            - name: MONGODB_ADVERTISED_HOSTNAME
              value: $(MONGODB_POD_NAME).release-name-mongodb-sharded-headless.default.svc.cluster.local
            - name: MONGODB_ROOT_USER
              value: "root"
            - name: MONGODB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: release-name-mongodb-sharded
                  key: mongodb-root-password
            - name: MONGODB_REPLICA_SET_KEY
              valueFrom:
                secretKeyRef:
                  name: release-name-mongodb-sharded
                  key: mongodb-replica-set-key
            - name: MONGODB_ENABLE_IPV6
              value: "no"
            - name: MONGODB_ENABLE_DIRECTORY_PER_DB
              value: "no"
            - name: MONGODB_EXTRA_FLAGS
              value: "--bind_ip_all"
          command:
            - /bin/bash
            - /entrypoint/replicaset-entrypoint.sh
          livenessProbe:
            failureThreshold: 2
            initialDelaySeconds: 60
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 20
            exec:
              command:
                - pgrep
                - mongod
          readinessProbe:
            exec:
              command:
              - /bin/sh
              - /opt/bitnami/mongodb/bin/mongo --disableImplicitSessions --eval "db.adminCommand('ping')"
            initialDelaySeconds: 5
            periodSeconds: 5
          startupProbe:
            failureThreshold: 30
            initialDelaySeconds: 0
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
            tcpSocket:
              port: mongodb

You may have misunderstood my comment.

What I meant to say was that the default readinessProbe is breaking the MongoDB cluster creation.
That is why I added a customReadinessProbe, and it works for the configuration server.
But when I tried to add the same customReadinessProbe to the data server, it didn't.

Please apply the configuration below.

# https://github.com/bitnami/charts/tree/master/bitnami/mongodb-sharded
mongodb-sharded:
  auth:
    enabled: true
    rootUser: root
  mongos:
    replicaCount: 3
  configsvr:
    mongodbExtraFlags: "--bind_ip_all"
    replicaCount: 3
    readinessProbe:
      enabled: false
    customReadinessProbe:
      exec:
        command:
        - /bin/sh
        - /opt/bitnami/mongodb/bin/mongo --disableImplicitSessions --eval "db.adminCommand('ping')"
      initialDelaySeconds: 5
      periodSeconds: 5
  shards: 1
  shardsvr:
    dataNode:
      mongodbExtraFlags: "--bind_ip_all"
      replicaCount: 3
      readinessProbe:
        enabled: false
      customReadinessProbe:
        exec:
          command:
          - /bin/sh
          - /opt/bitnami/mongodb/bin/mongo --disableImplicitSessions --eval "db.adminCommand('ping')"
        initialDelaySeconds: 5
        periodSeconds: 5

You should see an error message like this:

Error: INSTALLATION FAILED: template: managed-mongodb/charts/mongodb-sharded/templates/shard/shard-data-statefulset.yaml:245:23: executing "managed-mongodb/charts/mongodb-sharded/templates/shard/shard-data-statefulset.yaml" at <$.Value.shardsvr.dataNode.customReadinessProbe>: nil pointer evaluating interface {}.shardsvr

Using the values.yaml you mention, but removing the initial mongodb-sharded key since I'm using it directly as a main chart and not as a subchart, everything works as expected and the custom probes are set; see:

$ helm template bitnami/mongodb-sharded --generate-name -f myvalues.yaml | grep 'disableImplicitSessions' -A 2 -B 5
          readinessProbe:
            exec:
              command:
              - /bin/sh
              - /opt/bitnami/mongodb/bin/mongo --disableImplicitSessions --eval "db.adminCommand('ping')"
            initialDelaySeconds: 5
            periodSeconds: 5
          readinessProbe:
            exec:
              command:
              - /bin/sh
              - /opt/bitnami/mongodb/bin/mongo --disableImplicitSessions --eval "db.adminCommand('ping')"
            initialDelaySeconds: 5
            periodSeconds: 5
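
For anyone following along, the difference is only in how the values nest depending on whether the chart is installed directly or as a subchart. A sketch of the two forms:

# Installed directly: keys sit at the top level of the values file.
helm install mg bitnami/mongodb-sharded -f myvalues.yaml
# Installed as a subchart: the same keys move under the subchart's name in the
# parent chart's values, e.g.
#   mongodb-sharded:
#     configsvr:
#       mongodbExtraFlags: "--bind_ip_all"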

Following your advice, when I use the values as a main chart, I can start the installation.
But I get stuck somewhere else eventually.
In any case, I end up running into situations where mongosh hangs.

I found many similar issues, like these:
https://stackoverflow.com/questions/72139325/readiness-probe-failes-because-mongosh-eval-freezes
#10264

I was stuck with the same error for several hours.

What I found after magic-driven development was that the mongodb-replica-set-key secret should be short, around 16 chars, and drawn from this charset: [a-zA-Z0-9]

These values do not work for me:
...% - a random value that happened to include a % symbol, which I copied along with the % - 1 hour lost finding this
mongodb_replica_set_key - this also does not work - I assume it is either too long or the _ must not be used
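
A sketch for generating a key that satisfies these constraints up front (the parameter name auth.replicaSetKey is an assumption; check the values.yaml of your chart version):

# Generate a 16-char alphanumeric replica set key and pass it explicitly,
# instead of relying on a value that may violate the constraints above.
REPL_SET_KEY=$(head -c 256 /dev/urandom | tr -dc 'a-zA-Z0-9' | head -c 16)
helm install mg bitnami/mongodb-sharded \
  --set auth.enabled=true,auth.rootPassword=admin,auth.replicaSetKey="$REPL_SET_KEY"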

I hope this helps someone who ends up with the same issue here.