I've read a lot of different threads about this issue, here and on other sites, and I haven't found a solution so far.
I'm running three servers on Azure, one in each availability zone. Server 1 is Windows; servers 2 and 3 (the replicas) are Linux. This is not a production setup but a "learning" environment.
I've set this content in mongod.conf on all three servers:
net:
  port: 27017
  bindIp: 0.0.0.0
security:
  authorization: enabled
  keyFile: /var/lib/mongodb/mongo.key
replication:
  replSetName: "lab"
And they all share the same key.
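For reference, the key file was generated the usual way and then copied to the keyFile path configured above on each server; roughly this, though the exact commands may have differed:

openssl rand -base64 756 > mongo.key
# then copied the same mongo.key to all three servers (e.g. via scp)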
I also created the same "root" user on each server:
{
  user: "mongodevuser",
  pwd: "ThisIsAPassword!@#123#@!",
  roles: [{ role: "root", db: "admin" }],
  authenticationRestrictions: [
    {
      clientSource: ["xx.xx.xx.xx", "127.0.0.1"]
    }
  ]
}
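That document is what I passed to createUser while connected locally to the admin database on each node, i.e. roughly this (rootUserDoc is just a placeholder for the document above):

use admin
db.createUser(rootUserDoc)  // rootUserDoc = the document shown above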
Then I initiated the replica set and added the two Linux servers' IPs.
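If it helps, the initiation on the Windows server was roughly this (from memory; the IPs are the redacted public ones):

rs.initiate()
rs.add("xx.xx.xx.xx:27017")
rs.add("xx.xx.xx.xx:27017")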
Access to the key file is set up correctly (I went through sudo chown, chmod 400, etc. until I figured out the right way to do this).
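On the Linux servers that ended up as something like this (assuming the mongodb user/group, which is what my installs use):

sudo chown mongodb:mongodb /var/lib/mongodb/mongo.key
sudo chmod 400 /var/lib/mongodb/mongo.key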
But now both Linux servers are stuck in stateStr: 'STARTUP' when I run rs.status() on the "main" Windows server.
From the Windows server I was able to mongosh to both Linux servers, and vice versa, so it doesn't look like a firewall issue.
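That is, a connection like this one works in both directions (IP redacted, and I'm paraphrasing the exact flags I used):

mongosh --host xx.xx.xx.xx --port 27017 -u mongodevuser -p --authenticationDatabase admin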
Here is my rs.status() output:
members: [
  {
    _id: 0,
    name: 'mongo:27017',
    health: 1,
    state: 1,
    stateStr: 'PRIMARY',
    uptime: 10111,
    optime: { ts: Timestamp({ t: 1677620041, i: 1 }), t: Long("2") },
    optimeDate: ISODate("2023-02-28T21:34:01.000Z"),
    lastAppliedWallTime: ISODate("2023-02-28T21:34:01.216Z"),
    lastDurableWallTime: ISODate("2023-02-28T21:34:01.216Z"),
    syncSourceHost: '',
    syncSourceId: -1,
    infoMessage: '',
    electionTime: Timestamp({ t: 1677609950, i: 1 }),
    electionDate: ISODate("2023-02-28T18:45:50.000Z"),
    configVersion: 3,
    configTerm: 2,
    self: true,
    lastHeartbeatMessage: ''
  },
  {
    _id: 1,
    name: 'xx.xx.xx.xx:27017',
    health: 1,
    state: 0,
    stateStr: 'STARTUP',
    uptime: 8791,
    optime: { ts: Timestamp({ t: 0, i: 0 }), t: Long("-1") },
    optimeDurable: { ts: Timestamp({ t: 0, i: 0 }), t: Long("-1") },
    optimeDate: ISODate("1970-01-01T00:00:00.000Z"),
    optimeDurableDate: ISODate("1970-01-01T00:00:00.000Z"),
    lastAppliedWallTime: ISODate("1970-01-01T00:00:00.000Z"),
    lastDurableWallTime: ISODate("1970-01-01T00:00:00.000Z"),
    lastHeartbeat: ISODate("2023-02-28T21:34:04.912Z"),
    lastHeartbeatRecv: ISODate("1970-01-01T00:00:00.000Z"),
    pingMs: Long("36"),
    lastHeartbeatMessage: '',
    syncSourceHost: '',
    syncSourceId: -1,
    infoMessage: '',
    configVersion: -2,
    configTerm: -1
  },
  {
    _id: 2,
    name: 'xx.xx.xx.xx:27017',
    health: 1,
    state: 0,
    stateStr: 'STARTUP',
    uptime: 9296,
    optime: { ts: Timestamp({ t: 0, i: 0 }), t: Long("-1") },
    optimeDurable: { ts: Timestamp({ t: 0, i: 0 }), t: Long("-1") },
    optimeDate: ISODate("1970-01-01T00:00:00.000Z"),
    optimeDurableDate: ISODate("1970-01-01T00:00:00.000Z"),
    lastAppliedWallTime: ISODate("1970-01-01T00:00:00.000Z"),
    lastDurableWallTime: ISODate("1970-01-01T00:00:00.000Z"),
    lastHeartbeat: ISODate("2023-02-28T21:34:03.423Z"),
    lastHeartbeatRecv: ISODate("1970-01-01T00:00:00.000Z"),
    pingMs: Long("32"),
    lastHeartbeatMessage: '',
    syncSourceHost: '',
    syncSourceId: -1,
    infoMessage: '',
    configVersion: -2,
    configTerm: -1
  }
],
ok: 1,
'$clusterTime': {
  clusterTime: Timestamp({ t: 1677620041, i: 1 }),
  signature: {
    hash: Binary(Buffer.from("c4080375be71381d41b04860817041b7b064a3c3", "hex"), 0),
    keyId: Long("7205078939239186438")
  }
},
operationTime: Timestamp({ t: 1677620041, i: 1 })
}
I've hidden the IPs above, which are public ones. I've tried restarting the services and the servers, and I always get:
MongoServerError: node is not in primary or recovering state
in mongosh on the replica (Linux) servers.
And these are the errors I get in the log files:
"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.
and
Failed to reap transaction table","attr":{"error":"NotYetInitialized: Replication has not yet been configured"}}
Thanks.