MongoDB Replica Set / docker-compose / mongoose transaction with persistent volume

This will guide you through setting up a replica set in a Docker environment using:

  • Docker Compose
  • MongoDB Replica Sets
  • Mongoose
  • Mongoose Transactions

Thanks to https://gist.github.com/asoorm for helping with their docker-compose file!

mongo-setup:
  container_name: mongo-setup
  image: mongo
  restart: on-failure
  networks:
    default:
  volumes:
    - ./scripts:/scripts
  entrypoint: [ "/scripts/setup.sh" ] # Make sure this file exists (see below for the setup.sh)
  depends_on:
    - mongo1
    - mongo2
    - mongo3
mongo1:
  hostname: mongo1
  container_name: localmongo1
  image: mongo
  expose:
    - 27017
  ports:
    - 27017:27017
  restart: always
  entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0", "--journal", "--dbpath", "/data/db", "--enableMajorityReadConcern", "false" ]
  volumes:
    - <VOLUME-DIR>/mongo/data1/db:/data/db # This is where your volume will persist, e.g. VOLUME-DIR = ./volumes/mongodb
    - <VOLUME-DIR>/mongo/data1/configdb:/data/configdb
mongo2:
  hostname: mongo2
  container_name: localmongo2
  image: mongo
  expose:
    - 27017
  ports:
    - 27018:27017
  restart: always
  entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0", "--journal", "--dbpath", "/data/db", "--enableMajorityReadConcern", "false" ]
  volumes:
    - <VOLUME-DIR>/mongo/data2/db:/data/db # Note the data2; it must be different from the first member's directory.
    - <VOLUME-DIR>/mongo/data2/configdb:/data/configdb
mongo3:
  hostname: mongo3
  container_name: localmongo3
  image: mongo
  expose:
    - 27017
  ports:
    - 27019:27017
  restart: always
  entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0", "--journal", "--dbpath", "/data/db", "--enableMajorityReadConcern", "false" ]
  volumes:
    - <VOLUME-DIR>/mongo/data3/db:/data/db
    - <VOLUME-DIR>/mongo/data3/configdb:/data/configdb
# NOTE: This is the simplest way of achieving a replica set in MongoDB with Docker.
# However, if you would like a more automated approach, please see the setup.sh file and the docker-compose file which includes this startup script.
# Run the following after bringing up docker-compose; it will instantiate the replica set.
# The ids and hostnames can be tailored to your liking, however they MUST match the docker-compose file above.
docker-compose up -d
docker exec -it localmongo1 mongo
rs.initiate(
  {
    _id : 'rs0',
    members: [
      { _id : 0, host : "mongo1:27017" },
      { _id : 1, host : "mongo2:27017" },
      { _id : 2, host : "mongo3:27017", arbiterOnly: true }
    ]
  }
)
exit
// If on a linux server, use the hostname provided by the docker compose file
// e.g. HOSTNAME = mongo1, mongo2, mongo3
// If on macOS, add the following to your /etc/hosts file:
// 127.0.0.1 mongo1
// 127.0.0.1 mongo2
// 127.0.0.1 mongo3
// And use localhost as the HOSTNAME
mongoose.connect('mongodb://<HOSTNAME>:27017,<HOSTNAME>:27018,<HOSTNAME>:27019/<DBNAME>', {
  useNewUrlParser : true,
  useFindAndModify: false, // optional
  useCreateIndex : true,
  replicaSet : 'rs0', // We use this from the entrypoint in the docker-compose file
})
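The replica-set name can also be passed in the connection string itself rather than in the options object; a small sketch (buildRsUri is my own helper name, not part of the gist):

```javascript
// Assemble a replica-set connection URI from a list of "host:port" members.
function buildRsUri(hosts, dbName, replicaSet) {
  return `mongodb://${hosts.join(',')}/${dbName}?replicaSet=${replicaSet}`;
}

// e.g. from macOS, with the /etc/hosts aliases pointing at 127.0.0.1:
const uri = buildRsUri(
  ['localhost:27017', 'localhost:27018', 'localhost:27019'],
  'mydb',
  'rs0'
);
console.log(uri);
// mongodb://localhost:27017,localhost:27018,localhost:27019/mydb?replicaSet=rs0
```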
#!/bin/bash

MONGODB1=mongo1
MONGODB2=mongo2
MONGODB3=mongo3

echo "**********************************************" ${MONGODB1}
echo "Waiting for startup.."
# Poll with the mongo shell until mongod answers. (A plain curl against the
# driver port cannot reliably detect readiness: modern mongod has no HTTP
# interface, and the original `curl | grep | head` pipeline always exited 0.)
until mongo --host ${MONGODB1}:27017 --eval 'quit(db.runCommand({ ping: 1 }).ok ? 0 : 2)' &>/dev/null; do
  printf '.'
  sleep 1
done

echo "SETUP.sh time now: $(date +"%T")"
mongo --host ${MONGODB1}:27017 <<EOF
var cfg = {
  "_id": "rs0",
  "protocolVersion": 1,
  "version": 1,
  "members": [
    {
      "_id": 0,
      "host": "${MONGODB1}:27017",
      "priority": 2
    },
    {
      "_id": 1,
      "host": "${MONGODB2}:27017",
      "priority": 0
    },
    {
      "_id": 2,
      "host": "${MONGODB3}:27017",
      "priority": 0
    }
  ],
  settings: { chainingAllowed: true }
};
rs.initiate(cfg, { force: true });
rs.reconfig(cfg, { force: true });
rs.slaveOk();
db.getMongo().setReadPref('nearest');
db.getMongo().setSlaveOk();
EOF
async function transaction() {
  // Start the transaction.
  const session = await ModelA.startSession();
  session.startTransaction();
  try {
    const options = { session };
    // Try to perform the operation on ModelA.
    const a = await ModelA.create([{ ...args }], options);
    // If the first operation succeeds, this next one will be called.
    await ModelB.create([{ ...args }], options);
    // If all succeeded with no errors, commit and end the session.
    await session.commitTransaction();
    session.endSession();
    return a;
  } catch (e) {
    // If any error occurred, the whole transaction fails and throws.
    // This undoes any changes that may have happened.
    await session.abortTransaction();
    session.endSession();
    throw e;
  }
}
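The commit/abort/endSession bookkeeping above can also be factored into a reusable helper; a minimal sketch (withSession is a name of my own, not a Mongoose API; newer Mongoose versions also ship a similar built-in, session.withTransaction, which additionally retries transient errors):

```javascript
// Run `fn` inside a transaction on `session`: commit on success, abort on
// error, and always end the session. Same flow as the try/catch above.
async function withSession(session, fn) {
  session.startTransaction();
  try {
    const result = await fn(session);
    await session.commitTransaction();
    return result;
  } catch (e) {
    await session.abortTransaction();
    throw e;
  } finally {
    session.endSession();
  }
}

// Hypothetical usage with the models from the gist:
// const a = await withSession(await ModelA.startSession(), async (session) =>
//   (await ModelA.create([{ ...args }], { session }))[0]
// );
```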
@crapthings

Can we put these into another Docker container? That container would wait for the mongo instances to start, and then other services would wait for the replica set to be ready.

rs.initiate(
  {
    _id : 'rs0',
    members: [
      { _id : 0, host : "mongo1:27017" },
      { _id : 1, host : "mongo2:27017" },
      { _id : 2, host : "mongo3:27017" }
    ]
  }
)

@harveyconnor (Author)

@crapthings not sure what you mean?

@DannyMcwaves

DannyMcwaves commented Jul 22, 2019

@harveyconnor when you add the members to the replica set and you try to connect to it from the docker host using the string mongodb://localhost:27017,localhost:27018,localhost:27019/DB_NAME?replicaSet=rs0, you get an error.

2019/07/22 13:23:37 server selection error: server selection timeout
current topology: Type: ReplicaSetNoPrimary
Servers:
Addr: mongo1:27017, Type: Unknown, State: Connected, Avergage RTT: 0, Last error: dial tcp: lookup mongo1: no such host
Addr: mongo2:27017, Type: Unknown, State: Connected, Avergage RTT: 0, Last error: dial tcp: lookup mongo2: no such host
Addr: mongo3:27017, Type: Unknown, State: Connected, Avergage RTT: 0, Last error: dial tcp: lookup mongo3: no such host

I'm using Golang's mongo-go-driver. My guess is that the docker host is trying to resolve hostnames of the replicaset members.

@harveyconnor (Author)

@DannyMcwaves
Please read the comments in the mongoose.js file.
When on Linux you'll need to use the hostname from the docker-compose file / the container name (e.g. mongo1:27017).
Let me know how that goes.

@nclabz

nclabz commented Aug 21, 2019

It stops after entering the mongodb shell

@thearabbit

thearabbit commented Sep 23, 2019

I'm using Meteor + Mongo (Mac), and I get this error:

{
	"message" : "no primary found in replicaset or invalid replica set name",
	"name" : "MongoError"
}

My rs.config()

{
	"_id" : "rabbit_rs",
	"version" : 2,
	"protocolVersion" : NumberLong(1),
	"writeConcernMajorityJournalDefault" : true,
	"members" : [
		{
			"_id" : 0,
			"host" : "mongo0:27017",
			"arbiterOnly" : false,
			"buildIndexes" : true,
			"hidden" : false,
			"priority" : 1,
			"tags" : {

			},
			"slaveDelay" : NumberLong(0),
			"votes" : 1
		},
		{
			"_id" : 1,
			"host" : "mongo1:27017",
			"arbiterOnly" : false,
			"buildIndexes" : true,
			"hidden" : false,
			"priority" : 1,
			"tags" : {

			},
			"slaveDelay" : NumberLong(0),
			"votes" : 1
		},
		{
			"_id" : 2,
			"host" : "mongo2:27017",
			"arbiterOnly" : true,
			"buildIndexes" : true,
			"hidden" : false,
			"priority" : 1,
			"tags" : {

			},
			"slaveDelay" : NumberLong(0),
			"votes" : 1
		}
	],
	"settings" : {
		"chainingAllowed" : true,
		"heartbeatIntervalMillis" : 2000,
		"heartbeatTimeoutSecs" : 10,
		"electionTimeoutMillis" : 100000,
		"catchUpTimeoutMillis" : -1,
		"catchUpTakeoverDelayMillis" : 30000,
		"getLastErrorModes" : {

		},
		"getLastErrorDefaults" : {
			"w" : 1,
			"wtimeout" : 0
		},
		"replicaSetId" : ObjectId("5d88b48be8156ea01e2ec79a")
	}
}

@harveyconnor (Author)

@thearabbit
In meteor you'll need to specify the rs name somehow:
"rabbit_rs"
See https://gist.github.com/harveyconnor/518e088bad23a273cae6ba7fc4643549#file-mongoose-js for example.
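For example, as an environment variable (a sketch; the ports are placeholders, and note that the value after replicaSet= must be exactly the set name, with no stray space):

```shell
# Hypothetical Meteor connection string naming the replica set explicitly.
export MONGO_URL='mongodb://localhost:4200,localhost:4201,localhost:4202/meteor?replicaSet=rabbit_rs'
```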

@thearabbit

@harveyconnor, thanks for your reply.
I tried this on my local Mac; if it works fine I will try to deploy on DigitalOcean.
I start Meteor with the Mongo replica set via package.json:

  "scripts": {
    "start": "MONGO_URL='mongodb://localhost:4200,localhost:4201,localhost:4202/meteor?replicaSet= rabbit_rs' meteor run",
  },

And hosts config

##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting.  Do not change this entry.
##
127.0.0.1       localhost
255.255.255.255 broadcasthost
::1             localhost
127.0.0.1       mongo0
127.0.0.1       mongo1
127.0.0.1       mongo2

Could you help me?

@thearabbit

My docker-compose.yml

version: "3.6"
services:
  mongo0:
    hostname: mongo0
    container_name: mongo0
    image: mongo:4.0.6
    ports:
      - 4201:27017
    networks:
      - rabbit_net
    restart: always
    volumes:
      - /data/volume-mongo-rs/db0:/data/db
      - /data/volume-mongo-rs/mongod0.conf:/etc/mongod.conf
    command: --config /etc/mongod.conf --bind_ip_all --replSet rabbit_rs

  mongo1:
    hostname: mongo1
    container_name: mongo1
    image: mongo:4.0.6
    ports:
      - 4201:27017
    networks:
      - rabbit_net
    restart: always
    volumes:
      - /data/volume-mongo-rs/db1:/data/db
      - /data/volume-mongo-rs/mongod1.conf:/etc/mongod.conf
    depends_on:
       - mongo0
    command: --config /etc/mongod.conf --bind_ip_all --replSet rabbit_rs

  mongo2:
    # Host name = Container name
    hostname: mongo2
    container_name: mongo2
    image: mongo:4.0.6
    ports:
      - 4202:27017
    networks:
      - rabbit_net
    restart: always
    volumes:
      - /data/volume-mongo-rs/db2:/data/db
      - /data/volume-mongo-rs/mongod2.conf:/etc/mongod.conf
    depends_on:
       - mongo0
    command: --config /etc/mongod.conf --bind_ip_all --replSet rabbit_rs

networks:
  rabbit_net:
    driver: bridge
    name: rabbit_net

@harveyconnor (Author)

@thearabbit I'm not sure.

@thearabbit

I tried your tutorial:

  • docker-compose.yml
  • run docker-compose
  • config rs.initiate()
  • config hosts name (but changed the ports)
version: "3.6"
services:
  mongo1:
    hostname: mongo1
    container_name: localmongo1
    image: mongo:4.0.6
    expose:
      - 27017
    ports:
      - 27018:27017
    restart: always
    entrypoint: ["/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0"]
    volumes:
      - /data/volumes/mongo-rs:/data/db # This is where your volume will persist. e.g. VOLUME-DIR = ./volumes/mongodb
  mongo2:
    hostname: mongo2
    container_name: localmongo2
    image: mongo:4.0.6
    expose:
      - 27017
    ports:
      - 27019:27017
    restart: always
    entrypoint: ["/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0"]
  mongo3:
    hostname: mongo3
    container_name: localmongo3
    image: mongo:4.0.6
    expose:
      - 27017
    ports:
      - 27020:27017
    restart: always
    entrypoint: ["/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0"]

And when I try to connect via Mongo Booster, I still get the error:

{
	"message" : "no primary found in replicaset or invalid replica set name",
	"name" : "MongoError"
}

(screenshots omitted)

@thearabbit

But it works fine when connecting to each member individually.

(screenshot omitted)

@thearabbit

Excuse me, do you have any demo video on YouTube?

@thearabbit

thearabbit commented Sep 26, 2019

It works fine with rs.initiate() using the host IP and host port.
And there's no need to configure names in /etc/hosts.

rs.initiate(
  {
    _id : 'rs0',
    members: [
      { _id : 0, host : "192.168.1.100:4200" },
      { _id : 1, host : "192.168.1.100:4201" },
      { _id : 2, host : "192.168.1.100:4202" }
    ]
  }
)

Connect

mongo 'mongodb://192.168.1.100:4200,192.168.1.100:4201,192.168.1.100:4202/?replicaSet= rs0'

👍

@jessequinn

jessequinn commented Dec 28, 2019

Great work here; however, if I may offer a suggestion:

#!/bin/sh
docker-compose stop
docker-compose up --build --remove-orphans -d
sleep 2
docker exec localmongo1 mongo --eval "
rs.initiate(
  {
    _id : 'rs0',
    members: [
      { _id : 0, host : \"mongo1:27017\" },
      { _id : 1, host : \"mongo2:27017\" },
      { _id : 2, host : \"mongo3:27017\", arbiterOnly: true }
    ]
  }
)
"

The above script should correctly create the replica set you want without needing an initial build step, etc. You may be able to reduce the sleep time.
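Alternatively, instead of a fixed sleep, the script could poll until mongod answers; a sketch (wait_for is my own helper name; the docker exec line in the comment shows the intended use):

```shell
# Poll a command once per second until it exits 0.
wait_for() {
  until "$@" >/dev/null 2>&1; do
    printf '.'
    sleep 1
  done
}

# e.g.: wait_for docker exec localmongo1 mongo --quiet --eval 'db.runCommand({ ping: 1 }).ok'
```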

@funfungo

funfungo commented Jan 6, 2020

When I initialize the replica set, I get this errmsg:
replSetInitiate quorum check failed because not all proposed set members responded affirmatively: mongo2:27017 failed with Error connecting to mongo2:27017 :: caused by :: Could not find address for mongo2:27017: SocketException: Host not found (authoritative), mongo3:27017 failed with Error connecting to mongo3:27017 :: caused by :: Could not find address for mongo3:27017: SocketException: Host not found (authoritative)

Why can't these members connect to each other?

@Sashakil12

Failed to connect to mongo on startup - retrying in 1 sec MongoNetworkError: failed to connect to server [mongo3:27019] on first connect [Error: connect ECONNREFUSED 172.26.0.4:27019
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1142:16) {
name: 'MongoNetworkError',
[Symbol(mongoErrorContextSymbol)]: {}
}]
at Pool. (/usr/app/node_modules/mongodb/lib/core/topologies/server.js:438:11)
at Pool.emit (events.js:315:20)
at Pool.EventEmitter.emit (domain.js:485:12)
at /usr/app/node_modules/mongodb/lib/core/connection/pool.js:561:14
at /usr/app/node_modules/mongodb/lib/core/connection/pool.js:1008:9
at /usr/app/node_modules/mongodb/lib/core/connection/connect.js:31:7
at callback (/usr/app/node_modules/mongodb/lib/core/connection/connect.js:264:5)
at Socket. (/usr/app/node_modules/mongodb/lib/core/connection/connect.js:294:7)
at Object.onceWrapper (events.js:422:26)
at Socket.emit (events.js:315:20)
at Socket.EventEmitter.emit (domain.js:485:12)
at emitErrorNT (internal/streams/destroy.js:100:8)
at emitErrorCloseNT (internal/streams/destroy.js:68:3)
at processTicksAndRejections (internal/process/task_queues.js:84:21) {
[Symbol(mongoErrorContextSymbol)]: {}

I got this error.

@harveyconnor (Author)

@jessequinn Thanks, I have adapted that and updated the gist.

@funfungo & @Sashakil12 try the new updated docker compose + setup.sh

@Crackz

Crackz commented Jul 26, 2020

What if I have services that need to wait for the replica set initialization?

@harveyconnor (Author)

@Crackz
Add

depends_on:
  - mongo-setup

to that service
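On newer Compose versions you can additionally wait for the setup container to finish successfully, not just to start; a sketch (the `app` service name and image are hypothetical):

services:
  app:
    image: my-app            # hypothetical application image
    depends_on:
      mongo-setup:
        condition: service_completed_successfully  # wait for setup.sh to exit 0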

@gauravsahni25b

gauravsahni25b commented Aug 7, 2020

@harveyconnor I am using the docker-compose and setup.sh combination above and I am getting

mongo-setup | standard_init_linux.go:211: exec user process caused "no such file or directory"

mongo-setup exited with code 1 Error.

Although when I visit: http://localhost:27017/
I see the message:
It looks like you are trying to access MongoDB over HTTP on the native driver port.

Ideas?

@gauravsahni25b

If anyone else faces the same error on Windows:

Please change the line endings of setup.sh to LF (if they are CRLF).

For VS Code users, look at the right-hand side of the status bar at the bottom.
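The same fix can also be done from the command line; a sketch (fix_crlf is my own helper name; this uses GNU sed, and BSD/macOS sed would need `sed -i ''` instead):

```shell
# Strip carriage returns in place so the script runs under /bin/bash.
fix_crlf() { sed -i 's/\r$//' "$1"; }

# e.g.: fix_crlf ./scripts/setup.sh
```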

@harveyconnor (Author)

Thanks for reporting and solving :)

@AnushaPulichintha

AnushaPulichintha commented Dec 24, 2020

@harveyconnor, @thearabbit Hi, I'm having the same issue as @thearabbit: I can only connect using the host IP address. Is there any way I can connect using the Docker hostnames or localhost? I have posted a detailed question here. Can someone tell me what I am missing?

@harveyconnor (Author)

@alissonfpmorais

First, thanks @harveyconnor for all the work here.
My main issue while following the steps above was that I could connect directly from the host to a single mongo container, but not to the replica set. To make it work I had to change a few configurations in both docker-compose.yml and setup.sh, so here's what I did:

TLDR: Here's a gist with my custom setup.

My first issue was that the setup.sh couldn't connect to the other containers, so the setup wasn't occurring properly. To fix this, I've added ALL containers to the same network as mongo-setup, i.e.:

mongo1:
  # other options
  networks:
    default:

The second issue is that the hosts and ports known by the replica set are the same hosts and ports that I need to use (on my local machine, Docker's host) in order to establish the connection properly, so I had to:

  1. Add aliases in /etc/hosts (I added the IP printed by docker network inspect <network_name>; haven't tested with 127.0.0.1). Here's a great tutorial to do it automatically;
  2. Update container_name to match the service name, i.e.: service mongo1 -> container_name mongo1, ...
  3. Change the port used by mongod in entrypoint, like: entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0", "--journal", "--dbpath", "/data/db", "--enableMajorityReadConcern", "false", "--port", "27018" ]. I chose 27017 for mongo1, 27018 for mongo2 and 27019 for mongo3;
  4. Update both expose and ports for each container according to the port used in step 3;
  5. Update the members array in the cfg variable inside setup.sh, changing each host's port to the one chosen in step 3.

Note: I'm currently using Pop!_OS 21.10 (Ubuntu-based)

@JonJagger

JonJagger commented Apr 2, 2022

This worked for me. But only sometimes. Empirically, I find that if I have N replicas in the set, then I have to wait until they are all ready before sending the json config to one of them. Eg, with 3 replicas defined in docker-compose.yaml with service names mongo1, mongo2, and mongo3 then:

# setup.sh
...
for n in $(seq 3); do
  until mongo --host "mongo${n}" --eval "print(\"waited for connection\")"; do
      echo -n .; sleep 2
  done
done
...

@caffeinatedgaze

You might need authentication with the remote mongo replica set. I used this script and it went well; note that you need to supply the connection scheme (mongodb://) for this to work.

#!/bin/bash

mongosh -u root -p example mongodb://mongo1:27017 << EOF
rs.initiate(
    {
        _id: 'rs',
        members: [
            {_id: 0, host: "mongo1:27017"},
            {_id: 1, host: "mongo2:27017"},
            {_id: 2, host: "mongo3:27017"}
        ]
    }
);
EOF

@bentu-noodoe

This worked for me. But only sometimes. Empirically, I find that if I have N replicas in the set, then I have to wait until they are all ready before sending the json config to one of them. Eg, with 3 replicas defined in docker-compose.yaml with service names mongo1, mongo2, and mongo3 then:

# setup.sh
...
for n in $(seq 3); do
  until mongo --host "mongo${n}" --eval "print(\"waited for connection\")"; do
      echo -n .; sleep 2
  done
done
...

Thank you, this works for me.

@AhmedBHameed

AhmedBHameed commented Mar 23, 2024

[RESOLVED] I found a solution for my issue; see the update below.

I'm using Linux. I configured the replica set successfully and even connected via a server running on the same network as the mongo containers.

However, all my attempts to connect MongoDB Compass failed:

mongodb://<USER>:<PASS>@mongo1:27017,mongo2:27018,mongo3:27019/?replicaSet=rs0 // failed
mongodb://<USER>:<PASS>@localhost:27017,localhost:27018,localhost:27019/?replicaSet=rs0 // failed

Even using my local IP address failed to connect. I do see Docker accepting the connection, but it then fails with the following:

mongo1  | {"t":{"$date":"2024-03-23T17:23:58.511+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"192.168.2.84:56344","uuid":{"uuid":{"$uuid":"17e133d8-6eb9-448a-9af2-5cb2d024766a"}},"connectionId":78,"connectionCount":11}}
mongo2  | {"t":{"$date":"2024-03-23T17:23:58.511+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"192.168.2.84:58228","uuid":{"uuid":{"$uuid":"03d389c1-c3f6-474a-8b28-2e1428df858c"}},"connectionId":81,"connectionCount":12}}
mongo3  | {"t":{"$date":"2024-03-23T17:23:58.511+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"192.168.2.84:40516","uuid":{"uuid":{"$uuid":"2184a73b-af1b-46c6-ab68-5284f2fc4823"}},"connectionId":92,"connectionCount":22}}
mongo1  | {"t":{"$date":"2024-03-23T17:23:58.512+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn78","msg":"client metadata","attr":{"remote":"192.168.2.84:56344","client":"conn78","negotiatedCompressors":[],"doc":{"application":{"name":"MongoDB Compass"},"driver":{"name":"nodejs","version":"6.5.0"},"platform":"Node.js v18.18.2, LE","os":{"name":"linux","architecture":"x64","version":"6.5.0-26-generic","type":"Linux"}}}}
mongo2  | {"t":{"$date":"2024-03-23T17:23:58.512+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn81","msg":"client metadata","attr":{"remote":"192.168.2.84:58228","client":"conn81","negotiatedCompressors":[],"doc":{"application":{"name":"MongoDB Compass"},"driver":{"name":"nodejs","version":"6.5.0"},"platform":"Node.js v18.18.2, LE","os":{"name":"linux","architecture":"x64","version":"6.5.0-26-generic","type":"Linux"}}}}
mongo3  | {"t":{"$date":"2024-03-23T17:23:58.512+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn92","msg":"client metadata","attr":{"remote":"192.168.2.84:40516","client":"conn92","negotiatedCompressors":[],"doc":{"application":{"name":"MongoDB Compass"},"driver":{"name":"nodejs","version":"6.5.0"},"platform":"Node.js v18.18.2, LE","os":{"name":"linux","architecture":"x64","version":"6.5.0-26-generic","type":"Linux"}}}}
mongo1  | {"t":{"$date":"2024-03-23T17:23:58.513+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn78","msg":"Connection ended","attr":{"remote":"192.168.2.84:56344","uuid":{"uuid":{"$uuid":"17e133d8-6eb9-448a-9af2-5cb2d024766a"}},"connectionId":78,"connectionCount":10}}
mongo2  | {"t":{"$date":"2024-03-23T17:23:58.513+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn81","msg":"Connection ended","attr":{"remote":"192.168.2.84:58228","uuid":{"uuid":{"$uuid":"03d389c1-c3f6-474a-8b28-2e1428df858c"}},"connectionId":81,"connectionCount":11}}
mongo3  | {"t":{"$date":"2024-03-23T17:23:58.514+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn92","msg":"Connection ended","attr":{"remote":"192.168.2.84:40516","uuid":{"uuid":{"$uuid":"2184a73b-af1b-46c6-ab68-5284f2fc4823"}},"connectionId":92,"connectionCount":21}}

Running rs.status() gives me the following:

{
  set: 'rs0',
  date: ISODate('2024-03-23T17:24:32.085Z'),
  myState: 2,
  term: Long('2'),
  syncSourceHost: 'mongo3:27019',
  syncSourceId: 2,
  heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1711214668, i: 1 }), t: Long('2') },
    lastCommittedWallTime: ISODate('2024-03-23T17:24:28.813Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1711214668, i: 1 }), t: Long('2') },
    appliedOpTime: { ts: Timestamp({ t: 1711214668, i: 1 }), t: Long('2') },
    durableOpTime: { ts: Timestamp({ t: 1711214668, i: 1 }), t: Long('2') },
    lastAppliedWallTime: ISODate('2024-03-23T17:24:28.813Z'),
    lastDurableWallTime: ISODate('2024-03-23T17:24:28.813Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1711214658, i: 1 }),
  electionParticipantMetrics: {
    votedForCandidate: true,
    electionTerm: Long('2'),
    lastVoteDate: ISODate('2024-03-23T17:11:28.736Z'),
    electionCandidateMemberId: 2,
    voteReason: '',
    lastAppliedOpTimeAtElection: { ts: Timestamp({ t: 1711213450, i: 1 }), t: Long('1') },
    maxAppliedOpTimeInSet: { ts: Timestamp({ t: 1711213450, i: 1 }), t: Long('1') },
    priorityAtElection: 1,
    newTermStartDate: ISODate('2024-03-23T17:11:28.780Z'),
    newTermAppliedDate: ISODate('2024-03-23T17:11:28.802Z')
  },
  members: [
    {
      _id: 0,
      name: 'mongo1:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 794,
      optime: { ts: Timestamp({ t: 1711214668, i: 1 }), t: Long('2') },
      optimeDate: ISODate('2024-03-23T17:24:28.000Z'),
      lastAppliedWallTime: ISODate('2024-03-23T17:24:28.813Z'),
      lastDurableWallTime: ISODate('2024-03-23T17:24:28.813Z'),
      syncSourceHost: 'mongo3:27019',
      syncSourceId: 2,
      infoMessage: '',
      configVersion: 1,
      configTerm: 2,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 1,
      name: 'mongo2:27018',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 793,
      optime: { ts: Timestamp({ t: 1711214668, i: 1 }), t: Long('2') },
      optimeDurable: { ts: Timestamp({ t: 1711214668, i: 1 }), t: Long('2') },
      optimeDate: ISODate('2024-03-23T17:24:28.000Z'),
      optimeDurableDate: ISODate('2024-03-23T17:24:28.000Z'),
      lastAppliedWallTime: ISODate('2024-03-23T17:24:28.813Z'),
      lastDurableWallTime: ISODate('2024-03-23T17:24:28.813Z'),
      lastHeartbeat: ISODate('2024-03-23T17:24:31.652Z'),
      lastHeartbeatRecv: ISODate('2024-03-23T17:24:31.118Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: 'mongo3:27019',
      syncSourceId: 2,
      infoMessage: '',
      configVersion: 1,
      configTerm: 2
    },
    {
      _id: 2,
      name: 'mongo3:27019',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 793,
      optime: { ts: Timestamp({ t: 1711214668, i: 1 }), t: Long('2') },
      optimeDurable: { ts: Timestamp({ t: 1711214668, i: 1 }), t: Long('2') },
      optimeDate: ISODate('2024-03-23T17:24:28.000Z'),
      optimeDurableDate: ISODate('2024-03-23T17:24:28.000Z'),
      lastAppliedWallTime: ISODate('2024-03-23T17:24:28.813Z'),
      lastDurableWallTime: ISODate('2024-03-23T17:24:28.813Z'),
      lastHeartbeat: ISODate('2024-03-23T17:24:31.652Z'),
      lastHeartbeatRecv: ISODate('2024-03-23T17:24:31.118Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1711213888, i: 1 }),
      electionDate: ISODate('2024-03-23T17:11:28.000Z'),
      configVersion: 1,
      configTerm: 2
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1711214668, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('VClxfA6lXgDXDuYJP68mAP6veSw=', 0),
      keyId: Long('7349605288829255686')
    }
  },
  operationTime: Timestamp({ t: 1711214668, i: 1 })
}

Any idea how to use mongo compass app with docker containers of mongo replicaset ?


UPDATE:

I managed to make it work by adding IP mappings in /etc/hosts:

127.0.0.1       mongo1
127.0.0.1       mongo2
127.0.0.1       mongo3

Then the connection worked as expected.
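The mapping can be added idempotently from a script; a minimal sketch (add_hosts is my own helper name; in practice the target file is /etc/hosts and needs sudo):

```shell
# Append "127.0.0.1 <name>" aliases to FILE, skipping ones already present.
add_hosts() {
  file="$1"; shift
  for h in "$@"; do
    grep -qE "^127\.0\.0\.1[[:space:]]+$h$" "$file" 2>/dev/null \
      || printf '127.0.0.1 %s\n' "$h" >> "$file"
  done
}

# e.g.: add_hosts /etc/hosts mongo1 mongo2 mongo3   (run with sudo)
```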
