To secure a MongoDB cluster you can enable authentication and authorization with the --keyFile flag. Be aware that even with --keyFile, a replica set still sends database contents unencrypted over the network between mongod nodes; the key file handles authentication, not encryption in transit.
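A minimal sketch of enabling keyFile authentication on replica set members (the file paths and replica set name here are hypothetical placeholders):

```shell
# Generate a shared key; every member of the replica set must use the same file
openssl rand -base64 756 > /etc/mongo-keyfile
chmod 400 /etc/mongo-keyfile

# Start each member with the key file; this enables internal authentication
# between members and requires clients to authenticate as well
mongod --replSet rs0 --keyFile /etc/mongo-keyfile --dbpath /data/db
```

Since the key file does not encrypt traffic, TLS would still be needed if data in transit must be protected.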

Query index performance

Indexes dramatically improve the speed and efficiency of read operations. Without indexes, MongoDB must scan every document in a collection to select those that match the query. This scan is highly inefficient and requires MongoDB to process a large volume of data.

Indexes store a small portion of the data set in an easy-to-traverse form. The index stores the value of a specific field or set of fields, ordered by the value of the field as specified in the index.
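The effect can be sketched outside MongoDB: a collection scan touches every document, while an ordered index lets a lookup discard half of the remaining candidates at each step. A toy illustration (helper names are ours, and the "index" is just a sorted array of values):

```javascript
// Compare documents touched by a full scan versus a binary search
// over a sorted "index" of the same values.
function collectionScan(values, target) {
  let touched = 0;
  for (const v of values) { touched++; if (v === target) break; }
  return touched;
}

function indexLookup(sortedValues, target) {
  let touched = 0, lo = 0, hi = sortedValues.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    touched++;
    if (sortedValues[mid] === target) break;
    if (sortedValues[mid] < target) lo = mid + 1; else hi = mid - 1;
  }
  return touched;
}

const values = Array.from({ length: 10000 }, (_, i) => i);
console.log(collectionScan(values, 9999)); // touches all 10000 values
console.log(indexLookup(values, 9999));    // ~log2(10000): about 14 probes
```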

Optionally, you can specify the name of the index. If unspecified, MongoDB generates an index name by concatenating the names of the indexed fields and the sort order.
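That naming rule (field name, underscore, sort order, joined with underscores) can be sketched as follows; the helper name is ours, not a MongoDB API:

```javascript
// Reproduce MongoDB's default index name: e.g. {a: 1, b: -1} -> "a_1_b_-1"
function defaultIndexName(keyPattern) {
  return Object.entries(keyPattern)
    .map(([field, order]) => `${field}_${order}`)
    .join('_');
}

console.log(defaultIndexName({ a: 1, b: 1 })); // "a_1_b_1"
console.log(defaultIndexName({ b: 1 }));       // "b_1"
```

These are exactly the indexName values ("a_1_b_1", "b_1") that appear in the explain output later in this article.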

If the field does not exist in any of the documents in the collection, MongoDB will create the index without any warning. A MongoDB index can have keys of different types (e.g., integers, dates, strings).

explain() also allows you to evaluate write operations. When applied to a write, it does not perform the operation; instead it reports how the operation would run and which indexes would be used.

JavaScript


db.mycoll.find({a:17,b:12}).count()

db.mycoll.explain().remove({a:17,b:12})

Output:

JavaScript


{

"queryPlanner":{

"plannerVersion":1,

"namespace":"example.mycoll",

"indexFilterSet":false,

"parsedQuery":{

"$and":[

{

"a":{

"$eq":17

}

},

{

"b":{

"$eq":12

}

}

]

},

"winningPlan":{

"stage":"FETCH",

"inputStage":{

"stage":"IXSCAN",

"keyPattern":{

"a":1,

"b":1

},

"indexName":"a_1_b_1",

"isMultiKey":false,

"isUnique":false,

"isSparse":false,

"isPartial":false,

"indexVersion":1,

"direction":"forward",

"indexBounds":{

"a":[

"[17.0, 17.0]"

],

"b":[

"[12.0, 12.0]"

]

}

}

},

"rejectedPlans":[

{

"stage":"FETCH",

"filter":{

"a":{

"$eq":17

}

},

"inputStage":{

"stage":"IXSCAN",

"keyPattern":{

"b":1

},

"indexName":"b_1",

"isMultiKey":false,

"isUnique":false,

"isSparse":false,

"isPartial":false,

"indexVersion":1,

"direction":"forward",

"indexBounds":{

"b":[

"[12.0, 12.0]"

]

}

}

}

]

},

"serverInfo":{

"host":"BLUEONE",

"port":27017,

"version":"3.2.0",

"gitVersion":"45d947729a0315accb6d4f15a6b06be6d9c19fe7"

},

"ok":1

}

queryPlanner vs executionStats vs allPlansExecution

The behavior of db.collection.explain() and the amount of information returned depend on the verbosity mode. The default mode for explain() is queryPlanner.

executionStats includes queryPlanner and additional information:

time to execute the query

number of documents returned

documents examined.

JavaScript


exp = db.mycoll.explain("executionStats")

exp.find({a:17, b:55})

The output:

JavaScript


{

"queryPlanner":{

"plannerVersion":1,

"namespace":"example.mycoll",

"indexFilterSet":false,

"parsedQuery":{

"$and":[

{

"a":{

"$eq":17

}

},

{

"b":{

"$eq":55

}

}

]

},

"winningPlan":{

"stage":"FETCH",

"inputStage":{

"stage":"IXSCAN",

"keyPattern":{

"a":1,

"b":1

},

"indexName":"a_1_b_1",

"isMultiKey":false,

"isUnique":false,

"isSparse":false,

"isPartial":false,

"indexVersion":1,

"direction":"forward",

"indexBounds":{

"a":[

"[17.0, 17.0]"

],

"b":[

"[55.0, 55.0]"

]

}

}

},

"rejectedPlans":[

{

"stage":"FETCH",

"filter":{

"a":{

"$eq":17

}

},

"inputStage":{

"stage":"IXSCAN",

"keyPattern":{

"b":1

},

"indexName":"b_1",

"isMultiKey":false,

"isUnique":false,

"isSparse":false,

"isPartial":false,

"indexVersion":1,

"direction":"forward",

"indexBounds":{

"b":[

"[55.0, 55.0]"

]

}

}

}

]

},

"executionStats":{

"executionSuccess":true,

"nReturned":100,

"executionTimeMillis":8,

"totalKeysExamined":100,

"totalDocsExamined":100,

"executionStages":{

"stage":"FETCH",

"nReturned":100,

"executionTimeMillisEstimate":0,

"works":102,

"advanced":100,

"needTime":0,

"needYield":0,

"saveState":2,

"restoreState":2,

"isEOF":1,

"invalidates":0,

"docsExamined":100,

"alreadyHasObj":0,

"inputStage":{

"stage":"IXSCAN",

"nReturned":100,

"executionTimeMillisEstimate":0,

"works":101,

"advanced":100,

"needTime":0,

"needYield":0,

"saveState":2,

"restoreState":2,

"isEOF":1,

"invalidates":0,

"keyPattern":{

"a":1,

"b":1

},

"indexName":"a_1_b_1",

"isMultiKey":false,

"isUnique":false,

"isSparse":false,

"isPartial":false,

"indexVersion":1,

"direction":"forward",

"indexBounds":{

"a":[

"[17.0, 17.0]"

],

"b":[

"[55.0, 55.0]"

]

},

"keysExamined":100,

"dupsTested":0,

"dupsDropped":0,

"seenInvalidated":0

}

}

},

"serverInfo":{

"host":"BLUEONE",

"port":27017,

"version":"3.2.0",

"gitVersion":"45d947729a0315accb6d4f15a6b06be6d9c19fe7"

},

"ok":1

}

MongoDB runs the query optimizer to choose the winning plan, executes the winning plan to completion, and returns statistics describing the execution of the winning plan.

If we drop the index and rerun the statement the output will change:

JavaScript


db.mycoll.dropIndex({a:1,b:1})

exp.find({a:17,b:55})

JavaScript


{

"queryPlanner":{

"plannerVersion":1,

"namespace":"example.mycoll",

"indexFilterSet":false,

"parsedQuery":{

"$and":[

{

"a":{

"$eq":17

}

},

{

"b":{

"$eq":55

}

}

]

},

"winningPlan":{

"stage":"FETCH",

"filter":{

"a":{

"$eq":17

}

},

"inputStage":{

"stage":"IXSCAN",

"keyPattern":{

"b":1

},

"indexName":"b_1",

"isMultiKey":false,

"isUnique":false,

"isSparse":false,

"isPartial":false,

"indexVersion":1,

"direction":"forward",

"indexBounds":{

"b":[

"[55.0, 55.0]"

]

}

}

},

"rejectedPlans":[]

},

"executionStats":{

"executionSuccess":true,

"nReturned":100,

"executionTimeMillis":24,

"totalKeysExamined":10000,

"totalDocsExamined":10000,

"executionStages":{

"stage":"FETCH",

"filter":{

"a":{

"$eq":17

}

},

"nReturned":100,

"executionTimeMillisEstimate":20,

"works":10001,

"advanced":100,

"needTime":9900,

"needYield":0,

"saveState":78,

"restoreState":78,

"isEOF":1,

"invalidates":0,

"docsExamined":10000,

"alreadyHasObj":0,

"inputStage":{

"stage":"IXSCAN",

"nReturned":10000,

"executionTimeMillisEstimate":0,

"works":10001,

"advanced":10000,

"needTime":0,

"needYield":0,

"saveState":78,

"restoreState":78,

"isEOF":1,

"invalidates":0,

"keyPattern":{

"b":1

},

"indexName":"b_1",

"isMultiKey":false,

"isUnique":false,

"isSparse":false,

"isPartial":false,

"indexVersion":1,

"direction":"forward",

"indexBounds":{

"b":[

"[55.0, 55.0]"

]

},

"keysExamined":10000,

"dupsTested":0,

"dupsDropped":0,

"seenInvalidated":0

}

}

},

"serverInfo":{

"host":"BLUEONE",

"port":27017,

"version":"3.2.0",

"gitVersion":"45d947729a0315accb6d4f15a6b06be6d9c19fe7"

},

"ok":1

}

We see executionTimeMillis = 24 ms, and totalKeysExamined and totalDocsExamined jump to 10000 (to return the same 100 documents), along with the detailed executionStages breakdown.
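A quick way to judge such output is the ratio of documents returned to documents examined; a sketch (the helper name is ours), fed with the numbers from the two runs above:

```javascript
// Ratio of useful work: 1.0 means every examined document was returned.
function examineEfficiency(executionStats) {
  const { nReturned, totalDocsExamined } = executionStats;
  return totalDocsExamined === 0 ? 1 : nReturned / totalDocsExamined;
}

// With the compound index, then after dropping it.
console.log(examineEfficiency({ nReturned: 100, totalDocsExamined: 100 }));   // 1
console.log(examineEfficiency({ nReturned: 100, totalDocsExamined: 10000 })); // 0.01
```

A ratio far below 1 usually means the winning plan is examining many documents it then throws away, which is a hint that a better index is possible.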

allPlansExecution – like executionStats, but also returns the statistics collected for each candidate plan during plan selection.

JavaScript


db.mycoll.explain("allPlansExecution").find({a:14,b:12})

{

"queryPlanner":{

"plannerVersion":1,

"namespace":"example.mycoll",

"indexFilterSet":false,

"parsedQuery":{

"$and":[

{

"a":{

"$eq":14

}

},

{

"b":{

"$eq":12

}

}

]

},

"winningPlan":{

"stage":"FETCH",

"filter":{

"a":{

"$eq":14

}

},

"inputStage":{

"stage":"IXSCAN",

"keyPattern":{

"b":1

},

"indexName":"b_1",

"isMultiKey":false,

"isUnique":false,

"isSparse":false,

"isPartial":false,

"indexVersion":1,

"direction":"forward",

"indexBounds":{

"b":[

"[12.0, 12.0]"

]

}

}

},

"rejectedPlans":[]

},

"executionStats":{

"executionSuccess":true,

"nReturned":100,

"executionTimeMillis":25,

"totalKeysExamined":10000,

"totalDocsExamined":10000,

"executionStages":{

"stage":"FETCH",

"filter":{

"a":{

"$eq":14

}

},

"nReturned":100,

"executionTimeMillisEstimate":30,

"works":10001,

"advanced":100,

"needTime":9900,

"needYield":0,

"saveState":78,

"restoreState":78,

"isEOF":1,

"invalidates":0,

"docsExamined":10000,

"alreadyHasObj":0,

"inputStage":{

"stage":"IXSCAN",

"nReturned":10000,

"executionTimeMillisEstimate":10,

"works":10001,

"advanced":10000,

"needTime":0,

"needYield":0,

"saveState":78,

"restoreState":78,

"isEOF":1,

"invalidates":0,

"keyPattern":{

"b":1

},

"indexName":"b_1",

"isMultiKey":false,

"isUnique":false,

"isSparse":false,

"isPartial":false,

"indexVersion":1,

"direction":"forward",

"indexBounds":{

"b":[

"[12.0, 12.0]"

]

},

"keysExamined":10000,

"dupsTested":0,

"dupsDropped":0,

"seenInvalidated":0

}

},

"allPlansExecution":[]

},

"serverInfo":{

"host":"BLUEONE",

"port":27017,

"version":"3.2.0",

"gitVersion":"45d947729a0315accb6d4f15a6b06be6d9c19fe7"

},

"ok":1

}

JavaScript


db.mycoll.createIndex({a:1, b:1})  // make sure we have 2 indexes

db.mycoll.createIndex({b:1})

db.mycoll.explain("allPlansExecution").find({a:14,b:12})

The output will show all available execution plans with the stats:

JavaScript


{

"queryPlanner":{

"plannerVersion":1,

"namespace":"example.mycoll",

"indexFilterSet":false,

"parsedQuery":{

"$and":[

{

"a":{

"$eq":14

}

},

{

"b":{

"$eq":12

}

}

]

},

"winningPlan":{

"stage":"FETCH",

"inputStage":{

"stage":"IXSCAN",

"keyPattern":{

"a":1,

"b":1

},

"indexName":"a_1_b_1",

"isMultiKey":false,

"isUnique":false,

"isSparse":false,

"isPartial":false,

"indexVersion":1,

"direction":"forward",

"indexBounds":{

"a":[

"[14.0, 14.0]"

],

"b":[

"[12.0, 12.0]"

]

}

}

},

"rejectedPlans":[

{

"stage":"FETCH",

"filter":{

"a":{

"$eq":14

}

},

"inputStage":{

"stage":"IXSCAN",

"keyPattern":{

"b":1

},

"indexName":"b_1",

"isMultiKey":false,

"isUnique":false,

"isSparse":false,

"isPartial":false,

"indexVersion":1,

"direction":"forward",

"indexBounds":{

"b":[

"[12.0, 12.0]"

]

}

}

}

]

},

"executionStats":{

"executionSuccess":true,

"nReturned":100,

"executionTimeMillis":42,

"totalKeysExamined":100,

"totalDocsExamined":100,

"executionStages":{

"stage":"FETCH",

"nReturned":100,

"executionTimeMillisEstimate":40,

"works":102,

"advanced":100,

"needTime":0,

"needYield":0,

"saveState":3,

"restoreState":3,

"isEOF":1,

"invalidates":0,

"docsExamined":100,

"alreadyHasObj":0,

"inputStage":{

"stage":"IXSCAN",

"nReturned":100,

"executionTimeMillisEstimate":40,

"works":101,

"advanced":100,

"needTime":0,

"needYield":0,

"saveState":3,

"restoreState":3,

"isEOF":1,

"invalidates":0,

"keyPattern":{

"a":1,

"b":1

},

"indexName":"a_1_b_1",

"isMultiKey":false,

"isUnique":false,

"isSparse":false,

"isPartial":false,

"indexVersion":1,

"direction":"forward",

"indexBounds":{

"a":[

"[14.0, 14.0]"

],

"b":[

"[12.0, 12.0]"

]

},

"keysExamined":100,

"dupsTested":0,

"dupsDropped":0,

"seenInvalidated":0

}

},

"allPlansExecution":[

{

"nReturned":0,

"executionTimeMillisEstimate":0,

"totalKeysExamined":101,

"totalDocsExamined":101,

"executionStages":{

"stage":"FETCH",

"filter":{

"a":{

"$eq":14

}

},

"nReturned":0,

"executionTimeMillisEstimate":0,

"works":101,

"advanced":0,

"needTime":101,

"needYield":0,

"saveState":3,

"restoreState":3,

"isEOF":0,

"invalidates":0,

"docsExamined":101,

"alreadyHasObj":0,

"inputStage":{

"stage":"IXSCAN",

"nReturned":101,

"executionTimeMillisEstimate":0,

"works":101,

"advanced":101,

"needTime":0,

"needYield":0,

"saveState":3,

"restoreState":3,

"isEOF":0,

"invalidates":0,

"keyPattern":{

"b":1

},

"indexName":"b_1",

"isMultiKey":false,

"isUnique":false,

"isSparse":false,

"isPartial":false,

"indexVersion":1,

"direction":"forward",

"indexBounds":{

"b":[

"[12.0, 12.0]"

]

},

"keysExamined":101,

"dupsTested":0,

"dupsDropped":0,

"seenInvalidated":0

}

}

},

{

"nReturned":100,

"executionTimeMillisEstimate":40,

"totalKeysExamined":100,

"totalDocsExamined":100,

"executionStages":{

"stage":"FETCH",

"nReturned":100,

"executionTimeMillisEstimate":40,

"works":101,

"advanced":100,

"needTime":0,

"needYield":0,

"saveState":2,

"restoreState":2,

"isEOF":1,

"invalidates":0,

"docsExamined":100,

"alreadyHasObj":0,

"inputStage":{

"stage":"IXSCAN",

"nReturned":100,

"executionTimeMillisEstimate":40,

"works":101,

"advanced":100,

"needTime":0,

"needYield":0,

"saveState":2,

"restoreState":2,

"isEOF":1,

"invalidates":0,

"keyPattern":{

"a":1,

"b":1

},

"indexName":"a_1_b_1",

"isMultiKey":false,

"isUnique":false,

"isSparse":false,

"isPartial":false,

"indexVersion":1,

"direction":"forward",

"indexBounds":{

"a":[

"[14.0, 14.0]"

],

"b":[

"[12.0, 12.0]"

]

},

"keysExamined":100,

"dupsTested":0,

"dupsDropped":0,

"seenInvalidated":0

}

}

}

]

},

"serverInfo":{

"host":"BLUEONE",

"port":27017,

"version":"3.2.0",

"gitVersion":"45d947729a0315accb6d4f15a6b06be6d9c19fe7"

},

"ok":1

}

If you look at allPlansExecution, the first plan returned nReturned = 0 documents; it was stopped because it was overruled by the second plan, which had already finished.

$hint

MongoDB uses indexes automatically when performing queries (a sparse index is an exception: it is skipped when using it could return incomplete results). The $hint operator forces the query optimizer to use the specified index to run a query. This is useful when you want to test the performance of a query with different indexes.
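Conceptually, a hint bypasses cost-based plan selection: without it, the planner picks the cheapest candidate; with it, the named plan wins regardless of cost. A toy sketch (the function, plan shape, and cost field are ours, not MongoDB internals):

```javascript
// Pick a query plan: a hint forces the named index, otherwise take the cheapest.
function choosePlan(plans, hintedIndexName) {
  if (hintedIndexName) {
    const forced = plans.find(p => p.indexName === hintedIndexName);
    if (!forced) throw new Error('bad hint: no such index');
    return forced;
  }
  return plans.reduce((best, p) => (p.cost < best.cost ? p : best));
}

const plans = [
  { indexName: 'a_1_b_1', cost: 1 },
  { indexName: 'b_1', cost: 5 },
];
console.log(choosePlan(plans).indexName);         // "a_1_b_1" (cheapest wins)
console.log(choosePlan(plans, 'b_1').indexName);  // "b_1" (hint overrides cost)
```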

Covered queries

As per the official MongoDB documentation, a covered query is a query in which:

all the fields in the query are part of an index and

all the fields returned in the query are in the same index

Since all the fields present in the query are part of an index, MongoDB matches the query conditions and returns the results using the same index, without actually looking inside documents. Since indexes are typically held in RAM, fetching data from an index is much faster than fetching it by scanning documents.
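The covered-query condition can be checked mechanically. A sketch (the helper is ours; it assumes _id has been excluded from the projection, since _id is returned by default but is not part of the index):

```javascript
// A query is covered when every filter field and every projected field
// appears in the index key pattern.
function isCovered(filterFields, projectedFields, indexFields) {
  const idx = new Set(indexFields);
  return filterFields.every(f => idx.has(f)) &&
         projectedFields.every(f => idx.has(f));
}

console.log(isCovered(['a', 'b'], ['a', 'b'], ['a', 'b']));      // true: {a:1, b:1} covers it
console.log(isCovered(['a', 'b'], ['a', 'b', 'c'], ['a', 'b'])); // false: c forces a document fetch
```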

Selectivity is the primary factor that determines how efficiently an index can be used. Ideally, the index enables us to select only those records required to complete the result set, without the need to scan a substantially larger number of index keys (or documents) in order to complete the query. Selectivity determines how many records any subsequent operations must work with. Fewer records means less execution time.

MongoDB Index types

In MongoDB there are a few index types:

Sparse index – useful for rare fields. Sparse indexes only contain entries for documents that have the indexed field, even if the indexed field holds a null value. The index skips any document that is missing the indexed field.
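The sparse rule (indexed if the field is present, even as null; skipped if absent) can be sketched as:

```javascript
// Documents that would get an entry in a sparse index on `field`:
// presence of the field matters, not its value.
function sparseIndexEntries(documents, field) {
  return documents.filter(d => Object.prototype.hasOwnProperty.call(d, field));
}

const sample = [{ a: 1 }, { a: null }, { b: 2 }];
console.log(sparseIndexEntries(sample, 'a').length); // 2: the null-valued doc is indexed, {b:2} is skipped
```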

TTL index – used by MongoDB to automatically remove documents from a collection after a certain amount of time.
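TTL behavior amounts to a periodic background sweep that deletes documents whose indexed date is older than the configured number of seconds; a sketch of that check (helper names ours):

```javascript
// Which documents a TTL sweep would remove right now, given a date field
// and an expireAfterSeconds-style threshold.
function expiredDocs(documents, field, expireAfterSeconds, now = Date.now()) {
  return documents.filter(d => d[field] instanceof Date &&
                               now - d[field].getTime() > expireAfterSeconds * 1000);
}

const now = Date.now();
const sessions = [
  { createdAt: new Date(now - 7200 * 1000) }, // 2 hours old
  { createdAt: new Date(now - 60 * 1000) },   // 1 minute old
];
console.log(expiredDocs(sessions, 'createdAt', 3600, now).length); // 1: only the 2-hour-old doc
```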

Storage Engine

Starting with version 3.0, MongoDB adopted a pluggable architecture that lets you choose the storage engine. A storage engine is the part of a database responsible for managing how data is stored, both in memory and on disk.

Different engines perform better for specific workloads: one storage engine might offer better performance for read-heavy workloads, while another might support higher throughput for write operations.

MMAPv1 – the original MongoDB storage engine and the default for MongoDB versions before 3.2. It maps the data files directly into virtual memory, allowing the operating system to do most of the work of the storage engine.

By using a journal (write-ahead log) MongoDB ensures consistency of the data: you first record what you are about to do, then you do it. If a failure occurs partway through flushing to disk, the incomplete update can be discarded or replayed from the journal on recovery.

By default, MongoDB uses Power of 2 Sized Allocations so that every document in MongoDB is stored in a record which contains the document itself and extra space, or padding. Padding allows the document to grow as the result of updates while minimizing the likelihood of reallocation.
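Power-of-2 allocation rounds each record size up to the next power of two, so every document gets padding to grow into. A sketch of that rounding (ignoring MMAPv1's minimum and maximum record sizes):

```javascript
// Smallest power of two >= size, i.e. the allocated record size.
function allocationSize(size) {
  let alloc = 1;
  while (alloc < size) alloc *= 2;
  return alloc;
}

console.log(allocationSize(300));  // 512 -> 212 bytes of padding for in-place growth
console.log(allocationSize(1024)); // 1024 -> already a power of two, no extra padding
```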

WiredTiger

The WiredTiger storage engine is the first pluggable storage engine and brings a few new features to MongoDB:

Document-level locking – a better concurrency protocol; writes can scale with the number of threads (assuming no concurrent updates to the same document, or limiting threads to the number of cores).

Compression

It avoids some pitfalls of MMAPv1.

Big performance gains

To switch MongoDB to WiredTiger, simply start mongod with:

MS DOS


mongod --storageEngine wiredTiger

Please be aware that your existing MongoDB server should not contain any existing MMAPv1 databases in /data/db/.

WiredTiger stores data on disk in B-trees, similar to the B-trees MMAPv1 uses for indexes. New writes are initially kept separate, performed in unused regions of the files, and incorporated later in the background.

During an update, WiredTiger writes a new version of the document rather than overwriting the existing data, so you don't need to worry about document moves or the padding factor.

WiredTiger provides two caches:

WiredTiger Cache (WT Cache) – roughly half of your RAM by default

File System Cache (FS Cache)

Checkpoints – act as recovery points and handle the "transfer" of data from the WT Cache to the FS Cache and then to disk. During a checkpoint, data goes from the WT Cache to the FS Cache and is then flushed to disk. A new checkpoint is initiated 60 seconds after the end of the last one. Each checkpoint is a consistent snapshot of your data. While a new checkpoint is being written, the previous checkpoint is still valid; as such, even if MongoDB terminates or encounters an error while writing a new checkpoint, upon restart it can recover from the last valid checkpoint.

Compression – since WiredTiger has its own cache, and since the data in the WT Cache doesn't have to be in the same format as in the FS Cache, WiredTiger allows several levels of compression: snappy (the default), zlib, or no compression for collection data, plus prefix compression for indexes.

Sharding is the process of storing data records across multiple nodes to meet the demands of data growth. MongoDB solves this problem with horizontal scaling, using the sharding mechanism.

Compared with a replica set, there are some changes from the client's perspective. The client no longer talks to mongod instances directly; instead, a new component, mongos, is added. Mongos knows where the data lives after sharding (data partitioning) and routes queries to the appropriate mongod processes. Usually mongos runs alongside the application client or in a very light environment, as its only job is to route queries.

Mongos uses config servers (at least 3 should be deployed for reliability) to retrieve metadata about the sharded data. Each config server stores the same configuration.

MongoDB 3.2 deprecates the use of three mirrored mongod instances for config servers, in favor of deploying the config servers as a replica set.

Replication is the process of synchronizing data across multiple servers.

Replica sets and scaling are used to achieve reliable, high-performance deployments. Replica sets ensure multiple copies of the data are available; they are built from multiple types of MongoDB nodes. Replica sets use an odd number of members in order to allow election of a primary node. Write operations go to the primary node, while reads can be distributed to the other nodes.
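The odd-number recommendation follows from majority elections: a new primary needs votes from a strict majority of members, so an even member count adds cost without adding fault tolerance. A sketch:

```javascript
// Votes needed to elect a primary, and how many member failures
// the replica set can tolerate while still electing one.
function majority(members) { return Math.floor(members / 2) + 1; }
function faultTolerance(members) { return members - majority(members); }

console.log(majority(3), faultTolerance(3)); // 2 1
console.log(majority(4), faultTolerance(4)); // 3 1 — a 4th member adds no tolerance
console.log(majority(5), faultTolerance(5)); // 3 2
```

This is also why an arbiter (a voting member with no data) is a cheap way to reach an odd count.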

Replication benefits:

High Availability – automatic failover

Data Safety – durability, extra copies

Disaster recovery

Scaling (some situations)

Node types:

Primary node – writes always go to it

Regular node – functions as a secondary node and can take over the role of primary in the event of a failure.

Arbiter node – doesn't keep a copy of the data. It participates in the elections that select a primary when the current primary is unavailable.

Special purpose nodes – active backup

On regular nodes we can apply restrictions to fix the role of that node (e.g., a read-only node that will never be promoted to primary).

Building a replica set requires installing MongoDB on additional hosts (I recommend an automated provisioning tool – e.g., Vagrant + Ansible) or using a cloud-based solution. Be aware that it's highly recommended to keep your data folder outside of any container, so mapping a host folder into the guest can be a good practice.

Verifying if failover works

First we'll need to confirm that all nodes are running; the status of the replica set can confirm that. Then we'll kill one server to simulate a failure, and we expect the replica set to elect a new primary and remain responsive.

JavaScript


tasklist | grep mongod

taskkill /f /pid <primaryPid>  -- kill the primary MongoDB server

We'll connect to one of the running MongoDB instances (e.g., port 27018).

You can see that the second replica node was elected as primary within a few seconds.

Read concern

After you finish the configuration of the replica set and everything is up and running, you can start using it. If you perform an insert (on the primary) via the shell and then try to read the data from one of the secondary nodes, you will probably get an error: