
I am currently testing OpenLDAP multi-master replication on four nodes and I have a few problems.

I followed this tutorial: setup-openldap-multi-master-replication-centos-7, and I succeeded in configuring four-node multi-master replication.

If all nodes are alive, everything works fine and data are replicated between all four nodes. Even if I stop one/two/three nodes and make changes on the only live node, the data are replicated to the other nodes when they are started again.

The problem appears if I stop and start the slapd service on nodes ldap1, ldap2 and ldap3 while I make multiple changes on node ldap4.

One of the scenarios where I have problems: on one node I run a script that inserts users into LDAP:

for (( i=1; i<=5000; i++ )); do

# rewrite addUser.ldif for this entry
> addUser.ldif
echo "
dn: uid=ldaptest$i,ou=People,dc=test,dc=com
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: ldaptest
uid: ldaptest$i
uidNumber: 9988
gidNumber: 100
homeDirectory: /home/ldaptest
loginShell: /bin/bash
gecos: LDAP Replication Test User
userPassword: {crypt}x
shadowLastChange: 17058
shadowMin: 0
shadowMax: 99999
shadowWarning: 7
" >> addUser.ldif

# uid must contain the RDN value from the dn, hence ldaptest$i
ldapadd -x -w xxxxx -D "cn=Manager,dc=test,dc=com" -f addUser.ldif
done

All users are now in all DBs - everything is synchronized.

Then on one node I run a script that deletes the users:

for (( i=1; i<=5000; i++ )); do
        echo $i
        ldapdelete -w xxxxx -D "cn=Manager,dc=test,dc=com" -x "uid=ldaptest$i,ou=People,dc=test,dc=com"
done

While the script runs, I stop/start/stop/start the slapd service on the other three nodes (sketched below). When the script finishes deleting, the LDAP databases are no longer synchronized.
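
The restarts are done by hand; they amount to roughly the following loop on each of ldap1, ldap2 and ldap3 (the sleep values are arbitrary placeholders, not measured timings):

    # cycle slapd while the delete script runs on ldap4
    for (( j=1; j<=4; j++ )); do
        systemctl stop slapd
        sleep 15
        systemctl start slapd
        sleep 15
    done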

The command ldapsearch -x cn=ldaptest -b dc=test,dc=com | grep numEntries returns:

    ldap1: numEntries: 648 
    ldap4: numEntries: 0 (node where script was running)
    ldap3: numEntries: 5
    ldap2: numEntries: 24
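
For reference, the syncrepl state of the nodes can also be compared through the contextCSN operational attribute of the suffix entry (a minimal check, reusing the Manager bind from above); fully synchronized nodes report identical values:

    ldapsearch -x -w xxxxx -D "cn=Manager,dc=test,dc=com" \
        -b "dc=test,dc=com" -s base contextCSN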

Is this behaviour normal for LDAP, or is something perhaps wrong with my configuration?

The process is the same as in the tutorial; I just added additional olcSyncRepl entries for replication, something like:

[root@ratitovec bkal]# cat ldap04_2.ldif
    dn: olcDatabase={2}hdb,cn=config
    changetype: modify
    add: olcSyncRepl
    olcSyncRepl: rid=004
      provider=ldap://192.168.26.180:389/
      bindmethod=simple
      binddn="cn=Manager,dc=test,dc=com"
      credentials=iskratel
      searchbase="dc=test,dc=com"
      scope=sub
      schemachecking=on
      type=refreshAndPersist
      retry="30 5 300 3"
      interval=00:00:01:00
    -

[root@ratitovec bkal]# ldapadd -Y EXTERNAL -H ldapi:/// -f ldap04_2.ldif
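
For completeness, the multi-master setup from the tutorial also needs a unique olcServerID per node and olcMirrorMode: TRUE on the replicated database, applied the same way with ldapadd -Y EXTERNAL -H ldapi:///. A minimal sketch (the ID value is a placeholder for this node, not my exact config):

    dn: cn=config
    changetype: modify
    replace: olcServerID
    olcServerID: 4

    dn: olcDatabase={2}hdb,cn=config
    changetype: modify
    replace: olcMirrorMode
    olcMirrorMode: TRUE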

My first impression is that LDAP multi-master replication is not very reliable if nodes are restarted while we insert or delete data in the LDAP DB.

rtmktl

1 Answer


Our experience with OpenLDAP multi-master is that it is reliable with 2 LDAP nodes. With 3 (and probably also more) nodes, replication goes wrong under a little stress. We could consistently reproduce this with Apache JMeter as the test tool. Mainly delete operations caused the problem.

With 2 nodes, the Apache JMeter test caused no replication problems at all with a load of up to 8000 entries being successively added, read, changed and deleted in the test plan.

  • Thank you for your confirmation about OpenLDAP multi-master replication with more than two nodes. After the bad experience with OpenLDAP, we started our tests with "389 Directory Server" and the results are much better. – rtmktl May 19 '17 at 10:45
  • @rtmktl maybe you will be interested in the OpenLDAP fork 'ReOpenLDAP': https://github.com/leo-yuriev/ReOpenLDAP (I am not working for this project) – ipeacocks Apr 03 '18 at 11:31