Multi-Provider Setup
Created by Jason Trupp, last modified on Dec 04, 2015
For use when setting up a multi-provider environment with any number of provider servers
(slapd.conf excerpts are shown below each step)
Configure the consumer part and the provider part on each server
1. Give each server a unique ID with the serverid parameter. The first server will be 1, the second 2, and so on
serverid 1
2. Enable the syncprov and accesslog modules; both are mandatory
moduleload syncprov.la
moduleload accesslog.la
3. Create two database entries. The application database defines both the syncprov and accesslog overlays
database mdb
suffix "dc=symas,dc=com"
rootdn "dc=symas,dc=com"
rootpw {SSHA256}nB3qRLx5bz2X4FJNUvF2/9toLiVufv4vScQG2t+85sIES6WywCuVFw==
# Indices to maintain
index default eq
index objectClass
index cn eq,sub
index memberUID
index givenName eq,sub
index uniqueMember
index mail eq,sub
index entryUUID eq
index entryCSN eq
index uid eq,sub
directory /var/symas/openldap-data/symas
maxsize 1073741824

overlay syncprov
syncprov-checkpoint 100 10
syncprov-sessionlog 10000
syncprov-reloadhint TRUE

overlay accesslog
logdb cn=accesslog
logops writes
logsuccess TRUE
logpurge 24:00 01+00:00
Note: The application database is the consumer
The second entry is the accesslog database, which defines only the syncprov overlay.
database mdb
directory /var/symas/openldap-data/accesslog
maxsize 5120000
suffix "cn=accesslog"
index default eq
index objectClass,entryCSN,entryUUID,reqEnd,reqResult,reqStart

overlay syncprov
syncprov-nopresent TRUE
syncprov-reloadhint TRUE
syncprov-checkpoint 100 10
syncprov-sessionlog 10000
Note: The accesslog database is the provider
Create the accesslog directory, /var/symas/openldap-data/accesslog
4. Define the provider servers from which the consumer (the application database) will get its data
Use the syncrepl directive
For delta-syncrepl, set syncrepl to use the accesslog database:
logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
This filters the accesslog database, retrieving only write operations (i.e. updates) that were successful (reqResult=0)
Define as many syncrepl statements as there are servers in the replication topology, including the server where the application database itself is stored
syncrepl rid=1
  provider=ldap://brie.rb.symas.net
syncrepl rid=2
  provider=ldap://cantal.rb.symas.net
syncrepl rid=3
  provider=ldap://livarot.rb.symas.net
Example: 3 servers A, B, and C:
A could define sync(B), sync(C),
B could define sync(A), sync(C),
C could define sync(A), sync(B)
Simply configure sync(A), sync(B), and sync(C) on all the servers
5. Set up the syncrepl parameters
Each syncrepl stanza tells the server where to send its requests for updates
binddn="dc=example,dc=com"
credentials=secret
bindmethod=simple
schemachecking=on
searchbase="dc=example,dc=com"
type=refreshAndPersist
interval=00:00:01:00
retry="60 +"
filter="(objectclass=*)"
scope=sub
logbase="cn=accesslog"
logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
syncdata=accesslog
6. Add the mirrormode flag, set to TRUE
mirrormode TRUE
7. If TLS is not used, add the binddn and credentials to the syncrepl entry
binddn="dc=symas,dc=com"
credentials=secret
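Putting steps 4 through 7 together, a complete consumer stanza for one provider might look like the following sketch (hostname, suffix, credentials, and overlay settings reuse the examples above; adjust them for your environment):

```
syncrepl rid=1
  provider=ldap://brie.rb.symas.net
  type=refreshAndPersist
  retry="60 +"
  searchbase="dc=symas,dc=com"
  filter="(objectclass=*)"
  scope=sub
  schemachecking=on
  bindmethod=simple
  binddn="dc=symas,dc=com"
  credentials=secret
  logbase="cn=accesslog"
  logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
  syncdata=accesslog
mirrormode TRUE
```

Repeat the syncrepl stanza (with a unique rid) for each provider in the topology; mirrormode appears once per database.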
8. Load the servers with all data before starting slapd
Use slapcat to back up the database from the first provider
Use slapadd to load that backup onto each additional provider
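As a sketch, the initial load might look like this (the suffix is the example from this page and the LDIF file name is arbitrary; run slapadd on the target server while its slapd is stopped):

```shell
# On the first (already-populated) provider: dump the database to LDIF
slapcat -b "dc=symas,dc=com" -l symas.ldif

# Copy symas.ldif to each additional provider, then, with slapd
# stopped there, load it:
slapadd -b "dc=symas,dc=com" -l symas.ldif
```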
Notes:
In multi-provider replication, each server is both a provider to and a consumer of every other provider
Various topologies:
With only 2 servers, this is really A <-> B
Both are providers and both are consumers
With 3-4 servers, this is A <-> B <-> C (<-> D)
A, B, C (and D) are each both providers and consumers
Updating A will update B, which will update C (which will update D)
A and C are only connected via B unless A is specifically connected to C
Then updating A will update B and C
* 2 servers : 1 connection (A <-> B)
* 3 servers : 3 connections (A <-> B, A <-> C, B <-> C)
* 4 servers : 6 connections (A <-> B, A <-> C, A <-> D, B <-> C, B <-> D, C <-> D)
Note: With more than 4 servers this becomes problematic, since the number of connections grows *very* fast, as described below:
* N servers : (N-1) + (N-2) + … + 2 + 1 = N × (N-1)/2 connections. For N=10, this is 45 connections
That also means that a modified entry is transmitted as many times as there are connections.
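The quadratic growth is easy to check, for example in a shell (a throwaway calculation, not part of the slapd configuration):

```shell
# Full mesh: each pair of servers needs one connection, N*(N-1)/2 total
for n in 2 3 4 10; do
  echo "$n servers: $(( n * (n - 1) / 2 )) connections"
done
```

For N=10 this prints 45 connections, matching the figure above.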