WSO2 Identity Server Cluster Setup

Sadil Chamishka
6 min read · May 11, 2022


Hi folks, in this blog I am going to set up WSO2 Identity Server as a clustered deployment. WSO2 Identity Server (WSO2 IS) is an API-driven, open-source Identity and Access Management (IAM) product designed to help developers build effective CIAM solutions in a short time. In modern microservice architectures, centralised identity and access management is a key requirement. As developers we also focus on building scalable and fault-tolerant solutions, so the identity server should comply with these non-functional requirements as well: a single point of failure would cause service disruptions. In this blog I am sharing my experience of configuring a multi-node cluster setup for WSO2 IS.

WSO2 IS comes with an embedded LDAP server as the primary user store and an embedded H2 database to store identity-related information. These defaults are not recommended for production deployments and are not applicable to cluster deployments, because the data has to be stored centrally so that all the nodes in the cluster can access it. Therefore, as the first step, let’s configure a MySQL database server as the primary user store of WSO2 IS.

There are two main databases used by WSO2 IS to provide its functionality. We need to create both databases and make sure to set the character set to latin1.

1 — wso2identity_db: stores identity-related data.

2 — wso2shared_db: stores user-management data.

create database <database-name> character set latin1;
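Using the database names from above, the two create statements would look like the following (a sketch; adjust the names to your environment if they differ):

```sql
-- Create the two databases with the latin1 character set
CREATE DATABASE wso2identity_db CHARACTER SET latin1;
CREATE DATABASE wso2shared_db CHARACTER SET latin1;
```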

Then we have to populate the tables in each database. For wso2identity_db, execute the database scripts “identity/mysql.sql”, “identity/uma/mysql.sql” and “consent/mysql.sql” found in the “dbscripts” folder of the WSO2 IS distribution. For wso2shared_db, execute the “mysql.sql” database script found in the same “dbscripts” folder.

source <IS_HOME>/dbscripts/mysql.sql
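Putting it together, a minimal mysql client session could look like this (the script paths are the ones listed above; <IS_HOME> is the root of your WSO2 IS distribution):

```sql
-- Populate wso2identity_db with the identity, UMA and consent schemas
USE wso2identity_db;
SOURCE <IS_HOME>/dbscripts/identity/mysql.sql;
SOURCE <IS_HOME>/dbscripts/identity/uma/mysql.sql;
SOURCE <IS_HOME>/dbscripts/consent/mysql.sql;

-- Populate wso2shared_db with the user-management schema
USE wso2shared_db;
SOURCE <IS_HOME>/dbscripts/mysql.sql;
```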

Now that the necessary tables are in place, let’s configure WSO2 IS to use these MySQL databases instead of the embedded H2 databases configured by default, and set the MySQL database as the primary user store instead of the embedded LDAP user store. The configuration is done through the “deployment.toml” file, which can be found in the “repository/conf” folder of the distribution. Make sure to comment out or remove the default configurations while adding the following.

[user_store]
type = "database_unique_id"

[database.identity_db]
type = "mysql"
url = "jdbc:mysql://<host>:3306/wso2identity_db?allowPublicKeyRetrieval=true"
username = "root"
password = "pAssWord"

[database.shared_db]
type = "mysql"
url = "jdbc:mysql://<host>:3306/wso2shared_db?allowPublicKeyRetrieval=true"
username = "root"
password = "pAssWord"

In order to connect to the MySQL databases, the JDBC driver for MySQL (MySQL Connector/J) has to be provided. It can be downloaded from the MySQL website and placed into the “repository/components/dropins” folder of the WSO2 IS distribution.

Now we have the basic setup for the first node of the WSO2 IS cluster. The second node can be configured by taking a copy of the original distribution and following the configurations above, or you can take a copy of the already configured distribution and treat it as the second node of the cluster. To run the identity server, execute the startup script found in the “bin” folder of the distribution.


WSO2 IS starts on the default port 9443, and since two services cannot run on the same port, we have to start the second instance with a port offset. Execute the command below on the second node to start the server on port 9444.

sh -DportOffset=1
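Alternatively, instead of passing the system property on every startup, the offset can be made permanent in the second node’s deployment.toml (a sketch, assuming the standard [server] section):

```toml
[server]
offset = 1
```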

With these configurations we have reached the setup shown in the figure below. To verify the deployment, we can create a user from one instance and check whether the user is visible from the second instance.
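One way to do this check is through the SCIM2 REST API. As a sketch (assuming the default admin:admin credentials, both nodes on the same machine, and a self-signed certificate, hence the -k flag), create a user on node one and look it up on node two:

```
# Create a user on the first node (port 9443)
curl -k -u admin:admin -X POST https://localhost:9443/scim2/Users \
  -H "Content-Type: application/scim+json" \
  -d '{"userName": "alice", "password": "Alice@1234"}'

# The same user should be returned by the second node (port 9444)
curl -k -u admin:admin "https://localhost:9444/scim2/Users?filter=userName+eq+alice"
```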

The clustered setup is not yet complete. WSO2 IS uses caching to improve performance, with multiple cache layers that are local to each node. If we stop here, we face a cache-coherence problem: a change made on one node would not be reflected in the caches of the others. WSO2 IS embeds Hazelcast, which provides embedded data-grid capability and solves cache coherence by issuing cache invalidations to all the nodes. Let’s form a cluster by registering the member nodes in the “deployment.toml” file as follows.

[clustering]
membership_scheme = "wka"
local_member_host = "<node-1-ip>"
local_member_port = "4000"
members = ["<node-1-ip>:4000", "<node-2-ip>:4001"]

Here we are using the WKA (well-known address) membership scheme, where the member nodes are predefined to create the cluster. The first IS node exposes its Hazelcast member on port 4000, and the second node can be configured with port 4001 in its respective “deployment.toml” file. There are also other membership schemes for cases where the member nodes have to be discovered automatically, for example when deploying on container orchestration systems.
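For clarity, the corresponding clustering section on the second node would look like this (the IP addresses are placeholders; the members list is the same on both nodes):

```toml
[clustering]
membership_scheme = "wka"
local_member_host = "<node-2-ip>"
local_member_port = "4001"
members = ["<node-1-ip>:4000", "<node-2-ip>:4001"]
```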

The current state of our deployment is shown in the following figure. The IS nodes can still be accessed individually through their respective ports (9443 and 9444).

The final step is to expose the two IS nodes through a load balancer. The load balancer balances the load across the nodes based on a configured algorithm (e.g. round robin). A reverse proxy can also be used to route traffic based on a configured domain and path combination. NGINX has both capabilities, so let’s configure it. You can refer to the NGINX documentation for installation instructions. For Homebrew users, here is the command.

brew install nginx

As the first step, let’s set the default port of the NGINX server to 80 in the nginx.conf file, which resides in “/opt/homebrew/etc/nginx” (for other installations the path may be “/usr/local/etc/nginx/”).
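The relevant change is just the listen directive of the default server block in nginx.conf, which might look like this (a sketch; Homebrew installations typically default to 8080):

```nginx
server {
    listen 80;              # changed from the Homebrew default of 8080
    server_name localhost;
}
```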

In order to enable TLS, we have to create a certificate and private key for the NGINX server. Execute the following command to generate the certificate.crt and privateKey.key files, then move them into the “/opt/homebrew/etc/nginx/ssl” folder (create the “ssl” folder if it does not exist).

openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout privateKey.key -out certificate.crt

Now we have to define a server configuration file to manage the traffic for our two-node IS setup. The configuration files inside “/opt/homebrew/etc/nginx/servers” are loaded when the NGINX server starts up. Therefore, let’s add the following configuration as “com.https.conf” inside the “servers” folder.
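A minimal sketch of such a server configuration could look like the following (the server name is a placeholder for your local domain, the certificate paths match the files generated above, and ip_hash keeps a given client pinned to one backend node):

```nginx
upstream wso2is {
    # Route a given client to the same backend node (session affinity)
    ip_hash;
    server localhost:9443;
    server localhost:9444;
}

server {
    listen 443 ssl;
    server_name <local-domain>;

    ssl_certificate     /opt/homebrew/etc/nginx/ssl/certificate.crt;
    ssl_certificate_key /opt/homebrew/etc/nginx/ssl/privateKey.key;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://wso2is;
    }
}
```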

In order to use NGINX as the proxy, we have to add a few configurations to the deployment.toml file. We also have to set the hostname of the IS instances to the local domain name we are going to use. Make sure to configure this on both IS nodes.

[server]
hostname = "<local-domain>"
node_ip = "<node-ip>"
base_path = "https://$ref{server.hostname}:${carbon.management.port}"

[transport.https.properties]
proxyPort = 443

As the final step, we have to register the local domain name we are going to use, pointing it to the loopback address, in the “/etc/hosts” file.
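The entry in /etc/hosts would then look like this (the domain name is a placeholder for whatever local domain you chose above):

```
127.0.0.1    <local-domain>
```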