Overview
This document provides instructions for deploying RadiantOne Identity Data Management on your Kubernetes cluster using Helm charts. It covers prerequisites, lists the microservices involved, and explains how to access the Identity Data Management control panel on your local machine via port-forwarding.
Self-managed Identity Data Management can be deployed on a supported Kubernetes cluster (cloud or on-premises). Amazon EKS, Azure AKS, and Google Kubernetes Engine are currently supported, and support for additional Kubernetes vendors such as Red Hat OpenShift is planned. The installation process uses Helm exclusively, meaning you will use `helm install` or `helm upgrade` commands.
The table below shows the mapping between the Identity Data Management application version and the self-managed Helm chart version:
| Identity Data Management application version | Helm chart version |
|---|---|
| 8.1.0 | 1.1.0 |
| 8.1.1 | 1.1.1 |
Ensure that you specify your target version when running installation and update commands that are listed in this document.
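If you want to confirm the mapping yourself, `helm show chart` prints the chart metadata for a given chart version, including the bundled application version (assuming the chart publishes an `appVersion` field):

```sh
# Inspect chart metadata for chart version 1.1.1; the appVersion field,
# if published, indicates the packaged Identity Data Management version.
helm show chart oci://ghcr.io/radiantlogic-devops/helm-v8/fid --version 1.1.1
```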
Prerequisites
- A Kubernetes cluster running version 1.27 or higher. Refer to the Sizing a Kubernetes cluster document for additional details.
- Install Helm version 3.0 or higher; Helm 3.8 or higher is recommended, since the `oci://` chart references used in this document rely on OCI registry support, which became generally available in Helm 3.8.
- Install kubectl version 1.27 or higher and configure it to access your Kubernetes cluster.
- For new customers, an Identity Data Management license key will be provided to you during onboarding. For existing customers that want to upgrade to v8.1 self-managed, your existing license key should work. If you have issues, create a Radiant Logic Customer Support ticket at https://support.radiantlogic.com/.
- For new customers, ensure that you have received container registry access and the image pull credentials file (`regcred.yaml`) from Radiant Logic during onboarding. For existing customers that want to upgrade to v8.1 self-managed, create a Radiant Logic Customer Support ticket at https://support.radiantlogic.com/ to request registry credentials.
- Ensure that you have the necessary storage provisioners and storage classes configured for the Kubernetes cluster. Some examples of supported storage classes are `gp2`/`gp3`, Azure Disk, etc. A quick verification command is shown after this list.
- Estimate sufficient resources (CPU, memory, storage) for the deployment. Your Radiant Logic solutions engineer may guide you with this depending on your use case.
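A quick way to verify that a usable storage class exists on your cluster, and which one is the default, is to list the configured storage classes:

```sh
# List available storage classes; the default is marked "(default)".
kubectl get storageclass
```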
Steps for Deployment
- Set up values.yaml file for Helm deployment

Create a file named `values.yaml`. In your `values.yaml`, ensure that you have the following properties at minimum. Note that the values of properties such as `storageClass`, `resources`, etc., will differ depending on your use case, cloud provider, and storage requirements. Work with your Radiant Logic Solution Engineer to customize your Helm configuration.

Example `values.yaml` file:

```yaml
replicaCount: 1 # Use 1 for testing, use 2 or more for production if needed.
image:
  tag: 8.1.1
fid:
  license: >-
    YourLicense
  rootPassword: "Enteryourrootpw"
imagePullSecrets:
  - name: regcred
persistence:
  enabled: true
  # Set the appropriate value for storageClass based on your cloud provider.
  storageClass: "gp3"
  # Set the appropriate value for size based on your requirements.
  size: 10Gi
  annotations: {}
zookeeper:
  persistence:
    enabled: true
    # Set the appropriate value for this based on your cloud provider.
    storageClass: "gp3"
resources:
  # Set appropriate values for these fields based on your requirements. Ensure that you
  # monitor usage over time and change the values accordingly when necessary.
  # Note that these values should be less than the sizing defined for your worker nodes.
  limits:
    cpu: 2
    memory: 4Gi
  requests:
    cpu: 2
    memory: 4Gi
env:
  INSTALL_SAMPLES: false
  FID_SERVER_JOPTS: '-Xms2g -Xmx4g' # To avoid memory swapping, -Xmx should never exceed the memory size defined in resources.
```
Definitions of the properties:
- `replicaCount`: Specifies the number of RadiantOne nodes that will be deployed. Set the value to a minimum of 2 in production environments for high availability.
- `image.repository`: Specifies the Docker repository for the Identity Data Management image. Set to radiantone/fid.
- `image.tag`: Specifies the version of the Identity Data Management image to install or upgrade to.
- `fid.rootUser`: Denotes the root user for RadiantOne. Set to cn=Directory Manager.
- `fid.rootPassword`: Denotes the password for the root user. Set to a strong password value that meets your corporate security policy. You can update this password after install if needed.
- `fid.license`: Set your Identity Data Management license key.
- `persistence.enabled`: Indicates whether data persistence is enabled. Set to true or false.
- `persistence.storageClass`: Defines the storage class for provisioning persistent volumes.
- `persistence.size`: Specifies the size of the persistent volume for Identity Data Management. Ensure that you monitor usage over time and change the value as needed.
- `zookeeper.persistence.enabled`: Indicates whether data persistence is enabled for Zookeeper.
- `resources`: Indicates the compute resources allocated to the Identity Data Management containers. Identity Data Management is deployed as a StatefulSet, which has implications for resource management: changing resources requires careful planning, as it affects all pods. Monitor your usage and change the values if needed over time.
Note that there are additional fields, such as `metrics`, that you can use to enable metrics and logging.
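To see the full set of fields the chart supports, including the metrics-related ones, you can dump the chart's default values:

```sh
# Print all default values supported by chart version 1.1.1.
helm show values oci://ghcr.io/radiantlogic-devops/helm-v8/fid --version 1.1.1
```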
- Create a namespace for your IDDM cluster

```sh
kubectl create namespace self-managed
```
- Deploy the credentials file provided to you in the same namespace

```sh
kubectl apply -n self-managed -f regcred.yaml
```
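You can confirm that the pull secret was created. This check assumes the secret defined inside `regcred.yaml` is named `regcred`, matching the `imagePullSecrets` entry in the example values.yaml:

```sh
# Verify the image pull secret exists in the target namespace.
kubectl get secret regcred -n self-managed
```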
- Optional - dry run your deployment

```sh
helm -n self-managed upgrade --install fid oci://ghcr.io/radiantlogic-devops/helm-v8/fid --version 1.1.1 --values values.yaml --set env.INSTALL_SAMPLES=true --debug --dry-run
```

This command will process your YAML config files without deploying anything. If everything looks good, re-run the command without the `--dry-run` parameter. Setting `INSTALL_SAMPLES=true` is optional for testing purposes and not recommended for production deployments.
- Deploy self-managed Identity Data Management

Ensure that you provide the appropriate path for your values.yaml file before running this command:

```sh
helm -n self-managed install fid oci://ghcr.io/radiantlogic-devops/helm-v8/fid --version 1.1.1 --values </path/to/your/values.yaml> --debug
```
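While the release comes up, you can watch the pods transition to the Running state (press Ctrl+C to stop watching):

```sh
# Watch pod status in the namespace until all pods are Running and Ready.
kubectl get pods -n self-managed --watch
```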
- Verify deployment

```sh
kubectl get pod -n self-managed
```

You should see the following pods listed in the output, confirming that the deployment was successful (a quick check for unhealthy pods is shown after the list):
- api-gateway: API Gateway (request routing) for the configuration REST API.
- authentication: Microservice for the configuration REST API endpoints concerning authentication. It provides REST API authentication functionality such as login, JWT token validation, etc.
- data-catalog: Microservice for the configuration REST API endpoints concerning the data catalog (data sources, data source schema management).
- directory-browser: Microservice for the configuration REST API endpoints concerning directory browser.
- directory-namespace: Microservice for the configuration REST API endpoints concerning directory namespace.
- directory-schema: Microservice for the configuration REST API endpoints concerning directory server schema.
- fid-X: The core server/engine for Identity Data Management. Note that the number of deployed fid services is determined by the `replicaCount` property in your values.yaml file. For example, if `replicaCount` is set to 1, you'll see only fid-0; if it's set to 2, you'll see both fid-0 and fid-1, and so on.
- iddm-proxy: Load balancer and reverse proxy service.
- iddm-ui: Front-end for the new v8.1 control panel.
- settings: Microservice for the configuration REST API endpoints concerning a variety of Identity Data Management server settings (security, ACIs, etc.).
- system-administration: Microservice for the configuration REST API endpoints concerning Identity Data Management administration (users, roles, permissions, etc.).
- zipkin: Distributed tracing system used for troubleshooting.
- zookeeper-X: Distributed configuration service used by the Identity Data Management backend. Depending on your configuration, you may see zookeeper-0, zookeeper-1, zookeeper-2, etc.
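If any of these pods are missing or stuck, a quick way to surface pods that are not in the Running phase is:

```sh
# Show only pods whose phase is not Running (Pending, Failed, etc.).
kubectl get pods -n self-managed --field-selector=status.phase!=Running
```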
Accessing RadiantOne Services
Accessing the Control Panel
To access the Identity Data Management control panel, first set up port forwarding for the `iddm-proxy-service` on port 8443:

```sh
kubectl port-forward svc/iddm-proxy-service -n self-managed 8443:443
```
After setting up port forwarding, you can reach the control panel at https://localhost:8443/login.
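To verify connectivity from the command line before opening a browser, you can request only the response headers. The `-k` flag skips certificate verification, which is typically needed when the deployment uses a self-signed certificate:

```sh
# An HTTP status line in the output confirms the proxy is reachable.
curl -k -I https://localhost:8443/login
```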
Accessing the Classic Control Panel
If needed, access the classic control panel via https://localhost:8443/classic after port-forwarding the `iddm-proxy-service`.
Accessing the Configuration API
To access the Configuration API, open a new terminal and run the following command to port-forward. Ensure that port 8443 is not already in use on your local machine.

```sh
kubectl port-forward svc/fid-app -n self-managed 8443
```

- Access the Configuration API at https://localhost:8443/api.
Accessing the Data Management SCIM and REST/ADAP APIs
To access the Data Management SCIM API and REST/ADAP API, open a new terminal and run the following command to port-forward. Ensure that ports 8089 and 8090 are not already in use on your local machine.

```sh
kubectl port-forward svc/fid-app -n self-managed 8089 8090
```

- Access the ADAP REST API at https://localhost:8089/adap.
- Access the ADAPs (SSL) REST API at https://localhost:8090/adap.
- Access the SCIM API at https://localhost:8090/scim2.
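As a quick smoke test, the sketch below queries the SCIM endpoint with basic authentication. The resource path and authentication scheme are assumptions for illustration; adjust them to match your configured SCIM backends and security settings:

```sh
# Hypothetical SCIM query using the RadiantOne root account; replace the
# password and resource path with values valid for your deployment.
curl -k -u "cn=Directory Manager:<rootPassword>" "https://localhost:8090/scim2/v2/Users?count=1"
```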
Accessing LDAP/LDAPs Service
To access the LDAP/LDAPs service, open a new terminal and run the following command to port-forward. Ensure that ports 2389 and 2636 are not already in use on your local machine.

```sh
kubectl port-forward svc/fid-app -n self-managed 2389 2636
```

- Access the LDAP service at ldap://localhost:2389 from your LDAP browser.
- Access the LDAPs service at ldaps://localhost:2636 from your LDAP browser.
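If you have the OpenLDAP client tools installed, a rootDSE search is a simple way to confirm the LDAP endpoint is responding. Replace `<rootPassword>` with your `fid.rootPassword` value:

```sh
# Query the rootDSE over the forwarded LDAP port as a connectivity check.
ldapsearch -H ldap://localhost:2389 -D "cn=Directory Manager" -w "<rootPassword>" -b "" -s base "(objectclass=*)"
```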
Updating a Deployment
To update any resources or settings, change the values in `values.yaml` and run the following command:

```sh
helm -n self-managed upgrade --install fid oci://ghcr.io/radiantlogic-devops/helm-v8/fid --version 1.1.1 --values </path/to/your/values.yaml> --debug
```
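After an upgrade, you can review the release's revision history to confirm the new revision was deployed and to identify a revision to roll back to if needed:

```sh
# List all revisions of the fid release in the namespace.
helm -n self-managed history fid
```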
Troubleshooting your Kubernetes environment
The steps listed here are meant to help you identify and troubleshoot issues related to pod deployments in your Kubernetes environment.
- Check events for deployment issues

This command lists events in the specified namespace, helping to identify any issues related to pod deployment:

```sh
kubectl get events -n <namespace>
```
- Describe a specific pod

This command provides detailed information about the pod, including its status, conditions, and any errors that might be affecting its deployment:

```sh
kubectl describe pods/fid-0 -n <namespace>
```
- Check Zookeeper status

Check whether Zookeeper is running by executing:

```sh
kubectl exec -it zookeeper-0 -n <namespace> -- bash -c "export JAVA_HOME=/opt/radiantone/rli-zookeeper-external/jdk/jre/;/opt/radiantone/rli-zookeeper-external/zookeeper/bin/zkServer.sh status"
```
- Access the Zookeeper or FID container

Shell into the Zookeeper container. This opens an interactive shell session inside the zookeeper-0 pod, allowing you to execute commands directly within that container:

```sh
kubectl exec -it zookeeper-0 -n <namespace> -- /bin/bash
```

Shell into the FID container:

```sh
kubectl exec -it fid-0 -n <namespace> -- /bin/bash
```
- Run the cluster command

This command lists the cluster configuration, which can help identify any existing issues:

```sh
kubectl exec -it fid-0 -n <namespace> -- cluster.sh list
```
- List Java processes

To see which Java processes are running in the FID container, execute:

```sh
kubectl exec -it fid-0 -n <namespace> -- /opt/radiantone/vds/jdk/bin/jps -lv
```
- Get Kubernetes context

Ensure you're interacting with the correct cluster by running:

```sh
kubectl config get-contexts
```
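Beyond these steps, container logs are often the fastest way to pinpoint a failure. For example:

```sh
# Tail the most recent log lines from the core FID container.
kubectl logs fid-0 -n <namespace> --tail=100

# If a pod crashed and restarted, inspect the previous container's logs.
kubectl logs fid-0 -n <namespace> --previous
```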
Deleting Identity Data Management
- Uninstall the Identity Data Management deployment

To uninstall the Identity Data Management deployment from your namespace, run:

```sh
helm uninstall --namespace=<namespace> fid
```
- Verify uninstallation

To confirm that the deployment has been successfully removed, execute:

```sh
kubectl get all -n <namespace>
```

You should see that all Identity Data Management related pods have been removed. If everything looks good, proceed to the next step.
- Delete PVCs

Delete all existing PVCs from your namespace. List them first, then delete each one by name (a bulk-delete sketch follows this step):

```sh
kubectl get pvc -n <namespace>
kubectl delete pvc <pvc-name> -n <namespace>
```
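If you intend to remove everything in the namespace anyway, you can delete all PVCs at once instead of one by one. Use with care, as this removes all persisted data in the namespace:

```sh
# Delete every PVC in the namespace in a single command.
kubectl delete pvc --all -n <namespace>
```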
- Delete the namespace

To delete the namespace you created, run:

```sh
kubectl delete namespace <namespace>
```
- Verify namespace deletion

To check if the namespace has been deleted, execute:

```sh
kubectl get namespace
```

You should see that the previously deleted namespace is no longer listed.