
Scaling Down vRealize Automation 7.x


 

Use Case


With vRealize Automation 7.x having reached its end of life in September 2022, most customers have either adopted version 8.x or are in transition and will get there eventually.


It's not easy to stop an enterprise application and decommission it overnight, but it is possible to scale it down rather than keep it distributed and highly available.


Keeping this in mind, I thought I'd pen down a few steps on how to scale down vRA 7.x.



 

Environment


I built a 3-node vRA appliance cluster and 2 IaaS servers, named as below:



Server      Role
----------  -----------------------------------------------------------------------------------------
svraone     primary VA
svratwo     secondary VA
svrathree   tertiary VA
siaasone    primary Web, Manager Service, Model Manager Data, proxy agent, DEM Worker and DEM Orchestrator
siaastwo    secondary Web, secondary Manager Service, proxy agent, DEM Worker and DEM Orchestrator



 


Procedure


  • Take snapshots of all nodes before performing any of the steps below, and back up the databases too; a snapshot sketch follows
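
As a minimal sketch, the snapshots could be scripted with VMware's govc CLI (an assumption on my part; the VM names and GOVC_* variables below are from my lab and would need adjusting):

# Assumes govc is installed and GOVC_URL / GOVC_USERNAME / GOVC_PASSWORD
# point at the vCenter that manages these VMs
for vm in svraone svratwo svrathree siaasone siaastwo; do
    govc snapshot.create -vm "$vm" "pre-scale-down"
done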

  • The output of listing all the nodes in the cluster looks like this in my lab
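
The listing below came from the vra-command utility, which is the standard way to enumerate cluster nodes in vRA 7.x:

# Run as root on the primary vRA appliance
vra-command list-nodes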



Node:
  NodeHost: svraone.cap.org
  NodeId: cafe.node.631087009.16410
  NodeType: VA
  Components:
    Component:
        Type: vRA
        Version: 7.6.0.317
        Primary: True
    Component:
        Type: vRO
        Version: 7.6.0.12923317
        
        
Node:
  NodeHost: siaastwo.cap.org
  NodeId: 7DD5F70C-976F-4635-89F8-582986851E98
  NodeType: IAAS
  Components:
    Component:
        Type: Website
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: ModelManagerWeb
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: ManagerService
        Version: 7.6.0.16195
        State: Active
    Component:
        Type: ManagementAgent
        Version: 7.6.0.17541
        State: Started
    Component:
        Type: DemWorker
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: DemOrchestrator
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: WAPI
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: vSphereAgent
        Version: 7.6.0.16195
        State: Started
        
        
        
Node:
  NodeHost: siaasone.cap.org
  NodeId: B030EDF7-DB2C-4830-942A-F40D9464AAD9
  NodeType: IAAS
  Components:
    Component:
        Type: Database
        Version: 7.6.0.16195
        State: Available
    Component:
        Type: Website
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: ModelManagerData
        Version: 7.6.0.16195
        State: Available
    Component:
        Type: ModelManagerWeb
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: ManagerService
        Version: 7.6.0.16195
        State: Passive
    Component:
        Type: ManagementAgent
        Version: 7.6.0.17541
        State: Started
    Component:
        Type: DemOrchestrator
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: DemWorker
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: WAPI
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: vSphereAgent
        Version: 7.6.0.16195
        State: Started
        
        
Node:
  NodeHost: svrathree.cap.org
  NodeId: cafe.node.384204123.10666
  NodeType: VA
  Components:
    Component:
        Type: vRA
        Version: 7.6.0.317
        Primary: False
    Component:
        Type: vRO
        Version: 7.6.0.12923317
        
        
        
Node:
  NodeHost: svratwo.cap.org
  NodeId: cafe.node.776067309.27389
  NodeType: VA
  Components:
    Component:
        Type: vRA
        Version: 7.6.0.317
        Primary: False
    Component:
        Type: vRO
        Version: 7.6.0.12923317


  • To scale down, I'd like to remove my secondary nodes and leave only the primary nodes in the cluster

  • I'll begin the scale-down with the IaaS nodes, starting with siaastwo.cap.org. Power the node off first; a sketch follows
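
One way to power the node off is with govc again (an assumption on my part; the graceful shutdown presumes VMware Tools is running in the guest):

# Gracefully shut down the guest OS of the secondary IaaS node
govc vm.power -s siaastwo
# Or force a power-off if a guest shutdown isn't possible:
# govc vm.power -off siaastwo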


  • I'll open the VAMI of my master node (https://svraone.cap.org:5480) and then click on the Cluster tab

  • Because we powered off the second IaaS node, it won't show in a connected state



  • The moment I click on "Delete" next to the secondary IaaS node, I get the warning shown below


svraone:5480 says

Do you really want to delete the node 7DD5F70C-976F-4635-89F8-582986851E98 which was last connected 11 minutes ago? You will need to remove its hostname from an external load balancer!


  • This ID, 7DD5F70C-976F-4635-89F8-582986851E98, belongs to siaastwo.cap.org; see the output below

        
Node:
  NodeHost: siaastwo.cap.org
  NodeId: 7DD5F70C-976F-4635-89F8-582986851E98
  NodeType: IAAS
  Components:
    Component:
        Type: Website
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: ModelManagerWeb
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: ManagerService
        Version: 7.6.0.16195
        State: Active
    Component:
        Type: ManagementAgent
        Version: 7.6.0.17541
        State: Started
    Component:
        Type: DemWorker
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: DemOrchestrator
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: WAPI
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: vSphereAgent
        Version: 7.6.0.16195
        State: Started
        

  • Now confirm the deletion
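
For reference, VMware's KBs also describe removing a node from the appliance command line; I haven't verified the exact syntax in this lab, so treat the flag below as an assumption:

# Assumed CLI equivalent of the VAMI "Delete" button (unverified here)
vra-command remove-node --node-id 7DD5F70C-976F-4635-89F8-582986851E98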




  • The node is now successfully removed

  • To monitor the removal, take a look at /var/log/messages on the primary appliance; a sketch follows
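
For example, to follow the events live while the node is being removed (the grep filter is just a convenience):

# Run on the primary appliance during the removal
tail -f /var/log/messages
# Optionally narrow it down to the removal scripts:
# tail -f /var/log/messages | grep -i node-removed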




2022-07-05T23:12:47.929846+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Logging event node-removed

2022-07-05T23:12:47.929877+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Executing /etc/vr/cluster-event/node-removed.d/05-db-sync


2022-07-05T23:12:47.930565+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[9692]: info Resolved vCAC host: svraone.cap.org


2022-07-05T23:12:48.005902+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/05-db-sync: IS_MASTER: 'True', NODES: 'svraone.cap.org svrathree.cap.org svratwo.cap.org'


2022-07-05T23:12:48.039511+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Result from 05-db-sync is

2022-07-05T23:12:48.039556+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Executing /etc/vr/cluster-event/node-removed.d/10-rabbitmq

2022-07-05T23:12:48.125189+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/10-rabbitmq: REMOVED_NODE: 'siaastwo.cap.org', hostname: 'svraone.cap.org'

2022-07-05T23:12:48.130809+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Result from 10-rabbitmq is

2022-07-05T23:12:48.130832+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Executing /etc/vr/cluster-event/node-removed.d/20-haproxy

2022-07-05T23:12:48.233369+00:00 svraone node-removed: Removing 'siaastwo.cap.org' from haproxy config

2022-07-05T23:12:48.265237+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[9693]: info Jul 05, 2022 11:12:48 PM org.springframework.jdbc.datasource.SingleConnectionDataSource initConnection


2022-07-05T23:12:48.265459+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[9693]: info INFO: Established shared JDBC Connection: org.postgresql.jdbc.PgConnection@6ab7a896


2022-07-05T23:12:48.308782+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Result from 20-haproxy is Loaded HAProxy configuration file: /etc/haproxy/conf.d/30-vro-config.cfg
Loaded HAProxy configuration file: /etc/haproxy/conf.d/20-vcac.cfg
Loaded HAProxy configuration file: /etc/haproxy/conf.d/40-xenon.cfg
Loaded HAProxy configuration file: /etc/haproxy/conf.d/10-psql.cfg
Reload service haproxy ..done


2022-07-05T23:12:48.308807+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Executing /etc/vr/cluster-event/node-removed.d/25-db


2022-07-05T23:12:48.353287+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[9693]: info [2022-07-05 23:12:48] [root] [INFO] Current node in cluster mode

2022-07-05T23:12:48.353314+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[9693]: info command exit code: 1


2022-07-05T23:12:48.353322+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[9693]: info cluster-mode-check [2022-07-05 23:12:48] [root] [INFO] Current node in cluster mode

2022-07-05T23:12:48.354087+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[9693]: info Executing shell command...

2022-07-05T23:12:48.458204+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/25-db: REMOVED_NODE: 'siaastwo.cap.org', hostname: 'svraone.cap.org'

2022-07-05T23:12:48.461776+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Result from 25-db is

2022-07-05T23:12:48.461800+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Executing /etc/vr/cluster-event/node-removed.d/30-vidm-db

2022-07-05T23:12:48.827039+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/30-vidm-db: IS_MASTER: 'True', REMOVED_NODE: 'siaastwo.cap.org'

2022-07-05T23:12:48.847537+00:00 svraone node-removed: Removing 'siaastwo' from horizon database tables

2022-07-05T23:12:48.852777+00:00 svraone su: (to postgres) root on none

2022-07-05T23:12:50.279007+00:00 svraone su: last message repeated 3 times

2022-07-05T23:12:50.278863+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Result from 30-vidm-db is DELETE 0
DELETE 0
Last login: Tue Jul  5 23:12:48 UTC 2022
DELETE 0
Last login: Tue Jul  5 23:12:49 UTC 2022
DELETE 0
Last login: Tue Jul  5 23:12:49 UTC 2022

2022-07-05T23:12:50.278889+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Executing /etc/vr/cluster-event/node-removed.d/40-rabbitmq-master

2022-07-05T23:12:50.370747+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/40-rabbitmq-master: IS_MASTER: 'True', REMOVED_NODE: 'siaastwo.cap.org'

2022-07-05T23:12:50.383950+00:00 svraone node-removed: Removing 'rabbit@siaastwo' from rabbitmq cluster

2022-07-05T23:12:50.424780+00:00 svraone su: (to rabbitmq) root on none

2022-07-05T23:12:50.476667+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[9692]: info Event request for siaastwo.cap.org timed out

2022-07-05T23:12:51.335973+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[9693]: info Executing shell command...

2022-07-05T23:12:52.455441+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Result from 40-rabbitmq-master is Removing node rabbit@siaastwo from cluster

2022-07-05T23:12:52.455467+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Executing /etc/vr/cluster-event/node-removed.d/50-elasticsearch

2022-07-05T23:12:52.550899+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/50-elasticsearch: IS_MASTER: 'True'

2022-07-05T23:12:52.564092+00:00 svraone node-removed: Restarting elasticsearch service

2022-07-05T23:12:52.733672+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Result from 50-elasticsearch is Stopping elasticsearch:  process in pidfile `/opt/vmware/elasticsearch/elasticsearch.pid'done.
Starting elasticsearch: 2048

2022-07-05T23:12:52.733690+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Executing /etc/vr/cluster-event/node-removed.d/60-vidm-health

2022-07-05T23:12:52.883943+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/60-vidm-health: IS_MASTER: 'True', REMOVED_NODE: 'siaastwo.cap.org'


  • Executing the list-nodes command again, you can see there are no longer any references to siaastwo.cap.org; a quick check is sketched below
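
Assuming the same vra-command utility as before, a one-liner confirms the node is gone:

# Should return nothing once siaastwo has been removed
vra-command list-nodes | grep -i siaastwo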


Node:
  NodeHost: svraone.cap.org
  NodeId: cafe.node.631087009.16410
  NodeType: VA
  Components:
    Component:
        Type: vRA
        Version: 7.6.0.317
        Primary: True
    Component:
        Type: vRO
        Version: 7.6.0.12923317
        
        
        
Node:
  NodeHost: siaasone.cap.org
  NodeId: B030EDF7-DB2C-4830-942A-F40D9464AAD9
  NodeType: IAAS
  Components:
    Component:
        Type: Database
        Version: 7.6.0.16195
        State: Available
    Component:
        Type: Website
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: ModelManagerData
        Version: 7.6.0.16195
        State: Available
    Component:
        Type: ModelManagerWeb
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: ManagerService
        Version: 7.6.0.16195
        State: Active
    Component:
        Type: ManagementAgent
        Version: 7.6.0.17541
        State: Started
    Component:
        Type: DemOrchestrator
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: DemWorker
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: WAPI
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: vSphereAgent
        Version: 7.6.0.16195
        State: Started
        
        
Node:
  NodeHost: svrathree.cap.org
  NodeId: cafe.node.384204123.10666
  NodeType: VA
  Components:
    Component:
        Type: vRA
        Version: 7.6.0.317
        Primary: False
    Component:
        Type: vRO
        Version: 7.6.0.12923317


Node:
  NodeHost: svratwo.cap.org
  NodeId: cafe.node.776067309.27389
  NodeType: VA
  Components:
    Component:
        Type: vRA
        Version: 7.6.0.317
        Primary: False
    Component:
        Type: vRO
        Version: 7.6.0.12923317

  • Now, let's move on to removing the second and third appliances from the cluster

  • Before I remove the nodes from the cluster, I'll remove the authentication connectors coming from those nodes (in the vRA 7.x console these are listed under Administration > Directories Management > Connectors)













  • Now that the connectors are removed, we can move on with removing the vRA appliances from the cluster

  • Take one more round of snapshots



  • Once the snapshot tasks are complete, we will proceed with the appliance removal

  • Remember, you cannot and should not remove the master node from the cluster.



  • Ensure the database is in Asynchronous mode; a quick check is sketched after this list

  • Click on Delete next to svrathree.cap.org to remove the node from the cluster
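
The replication mode is visible on the VAMI's database page; as a command-line sketch (assuming the embedded PostgreSQL with streaming replication), it can also be checked on the master appliance:

# 'sync_state' should read 'async' for asynchronous replication.
# Assumes psql is on the postgres user's PATH; on the appliance it may
# live under /opt/vmware/vpostgres/current/bin
su - postgres -c "psql -c \"SELECT application_name, sync_state FROM pg_stat_replication;\""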