Status

The Management > High Availability > Status tab lists all devices involved in a hot standby system or cluster and provides the following information:

  • ID: The device's node ID. In a hot standby system, the node ID is either 1 or 2.

    The node ID in a cluster can range from 1 to 10, as a cluster can consist of a maximum of 10 nodes.

  • Role: Each node within the cluster can assume one of the following roles:

    • MASTER: The primary system in a hot standby/cluster setup. It is responsible for synchronizing and distributing data within the cluster.
    • SLAVE: The standby system in a hot standby/cluster setup, which takes over operations if the master fails.
    • WORKER: A simple cluster node, responsible for data processing only.
  • Device name: The name of the device.
  • Status: The device's HA state, which can be one of the following:

    • ACTIVE: The node is fully operational. In case of a hot standby (active-passive) setup, this is the status of the active node.
    • READY: The node is fully operational. In case of a hot standby (active-passive) setup, this is the status of the passive node.
    • RESERVED: The node's version does not match, so it is not involved in the HA processes.
    • UNLINKED: One or more interface links are down.
    • UP2DATE: An Up2Date is in progress.
    • UP2DATE-FAILED: An Up2Date has failed.
    • DEAD: The node is not reachable.
    • SYNCING: Data synchronization is in progress. This status is displayed when a node connects to a master. The initial synchronization takes at least five minutes, but it can be prolonged by any of the programs involved in synchronizing. While a SLAVE is in state SYNCING, no graceful takeover takes place, for example in case of a link failure on the master node (see the sketch after this list).
  • Version: Version number of Sophos UTM Software installed on the system.
  • Last status change: The time when the last status change occurred.
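
The role and status values above translate into a simple data model. The following Python sketch is illustrative only: the type and field names are assumptions, not part of the product. It also encodes the rule that a SLAVE in state SYNCING cannot perform a graceful takeover.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum


class NodeRole(Enum):
    """Roles a node can assume in a hot standby system or cluster."""
    MASTER = "MASTER"   # synchronizes and distributes data within the cluster
    SLAVE = "SLAVE"     # standby system; takes over if the master fails
    WORKER = "WORKER"   # simple cluster node; data processing only


class NodeStatus(Enum):
    """HA states as shown on the Status tab."""
    ACTIVE = "ACTIVE"                  # fully operational (active node)
    READY = "READY"                    # fully operational (passive node)
    RESERVED = "RESERVED"              # version mismatch; not involved
    UNLINKED = "UNLINKED"              # one or more interface links down
    UP2DATE = "UP2DATE"                # an Up2Date is in progress
    UP2DATE_FAILED = "UP2DATE-FAILED"  # an Up2Date has failed
    DEAD = "DEAD"                      # node is not reachable
    SYNCING = "SYNCING"                # data synchronization in progress


@dataclass
class HANode:
    """One row of the Status tab."""
    node_id: int                  # 1-2 in hot standby, 1-10 in a cluster
    role: NodeRole
    device_name: str
    status: NodeStatus
    version: str                  # installed Sophos UTM Software version
    last_status_change: datetime

    def can_take_over(self) -> bool:
        """A SLAVE still in SYNCING cannot take over gracefully;
        it must be fully operational (READY) first."""
        return self.role is NodeRole.SLAVE and self.status is NodeStatus.READY
```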

Reboot/Shutdown: Use these buttons to manually reboot or shut down a device.

Remove Node: Use this button to remove a dead cluster node via WebAdmin. The master then takes over all node-specific data such as mail quarantine and spool.

Click the Open HA Live Log button in the upper right corner to open the high availability live log in a separate window.
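
If you prefer watching status changes outside WebAdmin, the live log can also be followed programmatically. The sketch below is a generic `tail -f`-style follower; the log path is a placeholder assumption, not a documented location, so substitute the actual high availability log file of your installation.

```python
import re
import time

# Placeholder path; substitute the actual HA log file of your installation.
HA_LOG = "/var/log/high-availability.log"

# Status keywords from the Status tab (UP2DATE-FAILED before UP2DATE so the
# longer token wins in the alternation).
STATUS_RE = re.compile(
    r"\b(ACTIVE|READY|RESERVED|UNLINKED|UP2DATE-FAILED|UP2DATE|DEAD|SYNCING)\b"
)


def follow(path):
    """Yield lines appended to the file, similar to `tail -f`."""
    with open(path) as f:
        f.seek(0, 2)  # jump to the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)
                continue
            yield line


for line in follow(HA_LOG):
    if STATUS_RE.search(line):
        print(line, end="")
```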