
Manage nodes in a swarm


As part of the swarm management lifecycle, you may need to:

- list nodes in the swarm
- inspect an individual node
- update a node
- leave the swarm

List nodes

To view a list of nodes in the swarm, run docker node ls from a manager node:

$ docker node ls

ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
46aqrk4e473hjbt745z53cr3t    node-5    Ready   Active        Reachable
61pi3d91s0w3b90ijw3deeb2q    node-4    Ready   Active        Reachable
a5b2m3oghd48m8eu391pefq5u    node-3    Ready   Active
e7p8btxeu3ioshyuj6lxiv6g0    node-2    Ready   Active
ehkv3bcimagdese79dn78otj5 *  node-1    Ready   Active        Leader
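The listing can be post-processed in a script. As an offline illustration (using the sample table above saved in a variable; in a live swarm you would pipe `docker node ls` into awk directly), this extracts the hostnames of nodes that participate in Raft:

```shell
# Saved sample output of `docker node ls`; in a live swarm you would pipe
# the command's output into awk directly.
nodes='ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
46aqrk4e473hjbt745z53cr3t    node-5    Ready   Active        Reachable
61pi3d91s0w3b90ijw3deeb2q    node-4    Ready   Active        Reachable
a5b2m3oghd48m8eu391pefq5u    node-3    Ready   Active
e7p8btxeu3ioshyuj6lxiv6g0    node-2    Ready   Active
ehkv3bcimagdese79dn78otj5 *  node-1    Ready   Active        Leader'

# Hostnames of nodes whose MANAGER STATUS is Leader or Reachable.
# The `*` marking the current node is stripped so the fields line up.
managers=$(echo "$nodes" | awk 'NR > 1 { gsub(/\*/, ""); if ($NF == "Reachable" || $NF == "Leader") print $2 }')
echo "$managers"
```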

The AVAILABILITY column shows whether the scheduler can assign tasks to the node:

- Active means that the scheduler can assign tasks to the node.
- Pause means the scheduler doesn't assign new tasks to the node, but existing tasks remain running.
- Drain means the scheduler doesn't assign new tasks to the node. The scheduler shuts down any existing tasks and schedules them on an available node.

The MANAGER STATUS column shows node participation in the Raft consensus:

- No value indicates a worker node that does not participate in swarm management.
- Leader means the node is the primary manager node that makes all swarm management and orchestration decisions for the swarm.
- Reachable means the node is a manager node participating in the Raft consensus quorum. If the leader node becomes unavailable, the node is eligible for election as the new leader.
- Unavailable means the node is a manager that can't communicate with other managers. If a manager node becomes unavailable, you should either join a new manager node to the swarm or promote a worker node to be a manager.

For more information on swarm administration, refer to the Swarm administration guide.

Inspect an individual node

You can run docker node inspect <NODE-ID> on a manager node to view the details for an individual node. The output defaults to JSON format, but you can pass the --pretty flag to print the results in human-readable format. For example:

$ docker node inspect self --pretty

ID:                     ehkv3bcimagdese79dn78otj5
Hostname:               node-1
Joined at:              2016-06-16 22:52:44.9910662 +0000 utc
Status:
 State:                 Ready
 Availability:          Active
Manager Status:
 Address:               172.17.0.2:2377
 Raft Status:           Reachable
 Leader:                Yes
Platform:
 Operating System:      linux
 Architecture:          x86_64
Resources:
 CPUs:                  2
 Memory:                1.954 GiB
Plugins:
  Network:              overlay, host, bridge, overlay, null
  Volume:               local
Engine Version:         1.12.0-dev
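For scripting, docker node inspect also accepts a Go-template --format flag (for example, --format '{{ .Status.State }}' prints just the node state). As an offline illustration, a single field can equally be pulled out of saved --pretty output with awk; the variable names here are illustrative:

```shell
# Abridged saved sample of `docker node inspect self --pretty` output;
# the variable names are illustrative.
pretty='ID:                     ehkv3bcimagdese79dn78otj5
Hostname:               node-1
Status:
 State:                 Ready
 Availability:          Active
Engine Version:         1.12.0-dev'

# Extract the node state from the indented "State:" line.
state=$(echo "$pretty" | awk -F':[ ]+' '/^ State:/ { print $2 }')
echo "$state"
```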

Update a node

You can modify node attributes as follows:

- change node availability
- add or remove label metadata
- change a node's role

Change node availability

Changing node availability lets you:

- drain a manager node so that it only performs swarm management tasks and is unavailable for task assignment
- drain a node so you can take it down for maintenance
- pause a node so that it can't receive new tasks
- restore unavailable or paused nodes to an active state

For example, to change a manager node to Drain availability:

$ docker node update --availability drain node-1

node-1

See list nodes for descriptions of the different availability options.
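The --availability flag accepts one of active, pause, or drain. That validation rule can be sketched as a small shell helper (hypothetical; not Docker's own code):

```shell
# Hypothetical helper mirroring the values `docker node update --availability`
# accepts: active, pause, or drain. Anything else is rejected.
set_availability() {
  case "$1" in
    active|pause|drain) echo "availability set to $1" ;;
    *) echo "invalid availability: $1" >&2; return 1 ;;
  esac
}

set_availability drain
```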

Add or remove label metadata

Node labels provide a flexible method of node organization. You can also use node labels in service constraints. Apply constraints when you create a service to limit the nodes where the scheduler assigns tasks for the service.

Run docker node update --label-add on a manager node to add label metadata to a node. The --label-add flag supports either a <key> or a <key>=<value> pair.

Pass the --label-add flag once for each node label you want to add:

$ docker node update --label-add foo --label-add bar=baz node-1

node-1
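A bare <key> is stored with an empty value, while <key>=<value> is split at the first equals sign. That parsing rule can be sketched as a small (hypothetical) shell function:

```shell
# Hypothetical parser for --label-add arguments: a bare <key> gets an empty
# value; <key>=<value> is split at the first "=".
parse_label() {
  case "$1" in
    *=*) echo "key=${1%%=*} value=${1#*=}" ;;
    *)   echo "key=$1 value=" ;;
  esac
}

parse_label foo
parse_label bar=baz
```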

The labels you set for nodes using docker node update apply only to the node entity within the swarm. Do not confuse them with the docker daemon labels for dockerd.

Node labels can therefore be used to limit critical tasks to nodes that meet certain requirements: for example, scheduling special workloads only on machines that meet PCI-SS compliance.

A compromised worker could not compromise these special workloads because it cannot change node labels.

Engine labels, however, are still useful because some features that do not affect secure orchestration of containers might be better off set in a decentralized manner. For instance, an engine could have a label to indicate that it has a certain type of disk device, which may not be relevant to security directly. These labels are more easily “trusted” by the swarm orchestrator.

Refer to the docker service create CLI reference for more information about service constraints.
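For illustration, a constraint such as node.labels.region==east is an equality check against a node's label set. A hypothetical evaluator (a sketch, not the scheduler's actual code; label sets here are space-separated key=value strings):

```shell
# Hypothetical evaluator for one equality constraint, e.g.
# "node.labels.region==east", against a node's labels passed as
# space-separated "key=value" pairs.
matches_constraint() {
  key="${1%%==*}"; key="${key#node.labels.}"   # label key, e.g. "region"
  want="${1#*==}"                              # required value, e.g. "east"
  for kv in $2; do
    [ "$kv" = "$key=$want" ] && return 0
  done
  return 1
}

matches_constraint "node.labels.region==east" "region=east disk=ssd" && echo match
```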

Promote or demote a node

You can promote a worker node to the manager role. This is useful when a manager node becomes unavailable or if you want to take a manager offline for maintenance. Similarly, you can demote a manager node to the worker role.

Note: Regardless of your reason to promote or demote a node, you must always maintain a quorum of manager nodes in the swarm. For more information refer to the Swarm administration guide.

To promote a node or set of nodes, run docker node promote from a manager node:

$ docker node promote node-3 node-2

Node node-3 promoted to a manager in the swarm.
Node node-2 promoted to a manager in the swarm.

To demote a node or set of nodes, run docker node demote from a manager node:

$ docker node demote node-3 node-2

Manager node-3 demoted in the swarm.
Manager node-2 demoted in the swarm.

docker node promote and docker node demote are convenience commands for docker node update --role manager and docker node update --role worker respectively.

Install plugins on swarm nodes

If your swarm service relies on one or more plugins, these plugins need to be available on every node where the service could potentially be deployed. You can manually install the plugin on each node or script the installation. You can also deploy the plugin in a similar way as a global service using the Docker API, by specifying a PluginSpec instead of a ContainerSpec.

Note

There is currently no way to deploy a plugin to a swarm using the Docker CLI or Docker Compose. In addition, it is not possible to install plugins from a private repository.

The PluginSpec is defined by the plugin developer. To add the plugin to all Docker nodes, use the service/create API, passing the PluginSpec JSON defined in the TaskTemplate.
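A sketch of such a request body, assuming the plugin-runtime fields from the Docker Engine API (Runtime, and a PluginSpec with Name, Remote, and Disabled) together with global mode; the service and plugin names are placeholders:

```json
{
  "Name": "my-plugin-service",
  "TaskTemplate": {
    "Runtime": "plugin",
    "PluginSpec": {
      "Name": "my-plugin",
      "Remote": "example/my-plugin",
      "Disabled": false
    }
  },
  "Mode": { "Global": {} }
}
```

Global mode mirrors a global service: one task (here, one plugin installation) per node.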

Leave the swarm

Run the docker swarm leave command on a node to remove it from the swarm.

For example, to leave the swarm on a worker node:

$ docker swarm leave

Node left the swarm.

When a node leaves the swarm, the Docker Engine stops running in swarm mode. The orchestrator no longer schedules tasks to the node.

If the node is a manager node, you receive a warning about maintaining the quorum. To override the warning, pass the --force flag. If the last manager node leaves the swarm, the swarm becomes unavailable, requiring you to take disaster recovery measures.

For information about maintaining a quorum and disaster recovery, refer to the Swarm administration guide.

After a node leaves the swarm, you can run the docker node rm command on a manager node to remove the node from the node list.

For instance:

$ docker node rm node-2

© 2019 Docker, Inc.
Licensed under the Apache License, Version 2.0.
Docker and the Docker logo are trademarks or registered trademarks of Docker, Inc. in the United States and/or other countries.
Docker, Inc. and other parties may also have trademark rights in other terms used herein.
https://docs.docker.com/engine/swarm/manage-nodes/
