containerd socket exploitation part 2

  1. Required curl features for communicating with the containerd socket
  2. containerd socket API details
  3. Working with binary data using curl from the command line
  4. A simple representative curl request to the API
  5. Understanding and exploring the .proto service definition files and parsing protobuf messages
  6. List namespaces with curl
  7. List running containers in a namespace with curl
  8. Conclusion

This is the second part of my series on containerd socket exploitation. The first part in the series, which covered how to exploit the containerd socket using the ctr command line tool, is here.

In this part I will provide an introduction to talking to the containerd API using curl, for cases where you don't have access to the ctr tool and can't transfer it in. Be aware that using ctr is vastly preferable wherever possible, as the curl process is quite involved and can require offline processing to parse binary API messages for some API operations. In fact, the process is very close to being more trouble than it's worth, and I persisted with figuring it out more for the challenge of seeing whether it was possible than out of any real intention to actually use it.

I did find the whole process pretty interesting however, so hopefully you will too, even if you never use it.

This part of the series will cover:

  • The basics of communicating with the containerd socket using curl;
  • Some tips on how to use curl with binary data in limited command line environments;
  • How the containerd API is structured and how you can view the associated services and binary messages;
  • How you can read and write your own binary containerd API messages; and
  • Some examples of performing the simpler containerd API requests with curl.

Required curl features for communicating with the containerd socket

Communicating with the containerd socket requires a version of curl that supports HTTP2 and Unix sockets (this might apply to wget too but I will leave working that out as an exercise for the reader).

In detail, we are looking for the following (a quick way to check is shown after the list):

  • The curl command must support http2
  • The curl command must have the --http2-prior-knowledge switch, which speaks HTTP2 from the start of the transaction rather than attempting to upgrade the connection from HTTP/1.1
  • The curl command must support talking to Unix sockets, which in my version is done using the --unix-socket switch
  • It's preferable to be able to write/read binary files to some location on the filesystem for some operations
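
A quick way to check whether a given curl binary has what we need is to look for the relevant entries in its feature list. On my system that looks like the following (the exact feature names can vary a little between builds, so treat this as a sketch):

$ curl --version | grep -owiE 'http2|unixsockets'
HTTP2
UnixSockets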

containerd socket API details

Here is a summary of the most important details we need to understand about the containerd API when talking to it using curl.

As mentioned in part 1, the communication protocol used to talk with the API is gRPC. This involves sending Protobuf messages over HTTP2, with some additional caveats. The first is the use of the gRPC 5 byte Length-Prefixed-Message header before each Protobuf encoded message. This header consists of a single compression flag byte (0 for uncompressed) followed by 4 bytes storing the size of the following data, encoded in big endian form. For a message with empty (no) data, this means 5 null bytes are sent. The second is the use of HTTP2 trailers, sent by the server with responses to indicate the status of certain operations. Trailers are basically like HTTP message headers, but sent AFTER the data in the response. This means you will want to run curl commands with verbose output (the -v switch) to see this trailer data when you make requests.
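
To make the header format concrete, here is a bash sketch that builds the 5 byte prefix for an arbitrary message body (the file names are hypothetical, and note the parser tool introduced later in this post adds this header for you):

$ BODY=/tmp/body.bin                          # Protobuf encoded message body
$ HEX=$(printf '%08x' "$(wc -c < "$BODY")")   # message length as 8 hex digits
$ printf "\x00\x${HEX:0:2}\x${HEX:2:2}\x${HEX:4:2}\x${HEX:6:2}" > /tmp/request.bin   # flag byte 0 (uncompressed) then big endian length
$ cat "$BODY" >> /tmp/request.bin             # append the message body after the header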

The various API endpoints are defined in *.proto files available in the containerd source tree here. API requests are directed to the various service endpoints by setting that value in the :path of a POST HTTP2 request (which is analogous to the path in the URL from HTTP/1.1). In addition, each request generally has an associated Protobuf *Request and *Response message pairing. Some of these are quite simple, but the more complex ones require us to do some offline Protobuf parsing and sending/receiving of large binary files. There are some tricks we can do to allow us to do this in restricted command line environments (discussed in the next section), but it's generally easier if we can write to/read from the local filesystem to facilitate this.

Working with binary data using curl from the command line

Software exploitation scenarios can place limitations on our ability to transfer binary content, especially when we are interacting only through a command line, so it can be useful to know some tricks to work around this.

For sending data using only the command line we can echo (echo -ne "<DATA>") and pipe into curl, using the --data-binary @- switch to have curl read from STDIN. We can also write output to STDOUT using --output - and pipe into something like strings to get a decent general idea of the response content where it's non-empty. For bigger binary files, if we want to "echo" we can use helper tools to assist with the encoding, as shown below.
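
For example, something like the following one liner will turn a local binary file into an echo command you can paste into a remote shell (the file name is hypothetical, and it is shown here with the 5 null byte empty message as input):

$ printf 'echo -ne "%s" > /tmp/request.bin\n' "$(xxd -p /tmp/request.bin | tr -d '\n' | sed 's/../\\x&/g')"
echo -ne "\x00\x00\x00\x00\x00" > /tmp/request.bin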

If we can use the filesystem to store the content we send and receive, we can use the --data-binary @/tmp/filein option to read binary data from a file and --output /tmp/fileout to write the response to one. Reading from/writing to files simplifies our command lines a lot, and makes it easier to concentrate on the headers in the response without also being spammed with response content. It also gives us the ability to properly decode the response data offline if we can extract the binary content separately after the command is run. The echo approach can also be used on its own in a restricted command environment to write binary files to the filesystem, allowing us to benefit from the cleaner command lines too.

For getting binary files from the remote system using only the command line, you can use tools such as base64 or xxd to encode the file to a format that can be displayed on a text based terminal, and then decode it back to binary format on your own system for processing.
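
As a sketch of what this looks like with base64 (-w0 disables line wrapping in GNU base64, and the encoded output here is illustrative):

$ base64 -w0 /tmp/versionresponse.bin; echo
AAAAABoKBjEuNy4yNBIo[SNIP]

Then on your own system, paste the text into a file and decode it back to binary:

$ base64 -d encoded.txt > versionresponse.bin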

A simple representative curl request to the API

Now let's look at a simple example of a curl request to this API: requesting the version of the containerd daemon using the Version service. This service has one endpoint, with one defined message type for the response. All of this is described in this .proto file.

Here's a representative curl request, using the echo and pipe trick described in the previous section.

$ echo -ne "\x00\x00\x00\x00\x00" | sudo curl -v -s --http2-prior-knowledge --unix-socket /run/containerd/containerd.sock -X POST  -H "content-type: application/grpc" -H "te: trailers"  -H "grpc-accept-encoding: gzip" http://localhost/containerd.services.version.v1.Version/Version --data-binary @- --output /tmp/versionresponse.bin
*   Trying /run/containerd/containerd.sock:0...
* Connected to localhost (/run/containerd/containerd.sock) port 80 (#0)
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55c22b6bdec0)
> POST /containerd.services.version.v1.Version/Version HTTP/2
> Host: localhost
> user-agent: curl/7.81.0
> accept: */*
> content-type: application/grpc
> te: trailers
> grpc-accept-encoding: gzip
> content-length: 5
>
} [5 bytes data]
* We are completely uploaded and fine
* Connection state changed (MAX_CONCURRENT_STREAMS == 4294967295)!
< HTTP/2 200
< content-type: application/grpc
<
{ [55 bytes data]
< grpc-status: 0
< grpc-message:
* Connection #0 to host localhost left intact

Let's look at the important features of this example, many of which we will also use in the more complex requests to come:

  • View response headers: We use the -v switch so we can see the gRPC HTTP response trailers that indicate the success/failure status of our request. The grpc-status of 0 here means the request was successful. The grpc-message here is empty, which is the normal state for a successful request. When the request fails these trailers can contain useful information for troubleshooting.
  • Transport protocol settings and address: The --http2-prior-knowledge switch makes curl use HTTP2 from the start of the connection (as opposed to the --http2 switch, which will initiate an upgrade from HTTP/1.1), and --unix-socket identifies the address.
  • Set gRPC headers: The -H "content-type: application/grpc" -H "te: trailers" -H "grpc-accept-encoding: gzip" switches identify to the server that you’re communicating using gRPC messages and that the client will accept the grpc-status and grpc-message trailers (although leaving off the -H "te: trailers" header does not usually cause a problem).
  • URL: We use a URL of http://localhost/containerd.services.version.v1.Version/Version, which sends the communication without TLS wrapping and addresses the service at /containerd.services.version.v1.Version/Version. The :authority of localhost is ignored by the server and can be set to any value that results in a valid URL from curl's perspective.
  • Request and response content: We use the -X POST method to send data (all of these API calls use POST requests). We write the response to /tmp/versionresponse.bin with --output /tmp/versionresponse.bin and read the POST data from STDIN with --data-binary @-. The request here is the 5 null byte "empty" message we piped into the curl command using echo -ne "\x00\x00\x00\x00\x00". We will look at how to parse the response data in /tmp/versionresponse.bin in the next section.

While a lot of these parameters will remain consistent across all of the different API requests we will send using curl, a few specific things will change. The main things that change are the service address (in this example /containerd.services.version.v1.Version/Version) and the request and response data, depending on which API service we want to communicate with. In the next section we will discuss how to identify the available endpoints and the requests and responses that go along with them, as well as how to read and write the various non-empty message types.

Understanding and exploring the .proto service definition files and parsing protobuf messages

To enable communicating with the containerd API, I created a simple tool that can read the Python version of the service definition files and allow you to explore the API services and associated message types. In addition, the tool will also allow you to read and write each of these message types, inclusive of the gRPC Length-Prefixed-Message header. The idea is to use this tool offline to process the protobuf message files that are transferred to and from the remote compromised system using curl.

Essentially, the process works like this:

  • Grab the tool and install its Python dependency using pip install protobuf
  • Download the .proto file relating to the service category you are interested in from the source tree here
  • Read the .proto file and identify any external dependencies (any coming from github.com) - these will need to be downloaded and processed individually and their references updated to local ones
  • Use the protoc tool to convert the .proto file and any external dependencies to their Python equivalent like so: protoc --python_out=. <filename>.proto to create <filename>_pb2.py
  • Parse the data:
    • View service and message type information using python protobuf_parser.py -m <filename>_pb2.py
    • Read a binary request or response message of type MessageType type using python protobuf_parser.py -m <filename>_pb2.py -t MessageType -d /tmp/binary_message.bin
    • Take a JSON document representing a message of type MessageType and convert it to protobuf using python protobuf_parser.py -m <filename>_pb2.py -t MessageType -i /tmp/message_data.json -o /tmp/message_protobuf.bin

Let's look at an example using version.proto, so we can use the results to understand the curl request we made earlier.

First grab the script and install the Python dependencies:

$ wget https://raw.githubusercontent.com/stephenbradshaw/pentesting_stuff/refs/heads/master/containerd/protobuf_parser.py
$ sudo pip install protobuf

Then install the protoc tool. I'm doing this on Ubuntu and had some trouble with the version of the tool from the apt repositories, so I used the latest binary release available at the time and unzipped it (the binary will be created at bin/protoc).

$ wget https://github.com/protocolbuffers/protobuf/releases/download/v29.2/protoc-29.2-linux-x86_64.zip
$ unzip protoc-29.2-linux-x86_64.zip

Now, with the base requirements resolved, we want to check the .proto file we are about to use for external dependencies. We want to look for import lines: any that reference github.com need to be downloaded individually, have their Python modules created, and have their import references updated to local file paths.

In the case of the version.proto file, there is only one import, which at the time of writing looks as follows. There are no external dependencies to convert here.

import "google/protobuf/empty.proto";

Now we grab the .proto file of interest and create the Python module implementation:

$ wget https://raw.githubusercontent.com/containerd/containerd/refs/heads/main/api/services/version/v1/version.proto
$ bin/protoc --python_out=. version.proto

At this point we can use the resulting Python module, version_pb2.py, to understand the service. Running the tool as follows will list the services by path with their inputs and outputs, and will describe the message types used in various levels of detail. The output also includes some rough JSON template examples for the primary message types defined in the module.

$ python protobuf_parser.py -m version_pb2.py
Services:
* /containerd.services.version.v1.Version/Version
      Input: Empty
      Output: VersionResponse

==================================

Dependant Message Types:
* Empty

==================================

Message Types:
* VersionResponse
  Fields:
      version - string
      revision - string

  Rough JSON example (might need tweaking):

{
    "version": "string",
    "revision": "string"
}

==================================

Now let's use this to try and parse the response data from the version request we made with curl. To do this, we need to run the tool and reference the data file we want to read (/tmp/versionresponse.bin) with the -d switch and the message type with -t. If we don't specify a type, the tool will tell us we need to do so and will list the known types we can use.

$ ./protobuf_parser.py -m version_pb2.py -d /tmp/versionresponse.bin
Provide a type for parsing of the response data file using -t
Available types include: VersionResponse
$ ./protobuf_parser.py -m version_pb2.py -d /tmp/versionresponse.bin -t VersionResponse
version: "1.7.24"
revision: "88bf19b2105c8b17560993bee28a01ddc2f97182"

We can see in the above that the version and git revision of the containerd service have been returned.
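
As an aside, if you have protoc available and just want a quick look at a response, you can strip the 5 byte gRPC prefix and use protoc's raw decode mode. You lose the field names (only the field numbers are shown) but it requires no generated Python module. The output should look something like this:

$ tail -c +6 /tmp/versionresponse.bin | bin/protoc --decode_raw
1: "1.7.24"
2: "88bf19b2105c8b17560993bee28a01ddc2f97182"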

List namespaces with curl

We can list namespaces using the namespaces service. Grab the .proto file and generate the Python implementation as per the process above, and let's look at the endpoints.

$ ./protobuf_parser.py -m namespace_pb2.py
Services:
* /containerd.services.namespaces.v1.Namespaces/Get
      Input: GetNamespaceRequest
      Output: GetNamespaceResponse
* /containerd.services.namespaces.v1.Namespaces/List
      Input: ListNamespacesRequest
      Output: ListNamespacesResponse
* /containerd.services.namespaces.v1.Namespaces/Create
      Input: CreateNamespaceRequest
      Output: CreateNamespaceResponse
* /containerd.services.namespaces.v1.Namespaces/Update
      Input: UpdateNamespaceRequest
      Output: UpdateNamespaceResponse
* /containerd.services.namespaces.v1.Namespaces/Delete
      Input: DeleteNamespaceRequest
      Output: Empty

==================================
[SNIP]

The service we want to use is /containerd.services.namespaces.v1.Namespaces/List, which takes a ListNamespacesRequest as input and generates a ListNamespacesResponse as output. The relevant message types look like the following (extracted from the snipped output of the previous command):

==================================
* ListNamespacesRequest
  Fields:
      filter - string

  Rough JSON example (might need tweaking):

{
    "filter": "string"
}

==================================
* ListNamespacesResponse
  Fields:
      namespaces - Namespace

  Rough JSON example (might need tweaking):

{
    "namespaces": {"name": "string", "labels": "LabelsEntry"}
}

The ListNamespacesRequest has only a single parameter, which can filter the list of namespaces returned. However, if we want to return all namespaces, we can just send a message with this value unset - an empty message. Here's the request.

$ echo -ne "\x00\x00\x00\x00\x00" | sudo curl -v -s --http2-prior-knowledge --unix-socket /run/containerd/containerd.sock -X POST  -H "content-type: application/grpc" -H "te: trailers"  -H "grpc-accept-encoding: gzip" http://localhost/containerd.services.namespaces.v1.Namespaces/List --data-binary @- --output /tmp/listnamespacessresponse.bin
*   Trying /run/containerd/containerd.sock:0...
* Connected to localhost (/run/containerd/containerd.sock) port 80 (#0)
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55c115065ec0)
> POST /containerd.services.namespaces.v1.Namespaces/List HTTP/2
> Host: localhost
> user-agent: curl/7.81.0
> accept: */*
> content-type: application/grpc
> te: trailers
> grpc-accept-encoding: gzip
> content-length: 5
>
} [5 bytes data]
* We are completely uploaded and fine
* Connection state changed (MAX_CONCURRENT_STREAMS == 4294967295)!
< HTTP/2 200
< content-type: application/grpc
<
{ [38 bytes data]
< grpc-status: 0
< grpc-message:
* Connection #0 to host localhost left intact

This is what we get when we parse the response file:

$ ./protobuf_parser.py -m namespace_pb2.py -d /tmp/listnamespacessresponse.bin -t ListNamespacesResponse
namespaces {
  name: "buildkit"
}
namespaces {
  name: "default"
}
namespaces {
  name: "k8s.io"
}

We can see three different namespaces in the response.
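
Incidentally, if we did want to set the filter field, this is where the offline JSON to protobuf flow from earlier comes in. A sketch of what that might look like is below - note the name==default filter syntax is my assumption based on containerd's filter package, so treat it as illustrative:

$ echo '{"filter": "name==default"}' > /tmp/nsfilter.json
$ python protobuf_parser.py -m namespace_pb2.py -t ListNamespacesRequest -i /tmp/nsfilter.json -o /tmp/nsfilter.bin

The resulting /tmp/nsfilter.bin can then be transferred to the target and sent with --data-binary @/tmp/nsfilter.bin in place of the echoed empty message.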

List running containers in a namespace with curl

We list containers using the containers service. Grab the .proto file and generate the Python implementation as per the process above and let’s look at the endpoints.

$ ./protobuf_parser.py -m containers_pb2.py
Services:
* /containerd.services.containers.v1.Containers/Get
      Input: GetContainerRequest
      Output: GetContainerResponse
* /containerd.services.containers.v1.Containers/List
      Input: ListContainersRequest
      Output: ListContainersResponse
* /containerd.services.containers.v1.Containers/ListStream
      Input: ListContainersRequest
      Output: ListContainerMessage
* /containerd.services.containers.v1.Containers/Create
      Input: CreateContainerRequest
      Output: CreateContainerResponse
* /containerd.services.containers.v1.Containers/Update
      Input: UpdateContainerRequest
      Output: UpdateContainerResponse
* /containerd.services.containers.v1.Containers/Delete
      Input: DeleteContainerRequest
      Output: Empty

==================================

[SNIP]

The service we want is /containerd.services.containers.v1.Containers/List, which takes a ListContainersRequest as input and generates a ListContainersResponse as output. The relevant message types look like the following (extracted from the snipped output of the previous command):

==================================
* ListContainersRequest
  Fields:
      filters - string

  Rough JSON example (might need tweaking):

{
    "filters": "string"
}

==================================
* ListContainersResponse
  Fields:
      containers - Container

  Rough JSON example (might need tweaking):

{
    "containers": {"id": "string", "labels": "LabelsEntry", "image": "string", "runtime": "Runtime", "spec": "Any", "snapshotter": "string", "snapshot_key": "string", "created_at": "Timestamp", "updated_at": "Timestamp", "extensions": "ExtensionsEntry", "sandbox": "string"}
}

The ListContainersRequest again has only a single filter parameter, meaning we can once more use an empty message to get all containers without filtering. What you may notice at this point, however, is that there is no obvious way in this request message to specify which namespace we want to list containers from. So how can we specify this?

As it turns out, we need to set a specific HTTP header, containerd-namespace, to specify the namespace for requests, as defined in the containerd code here.

So, to list containers in the k8s.io namespace, we add the -H "containerd-namespace: k8s.io" option to specify we want results from this namespace only, with the command looking like so:

$ echo -ne "\x00\x00\x00\x00\x00" | sudo curl -v -s --http2-prior-knowledge --unix-socket /run/containerd/containerd.sock -X POST  -H "content-type: application/grpc" -H "te: trailers"  -H "grpc-accept-encoding: gzip" -H "containerd-namespace: k8s.io" http://localhost/containerd.services.containers.v1.Containers/List --data-binary @- --output /tmp/listcontainersresponse.bin
*   Trying /run/containerd/containerd.sock:0...
* Connected to localhost (/run/containerd/containerd.sock) port 80 (#0)
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x559ec3176ec0)
> POST /containerd.services.containers.v1.Containers/List HTTP/2
> Host: localhost
> user-agent: curl/7.81.0
> accept: */*
> content-type: application/grpc
> te: trailers
> grpc-accept-encoding: gzip
> containerd-namespace: k8s.io
> content-length: 5
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 4294967295)!
} [5 bytes data]
* We are completely uploaded and fine
< HTTP/2 200
< content-type: application/grpc
<
{ [32727 bytes data]
< grpc-status: 0
< grpc-message:
* Connection #0 to host localhost left intact

And if we parse the response file as a ListContainersResponse message, we get the following:

$ ./protobuf_parser.py -m containers_pb2.py -d /tmp/listcontainersresponse.bin -t ListContainersResponse
containers {
  id: "01a91532d97f8f7162c477dd1e402520d313e9c4333827d74a93cde25dddc1cc"
  labels {
    key: "service.istio.io/canonical-revision"
    value: "latest"
  }
  labels {
    key: "service.istio.io/canonical-name"
    value: "argocd-applicationset-controller"
  }
  labels {
    key: "security.istio.io/tlsMode"
    value: "istio"
  }
  labels {
    key: "pod-template-hash"
    value: "5b899f5459"
  }
  labels {
    key: "io.kubernetes.pod.uid"
    value: "976a064f-a3a8-4153-83a7-475403b114b9"
  }
[SNIP]

This response includes all the details about each container (snipped above because there is A LOT of information).
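
If you just want a quick list of the container IDs out of all of that, a quick and dirty option is to grep the parsed output (this assumes the output format shown above):

$ ./protobuf_parser.py -m containers_pb2.py -d /tmp/listcontainersresponse.bin -t ListContainersResponse | grep -E '^\s*id:'
  id: "01a91532d97f8f7162c477dd1e402520d313e9c4333827d74a93cde25dddc1cc"
[SNIP]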

Conclusion

That concludes this part of the series. In the next part, I will cover how to run your own containers using curl. Be warned, it only gets more involved from here…