If you have worked with computers lately, chances are you have heard of “containerization,” or perhaps of Docker, because containers have become extremely popular with developers and operations folks. Containers can make setting up your CTF work environment as easy as issuing a command or two.
Not sure what containers are? I’ll be doing my best to explain them!
What are containers?
Even if you have used Docker before, I recommend reading this section: not only will it explain what Docker is at a high level, it will also explain some of the geeky technical aspects of how Docker works.
Docker and Containers 101
Containers are Linux processes that have their own isolated network stack, filesystem, and process list. This is accomplished using a feature of the Linux kernel called namespaces. For now, all you need to know is that containers behave similarly to virtual machines, but they are not virtual machines. (The comparison just makes it easier to illustrate a high-level view of what a container is.)
The big difference is that containers are a lot more lightweight, because they share kernel-level resources. Containers also have a lot of tooling built around them to allow for rapid provisioning and deployment. Put more simply, that tooling enables a very high level of automation, so that systems (I am using the term “system” loosely) can be quickly built up and torn down with a few commands.
Docker itself is a set of tooling built to make creating and managing containerized applications easier. Docker can be combined with an orchestration framework to make deploying multiple containers at scale much, much easier. (I will touch on that briefly towards the end.) With Docker, containers are defined in a Dockerfile, which is a simple text file containing the instructions Docker uses to create your container image. The syntax of the file is simply STATEMENT arguments: in all caps you put the STATEMENT you are going to have Docker run, and in lowercase you put any arguments to that STATEMENT. Simple? I think so.
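As a quick sketch of that syntax (the base image and package here are just placeholders, not part of the container we will build later), a minimal Dockerfile looks like this:

```dockerfile
# STATEMENT arguments, one per line
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y curl
CMD ["bash"]
```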
Namespaces and the Fun Technical Stuff Behind Containers
Note: You can skip this section and still be able to understand the rest of this post, but this section explains some of the internal workings of containers.
The fundamental technology that allows containers to even exist is Linux namespaces. Namespaces were introduced into the kernel in 2002 and allow a process to have an isolated set of resources managed by the kernel. This includes, but is not limited to, the set of other processes visible inside that namespace, its own networking stack, filesystem, and restrictions (such as seccomp filters). Naturally, we can let a process share namespaces with the default namespace (the one all of our normal processes run in) or have it be completely isolated.
You can think of namespaces (and containers) like apartments. Depending on how the building was built, the units probably share electricity and plumbing, but as the architect you can make choices to isolate the apartments from each other more, or integrate them more. The catch is that they always have to share the same main plumbing and electrical lines.
Namespaces work the same way: processes in different namespaces always share core kernel resources like process scheduling. The kernel is the plumbing and electricity, and the containers are the individual apartments. Each namespace type is its own feature, so you can choose to group processes into the same namespace, or isolate certain namespaces while sharing others.
Everything going on with Linux namespaces is managed by a few system calls, which are documented in man namespaces. There are, of course, a multitude of syscalls dealing with namespaces, but the man page boils it down to a few core ones (clone, setns, and unshare among them). With this information and a little knowledge of the C programming language, you could quickly write a program that does very basic container isolation.
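You can also get a taste of namespaces without writing any C, using the unshare(1) wrapper from util-linux around those same syscalls. This is just a sketch, and it assumes unprivileged user namespaces are enabled on your kernel:

```shell
# Create a new user namespace and map our UID to root inside it.
# Inside the namespace, whoami reports root even though we are an
# unprivileged user on the host.
unshare --user --map-root-user whoami

# Create a new PID namespace too (--mount-proc also gives us a private
# mount namespace with a fresh /proc), so ps only sees the processes
# inside our namespace rather than everything on the host.
unshare --user --map-root-user --pid --fork --mount-proc ps -e
```

Those two commands are the same primitives Docker builds on, just driven by hand.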
Using Docker in a practical way
After all of that theory, you are probably wondering, “Okay, but how does this help me with my CTF infrastructure?” To which the answer is: “It’s time to create some containers.”
Here is the link to the Docker getting started page. Installing Docker is fairly easy, and I will not do a better job of explaining the install than Docker themselves, so follow their instructions to install Docker.
Let’s start with what our Dockerfile is going to look like at the end.
```dockerfile
FROM kalilinux/kali-rolling
RUN apt-get update && apt-get upgrade -y
RUN apt-get install -y nmap john dirb
VOLUME /wordlists
VOLUME /ctf
```
Okay, so we can see the STATEMENT arguments syntax mentioned earlier being used, but what does all of this mean? Let’s go statement by statement. First off we have FROM, which tells Docker to inherit from a specific image. The traditional way of thinking about this would be to download your Kali virtual machine, boot it up, and start making the changes you want; with Docker we just use FROM and then the image name, so this container uses kalilinux/kali-rolling as its parent image (I chose Kali because it seems fairly popular among NCL players). Next up is RUN, which, well, runs a command; more specifically, it runs a command during the creation of the image. In this container I am updating the package lists, upgrading, and then installing some basic scanning and cracking tools. Next comes VOLUME, which marks specific directories as ones you can mount host files and directories to. In this case I want to use /wordlists to mount word lists like rockyou.txt, and /ctf to mount binaries and other CTF-related files. There are a lot of other statements that can appear in a Dockerfile, which can be found here, but for our container these will suffice. Here is a quick cheat sheet of my most used statements.
FROM – Selects a parent image to base the container on
COPY – Copies files from the host into the container image
RUN – Runs a command while building the container image
VOLUME – Creates a mountable volume in the container
EXPOSE – Tells Docker that a network service port will be opened
CMD – Specifies a command to run each time the container is brought up
WORKDIR – Specifies the working directory for the container
USER – Specifies a non-root user for the container to run as
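To see the rest of the cheat sheet in action, here is a hypothetical Dockerfile sketch for a small listener service. The script name, username, and port are all invented for illustration, not part of the container we are building in this post:

```dockerfile
FROM kalilinux/kali-rolling
RUN apt-get update && apt-get install -y netcat-traditional
# COPY a (hypothetical) helper script from the build context into the image
COPY listener.sh /opt/listener.sh
# WORKDIR sets the directory commands and the container start in
WORKDIR /opt
# USER drops from root to an unprivileged account we create here
RUN useradd -m ctfuser
USER ctfuser
# EXPOSE documents that the container listens on TCP 4444
EXPOSE 4444
# CMD runs each time the container starts (RUN only runs at build time)
CMD ["sh", "/opt/listener.sh"]
```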
So now that we have our container defined, we need to tell Docker to build it. This is as simple as running docker build . which takes the Dockerfile located in the current directory and builds its image. Then you can use docker image list to find your image, but there is a simpler way to deal with the annoyance of having a random string identify your image, and that is tags. Tags are metadata for container images, and the most useful thing you can do with them is name your image. This can be done during the build process by running docker build -t myname . which builds your image with the name myname (the full format is name:tag; if you leave off the :tag part, Docker uses latest). Now we can spawn a shell in our container by running docker run -it myname with -it telling Docker we want an interactive terminal session attached to the container. You now have an interactive Linux shell in your container 🙂
Now let’s talk about working with volumes, which can be a little tricky from the command line (well, not tricky, just kind of long and messy). Let’s look at the run command that attaches our volumes to the container.
```shell
docker run -it --volume="$(pwd)/wordlists:/wordlists:ro" --volume="$(pwd)/ctf:/ctf:rw" myname
```
Okay, so the main difference is the --volume flag. It has its own special syntax that is fairly easy to break down: hostpath:containerpath:permission. So in the command above we are mounting ctf/ on the host to /ctf/ in the container in read-write mode. (One catch: for a bind mount, the host path must be absolute, so in practice you write something like $(pwd)/ctf rather than just ctf.) See, that was simple enough, but the command is too long for my liking. If only there was a simpler way…
Dead simple orchestration
Okay, so we all agree that the syntax for mounting volumes can seem a bit verbose at first. That could be fixed with a bash alias, but there is a much more powerful and useful solution: container orchestration software like Docker Compose or Kubernetes. Given that this is dead simple orchestration and not production-ready orchestration, I am going to detail how to use Docker Compose, not because it is better, but because it is a lot easier to use for our purposes. Container orchestration lets us define single- or multi-container apps, and how they should be run, in a fashion similar to a Dockerfile. Orchestration also handles things like health checks to ensure containers are functioning properly, but for our uses it simply gives us a much cleaner way to start our container.
Let’s look at our docker-compose.yml:

```yaml
version: "3.3"
services:
  myname:
    build: ./
    image: myname
    volumes:
      - type: bind
        source: ./wordlists
        target: /wordlists
        read_only: true
      - type: bind
        source: ./ctf
        target: /ctf
```
The docker compose file is a YAML file that looks a bit wordy at first, but it is fairly self-explanatory. At the start of the file we declare which version of the Compose file format we are using. Then we declare our services, starting with myname, which is, as you guessed it, our container. We declare how to build the container and what to mount into its volumes. Then, once all of this is said and done, you just need to run docker-compose up to bring the container up (or docker-compose run myname to drop straight into a shell in it).
Of course, Compose has a lot more to offer than just this, but for our purposes this works rather well. If you are more interested in all the features of Docker Compose, check out this link.
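For example, here is a hypothetical sketch of where Compose starts to shine: running a second container, say a local practice target, alongside your toolbox. The target image and port mapping below are invented for illustration; any practice image would work:

```yaml
version: "3.3"
services:
  myname:
    build: ./
    image: myname
  # Hypothetical second service: a vulnerable web app to practice against
  target:
    image: vulnerables/web-dvwa   # assumption: swap in any practice image
    ports:
      - "8080:80"                 # host port 8080 -> container port 80
```

One docker-compose up now brings up the whole lab, and docker-compose down tears it all back down.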
Quickly Tearing Down
After the CTF is said and done, chances are you want to reclaim a little bit of drive space. In Docker, all you have to do to remove your image is find it like we did previously using docker image list, then run docker rmi <image>. This deletes the image from your drive, so all you have to do when the next CTF comes along is use docker build to build the image back up again (which, if you are using Kali as your base, I recommend doing anyway, because it is far from a stable ecosystem). Note that you may have to pass -f to docker rmi to force the removal of an image.
Hopefully you will consider using Docker for your next CTF, and you will build your own tricked-out Docker image that makes CTF setup a process that mostly involves running a command and then going to fetch yourself a mug of your favorite hot beverage.
With love and root shells,