Introducing Cinderella Clusters

Posted by Matt McClellan on July 7, 2020

Soluble Cinderella clusters provide a quick, easy way to explore the Kubernetes ecosystem. Create a cluster, deploy containers, test different configurations, try on the glass slippers. Go wild. And, like Cinderella’s coach, which turned back into a pumpkin when the clock struck midnight, Cinderella clusters are automatically deleted when their time runs out: one hour. No muss, no fuss.

Why we built it

The idea for Cinderella clusters started with our CTO, Rob Schoening. As CTO, he’s always experimenting, testing out new ideas, and test-driving new tools, which means he’s frequently spinning up new clusters and tearing them down when he’s done. And when all you want to do is try something out, that setup and teardown starts to look a lot like yak shaving. So, like any good developer, Rob automated shaving those yaks.

Initially, we thought this would be an internal tool. However, we quickly realized how useful ephemeral infrastructure like Cinderella clusters could be. For example, during our conversations with CISOs and their security teams, many of them expressed concern that they don’t understand Kubernetes as well as they need to. And, in the new world of cloud native development, with independent development teams empowered to provision their own infrastructure, the security team doesn’t always have easy access to sandbox environments.

Now, with Soluble Cinderella clusters, they can spin up a short-lived cluster to run through tutorials or kick the tires on a new open source tool without the friction of asking for a new cluster to be provisioned or worrying about interfering with work that another member of the team is doing.

Taking Cinderella to the Ball

Let me walk you through creating a Cinderella cluster for the first time. You’ll need a Google or GitHub account (Soluble uses Google and GitHub for authentication), an application that allows you to SSH to remote machines, and an SSH key (for logging in to the node running the cluster). After the cluster is running, we’ll deploy a simple application.
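If you don’t already have an SSH key pair, you can generate one with standard OpenSSH tooling. This is a sketch using the ed25519 key type (a common modern choice); the file name `cinderella_ed25519` and the email comment are just placeholders of ours, so use whatever suits you:

```shell
# Generate a new ed25519 key pair; -N "" skips the passphrase for brevity,
# but adding one is a good idea in practice.
# The public half (the .pub file) is what you'll import into Soluble.
ssh-keygen -t ed25519 -C "you@example.com" -f ~/.ssh/cinderella_ed25519 -N ""
```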

Note: We’re providing access to the community for free because we found Cinderella clusters so useful ourselves. We expect you’ll discover issues here and there, along with things we could improve to make Cinderella clusters better for you and others. In the spirit of giving back, please send us your feedback, too, by emailing us.

First, let’s go to the “Try Cinderella” page on Soluble.

You won’t have to do this every time you start a Cinderella cluster; we’re just taking it nice and easy this time.

[Screenshot: the “Try Cinderella” page]

Click on "Let's go!" to get started.

Next, we’ll create an account. Click on the Google button or the GitHub button, whichever you prefer. If you already have an account, click the appropriate button to authenticate. (If you’re already authenticated with Soluble, you’ll skip past this step.)


[Screenshot: signing in to Soluble with Google or GitHub]

Next we need to import one (or more) SSH public keys so we can log in to the node running the cluster. If you have a GitHub account with SSH public keys, Soluble can just import those; that’s what we’ll do here:
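(Incidentally, GitHub serves every account’s public SSH keys as plain text at a predictable URL, which is what makes this kind of import possible. A quick way to see what Soluble would pick up, with your own username in place of the placeholder “octocat”:)

```shell
# GitHub publishes each account's public SSH keys as plain text at
# https://github.com/<username>.keys; substitute your own username for "octocat".
curl -s "https://github.com/octocat.keys"
```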

[Screenshot: adding SSH public keys]


After we click Add, the keys are imported and we’re taken to the loading page. It takes about a minute for the cluster to spin up. When the cluster is ready, you’ll receive a notification in the upper right corner. When you click Continue, you’ll be taken to the Cinderella Clusters page:

[Screenshot: the Cinderella Clusters page]

Next we’ll log in to the node running the cluster via SSH. To make this easy, we can copy the SSH command to log in to the node by clicking on the three vertical dots in the Actions column of the row with our Cinderella cluster:

[Screenshot: copying the SSH connection string from the Actions menu]

Next, open a terminal window, paste the command you copied at the prompt, and hit return. We’ll be asked to confirm the SSH key of the remote host, and then we’ll get a prompt that looks something like:

[Screenshot: the initial SSH login prompt]

Now that we’ve logged in, we can take a look at the environment. Currently, Cinderella clusters run k3s, a lightweight Kubernetes distribution. Let’s get a list of the deployments by running the following command:

kubectl get deployments

No deployments running, huh? Well, then, let’s make the cluster do something! We’re going to deploy an HTTP service that responds with the headers it receives from the client. (This section is based on the Hello Minikube tutorial at https://kubernetes.io/docs/tutorials/hello-minikube/.)

First, we need to create a Deployment. The Deployment manages a Pod; the Pod runs a Container based on the Docker image specified:

kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4

Now, let’s look at the deployment: 

kubectl get deployments

The output is similar to:

NAME         READY  UP-TO-DATE  AVAILABLE  AGE
hello-node   1/1    1           1          43s

View the Pods on the system:

kubectl get pods

The output is similar to:

NAME                          READY   STATUS    RESTARTS   AGE
hello-node-7bf657c596-zncx8   1/1     Running   0          2m37s

By default, Pods are not accessible outside the Kubernetes virtual network. In order for us to interact with the hello-node Container, we have to expose it as a Kubernetes Service. To do this, we use the kubectl expose command, passing in a type of LoadBalancer to indicate we want to expose the service outside of the cluster:

kubectl expose deployment hello-node --type=LoadBalancer --port=8080

Now, view the Services on the system:

kubectl get services

The output is similar to:

NAME        TYPE          CLUSTER-IP    EXTERNAL-IP    PORT(S)        AGE
kubernetes  ClusterIP     10.43.0.1     <none>         443/TCP        14m
hello-node  LoadBalancer  10.43.186.97  172.31.96.158  8080:32062/TCP 7s


Finally, let’s interact with the echoserver. We’re going to use curl to send a request to the server and see what comes back. Unfortunately, this can’t be a simple copy-and-paste, because the port the Service is exposed on is randomly assigned. You’ll need to copy the five-digit port number displayed after 8080 in the services output above and substitute it for the five-digit number in the command below:

curl -H "X-soluble-is-here: true" http://127.0.0.1:32062

The output will look something like:

CLIENT VALUES:
client_address=10.42.0.1
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://127.0.0.1:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
host=127.0.0.1:32062
user-agent=curl/7.61.1
x-soluble-is-here=true
BODY:
-no body in request-
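Incidentally, if you’d rather not copy the port by hand, kubectl’s jsonpath output can pull the NodePort straight out of the Service we created above. A sketch:

```shell
# Read the randomly assigned NodePort from the hello-node Service,
# then use it in the request instead of a hand-copied number.
PORT=$(kubectl get service hello-node -o jsonpath='{.spec.ports[0].nodePort}')
curl -H "X-soluble-is-here: true" "http://127.0.0.1:${PORT}"
```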
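The cluster will turn back into a pumpkin on its own when the hour is up, but if you want to clear out the tutorial resources sooner, you can delete the Service and the Deployment yourself, mirroring the cleanup step of the Hello Minikube tutorial:

```shell
# Remove the tutorial resources: the Service first, then the Deployment behind it
kubectl delete service hello-node
kubectl delete deployment hello-node
```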

For a more in-depth example of what’s possible with Cinderella clusters, take a look at this video of our Principal Security Researcher, Matt Hamilton, using Cinderella to spin up ephemeral Kubernetes clusters for security testing. He deploys Harbor for a web application security assessment using his open source tool, Kubetap.

In addition to a k3s environment, Cinderella clusters are deployed with the Soluble Agent and Soluble CLI preconfigured. As you make changes to the Cinderella cluster, telemetry is transmitted back to Soluble Fusion by the agent. The Soluble CLI lets you interact with Soluble Fusion from the command line. For example, you can get a report of how your deployments match up with the recommendations on the Kubernetes Pod Security Standards page (https://kubernetes.io/docs/concepts/security/pod-security-standards/) by running the following command:

soluble query run --query-name security-standards

Thanks for reading. We’re very excited about Cinderella clusters; this is only the beginning. They’re free of charge while we’re in beta, so try them out now. Let us know what you think by emailing us.

Topics: Kubernetes, Cloud Native, CISO, DevSecOps

Written by Matt McClellan

With more than two decades in the cybersecurity industry, first as a developer and technology leader, and then in product management, Matt is Head of Product for Soluble. He has held product management roles at Arbor Networks, the security division of NETSCOUT, served a stint as director of technology for a government consulting firm, and spent several years as Tenable’s Product Manager for Nessus solutions, the de facto industry standard vulnerability assessment solution for security practitioners and the company’s biggest revenue driver. Matt has also held team and technology lead positions at Check Point Software Technologies, and was the product development director at NFR Security. He brings a depth of technical knowledge, customer understanding, and a wry sense of humor that shapes his view of the product landscape.