Building Kubetap

Posted by Matt Hamilton on May 26, 2020

I developed Kubetap, a kubectl plugin, to help developers and security testers who need to interactively debug network traffic flowing into a Kubernetes Service. In this blog post, I detail the origin of Kubetap and the design and implementation considerations behind it.

To get started with Kubetap and learn to "tap" a Service faster than you could fetch a beer from the fridge, check out the Quick Start guide.

"Just use a debugger"

"Debuggers aren't scary, you should use one". "Just use Squash".

As a Go programmer, I resolve 95% of bugs in my programs by thinking about the errant behavior I'm seeing. The remaining 5% require that I "poke" at the issue using printf or spew until I have enough information. I have never used delve or squash, nor do I want to.

Reading suggestions to "just use a debugger," I get flashbacks to my past struggles using GDB without peda. Good on those who work in environments where they can grow and nurture an "ideal development lifecycle" that manages to integrate these tools efficiently. Just do that.

As for the rest of us, I suspect we are the majority who not only can't do that, but have no desire to. It's great that those tools exist, but I'll probably never use any of them. Especially with Go, a strongly typed language whose built-in tooling enforces syntactical correctness.

The time cost and cognitive load are too high to justify learning to integrate delve, squash, et al. into my workflow.

It is, however, frequently the case that I covet the ability to interactively debug network traffic for a deployed application. And, more often than not, for an application I didn't author.

Because tooling to facilitate painless network traffic debugging for Kubernetes didn't exist, I made Kubetap.

 

Why Kubetap?

In my new security role at Soluble, the need to inspect and manipulate incoming Kubernetes Service traffic had become all too common. And annoying.

Prior to Kubetap, proxying incoming Service traffic was a laborious manual process, consuming valuable time to configure. The price paid to gain visibility was ludicrously high. The eye-rolling ritual of patching Deployments and Services to insert an intercepting proxy had become too frustrating.

Kubetap alters the status quo of Kubernetes application testing. It's now possible to proxy Kubernetes Services while enjoying a nice, cold beer. 🍺

Okay, but how does it work?

Kubetap places a "tap" proxy sidecar alongside the primary container in the Deployment whose pods the target Service selects. Because the sidecar shares the pod's network namespace with the primary container, traffic is proxied over localhost (127.0.0.1) rather than across the cluster intranet.
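
To make the localhost relationship concrete, here is a minimal Go sketch of what a same-pod reverse proxy amounts to. It is purely illustrative: Kubetap deploys mitmproxy rather than a hand-rolled proxy, and the port numbers (8080 for the application, 7777 for the proxy listener) are assumptions.

  package main

  import (
      "log"
      "net/http"
      "net/http/httputil"
      "net/url"
  )

  func main() {
      // Assumed application port: the primary container listens on 8080.
      // Because the sidecar shares the pod's network namespace, this is the
      // same 127.0.0.1 the application itself sees.
      target, err := url.Parse("http://127.0.0.1:8080")
      if err != nil {
          log.Fatal(err)
      }

      // Relay everything arriving at the proxy listener to the application
      // over loopback, with no extra hop across the cluster network.
      proxy := httputil.NewSingleHostReverseProxy(target)

      // Assumed proxy port; the Service's targetPort is rewritten to point
      // here so that incoming traffic hits the proxy first.
      log.Fatal(http.ListenAndServe(":7777", proxy))
  }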

The vast majority of Services I deal with use the HTTP protocol. When tapping HTTP Services, Kubetap deploys mitmproxy. Operators can then access the mitmproxy web interface to view, intercept, modify, replay, and save traffic. Many thanks to @cortesi, @mhils, and the mitmproxy contributors for the fantastic utility.

Support for additional protocols, such as raw TCP/UDP and gRPC, will come in future releases as I add features.

For security professionals and those who use Burp or another proxy that runs on their local machine, you are not forgotten. The original goal when starting this project was to funnel traffic into a local instance of BurpSuite, and while tools like ktunnel exist, the world still waits on kubernetes/kubernetes#20227.

 

Why did you do it *this* way and not *that* way?

When building Kubetap, I considered a few implementation options:

Sidecar Injection

The implementation I ultimately decided on, sidecar injection, modifies the target Deployment’s manifest to inject a second container into the pod. The Service manifest is then modified to direct traffic from the target port to the listener on the proxy container. The proxy container receives this traffic and sends it to the target container over a localhost connection.

The lack of traffic duplication over the network is advantageous, and eliminates the possibility of network hiccups between the proxy and the targeted application.
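
For a rough idea of the manifest surgery involved, here is a sketch using the Kubernetes Go API types. The container name, mitmweb arguments, and port numbers are illustrative assumptions, not necessarily Kubetap's actual defaults.

  package tapsketch

  import (
      "fmt"

      appsv1 "k8s.io/api/apps/v1"
      corev1 "k8s.io/api/core/v1"
      "k8s.io/apimachinery/pkg/util/intstr"
  )

  // proxyPort is an assumed listener port for the injected proxy.
  const proxyPort = 7777

  // injectSidecar appends a mitmproxy container to the Deployment's pod
  // template. Containers in a pod share a network namespace, so the proxy
  // reaches the original container over 127.0.0.1.
  func injectSidecar(dep *appsv1.Deployment, appPort int32) {
      sidecar := corev1.Container{
          Name:  "kubetap-proxy", // assumed name, for illustration only
          Image: "mitmproxy/mitmproxy",
          Args: []string{
              "mitmweb",
              "--mode", fmt.Sprintf("reverse:http://127.0.0.1:%d", appPort),
              "--listen-port", fmt.Sprint(proxyPort),
          },
          Ports: []corev1.ContainerPort{{ContainerPort: proxyPort}},
      }
      dep.Spec.Template.Spec.Containers = append(dep.Spec.Template.Spec.Containers, sidecar)
  }

  // retargetService repoints the Service's targetPort from the application
  // container to the proxy listener, so incoming traffic hits the proxy first.
  func retargetService(svc *corev1.Service, appPort int32) {
      for i, p := range svc.Spec.Ports {
          if p.TargetPort.IntValue() == int(appPort) {
              svc.Spec.Ports[i].TargetPort = intstr.FromInt(proxyPort)
          }
      }
  }

The struct manipulation above is only the shape of the change; applying it to the live Deployment and Service is what the plugin automates.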

Pros:

  • Does not require a local proxy (Burp, Zap, mitmproxy, etc.)
  • Traffic does not traverse cluster intranet
  • Least complex
  • Traffic flows through the proxy even without operator presence
    (an operator VPN disconnect will not cause traffic disruption)

Cons:

  • No explicit control of the proxy environment, as we inherit the environment from the target Pod.
  • Restrictive pod resource limits may result in OOM kills with mitmweb, which stores traffic in memory rather than syncing it to disk.

 

Explicit Deployment

A slightly different approach from sidecar injection, explicit deployment would involve applying dedicated Kubetap Deployment and Service manifests for the proxy and then patching the target Service to redirect traffic to the new Deployment.

This is fundamentally not much different from sidecar injection, except that traffic is now duplicated over the wire and I gain (and must manage) explicit control of the Pod environment.
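
Sketched with the same Go API types, the explicit approach would stand up a dedicated proxy Deployment and swap the target Service's selector over to it. The labels, names, and image below are hypothetical.

  package tapsketch

  import (
      appsv1 "k8s.io/api/apps/v1"
      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  // newProxyDeployment builds a standalone Deployment that runs only the proxy.
  // Unlike the sidecar, this pod is scheduled independently of the target, so
  // its resources and environment are fully under our control.
  func newProxyDeployment(name string) *appsv1.Deployment {
      labels := map[string]string{"app": name + "-kubetap"} // hypothetical label
      replicas := int32(1)
      return &appsv1.Deployment{
          ObjectMeta: metav1.ObjectMeta{Name: name + "-kubetap"},
          Spec: appsv1.DeploymentSpec{
              Replicas: &replicas,
              Selector: &metav1.LabelSelector{MatchLabels: labels},
              Template: corev1.PodTemplateSpec{
                  ObjectMeta: metav1.ObjectMeta{Labels: labels},
                  Spec: corev1.PodSpec{
                      Containers: []corev1.Container{{
                          Name:  "proxy",
                          Image: "mitmproxy/mitmproxy",
                          // The proxy would reverse-proxy to a second, untouched
                          // Service that still selects the original pods, which is
                          // where the duplicated network hop comes from.
                      }},
                  },
              },
          },
      }
  }

  // repointService swaps the target Service's selector so it selects the proxy
  // pods instead of the application pods.
  func repointService(svc *corev1.Service, name string) {
      svc.Spec.Selector = map[string]string{"app": name + "-kubetap"}
  }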

Pros:

  • Does not require a local proxy (Burp, Zap, mitmproxy, etc.)
  • Proxy manifests can be configured and manipulated without worrying about conflicts with the target Pod, giving more control over the environment.
  • Not very complex; deployment and management are more explicit and obvious.
  • Traffic flows through the proxy even without operator presence (an operator VPN disconnect will not cause traffic disruption).

Cons:

  • Must manage a second Deployment; if it dies, becomes unschedulable, or becomes unreachable, it will silently break a Service.
  • Traffic must be sent over the wire again, doubling intra-cluster network load.

 

Ktunnel

Ktunnel is a utility created by omrikiei, likely born out of the same frustration that other watchers of kubernetes/kubernetes#20227 are experiencing, as basic reverse port-forward functionality is still not available natively. A quick review of the project indicates it is promising and good work. It’s a shame that a native implementation will eclipse this project.

Pros:

  • Until kubernetes/kubernetes#20227 is resolved, this is the only tooling available to support proxying connections to the operator’s machine.

Cons:

  • Requires a local proxy (Burp, Zap, mitmproxy, etc.)
  • Still requires modifying the Service to redirect the traffic to the proxy.
  • If the developer machine disconnects or dies, traffic will not flow to the destination Service, though it appears the “expose” functionality could be tweaked to resolve this.
  • Once kubernetes/kubernetes#20227 is resolved, the necessity for this tool is obviated by the native implementation.

 

Decision

While there were merits to all approaches, I opted for sidecar injection because it is the least burdensome for both operators and developers.

Two weeks later, Kubetap was born.

What's next?

Check out the TODOs on the project site.

Additionally, I anxiously await the resolution of kubernetes/kubernetes#20227. I will be the first to champion this change and offer Kubetap flags to forward traffic from the proxy sidecar to a local proxy, such as BurpSuite. If there is no movement on that issue after what is hopefully a renewed interest, I will create a custom implementation to proxy traffic to a developer machine’s local proxy.

You can proxy any HTTP Service in your cluster in under a minute by installing Kubetap and following the quick-start guide.

 

Feedback

 

  // ContactEriner about cool stuff you do with Kubetap!
  func ContactEriner(twitter bool) string {
      if twitter {
          return "@theEriner"
      }
      user := "matt"
      domain := "soluble.ai"
      return user + "@" + domain
  }

Cheers! 🍻

 

Topics: Kubernetes, proxy, mitmproxy, kubetap, open-source, kubectl-plugin

Written by Matt Hamilton

Matt Hamilton (OSCP), is a principal security researcher at Soluble, where he focuses on Kubernetes security research. He was formerly with Bishop Fox, where he worked on black-box penetration testing, application assessments, source code review, and mobile application review for clients, which included large global organizations and high-tech start-ups. Matt is responsible for more than a dozen CVEs. Matt was a founding member of OpenToAll, an online team for security competitions whose purpose is to mentor newcomers to the security community. He is a responsible disclosure advocate, and loves the Go programming language.