
How to Enable TLS Auth on Amazon MSK

Apache Kafka® is a typical Java application, which means it has excellent documentation and a rich ecosystem as long as you use it with the official Kafka Java SDK. However, once you decide to go with one of the alternative clients, you can run into issues right from the beginning.

Why should somebody use these alternative clients at all? Well, there are several reasons. Firstly, many people (especially younger developers) simply don’t like Java. Secondly, the official SDK is monstrous even by Java standards: it provides extremely powerful tools for processing data in a fault-tolerant way, but the cost is sometimes just too high, especially if you don’t fully understand how it works. Thirdly, you may already have some of your logic written in another language and don’t want to waste time gluing it to Java.

This article focuses on using Amazon MSK with Confluent’s Golang Kafka Client, but the same approach works for any client based on librdkafka.

Set up a secured Amazon MSK Cluster

By default Amazon MSK uses TLS to encrypt data in transit, but in order to secure your cluster with TLS client authentication you have to create a private CA first.

Go to ACM PCA and create a new root CA (you can skip this step if you already use a subordinate CA). AWS will then show you a popup proposing to install the CA.

Now you can create the cluster. Select the installed CA in the Authentication section.
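
If you prefer to create the cluster from code rather than the console, here is a minimal sketch using the AWS SDK for Go and the MSK CreateCluster API. The region, cluster name, subnet IDs and the CA ARN are placeholders; adjust them to your environment.

package main

import (
  "fmt"
  "log"

  "github.com/aws/aws-sdk-go/aws"
  "github.com/aws/aws-sdk-go/aws/session"
  "github.com/aws/aws-sdk-go/service/kafka"
)

func main() {
  sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
  svc := kafka.New(sess)

  out, err := svc.CreateCluster(&kafka.CreateClusterInput{
    ClusterName:         aws.String("my-secured-cluster"), // placeholder
    KafkaVersion:        aws.String("2.2.1"),
    NumberOfBrokerNodes: aws.Int64(2),
    BrokerNodeGroupInfo: &kafka.BrokerNodeGroupInfo{
      InstanceType:  aws.String("kafka.m5.large"),
      ClientSubnets: aws.StringSlice([]string{"subnet-aaa", "subnet-bbb"}), // placeholders
    },
    // TLS client authentication: the private CA created in ACM PCA
    ClientAuthentication: &kafka.ClientAuthentication{
      Tls: &kafka.Tls{
        CertificateAuthorityArnList: aws.StringSlice([]string{
          "arn:aws:acm-pca:us-east-1:123456789012:certificate-authority/EXAMPLE", // placeholder
        }),
      },
    },
    // TLS encryption in transit between clients and brokers
    EncryptionInfo: &kafka.EncryptionInfo{
      EncryptionInTransit: &kafka.EncryptionInTransit{
        ClientBroker: aws.String("TLS"),
        InCluster:    aws.Bool(true),
      },
    },
  })
  if err != nil {
    log.Fatal(err)
  }
  fmt.Println("cluster ARN:", aws.StringValue(out.ClusterArn))
}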

Generate client certificate

Once you create the cluster you can see its bootstrap servers by clicking the View client information button. You should get something like:

b-1.NAME.AWS.SUB.DOMAINS.amazonaws.com:9094,b-2.NAME.AWS.SUB.DOMAINS.amazonaws.com:9094

Go to the AWS Certificate Manager and request a private certificate using the CA from the previous step and the domain name *.NAME.AWS.SUB.DOMAINS.amazonaws.com.

Export the created certificate; you will get three PEM-encoded files: the certificate body, the certificate chain, and the private key (encrypted with the passphrase you choose during export). Save the certificate body as cert.pem and the private key as cert.key.
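
Before wiring the certificate into the client, it can be handy to sanity-check that cert.pem actually covers the broker hostnames and has not expired. A small sketch using Go’s standard library (the file path is an assumption):

package main

import (
  "crypto/x509"
  "encoding/pem"
  "fmt"
  "io/ioutil"
  "log"
)

func main() {
  // Read the certificate body exported from ACM
  data, err := ioutil.ReadFile("/path/to/cert.pem")
  if err != nil {
    log.Fatal(err)
  }

  block, _ := pem.Decode(data)
  if block == nil {
    log.Fatal("no PEM block found in cert.pem")
  }

  cert, err := x509.ParseCertificate(block.Bytes)
  if err != nil {
    log.Fatal(err)
  }

  // The DNS names should match *.NAME.AWS.SUB.DOMAINS.amazonaws.com
  fmt.Println("subject:  ", cert.Subject)
  fmt.Println("DNS names:", cert.DNSNames)
  fmt.Println("expires:  ", cert.NotAfter)
}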

Golang Kafka Client Config

In order to connect to the cluster, your client configuration should contain the following fields:

&kafka.ConfigMap{
  "bootstrap.servers":        "b-1.NAME.AWS.SUB.DOMAINS.amazonaws.com:9094,b-2.NAME.AWS.SUB.DOMAINS.amazonaws.com:9094",
  "security.protocol":        "ssl",
  "ssl.certificate.location": "/path/to/cert.pem",
  "ssl.key.location":         "/path/to/cert.key",
  "ssl.key.password":         "cert password",
}

See the librdkafka configuration documentation for details.
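
Putting it together, a minimal sketch of a producer that connects over TLS with the client certificate might look like this (the topic name, file paths and password are placeholders):

package main

import (
  "log"

  "github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
  p, err := kafka.NewProducer(&kafka.ConfigMap{
    "bootstrap.servers":        "b-1.NAME.AWS.SUB.DOMAINS.amazonaws.com:9094,b-2.NAME.AWS.SUB.DOMAINS.amazonaws.com:9094",
    "security.protocol":        "ssl",
    "ssl.certificate.location": "/path/to/cert.pem",
    "ssl.key.location":         "/path/to/cert.key",
    "ssl.key.password":         "cert password",
  })
  if err != nil {
    log.Fatal(err)
  }
  defer p.Close()

  topic := "test" // placeholder topic
  deliveryChan := make(chan kafka.Event, 1)

  // Produce a single message and wait for the broker's acknowledgement
  err = p.Produce(&kafka.Message{
    TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
    Value:          []byte("hello from a TLS-authenticated client"),
  }, deliveryChan)
  if err != nil {
    log.Fatal(err)
  }

  e := <-deliveryChan
  m := e.(*kafka.Message)
  if m.TopicPartition.Error != nil {
    log.Fatal(m.TopicPartition.Error)
  }
  log.Printf("delivered to %v", m.TopicPartition)
}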

Bonus: Dockerfile

To dockerize the client you most likely want to compile the Go program into a single binary without any extra shared libraries required at runtime. The Dockerfile below compiles the client with statically-linked librdkafka in the first stage and then copies the result into a clean image.

# Build stage: compile the client together with a statically-linked librdkafka
FROM golang:1.12-alpine AS build

RUN apk add --no-cache git openssh openssl-dev pkgconf gcc g++ make libc-dev bash tar
RUN wget https://github.com/edenhill/librdkafka/archive/v1.1.0.tar.gz &&\
  tar xf v1.1.0.tar.gz
RUN cd librdkafka-1.1.0 && ./configure && make && make install

WORKDIR /root

COPY go.mod .
COPY go.sum .

RUN go mod download

COPY . .

RUN CGO_ENABLED=1 GOOS=linux GOARCH=amd64 go build -o entrypoint -tags static_all

# Runtime stage: a clean Alpine image with only the binary and the certificates
FROM alpine
RUN apk add --no-cache ca-certificates
COPY cert.* /root/

COPY --from=build /root/entrypoint /root/entrypoint
ENTRYPOINT ["/root/entrypoint"]
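
Assuming cert.pem and cert.key sit next to the Dockerfile (the image name is a placeholder), building and running the image is the usual:

docker build -t msk-client .
docker run --rm msk-client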