
Security configurations

Version: 1.0.0

This section details the security configurations that can be applied at the transport level. The security.protocol field allows you to apply the following security mechanisms (in order of increasing security):

  • PLAINTEXT: no encryption is applied.
  • SASL/PLAINTEXT: username/password-based authentication using the Simple Authentication and Security Layer (SASL). With this option no encryption is applied, so both the credentials and the data are exchanged in cleartext.
  • SASL/PLAINTEXT over TLS: username/password-based authentication using SASL. The encryption is applied at the transport level using TLS.
  • SSL: SSL-based two-way authentication. This is the most secure setting and the default one. Read the following sections to understand how to set up SSL authentication.

The following sections detail how to set up SSL and SASL authentication.

Attention

Configuring the cloud connector to use a security protocol that is not advertised by the broker will lead to a java.lang.OutOfMemoryError: Java heap space. This is a known Kafka issue; refer to Apache Issue 4090 for more details. Hence, make sure the bootstrap.servers configuration matches the security.protocol one. If the error occurs, it is sufficient to click the "Connect/Disconnect" button of the cloud connection, correct the configuration, and then click "Connect/Disconnect" again.
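As an illustration, with the broker listener layout used later in this guide (PLAINTEXT on port 9092, SSL on port 9093), the connector settings must pair the port with the matching protocol. A mismatched pair, such as the SSL port with security.protocol=PLAINTEXT, triggers the error described above. The values below are placeholders:

```properties
# Broker advertises: PLAINTEXT://<ip-broker>:9092, SSL://<ip-broker>:9093
# Correct pairing: SSL protocol with the SSL port
bootstrap.servers=<ip-broker>:9093
security.protocol=SSL
```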

Configure SSL two-way authentication

The Kafka Cloud Connector requires a keystore in which to store the trusted certificates and the needed key pairs. ESF provides a Keystore Configuration service that can be used to create keystores and store certificates. Refer to Kafka Encryption and Authentication using SSL for further details.

1. Create test certificates

This section explains how to create a server certificate and a client certificate signed by a CA. The server certificate is created so as to allow hostname verification by the clients.

Certification Authority (CA)

From the machine running the test broker, create an x509 configuration file named openssl-ca.cnf from the following template:

HOME = .
RANDFILE = $ENV::HOME/.rnd

####################################################################
[ ca ]
default_ca = CA_default # The default ca section

[ CA_default ]

base_dir = .
certificate = $base_dir/cacert.pem # The CA certificate
private_key = $base_dir/cakey.pem # The CA private key
new_certs_dir = $base_dir # Location for new certs after signing
database = $base_dir/index.txt # Database index file
serial = $base_dir/serial.txt # The current serial number

default_days = 1000 # How long to certify for
default_crl_days = 30 # How long before next CRL
default_md = sha256 # Use public key default MD
preserve = no # Keep passed DN ordering

x509_extensions = ca_extensions # The extensions to add to the cert

email_in_dn = no # Don't concat the email in the DN
copy_extensions = copy # Required to copy SANs from CSR to cert

####################################################################
[ req ]
default_bits = 4096
default_keyfile = cakey.pem
distinguished_name = ca_distinguished_name
x509_extensions = ca_extensions
string_mask = utf8only

####################################################################
[ ca_distinguished_name ]
countryName = Country Name (2 letter code)
countryName_default = US

stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default = DEFAULT

localityName = Locality Name (e.g. city)
localityName_default = Amaro

organizationName = Organization Name (e.g. company)
organizationName_default = Eurotech

organizationalUnitName = Organizational Unit Name (e.g. section)
organizationalUnitName_default = ESF

commonName = Common Name (e.g. broker IP or FQDN)
commonName_default = IP:<ip-broker>

emailAddress = Email Address
emailAddress_default = test@test.com

####################################################################
[ ca_extensions ]

subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always, issuer
basicConstraints = critical, CA:true
keyUsage = keyCertSign, cRLSign

####################################################################
[ signing_policy ]
countryName = optional
stateOrProvinceName = optional
localityName = optional
organizationName = optional
organizationalUnitName = optional
commonName = supplied
emailAddress = optional

####################################################################
[ signing_req ]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment

Next, generate a Certification Authority key pair using the previously created configuration file:

echo 01 > serial.txt
touch index.txt
openssl req -x509 -config openssl-ca.cnf -newkey rsa:4096 -sha256 -nodes -out cacert.pem -outform PEM

This command will create a CA private key cakey.pem and the CA public certificate cacert.pem. serial.txt and index.txt are used by the tool when multiple signatures are issued. Import the certificate into the truststore:

keytool -keystore kafka.server.truststore.jks -alias CARoot -import -file cacert.pem

Broker certificate

Create a broker certificate to be signed later on with the CA:

keytool -keystore kafka.server.keystore.jks -alias localhost -validity 365 -genkey -keyalg RSA -storetype pkcs12 -ext SAN=IP:<ip-broker>
keytool -list -v -keystore kafka.server.keystore.jks

This generates a new keystore kafka.server.keystore.jks in PKCS12 format containing a new key pair with alias localhost. The -ext SAN=IP:<ip-broker> option includes the Subject Alternative Name in the generated certificate, which clients can use for hostname verification.

Create a Certificate Signing Request (CSR), specifying the SAN for hostname verification:

keytool -keystore kafka.server.keystore.jks -alias localhost -certreq -file servercert.csr -ext SAN=IP:<ip-broker>

Create a signed server certificate from the previously created CSR:

openssl ca -config openssl-ca.cnf -policy signing_policy -extensions signing_req -out servercert-signed.pem -infiles servercert.csr
openssl x509 -in servercert-signed.pem -text -noout

Import the CA public certificate and the signed server public certificate into the server keystore:

keytool -keystore kafka.server.keystore.jks -alias CARoot -import -file cacert.pem
keytool -keystore kafka.server.keystore.jks -alias localhost -import -file servercert-signed.pem
keytool -list -v -keystore kafka.server.keystore.jks

Client key pair

Generate a private key:

openssl genrsa -out client.key 4096

Create a CSR:

openssl req -new -key client.key -out client.csr

Sign it with the CA's key pair:

openssl x509 -req -CA cacert.pem -CAkey cakey.pem -in client.csr -out client-signed.pem -days 365 -CAcreateserial -passin pass:{ca-password}

Export the client.key private key in PKCS8 format for usage in ESF:

openssl pkcs8 -topk8 -inform PEM -outform PEM -in client.key -out client.pem -nocrypt
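The client-side commands above can be sanity-checked end to end. The sketch below re-runs them in a scratch directory against a throwaway CA; the subject names and the 1-day validity are illustrative assumptions, not the values to use with a real broker:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"

# Throwaway CA (stands in for cacert.pem/cakey.pem from the previous section)
openssl req -x509 -newkey rsa:2048 -nodes -keyout cakey.pem -out cacert.pem \
  -subj "/CN=Test CA" -days 1

# Client private key and CSR
openssl genrsa -out client.key 2048
openssl req -new -key client.key -out client.csr -subj "/CN=test-client"

# Sign the CSR with the CA key pair
openssl x509 -req -CA cacert.pem -CAkey cakey.pem -in client.csr \
  -out client-signed.pem -days 1 -CAcreateserial

# Export the private key in PKCS8 format, as required by ESF
openssl pkcs8 -topk8 -inform PEM -outform PEM -in client.key -out client.pem -nocrypt

# The signed client certificate must verify against the CA
openssl verify -CAfile cacert.pem client-signed.pem
```

If everything is consistent, the last command prints `client-signed.pem: OK`.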

2. Configure the Kafka broker

The broker can be configured with the following properties (usually found in the server.properties file):

ssl.truststore.location=<path/to/kafka.server.truststore.jks>
ssl.truststore.password=<password>
ssl.keystore.location=<path/to/kafka.server.keystore.jks>
ssl.keystore.password=<password>
ssl.key.password=<password>
listeners=PLAINTEXT://<ip-broker>:9092,SSL://<ip-broker>:9093
advertised.listeners=PLAINTEXT://<ip-broker>:9092,SSL://<ip-broker>:9093
ssl.client.auth=required
ssl.enabled.protocols=TLSv1.2
ssl.endpoint.identification.algorithm=

This configures the broker to listen for unencrypted connections on <ip-broker>:9092 and to require SSL on <ip-broker>:9093. Configure the correct TLS protocol and leave ssl.endpoint.identification.algorithm empty to disable hostname verification by the broker.
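For comparison, a standalone Kafka client connecting to the SSL listener would use the standard client-side counterparts of these settings (in ESF the keystore is managed through the Keystore Configuration UI instead). Paths and passwords below are placeholders:

```properties
security.protocol=SSL
bootstrap.servers=<ip-broker>:9093
ssl.truststore.location=<path/to/client.truststore.jks>
ssl.truststore.password=<password>
ssl.keystore.location=<path/to/client.keystore.jks>
ssl.keystore.password=<password>
```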

3. Create a new keystore (optional)

In this tutorial, a new keystore specific to the Kafka client will be created. An existing keystore may be used as well. In the ESF UI, move to "Keystore Configuration" under the "Security" section. Add a new keystore, specifying a file path and a password.

4. Import client key pair and CA public certificate in ESF

This can be done by adding the key pair and the trusted certificate from the ESF UI in the "Security" section. Make sure to select the previously created keystore.

5. Configure Kafka transport

In the KafkaClientDataTransport layer, set security.protocol to SSL and set the Keystore Path to the path defined in step 3. Set the Enable hostname verification option to true to enable hostname verification; the certificates created in step 1 include the SAN required for this feature.

Configure SASL authentication

The Apache Kafka® Cloud Connector supports authentication via the Simple Authentication and Security Layer (SASL). The security.protocol field allows you to choose between two SASL mechanisms:

  • simple SASL/PLAINTEXT, and
  • SASL/PLAINTEXT over TLS.

SASL/PLAINTEXT is a simple username/password-based authentication mechanism. Note that it does not encrypt the underlying connection, so the credentials are sent in cleartext. This option is intended mainly for debugging purposes, when the Kafka broker being connected to is not securely configured. The SASL/PLAINTEXT over TLS option uses the same username/password-based mechanism, but the connection is encrypted at the transport level using TLS. This option is safer than the previous one.

The next sections explain how to configure the Kafka broker and the cloud connector to use the SASL authentication mechanisms. Refer to the Confluent documentation Authentication with SASL using JAAS or to Kafka Authentication using SASL for further details.

1. Configure the Kafka broker for SASL-PLAINTEXT

To configure the broker, edit the server.properties file, adding PLAIN to the enabled SASL mechanisms:

sasl.enabled.mechanisms=PLAIN

Configure the broker to listen for SASL connections by adding the following listeners in the server.properties file:

listeners=SASL_PLAINTEXT://<broker-ip>:<port>
advertised.listeners=SASL_PLAINTEXT://<broker-ip>:<port>

If SASL/PLAINTEXT over TLS is configured, replace SASL_PLAINTEXT with SASL_SSL and configure the keystore and truststore as in the previous sections. Next, create a JAAS configuration file as follows:

KafkaServer {
 org.apache.kafka.common.security.plain.PlainLoginModule required
 username="<username>"
 password="<password>"
 user_<username>="<password>";
};

Add one user_<username>="<password>" entry for each credential pair you want to allow. This configuration file is passed to the JVM as a parameter. Modify bin/kafka-server-start.sh to specify the security configuration in EXTRA_ARGS:

-Djava.security.auth.login.config=/path/to/jaas/file

At this point, the cloud connector should be able to connect to the broker using the credentials specified in SASL username and SASL password.
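For reference, a standalone Kafka client (not the cloud connector UI) would express the same credentials with the stock Kafka client properties below; the values are placeholders:

```properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="<username>" \
    password="<password>";
```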

2. Configure SASL/PLAINTEXT over TLS

If SASL/PLAINTEXT over TLS is used, the Keystore Path property must point to the keystore containing the CA certificate, which is used to set up the underlying SSL connection. The same keystore used in the previous section can be reused.
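As a sketch, the broker side of a SASL/PLAINTEXT over TLS setup combines the SASL settings above with the SSL keystore settings from the SSL section. Paths, passwords, and the listener address are placeholders:

```properties
listeners=SASL_SSL://<broker-ip>:<port>
advertised.listeners=SASL_SSL://<broker-ip>:<port>
sasl.enabled.mechanisms=PLAIN
ssl.keystore.location=<path/to/kafka.server.keystore.jks>
ssl.keystore.password=<password>
ssl.key.password=<password>
```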