DEV Community

Brice LEPORINI

Using OpenId Connect with Confluent Cloud

I hope you've already read my previous post about the capability added in Kafka 3.1 to authenticate applications against an external OpenId Connect identity provider. Now you can do the same with Confluent Cloud. Initially, the only way to authenticate applications was with API keys and secrets managed in Confluent Cloud, but the ability to centrally manage accounts, credentials and authentication flows in a single identity provider is a common expectation in many organizations.

Setting it up is quite easy and takes two steps. First, you declare a new identity provider for your Confluent Cloud organization. Azure and Okta are fully integrated, but let's focus on vanilla OpenId Connect. One good thing about OIDC is that the standard is completely discoverable; for example, you can freely dump the configuration of the Google OIDC service:

$ curl https://accounts.google.com/.well-known/openid-configuration
{
 "issuer": "https://accounts.google.com",
 "authorization_endpoint": "https://accounts.google.com/o/oauth2/v2/auth",
 "device_authorization_endpoint": "https://oauth2.googleapis.com/device/code",
 "token_endpoint": "https://oauth2.googleapis.com/token",
 "userinfo_endpoint": "https://openidconnect.googleapis.com/v1/userinfo",
 "revocation_endpoint": "https://oauth2.googleapis.com/revoke",
 "jwks_uri": "https://www.googleapis.com/oauth2/v3/certs",
[...]
}

.well-known/openid-configuration is an endpoint implemented by all providers, and it's the only thing you need to set up the identity provider in Confluent Cloud:

[Screenshot: declaring the identity provider in Confluent Cloud]
With just that configuration URL, Confluent Cloud is able to automatically gather the issuer URI and, more importantly, the JWKS, which provides the public keys used to verify the JWTs.
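For illustration, a JWKS is nothing more than a JSON document listing public keys. Here's a minimal sketch (the key material is fabricated; a real document is served at the jwks_uri) showing how to list the key ids and signing algorithms with jq:

```shell
# A minimal, fabricated JWKS document; a real one is served at the jwks_uri
# (e.g. https://www.googleapis.com/oauth2/v3/certs for Google).
JWKS='{"keys":[{"kty":"RSA","kid":"demo-key-1","use":"sig","alg":"RS256","e":"AQAB","n":"u1SU1"}]}'
# List each key's id and signing algorithm: the key id is matched against
# the "kid" header of incoming JWTs to pick the right verification key.
echo "$JWKS" | jq -r '.keys[] | "\(.kid) \(.alg)"'
```

Because the JWKS is fetched from the provider, key rotation on the IdP side is picked up without touching the Confluent Cloud configuration.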

The second step is to declare an identity pool. It defines how JWTs issued by the IdP qualify to authenticate against the Confluent Cloud service:

[Screenshot: creating the identity pool]

For this demo, let's keep it simple. The claims.sub default value for the identity claim field is perfectly fine, as it's a registered claim that identifies the principal. Here's an example payload of a JWT (modified, not really issued by Google 😉):

{
  "iss": "https://accounts.google.com",
  "sub": "dZJPsd9oVtAciRY8F5lHzk4yS0hfnBiE@clients",
  "aud": "https://kafka.auth",
  "iat": 1672817905,
  "exp": 1672904305,
  "azp": "dZJPsd9oVtAciRY8F5lHzk4yS0hfnBiE",
  "scope": "scope",
  "gty": "client-credentials"
}

Then let's specify that every JWT carrying the https://kafka.auth value in its aud claim is valid. Note that the audience claim can be an array of strings rather than a single value. This value is set on the IDP side.
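To see why the array case matters, the audience check the pool performs can be sketched locally with jq; the payloads and the expression are illustrative, not Confluent's actual filter implementation:

```shell
# Expected audience, as configured on the identity pool.
EXPECTED='https://kafka.auth'
# The check must accept both forms of "aud": a single string...
PAYLOAD='{"aud":"https://kafka.auth","sub":"demo"}'
echo "$PAYLOAD" | jq -e --arg a "$EXPECTED" \
  '.aud == $a or (.aud | type == "array" and index($a) != null)'
# ...or an array of strings containing the expected value.
PAYLOAD='{"aud":["https://kafka.auth","https://other.api"],"sub":"demo"}'
echo "$PAYLOAD" | jq -e --arg a "$EXPECTED" \
  '.aud == $a or (.aud | type == "array" and index($a) != null)'
```

With `-e`, jq also reflects the result in its exit code, which makes the same expression usable in scripts.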

To finalize the creation, you need to bind roles and resources to this new identity pool, a routine operation for any Confluent Cloud administrator!

Now let's check that it's working with a dumb Kafka consumer. Thanks to the New Client wizard, getting a base configuration to start with is easy:

[Screenshot: base configuration generated by the New Client wizard]

But you need to tweak it a bit to define how the Java application requests the JWT it presents to Confluent Cloud. It's almost the same as what was shown in my previous post, except that you also need to set the JAAS configuration with the logical cluster id and the identity pool id:

sasl.mechanism=OAUTHBEARER
sasl.login.callback.handler.class=org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerLoginCallbackHandler
sasl.login.connect.timeout.ms=15000
sasl.oauthbearer.token.endpoint.url=https://oauth2.googleapis.com/token
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
clientId="dZJPsd9oVtAciRY8F5lHzk4yS0hfnBiE" \
clientSecret="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" \
extension_logicalCluster="lkc-000000" \
extension_identityPoolId="pool-XXXXX" ;

Then you can test it:

$ docker run --rm -ti -v $PWD:/work --workdir /work confluentinc/cp-kafka kafka-console-consumer --consumer.config config.properties --topic test --bootstrap-server pkc-xxxxxx.europe-west1.gcp.confluent.cloud:9092 --from-beginning
[2023-01-04 12:17:49,565] WARN These configurations '[basic.auth.credentials.source, acks, schema.registry.url, basic.auth.user.info]' were supplied but are not used yet. (org.apache.kafka.clients.consumer.ConsumerConfig)
{"ordertime":1497014222380,"orderid":18,"itemid":"Item_184","address":{"city":"Mountain View","state":"CA","zipcode":94041}}
{"ordertime":1497014222380,"orderid":18,"itemid":"Item_184","address":{"city":"Mountain View","state":"CA","zipcode":94041}}
^CProcessed a total of 2 messages

Give it an automation flavour...

All of that was set up manually, using graphical user interfaces and wizards, in order to walk you gradually through the process; however, modern organizations require an automated way to provision resources. Guess what: you have multiple options to do that with Confluent Cloud. The low-level one is the Confluent Cloud REST API, but more likely you'll opt for Terraform. That way you get a real Infrastructure as Code approach that is completely embeddable in a global infrastructure definition. So feel free to read the Confluent Cloud Terraform provider documentation, especially the sections about the identity provider and the identity pool.
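As a rough sketch of the low-level route, creating the identity provider boils down to a single authenticated POST. The endpoint path and payload field names below are assumptions to double-check against the Confluent Cloud API reference, and the echo turns it into a dry run that just prints the request:

```shell
# Dry run: build and print the request that would create an identity provider
# via the Confluent Cloud REST API. The iam/v2 path and JSON field names are
# assumptions; remove 'echo' and use a real Cloud API key to actually send it.
CLOUD_API_KEY='APIKEY:APISECRET'  # placeholder credentials
REQUEST=$(echo curl -s -u "$CLOUD_API_KEY" -X POST \
  https://api.confluent.cloud/iam/v2/identity-providers \
  -H 'Content-Type: application/json' \
  -d '{"display_name":"google-oidc","description":"Google OIDC","issuer":"https://accounts.google.com","jwks_uri":"https://www.googleapis.com/oauth2/v3/certs"}')
echo "$REQUEST"
```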

Obviously, all of that is only an initial introduction to OIDC integration in Confluent Cloud, and I recommend having a look at the comprehensive documentation.
