Connect with Confluent for Confluent Cloud

The Connect with Confluent (CwC) program gives your application direct access to Confluent Cloud, the cloud-native and complete data streaming platform that processes more than an exabyte of data per year. When you join and integrate with Confluent, you become a part of the largest data streaming network—making it easier for your customers to access and use real-time data streams to drive more consumption for your business. Together with guidance from Confluent’s Apache Kafka® experts, you’ll build your data streams integration and verify it for scaled use across your entire customer base.

Connect with Confluent partners can customize the end-user experience using multiple configuration options, including any of the configuration properties available to Confluent clients and applications. This lets partners tailor the data streaming experience to their specific requirements.

For a list of partner integrations, see Native partner integrations.

Note

The following information is for Confluent partners. If you’re not yet a Confluent partner, visit the Confluent Partner Portal and sign up to become a partner.

How it works

CwC integrations simplify connecting to Confluent Cloud from a partner’s native platform. You can implement an integration in one of two ways:

  • New Organization: Develop an integration that creates a new Confluent Cloud organization and adds your customer.
  • Existing Organization: Launch the customer into an existing Confluent Cloud organization.

CwC integrations include the following:

  • A native partner platform UI that provides the end-user experience
  • Confluent Cloud authentication
  • Schema Registry and Stream Governance capabilities in Confluent Cloud
  • Confluent Cloud APIs to manage topics, configurations, and other details
  • Producer and Consumer clients
  • Confluent Cloud topics

What great integrations look like

The most effective CwC integrations collect the necessary information from users as simply as possible and guide the user through obtaining that information in Confluent Cloud.

The following information is typically collected through the UI (a configuration sketch follows the list):

  • Confluent Cloud cluster bootstrap server address
  • Confluent Cloud authentication details
  • Confluent Cloud topics to produce to or consume from
  • Schema Registry connection details
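
To make this concrete, these values map naturally onto a small configuration object in the integration’s backend. The following is a minimal sketch; the class and field names are hypothetical, not a Confluent API:

    # Minimal sketch of the connection details a CwC integration collects
    # through its UI. All names here are illustrative, not a Confluent API.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ConfluentConnection:
        bootstrap_servers: str        # e.g. "pkc-xxxxx.us-west-2.aws.confluent.cloud:9092"
        api_key: str                  # Confluent Cloud API key (or an OAuth client ID)
        api_secret: str               # matching secret; never persist in plain text
        topic: str                    # topic to produce to or consume from
        schema_registry_url: Optional[str] = None   # Schema Registry endpoint, if used
        schema_registry_auth: Optional[str] = None  # "key:secret" for Schema Registry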

The following steps show the typical user experience:

  1. Click the Confluent tile.
  2. Provide the connection details.
  3. Select a Kafka topic.

Connect with Confluent integration basics

The following sections describe the development tasks required to build and configure an integration.

Integration authentication

Security and authentication are critical aspects of a CwC integration. To connect to a customer’s Confluent Cloud environment, the integration must prompt the end user (customer) for authentication credentials. Design the integration’s authentication to Confluent Cloud to require the least permissions necessary to enable all features, so that the customer’s environment remains secure at all times. Typically, an integration can authenticate to Confluent Cloud in one of two ways:

API keys

Confluent Cloud API keys can be used to control access to Confluent Cloud components and resources. Each API key consists of a key and a secret. Confluent recommends that users create resource API keys, which are scoped to a single resource.

How keys and secrets are stored is important. Do not store keys and secrets in plain text. A recommended practice is to dynamically inject secrets when connecting to Confluent Cloud using a secrets vault. For additional information, see Best Practices for Using API Keys on Confluent Cloud.
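
For example, the integration can resolve the key and secret at connect time and pass them directly into the client configuration. The following is a minimal sketch that assumes the confluent-kafka Python client and uses environment variables as a stand-in for a secrets-vault lookup (the variable names are hypothetical):

    # Sketch: inject the API key and secret when connecting instead of storing
    # them in plain text. os.environ stands in for a secrets-vault lookup.
    import os

    from confluent_kafka import Producer

    def build_client_config(bootstrap_servers: str) -> dict:
        return {
            "bootstrap.servers": bootstrap_servers,
            "security.protocol": "SASL_SSL",
            "sasl.mechanisms": "PLAIN",  # API key/secret auth uses SASL/PLAIN
            "sasl.username": os.environ["CC_API_KEY"],     # hypothetical variable names;
            "sasl.password": os.environ["CC_API_SECRET"],  # resolved at runtime, never written to disk
        }

    producer = Producer(build_client_config("pkc-xxxxx.us-west-2.aws.confluent.cloud:9092"))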

OAuth

OAuth/OIDC support on Confluent Cloud provides short-lived credentials for an integration to authenticate with Confluent Cloud resources. OAuth 2.0 is an open-standard protocol that grants access to supported clients using a temporary access token. Supported clients use delegated authorization to access and use Confluent Cloud resources and data on behalf of a user or application. It is important to be aware of OAuth limitations, notably usage limitations for cluster types and clients. For more information, see Limitations. For more information about authenticating to Confluent Cloud, see Security Protections for Authentication on Confluent Cloud.
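
As a sketch, librdkafka-based clients such as confluent-kafka for Python can fetch short-lived tokens from an OIDC identity provider. All identifiers below are placeholders; the identity provider, client credentials, logical cluster ID, and identity pool ID come from the customer’s setup:

    # Sketch: OAuth/OIDC configuration for Confluent Cloud with the
    # confluent-kafka (librdkafka) client. All identifiers are placeholders.
    from confluent_kafka import Producer

    oauth_config = {
        "bootstrap.servers": "pkc-xxxxx.us-west-2.aws.confluent.cloud:9092",
        "security.protocol": "SASL_SSL",
        "sasl.mechanisms": "OAUTHBEARER",
        "sasl.oauthbearer.method": "oidc",
        "sasl.oauthbearer.token.endpoint.url": "https://idp.example.com/oauth2/token",
        "sasl.oauthbearer.client.id": "<oauth-client-id>",
        "sasl.oauthbearer.client.secret": "<oauth-client-secret>",
        # Confluent Cloud expects the logical cluster and identity pool as extensions.
        "sasl.oauthbearer.extensions": "logicalCluster=lkc-xxxxx,identityPoolId=pool-xxxxx",
    }

    producer = Producer(oauth_config)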

End-user service account authentication

Confluent customers (end users) in a production environment should use a dedicated service account. While a user account can be used for authentication, your integration should encourage authentication with a service account. Users can leave a company or change roles (or their access may be revoked or changed), which breaks the integration and costs production time.

A partner integration does not know what RBAC roles and permissions are assigned for an authenticated service account. If the service account used for authentication has insufficient permissions, the partner integration should provide actionable error information so the Confluent customer knows what to do. Ideally, your integration should recommend proper roles or permissions that must be granted to the associated account.
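
One way to do this, sketched below with the confluent-kafka Python client, is to map authorization errors onto messages that name the missing role; the connection values and suggested wording are illustrative:

    # Sketch: surface actionable RBAC guidance when authorization fails.
    from confluent_kafka import Consumer, KafkaError

    consumer = Consumer({
        "bootstrap.servers": "pkc-xxxxx.us-west-2.aws.confluent.cloud:9092",
        "group.id": "partner-integration",
        "security.protocol": "SASL_SSL",
        "sasl.mechanisms": "PLAIN",
        "sasl.username": "<api-key>",
        "sasl.password": "<api-secret>",
    })
    consumer.subscribe(["orders"])

    msg = consumer.poll(10.0)
    if msg is not None and msg.error():
        code = msg.error().code()
        if code == KafkaError.TOPIC_AUTHORIZATION_FAILED:
            raise PermissionError(
                "Not authorized to read topic 'orders'. Grant the service account "
                "the DeveloperRead role on the topic."
            )
        if code == KafkaError.GROUP_AUTHORIZATION_FAILED:
            raise PermissionError(
                "Not authorized to use consumer group 'partner-integration'. "
                "Grant the service account the DeveloperRead role on the consumer group."
            )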

At a minimum, the service account should have at least one of the following roles assigned, depending on whether the integration reads (consumes) from or writes (produces) to Confluent Cloud. For a complete list of roles, see Predefined RBAC Roles on Confluent Cloud.

  • DeveloperRead (scope: topic and consumer group): To read and consume data from a Confluent Cloud topic, the service account must have at least the DeveloperRead role for the topic and a consumer group.
  • DeveloperWrite (scope: topic): To write and produce data to a Confluent Cloud topic, the service account must have at least the DeveloperWrite role for the topic.

If the integration has advanced features, additional roles may need to be assigned. For example, if an integration needs to create a topic, whether to support a dead letter queue or a new data stream, the service account must have the DeveloperManage role assigned.

Caution

When a Confluent Cloud customer deletes a user account or service account, all associated API keys are deleted. Any integrations using a deleted API key lose access.

Schema Registry and Stream Governance

Stream Governance on Confluent Cloud establishes trust in the data streams moving throughout your cloud environments and delivers an easy, self-service experience for more teams to put streaming data pipelines to work. CwC integrations are expected to connect to and use Schema Registry in order to fully unlock the capabilities of Stream Governance in Confluent Cloud.

At a minimum, this includes support for using Schema Registry with Avro schemas, and preferably with every schema format supported by Confluent Cloud. Integrations are also encouraged to support the Confluent Stream Catalog, including tags and business metadata.
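
For instance, producing Avro-encoded records with the confluent-kafka Python client’s Schema Registry support might look like the following; the endpoint, credentials, and schema are placeholders:

    # Sketch: serialize a record with an Avro schema registered in
    # Confluent Schema Registry. Endpoint, credentials, and schema are placeholders.
    from confluent_kafka.schema_registry import SchemaRegistryClient
    from confluent_kafka.schema_registry.avro import AvroSerializer
    from confluent_kafka.serialization import SerializationContext, MessageField

    sr_client = SchemaRegistryClient({
        "url": "https://psrc-xxxxx.us-west-2.aws.confluent.cloud",
        "basic.auth.user.info": "<sr-api-key>:<sr-api-secret>",
    })

    order_schema = """
    {
      "type": "record", "name": "Order",
      "fields": [{"name": "id", "type": "string"},
                 {"name": "amount", "type": "double"}]
    }
    """
    serializer = AvroSerializer(sr_client, order_schema)

    # Produces the bytes a producer would send as the message value.
    payload = serializer(
        {"id": "o-123", "amount": 9.99},
        SerializationContext("orders", MessageField.VALUE),
    )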

Confluent Cloud APIs

Basic integrations may simply connect to an existing topic and leverage existing schemas, but more feature-rich integrations may need to create and manage topics and schemas within Confluent Cloud.

For example, if an integration is connecting to Confluent Cloud to send data, it may be best for the integration to create and customize the target topic without requiring the user to do it separately. This creates a more seamless experience for the user and reduces application switching during setup.

For CRUD actions like these, you can use the Kafka Admin API of the Confluent Cloud cluster or the Confluent Cloud REST API. For additional information, see Confluent Cloud APIs.
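
A minimal sketch with the confluent-kafka AdminClient follows; the topic name and settings are illustrative, and the service account needs the DeveloperManage role for the call to succeed:

    # Sketch: create the target topic on the user's behalf via the Kafka Admin API.
    from confluent_kafka.admin import AdminClient, NewTopic

    admin = AdminClient({
        "bootstrap.servers": "pkc-xxxxx.us-west-2.aws.confluent.cloud:9092",
        "security.protocol": "SASL_SSL",
        "sasl.mechanisms": "PLAIN",
        "sasl.username": "<api-key>",
        "sasl.password": "<api-secret>",
    })

    futures = admin.create_topics([
        NewTopic("partner-events", num_partitions=6, config={"retention.ms": "604800000"}),
    ])
    for topic, future in futures.items():
        future.result()  # raises on failure, e.g. insufficient permissions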

In a CwC integration, the integrated partner application sends (produces) data to Confluent Cloud or reads (consumes) data from Confluent Cloud. To create this type of integration, the application uses the produce and consume APIs of a Kafka client with Confluent Cloud. For more information about setting up producers and consumers and other Apache Kafka® basics, see the related content.
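
For example, a minimal produce path with a delivery callback might look like this; the connection values, topic, and payload are illustrative:

    # Sketch: produce to a Confluent Cloud topic with delivery confirmation.
    from confluent_kafka import Producer

    def on_delivery(err, msg):
        # Invoked from poll()/flush(); surface failures to the end user.
        if err is not None:
            print(f"Delivery to {msg.topic()} failed: {err}")

    producer = Producer({
        "bootstrap.servers": "pkc-xxxxx.us-west-2.aws.confluent.cloud:9092",
        "security.protocol": "SASL_SSL",
        "sasl.mechanisms": "PLAIN",
        "sasl.username": "<api-key>",
        "sasl.password": "<api-secret>",
    })
    producer.produce("partner-events", value=b'{"id": "o-123"}', on_delivery=on_delivery)
    producer.flush(10)  # wait up to 10 seconds for outstanding deliveries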

Custom identifier

As part of your Confluent Cloud integration, additional custom configuration options are used to help identify the integration. Your Partner Success Manager shares these custom configurations with you as part of the verification process. Typically, this involves a client.id prefix. Note that these additional configurations are unique identifiers for your CwC integration and are used solely to measure the integration’s adoption.
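
As a sketch, the integration can merge its assigned prefix into every client configuration it builds; the prefix shown here is hypothetical, and the real value comes from your Partner Success Manager:

    # Sketch: tag every client with the partner-specific client.id prefix.
    # "acme-cwc" is a hypothetical prefix; use the value Confluent assigns you.
    CLIENT_ID_PREFIX = "acme-cwc"

    def with_integration_id(config: dict, component: str) -> dict:
        """Return a copy of the config tagged with the CwC integration identifier."""
        return {**config, "client.id": f"{CLIENT_ID_PREFIX}-{component}"}

    producer_config = with_integration_id({"bootstrap.servers": "<bootstrap>"}, "producer")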

Verify an integration

When your integration with Confluent Cloud is built, the next step is to have it verified by Confluent. This verification ensures that your integration meets the security and governance expectations of any Confluent verified integration. To verify your integration, contact your Partner Success Manager or apply as a new partner today using the Confluent Partner Portal.

Publish an integration

After your native CwC integration is verified by Confluent, work with your Partner Success Manager to promote the integration and increase its visibility. Your Partner Success Manager will guide you through the options that you and Confluent can pursue to amplify the integration.

Native partner integrations

The following table lists the native partner integrations available for CwC.

Partner         Category                              Documentation
Advantco        SAP and Oracle integrator             SAP Kafka Integration: Real-time Data Flow for Business Agility
Aklivity        Streaming-native API infrastructure   Confluent Cloud Secure Public Access Proxy
Amazon Athena   Analytics                             Amazon Athena Apache Kafka connector
AWS Lambda      Serverless compute                    Using Lambda with self-managed Apache Kafka
Arcion          CDC                                   Destination Confluent
Arroyo          Stream processing                     Confluent - Arroyo Documentation
Asapio          SAP integrator                        Confluent Integration
Bicycle         Analytics                             -
Census          Reverse ETL                           Confluent Cloud | Census Docs
Clickhouse      Analytics                             Integrating Kafka with ClickHouse
Datorios        Data pipelines                        Confluent Cloud
Decodable       Stream processing                     Confluent Cloud
EMQ             MQTT                                  Stream MQTT Data into Confluent | EMQX Docs
Gathr           Analytics                             Confluent Cloud Connection
HiveMQ          IOT                                   Confluent Cloud Integration
Imply           Real-time analytics                   Ingest from Confluent Cloud
Kinetica        Real-time analytics platform          Loading Data | Kinetica Docs
Materialize     Real-time analytics                   Confluent Cloud
Nstream         Real-time analytics                   Confluent Cloud Tutorial
Onehouse        Lakehouse                             The Ultimate Data Lakehouse for Streaming Data Using Onehouse + Confluent
Onibex          SAP integrator                        SAP ERP & Confluent Cloud
Pinecone        Vector database                       Building real-time AI applications with Pinecone and Confluent Cloud
Precisely       CDC                                   -
Qlik            CDC                                   Using Confluent Cloud as a target
Quix            App development                       Confluent Kafka
Redis           In-memory database                    -
RisingWave      Real-time analytics                   Ingest data from Confluent Cloud
Rockset         Real-time analytics                   Confluent Cloud
Singlestore     Real-time analytics                   Ingest data from Confluent Cloud (Kafka) - SingleStore Spaces
StarTree        Real-time analytics                   Confluent Cloud connector
Squid           Middle-tier platform                  Confluent integration | Squid Cloud docs
Superblocks     Low-code development                  Confluent | Superblocks Docs
Timeplus        Real-time analytics                   Load streaming data from Confluent Cloud
Tinybird        Real-time analytics                   Confluent Connector
Upsolver        Data pipelines                        Confluent Kafka
Waterstream     IOT                                   Integration with Confluent Cloud
Weaviate        Vector database                       -
Zilliz          Vector database                       -