Confluent Schema Registry supports Role-Based Access Control (RBAC) for authorization in Confluent Platform.
Users are granted access to manage, read, and write particular topics and their associated schemas (contained in Schema Registry subjects) based on RBAC roles. User access is scoped to specified resources and Schema Registry supported operations.
With RBAC enabled, Schema Registry can authenticate incoming requests and authorize them based on role bindings. This allows schema evolution management to be restricted to administrative users, while providing users and applications with different types of access to the subset of subjects for which they are authorized (such as write access to relevant subjects for producers, and read access for consumers).
RBAC makes it easier and more efficient to set up and manage user access to Schema Registry subjects and topics.
Without RBAC, an administrator must specify every subject or use * (for all) and specify each operation (SUBJECT_READ, SUBJECT_COMPATIBILITY_READ, and so forth) that a user needs. If you have 100 developers who need to read schemas, you must set up access 100 times.
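With RBAC, one role binding can instead cover a whole LDAP group, so the grant does not have to be repeated per user. A minimal sketch, assuming a hypothetical LDAP group named developers and placeholder cluster IDs (none of these names come from this page):

```shell
# Hypothetical sketch: a single DeveloperRead binding for an entire LDAP group,
# replacing 100 per-user grants. The group name and cluster IDs are placeholders.
confluent iam rbac role-binding create \
  --principal Group:developers \
  --role DeveloperRead \
  --resource Subject:app- \
  --prefix \
  --kafka-cluster <kafka-cluster-id> \
  --schema-registry-cluster-id <schema-registry-group-id>
```

The --prefix flag treats the subject name as a prefix pattern, so the binding covers every subject whose name starts with app-.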
Schema Registry before RBAC¶
An RBAC-enabled environment addresses the following use cases:
- Grant a user the DeveloperRead role and specify a set of subjects with a prefix.
- Grant a user DeveloperRead for “transactions-value” and “orders-value”, but a DeveloperWrite role for “customers-value”.
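As a sketch, the second use case maps onto role-binding commands like the following (the user name samantha and the cluster IDs are hypothetical placeholders):

```shell
# Hypothetical sketch: DeveloperRead on two subjects, DeveloperWrite on a third.
# User name and cluster IDs are placeholders, not values from this page.
for subject in transactions-value orders-value; do
  confluent iam rbac role-binding create \
    --principal User:samantha \
    --role DeveloperRead \
    --resource Subject:$subject \
    --kafka-cluster <kafka-cluster-id> \
    --schema-registry-cluster-id <schema-registry-group-id>
done

confluent iam rbac role-binding create \
  --principal User:samantha \
  --role DeveloperWrite \
  --resource Subject:customers-value \
  --kafka-cluster <kafka-cluster-id> \
  --schema-registry-cluster-id <schema-registry-group-id>
```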
Schema Registry with RBAC¶
When a client communicates with the Schema Registry HTTPS endpoint, Schema Registry passes the client credentials to Metadata Service (MDS) for authentication. MDS is a REST layer on the Kafka broker within Confluent Server, and it integrates with LDAP to authenticate end users on behalf of Schema Registry and other Confluent Platform services such as Connect, Confluent Control Center, and ksqlDB. As shown in Scripted Confluent Platform Demo, clients must have predefined LDAP entries.
Once a client is authenticated, Schema Registry must ensure that only authorized entities have access to the permitted resources. You can use ACLs, RBAC, or both to do so. While ACLs and RBAC can be used together or independently, RBAC is the preferred solution because it provides finer-grained authorization and a unified method for managing access across Confluent Platform.
The combined authentication and authorization workflow for a Kafka client connecting to Schema Registry is shown in the diagram below.
To enable role-based access control (RBAC) on schemas, you must configure the schema-registry.properties file with connection information for a metadata service (MDS) running RBAC, and use the Confluent CLI to grant user and application access to subjects and other resources based on roles.
Typically, you can request account access and the MDS details needed for RBAC from your security administrator.
If you are in a security admin role experimenting with a fully local setup, you would first set up RBAC using MDS, then create a service principal for Schema Registry using the Confluent CLI.
Following that, you can create principal user accounts with various roles such as ResourceOwner, DeveloperRead, DeveloperWrite, and DeveloperManage bound to subjects (schemas associated with Kafka topics) and other resources.
After Schema Registry is configured and running in an RBAC-enabled environment, users can read and write schemas to subjects, based on their authorization for operations on a resource (roles and role bindings).
RBAC supports all Schema Registry operations as listed in operations. For more details on these operations, see the Schema Registry API. Note that operations on global settings require the DeveloperManage role on a subject resource named __GLOBAL.
Users with developerRead and developerWrite roles also need the developerManage role if they want to view and work with schemas on Schema Registry through the Control Center for Confluent Platform. (Previous to 5.4.x, developerRead and developerWrite roles were sufficient to interact with Schema Registry through the Control Center.)
Jack as a ClusterAdmin wants to set up a Schema Registry cluster for his organization.
Step 1. Jack as a cluster administrator contacts the RBAC security administrator with the following information.
Step 2. The UserAdmin creates a service principal to represent the Schema Registry cluster and grants it ClusterAdmin on the Schema Registry cluster.
Step 3. Jack configures the Schema Registry cluster to use the provided public key, the specified group ID (for example, “schema-registry-a”), and the service principal provided by the UserAdmin, then spins up the cluster.
Samantha as a developer needs READ access to two subjects, “transactions-value” and “orders-value”, to understand schemas her application needs to interact with.
Step 1. Samantha as a developer contacts the user administrator with the following information:
Step 2. The UserAdmin grants Samantha access to the subjects.
Step 3. When Samantha runs GET /subjects, she will only see “transactions-value” and “orders-value”.
Step 4. Accidental POST or DELETE operations on these subjects will be prevented.
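The scoping in Steps 3 and 4 is visible directly through the REST API. A sketch, assuming a local Schema Registry on the default port 8081 with basic authentication (the credentials are placeholders):

```shell
# List subjects as Samantha; only subjects she is authorized to read are returned
curl -u samantha:<password> http://localhost:8081/subjects
# e.g. ["orders-value","transactions-value"]

# A DELETE on a subject she can only read is rejected with an authorization error
curl -u samantha:<password> -X DELETE http://localhost:8081/subjects/orders-value
```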
This Quick Start describes how to configure Schema Registry for Role-Based Access Control to manage user and application authorization to topics and subjects (schemas), including how to:
- configure schema-registry.properties
- use the Confluent CLI to create roles (role bindings)
The examples assume a local install of Schema Registry and shared RBAC and MDS configuration. Your production environment may differ (for example, Confluent Cloud or remote Schema Registry).
If you were to use a local Kafka, ZooKeeper, and bootstrap server, as might be the case for testing, these would also need to be authorized through RBAC, requiring additional prerequisite setup and credentials.
See also
To get started, try the automated RBAC example that showcases the RBAC functionality in Confluent Platform.
If you are new to Confluent Platform or to Schema Registry, consider first reading or working through these tutorials to get a baseline understanding of the platform, Schema Registry, and Role-Based Access Control across Confluent Platform.
To run a resource like Schema Registry in an RBAC environment you need a Schema Registry service principal (user account for the resource), credentials, and location of the Metadata Service (MDS) running RBAC. This enables you to configure Schema Registry properties to talk to the RBAC-enabled Kafka cluster, and grant various types of access to Schema Registry using the Confluent CLI.
Specifically, you need the following to get started.
In most cases, you will get this information from your Security administrator.
The next set of examples show how to connect a local Schema Registry to a remote Metadata Service (MDS) running RBAC. The schema-registry.properties file configurations reflect a remote Metadata Service (MDS) URL, location, and Kafka cluster ID. Also, the examples assume you are using credentials you got from your Security administrator for a pre-configured Schema Registry principal user (“service principal”), as mentioned in the prerequisites.
Note that in the examples below, backslashes (\) are used before carriage returns to show multi-line property values. These line breaks and backslashes may cause errors when used in the actual properties file. If so, remove the backslashes and join the lines so as not to break up a property value across multiple lines with returns. (This applies to long values such as metadataServerUrls or confluent.metadata.bootstrap.server.urls.)
Define these settings in CONFLUENT_HOME/etc/schema-registry/schema-registry.properties:
Configure Schema Registry authorization for communicating with the RBAC Kafka cluster.
The username and password are RBAC credentials for the Schema Registry service principal, and metadataServerUrls is the location of your RBAC Kafka cluster (for example, a URL to an ec2 server).
# Authorize Schema Registry to talk to Kafka (security protocol may also be SASL_SSL if using TLS/SSL)
kafkastore.security.protocol=SASL_PLAINTEXT
kafkastore.sasl.mechanism=OAUTHBEARER
kafkastore.sasl.login.callback.handler.class=io.confluent.kafka.clients.plugins.auth.token.TokenUserLoginCallbackHandler
kafkastore.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
username="<username>" \
password="<password>" \
metadataServerUrls="<https>://<metadata_server_url>:<port>";
Configure RBAC authorization, and bearer/basic authentication, for the Schema Registry resource.
These settings can be used as-is; JETTY_AUTH is the recommended authentication mechanism.
# These properties install the Schema Registry security plugin, and configure it to use RBAC for
# authorization and OAuth for authentication
resource.extension.class=io.confluent.kafka.schemaregistry.security.SchemaRegistrySecurityResourceExtension
confluent.schema.registry.authorizer.class=io.confluent.kafka.schemaregistry.security.authorizer.rbac.RbacAuthorizer
rest.servlet.initializor.classes=io.confluent.common.security.jetty.initializer.InstallBearerOrBasicSecurityHandler
confluent.schema.registry.auth.mechanism=JETTY_AUTH
- The above setting for resource.extension.class activates the security plugin.
- The above setting for confluent.schema.registry.auth.mechanism sets the authentication mechanism to Jetty, which is recommended for use with RBAC.
Tell Schema Registry how to communicate with the Kafka cluster running the Metadata Service (MDS) and how to authenticate requests using a public key.
- The value for confluent.metadata.bootstrap.server.urls can be the same as metadataServerUrls, depending on your environment.
- In this step, you need a public key file to use to verify requests with token-based authorization, as mentioned in the prerequisites.
# The location of the metadata service
confluent.metadata.bootstrap.server.urls=<https>://<metadata_server_url>:<port>
# Credentials to use with the MDS, these should usually match those used for talking to Kafka
confluent.metadata.basic.auth.user.info=<username>:<password>
confluent.metadata.http.auth.credentials.provider=BASIC
# The path to public keys that should be used to verify json web tokens during authentication
public.key.path=<public_key_file_path.pem>
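Optionally, before starting Schema Registry you can sanity-check that the file referenced by public.key.path parses as a PEM public key with openssl. This check is not part of the official setup and assumes an RSA key:

```shell
# Prints the key modulus if the PEM file is a readable RSA public key,
# or an error if the file is missing or malformed
openssl rsa -pubin -in <public_key_file_path.pem> -noout -modulus
```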
For additional configurations available to any client communicating with MDS, see also REST client configurations in the Confluent Platform Security documentation.
Specify the kafkastore.bootstrap.servers you want to use.
The default is a commented-out line for a local server. If you do not uncomment and change this line, the default will be used.
#kafkastore.bootstrap.servers=PLAINTEXT://localhost:9092
Uncomment this line and set it to the address of your bootstrap server. This may be different from the MDS server URL. The standard port for the Kafka bootstrap server is 9092.
kafkastore.bootstrap.servers=<rbac_kafka_bootstrap_server>:9092
(Optional, Legacy) Specify the kafkastore.connection.url you want to use to connect the Schema Registry Security Plugin for Confluent Platform to ZooKeeper.
The default is shown on the commented-out line for a local ZooKeeper. If you do not uncomment and change this line, the default will be used.
#kafkastore.connection.url=localhost:2181
Uncomment this line and set it to the address of your ZooKeeper server. The standard port for the ZooKeeper server is 2181.
kafkastore.connection.url=<zookeeper_host>:2181
kafkastore.connection.url is deprecated. It was previously needed in older versions (5.4.x and earlier) when the Schema Registry Security Plugin was installed and configured to use ACLs.
Starting with Confluent Platform 5.5.0, this is no longer the case, given the Schema Registry ACL Authorizer for Confluent Platform. If you do not have the ACL Authorizer, upgrade to a Confluent Platform version that has it.
To upgrade to Kafka leader election, see Migration from ZooKeeper primary election to Kafka primary election.
(Optional) Specify a custom schema.registry.group.id (to serve as the Schema Registry cluster ID) which is different from the default, schema-registry.
In the example, schema.registry.group.id is set to “schema-registry-cool-cluster”.
# Schema Registry group ID, which is the cluster ID
# The default Schema Registry cluster ID is schema-registry
schema.registry.group.id=schema-registry-cool-cluster
The Schema Registry cluster ID is the same as schema.registry.group.id, which defaults to schema-registry. This is used to specify the target resource in rolebinding commands on the Confluent CLI. You might need to specify a custom cluster ID to differentiate your Schema Registry from others in the organization so as to avoid overwriting roles and users in multiple registries.
(Optional) Specify a custom name for the Schema Registry default topic. (The default is _schemas.)
In the example, kafkastore.topic is set to _jax-schemas-topic.
# The name of the topic to store schemas in
# The default schemas topic is _schemas
kafkastore.topic=_jax-schemas-topic
- Schema Registry uses an internal topic to hold schemas. The default name for this topic is _schemas. You might need to specify a custom name for the schemas internal topic to differentiate it from others in the organization and avoid overwriting data.
- A leading underscore is not required in the name; this is a convention used to indicate an internal topic.
(Optional) Enable anonymous access for requests that occur without authentication.
Any requests that occur without authentication are automatically granted the principal User:ANONYMOUS.
# This enables anonymous access with a principal of User:ANONYMOUS
confluent.schema.registry.anonymous.principal=true
authentication.skip.paths=/*
If you get the following error about not having authorization when you run the curl command to list subjects as described in Start Schema Registry and test it, you can enable anonymous requests to bypass authentication temporarily while you troubleshoot credentials.
curl localhost:8081/subjects
<meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>
<title>Error 401 Unauthorized</title>
</head>
<body><h2>HTTP ERROR 401</h2>
<p>Problem accessing /subjects. Reason:
<pre> Unauthorized</pre></p><hr><a href="https://eclipse.org/jetty">Powered by Jetty:// 9.4.18.v20190429</a><hr/>
</body>
</html>
- For the above curl command to be successful, you can configure rolebindings or ACLs for User:ANONYMOUS.
- This setting bypasses the requirement to present valid credentials with a REST request, but not the authorization that is then performed on that request to ensure that the user (or, if no credentials are provided, User:ANONYMOUS) has the proper roles or ACLs to perform that action.
Get the Kafka cluster ID for the MDS server you plan to use¶
You will need this in order to specify the Kafka cluster to use in rolebinding commands on the Confluent CLI.
- To get the Kafka cluster ID for a local host:
bin/zookeeper-shell localhost:2181 get /cluster/id
- To get the Kafka cluster ID on a remote host:
zookeeper-shell <host>:<port> get /cluster/id
For example, the output of this command shows the Kafka cluster ID my-kafka-cluster-ID:
zookeeper-shell <metadata_server_url>:2181 get /cluster/id
Your output should resemble:
Connecting to <metadata_server_url>:2181
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
{"version":"1","id":"my-kafka-cluster-ID"}
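If you need just the bare ID for scripting (for example, to pass as --kafka-cluster in later commands), the JSON on the last line can be trimmed with sed. A small sketch, run here against the sample output above:

```shell
# Extract the cluster ID value from the zookeeper-shell JSON line
echo '{"version":"1","id":"my-kafka-cluster-ID"}' \
  | sed 's/.*"id":"\([^"]*\)".*/\1/'
# prints: my-kafka-cluster-ID
```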
Grant roles for the Schema Registry service principal¶
In these steps, you use the Confluent CLI to log on to MDS and create the Schema Registry
service principal . After you have these roles set up, you can use the Confluent CLI to
manage Schema Registry users. For this example, assume the commands use the MDS server
credentials, URLs, and property values you set up on your local Schema Registry properties file.
(Optionally, you can use a registered cluster name in your role bindings.)
Log on to MDS.
confluent login --url <https>://<metadata_server_url>:<port>
As a prerequisite to granting additional access, grant permission to create the topic _schema_encoders, which serves as the metadata.encoder.topic as described in Schema Registry Configuration Reference for Confluent Platform.
confluent iam rbac role-binding create \
--principal User:<sr-user-id> \
--role ResourceOwner \
--resource Topic:_schema_encoders \
--kafka-cluster <kafka-cluster-id>
For example:
confluent iam rbac role-binding create \
--principal User:jack-sr \
--role ResourceOwner \
--resource Topic:_schema_encoders \
--kafka-cluster my-kafka-cluster-ID
Grant the user the role SecurityAdmin on the Schema Registry cluster.
confluent iam rbac role-binding create \
--role SecurityAdmin \
--principal User:<service-account-id> \
--kafka-cluster <kafka-cluster-id> \
--schema-registry-cluster-id <schema-registry-group-id>
Use the command confluent iam rbac role-binding list <flags> to view the role you just created.
confluent iam rbac role-binding list \
--principal User:<service-account-id> \
--kafka-cluster <kafka-cluster-id> \
--schema-registry-cluster-id <schema-registry-group-id>
For example, here is a listing for a user “jack-sr” granted the SecurityAdmin role on “schema-registry-cool-cluster”, connecting to MDS through a Kafka cluster my-kafka-cluster-ID:
confluent iam rbac role-binding list \
--principal User:jack-sr \
--kafka-cluster my-kafka-cluster-ID \
--schema-registry-cluster-id schema-registry-cool-cluster
Role | ResourceType | Name | PatternType
+---------------+--------------+------+-------------+
SecurityAdmin | Cluster | |
Grant the user the role ResourceOwner on the group that Schema Registry nodes use to coordinate across the cluster.
confluent iam rbac role-binding create \
--principal User:<sr-user-id> \
--role ResourceOwner \
--resource Group:<schema-registry-group-id> \
--kafka-cluster <kafka-cluster-id>
For example:
confluent iam rbac role-binding create \
--principal User:jack-sr \
--role ResourceOwner \
--resource Group:schema-registry-cool-cluster \
--kafka-cluster my-kafka-cluster-ID
Grant the user the role ResourceOwner on the Kafka topic that Schema Registry uses to store its schemas.
confluent iam rbac role-binding create \
--principal User:<sr-user-id> \
--role ResourceOwner \
--resource Topic:<schemas-topic> \
--kafka-cluster <kafka-cluster-id>
For example:
confluent iam rbac role-binding create \
--principal User:jack-sr \
--role ResourceOwner \
--resource Topic:_jax-schemas-topic \
--kafka-cluster my-kafka-cluster-ID
Use the command confluent iam rbac role-binding list <flags> to view the role you just created.
confluent iam rbac role-binding list \
--principal User:<sr-user-id> \
--role ResourceOwner \
--kafka-cluster <kafka-cluster-id>
For example:
confluent iam rbac role-binding list \
--principal User:jack-sr \
--role ResourceOwner \
--kafka-cluster my-kafka-cluster-ID
Role | ResourceType | Name | PatternType
+-------------+--------------+----------------------------------+-------------+
ResourceOwner | Topic | _jax-schemas-topic | LITERAL
ResourceOwner | Topic | _schema_encoders | LITERAL
ResourceOwner | Group | schema-registry-cool-cluster | LITERAL
ResourceOwner | Topic | _schemas | LITERAL
ResourceOwner | Group | schema-registry | LITERAL
Client authentication and authorization¶
- Configure license client authentication
When using principal propagation and the following security types, you must
configure client authentication for the license topic. For more information,
see the following documentation:
- SASL OAUTHBEARER (RBAC) client authentication
- SASL PLAIN client authentication
- SASL SCRAM client authentication
- mTLS client authentication
- Configure license client authorization
When using principal propagation and RBAC or ACLs, you must configure client
authorization for the license topic.
Starting with Confluent Platform 6.2.1, the _confluent-command internal topic is available as the preferred alternative to the _confluent-license topic for components such as Schema Registry, REST Proxy, and Confluent Server (which were previously using _confluent-license). Both topics will be supported going forward. Here are some guidelines:
- New deployments (Confluent Platform 6.2.1 and later) will default to using _confluent-command as shown below.
- Existing clusters will continue using the _confluent-license topic unless manually changed.
- Newly created clusters on Confluent Platform 6.2.1 and later will default to creating the _confluent-command topic, and only existing clusters that already have a _confluent-license topic will continue to use it.
RBAC authorization
Run this command to add ResourceOwner for the component user on the Confluent license topic resource (default name is _confluent-command).
confluent iam rbac role-binding create \
--role ResourceOwner \
--principal User:<service-account-id> \
--resource Topic:_confluent-command \
--kafka-cluster <kafka-cluster-id>
ACL authorization
Run this command to configure Kafka authorization, specifying the bootstrap server, client configuration, and service account ID. This grants create, read, and write on the _confluent-command topic.
kafka-acls --bootstrap-server <broker-listener> --command-config <client conf> \
--add --allow-principal User:<service-account-id> --operation Create --operation Read --operation Write \
--topic _confluent-command
(Optional) Use a registered cluster name¶
Starting in Confluent Platform 6.0, you can register your Schema Registry Kafka cluster in the cluster registry and specify a user-friendly cluster name, which makes it easier to create role bindings. In all of the example commands to Grant roles for the Schema Registry service principal, you can use the registered cluster name instead of <schema-registry-group-id> and <kafka-cluster-id>.
For example, the role binding command for a non-registered cluster must include both the Schema Registry group ID and cluster ID:
confluent iam rbac role-binding create \
--principal User:<sr-user-id> \
--role ResourceOwner \
--resource Group:<schema-registry-group-id> \
--kafka-cluster <kafka-cluster-id>
Assuming your Schema Registry cluster has been registered in the Confluent Platform cluster registry, you can replace <schema-registry-group-id> and <kafka-cluster-id> with the user-friendly name of the registered cluster:
confluent iam rbac role-binding create \
--principal User:<sr-user-id> \
--role ResourceOwner \
--cluster-name <registered-cluster-name>
Start Schema Registry and test it¶
To keep a clean slate, make sure your local ZooKeeper and Kafka servers are shut down. Remember, for this example you are running against a remote cluster, so you only have to start Schema Registry.
Open a command window, change directories into your local install of Confluent Platform, and run the command to start Schema Registry.
./bin/schema-registry-start ./etc/schema-registry/schema-registry.properties
Run the following command to view subjects.
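For example, a basic-auth request against the local default port (the credentials here are placeholders; use the values from your security administrator):

```shell
# List subjects with the service principal's credentials
curl -u <sr-user>:<password> http://localhost:8081/subjects
```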