
kafdrop

An intuitive Apache Kafka cluster management tool

Kafdrop is an open-source Apache Kafka management tool that provides an intuitive web interface for viewing cluster information. It supports browsing topics, partitions, consumer groups, and message contents, and handles JSON, plain-text, Avro, and Protobuf formats. Kafdrop can also create new topics, view ACLs, and supports Azure Event Hubs. Simple to configure and supporting SASL- and TLS-secured connections, it is well suited to day-to-day management and monitoring of Kafka clusters.

Kafdrop – Kafka Web UI


Kafdrop is a web UI for viewing Kafka topics and browsing consumer groups. The tool displays information such as brokers, topics, partitions, consumers, and lets you view messages.

Overview Screenshot

This project is a reboot of Kafdrop 2.x, dragged kicking and screaming into the world of Java 17+, Kafka 2.x, Helm and Kubernetes. It's a lightweight application that runs on Spring Boot and is dead-easy to configure, supporting SASL and TLS-secured brokers.

Features

  • View Kafka brokers — topic and partition assignments, and controller status
  • View topics — partition count, replication status, and custom configuration
  • Browse messages — JSON, plain text, Avro and Protobuf encoding
  • View consumer groups — per-partition parked offsets, combined and per-partition lag
  • Create new topics
  • View ACLs
  • Support for Azure Event Hubs

Requirements

  • Java 17 or newer
  • Kafka (version 0.11.0 or newer) or Azure Event Hubs

Optional, additional integration:

  • Schema Registry

Getting Started

You can run the Kafdrop JAR directly, via Docker, or in Kubernetes.

Running from JAR

java --add-opens=java.base/sun.nio.ch=ALL-UNNAMED \
    -jar target/kafdrop-<version>.jar \
    --kafka.brokerConnect=<host:port,host:port>,...

If unspecified, kafka.brokerConnect defaults to localhost:9092.

Note: As of Kafdrop 3.10.0, a ZooKeeper connection is no longer required. All necessary cluster information is retrieved via the Kafka admin API.

Open a browser and navigate to http://localhost:9000. The port can be overridden by adding the following config:

--server.port=<port> --management.server.port=<port>

Optionally, configure a schema registry connection with:

--schemaregistry.connect=http://localhost:8081

If you also require basic auth for your schema registry connection, add:

--schemaregistry.auth=username:password

Finally, a default message and key format (e.g. to deserialize Avro messages or keys) can optionally be configured as follows:

--message.format=AVRO
--message.keyFormat=DEFAULT

Valid format values are DEFAULT, AVRO and PROTOBUF. The format can also be configured at the topic level via a dropdown when viewing messages. If the key format is unspecified, the message format is used for keys as well.
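
For example, the options above can be combined into a single launch command. A minimal sketch, assuming a local broker and schema registry (the addresses are placeholders):

java --add-opens=java.base/sun.nio.ch=ALL-UNNAMED \
    -jar target/kafdrop-<version>.jar \
    --kafka.brokerConnect=localhost:9092 \
    --schemaregistry.connect=http://localhost:8081 \
    --message.format=AVRO \
    --message.keyFormat=DEFAULT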

Configure Protobuf message type

Option 1: Using Protobuf Descriptor

For Protobuf messages, the message definition can be compiled into a descriptor file. For Kafdrop to decode such messages, the application needs access to the descriptor file(s). At runtime, Kafdrop lets you select a descriptor as well as specify the name of one of the message types it provides.

To configure a folder containing protobuf descriptor file(s) (.desc), use:

--protobufdesc.directory=/var/protobuf_desc
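
Descriptor files can be generated from .proto sources with protoc. A sketch, assuming a hypothetical my_messages.proto (the file name and output path are placeholders):

protoc --include_imports \
    --descriptor_set_out=/var/protobuf_desc/my_messages.desc \
    my_messages.proto

The --include_imports flag bundles imported definitions into the descriptor, so the full message type can be resolved.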

Option 2: Using Schema Registry

If no protobuf descriptor file is supplied, Kafdrop will attempt to create the protobuf deserializer using the Schema Registry instead.

Defaulting to Protobuf

If preferred, the default message format can be set to Protobuf as follows:

--message.format=PROTOBUF

Running with Docker

Images are hosted at hub.docker.com/r/obsidiandynamics/kafdrop.

Launch container in background:

docker run -d --rm -p 9000:9000 \
    -e KAFKA_BROKERCONNECT=<host:port,host:port> \
    -e SERVER_SERVLET_CONTEXTPATH="/" \
    obsidiandynamics/kafdrop

Launch container with some specific JVM options:

docker run -d --rm -p 9000:9000 \
    -e KAFKA_BROKERCONNECT=<host:port,host:port> \
    -e JVM_OPTS="-Xms32M -Xmx64M" \
    -e SERVER_SERVLET_CONTEXTPATH="/" \
    obsidiandynamics/kafdrop

Launch the container in the background with protobuf definitions:

docker run -d --rm -v <path_to_protobuf_descriptor_files>:/var/protobuf_desc -p 9000:9000 \
    -e KAFKA_BROKERCONNECT=<host:port,host:port> \
    -e SERVER_SERVLET_CONTEXTPATH="/" \
    -e CMD_ARGS="--message.format=PROTOBUF --protobufdesc.directory=/var/protobuf_desc" \
    obsidiandynamics/kafdrop

Then access the web UI at http://localhost:9000.

Hey there! We hope you really like Kafdrop! Please take a moment to star the repo or tweet about it.

Running in Kubernetes (using a Helm Chart)

Clone the repository (if necessary):

git clone https://github.com/obsidiandynamics/kafdrop && cd kafdrop

Apply the chart:

helm upgrade -i kafdrop chart --set image.tag=3.x.x \
    --set kafka.brokerConnect=<host:port,host:port> \
    --set server.servlet.contextPath="/" \
    --set jvm.opts="-Xms32M -Xmx64M" \
    --set cmdArgs="--message.format=AVRO --schemaregistry.connect=http://localhost:8080"

The cmdArgs setting on the last line is optional.

For all Helm configuration options, have a peek into chart/values.yaml.

Replace 3.x.x with the image tag of obsidiandynamics/kafdrop. Services will be bound on port 9000 by default (node port 30900).

Note: The context path must begin with a slash.

Proxy to the Kubernetes cluster:

kubectl proxy

Navigate to http://localhost:8001/api/v1/namespaces/default/services/http:kafdrop:9000/proxy.

Protobuf support via helm chart:

To install with protobuf support, the chart provides a convenience option, mountProtoDesc, that mounts the descriptor files folder into the deployment and passes the required CMD arguments. Example:

helm upgrade -i kafdrop chart --set image.tag=3.x.x \
    --set kafka.brokerConnect=<host:port,host:port> \
    --set server.servlet.contextPath="/" \
    --set mountProtoDesc.enabled=true \
    --set mountProtoDesc.hostPath="<path/to/desc/folder>" \
    --set jvm.opts="-Xms32M -Xmx64M"

Building

After cloning the repository, building is just a matter of running a standard Maven build:

mvn clean package

The following command will generate a Docker image:

mvn assembly:single docker:build

Docker Compose

There is a docker-compose.yaml file that bundles a Kafka/ZooKeeper instance with Kafdrop:

cd docker-compose/kafka-kafdrop
docker-compose up

APIs

JSON endpoints

Starting with version 2.0.0, Kafdrop offers a set of Kafka APIs that mirror the existing HTML views. Any existing endpoint can be returned as JSON by simply setting the Accept: application/json header. Some endpoints are JSON only:

  • /topic: Returns a list of all topics.
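
For example, to list all topics as JSON from a local instance (assuming the default port and context path):

curl -H "Accept: application/json" http://localhost:9000/topic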

OpenAPI Specification (OAS)

To help document the Kafka APIs, OpenAPI Specification (OAS) has been included. The OpenAPI Specification output is available by default at the following Kafdrop URL:

/v3/api-docs
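
For instance, the OAS document can be fetched from a local instance (assuming the default port and context path):

curl http://localhost:9000/v3/api-docs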

It is also possible to access the Swagger UI (the HTML views) from the following URL:

/swagger-ui.html

This can be overridden with the following configuration:

springdoc.api-docs.path=/new/oas/path

You can disable OpenAPI Specification output with the following configuration:

springdoc.api-docs.enabled=false

CORS Headers

Starting in version 2.0.0, Kafdrop sets CORS headers for all endpoints. You can control the CORS header values with the following configurations:

cors.allowOrigins (default is *)
cors.allowMethods (default is GET,POST,PUT,DELETE)
cors.maxAge (default is 3600)
cors.allowCredentials (default is true)
cors.allowHeaders (default is Origin,Accept,X-Requested-With,Content-Type,Access-Control-Request-Method,Access-Control-Request-Headers,Authorization)

You can also disable CORS entirely with the following configuration:

cors.enabled=false
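
Like the other options, these can be supplied as Spring-style command-line arguments. A sketch restricting CORS to a single origin (the origin URL is a placeholder):

java --add-opens=java.base/sun.nio.ch=ALL-UNNAMED \
    -jar target/kafdrop-<version>.jar \
    --kafka.brokerConnect=localhost:9092 \
    --cors.allowOrigins=https://example.com \
    --cors.allowMethods=GET,POST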

Topic Configuration

By default, topics can be deleted from the UI. To disable this feature, use:

--topic.deleteEnabled=false

By default, topics can be created from the UI. To disable this feature, use:

--topic.createEnabled=false
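
In Docker, the same flags can be passed via CMD_ARGS. A sketch disabling both features:

docker run -d --rm -p 9000:9000 \
    -e KAFKA_BROKERCONNECT=<host:port,host:port> \
    -e CMD_ARGS="--topic.createEnabled=false --topic.deleteEnabled=false" \
    obsidiandynamics/kafdrop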

Actuator

Health and info endpoints are available at the following path: /actuator

This can be overridden with the following configuration:

management.endpoints.web.base-path=<path>
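
For example, to query the health endpoint of a local instance with the default base path:

curl http://localhost:9000/actuator/health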

Guides

Connecting to a Secure Broker

Kafdrop supports TLS (SSL) and SASL connections for encryption and authentication. This can be configured by providing a combination of the following files (placed into the Kafka root directory):

  • kafka.truststore.jks: specifying the certificate for authenticating brokers, if TLS is enabled.
  • kafka.keystore.jks: specifying the private key to authenticate the client to the broker, if mutual TLS authentication is required.
  • kafka.properties: specifying the necessary configuration, including key/truststore passwords, cipher suites, enabled TLS protocol versions, username/password pairs, etc. When supplying the truststore and/or keystore files, the ssl.truststore.location and ssl.keystore.location properties will be assigned automatically.
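
As an illustration, a minimal kafka.properties sketch for one-way TLS (the truststore password is a placeholder; per the note above, ssl.truststore.location is assigned automatically):

security.protocol=SSL
ssl.truststore.password=changeit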

Using Docker

The three files above can be supplied to a Docker instance in base-64-encoded form via environment variables:

docker run -d --rm -p 9000:9000 \
    -e KAFKA_BROKERCONNECT=<host:port,host:port> \
    -e KAFKA_PROPERTIES="$(cat kafka.properties | base64)" \
    -e KAFKA_TRUSTSTORE="$(cat kafka.truststore.jks | base64)" \
    -e KAFKA_KEYSTORE="$(cat kafka.keystore.jks | base64)" \
    obsidiandynamics/kafdrop

KAFKA_TRUSTSTORE and KAFKA_KEYSTORE are optional.

Rather than passing KAFKA_PROPERTIES as a base64-encoded string, you can also place a pre-populated KAFKA_PROPERTIES_FILE into the container:

cat << EOF > kafka.properties
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="foo" password="bar";
EOF

docker run -d --rm -p 9000:9000 \
    -v $(pwd)/kafka.properties:/tmp/kafka.properties:ro \
    -v $(pwd)/kafka.truststore.jks:/tmp/kafka.truststore.jks:ro \
    -v $(pwd)/kafka.keystore.jks:/tmp/kafka.keystore.jks:ro \
    -e KAFKA_BROKERCONNECT=<host:port,host:port> \
    -e KAFKA_PROPERTIES_FILE=/tmp/kafka.properties \
    -e KAFKA_TRUSTSTORE_FILE=/tmp/kafka.truststore.jks \
    -e KAFKA_KEYSTORE_FILE=/tmp/kafka.keystore.jks \
    obsidiandynamics/kafdrop

KAFKA_TRUSTSTORE_FILE and KAFKA_KEYSTORE_FILE (and their corresponding volume mounts) are optional.

Sometimes extra classes need to be loaded, e.g. a SASL client callback handler. To facilitate this, it is possible to mount a folder with extra JARs, like this:

cat << EOF > kafka.properties
security.protocol=SASL_SSL
sasl.mechanism=AWS_MSK_IAM
sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
EOF

mkdir extra-kafdrop-classes
wget --directory-prefix=extra-kafdrop-classes https://repo1.maven.org/maven2/software/amazon/msk/aws-msk-iam-auth/1.0.0/aws-msk-iam-auth-1.0.0.jar

docker run -d --rm -p 9000:9000 \
    -v $(pwd)/kafka.properties:/tmp/kafka.properties:ro \
    -v $(pwd)/kafka.truststore.jks:/tmp/kafka.truststore.jks:ro \
    -v $(pwd)/kafka.keystore.jks:/tmp/kafka.keystore.jks:ro \
    -v $(pwd)/extra-kafdrop-classes:/extra-classes:ro \
    -e KAFKA_BROKERCONNECT=<host:port,host:port> \
    -e KAFKA_PROPERTIES_FILE=/tmp/kafka.properties \
    -e KAFKA_TRUSTSTORE_FILE=/tmp/kafka.truststore.jks \
    -e KAFKA_KEYSTORE_FILE=/tmp/kafka.keystore.jks \
    obsidiandynamics/kafdrop

Again, KAFKA_TRUSTSTORE_FILE and KAFKA_KEYSTORE_FILE (and their mounts) are optional.

Environment Variables

Basic configuration

  • KAFKA_BROKERCONNECT: Bootstrap list of Kafka host/port pairs. Defaults to localhost:9092.
  • KAFKA_PROPERTIES: Additional properties to configure the broker connection (base-64 encoded).
  • KAFKA_TRUSTSTORE: Certificate for broker authentication (base-64 encoded). Required for TLS/SSL.
  • KAFKA_KEYSTORE: Private key for mutual TLS authentication (base-64 encoded).
  • SERVER_SERVLET_CONTEXTPATH: The context path to serve requests on (must end with a /). Defaults to /.
  • SERVER_PORT: The web server port to listen on. Defaults to 9000.
  • MANAGEMENT_SERVER_PORT: The Spring Actuator server port to listen on. Defaults to 9000.
  • SCHEMAREGISTRY_CONNECT: The endpoint of the Schema Registry for Avro or Protobuf messages.
  • SCHEMAREGISTRY_AUTH: Optional basic auth credentials in the form username:password.
  • CMD_ARGS: Command line arguments to Kafdrop, e.g. --message.format or --protobufdesc.directory or --server.port.

Advanced configuration

  • JVM_OPTS: JVM options, e.g. JVM_OPTS="-Xms16M -Xmx64M -Xss360K -XX:-TieredCompilation -XX:+UseStringDeduplication -noverify".
  • JMX_PORT: Port to use for JMX. No default; if unspecified, JMX will not be exposed.
  • HOST: The hostname to report for the RMI registry (used for JMX). Defaults to localhost.
  • KAFKA_PROPERTIES_FILE: Internal location where the Kafka properties file will be written to (if KAFKA_PROPERTIES is set). Defaults to kafka.properties.
  • KAFKA_TRUSTSTORE_FILE: Internal location where the truststore file will be written to (if KAFKA_TRUSTSTORE is set). Defaults to kafka.truststore.jks.
  • KAFKA_KEYSTORE_FILE: Internal location where the keystore file will be written to (if KAFKA_KEYSTORE is set). Defaults to kafka.keystore.jks.