
I'm getting the following exception while starting a Kafka consumer:

org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Offsets out of range with no configured reset policy for partitions{test-0=29898318}

Kafka version: 0.9.0.0, Java 7

So you are trying to access offset 29898318 in partition 0 of topic test, and that offset is not available right now.

There are two possible cases:

  • Partition 0 of your topic may never have had that many messages
  • The message at offset 29898318 might already have been deleted by the retention period

To avoid this, you can do one of the following:

  • Set the auto.offset.reset config to either earliest or latest; a consumer config sketch is shown after the command below. You can find more info regarding this in the Kafka consumer configuration documentation
  • Get the smallest offset available for a topic partition by running the following Kafka command-line tool:

    bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list <broker-ip:9092> --topic <topic-name> --time -2
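
    For the first option, here is a minimal sketch of setting auto.offset.reset on the Java consumer. The broker address, group id, and topic name are placeholders, not values from the question:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ResetPolicyExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker-ip:9092"); // placeholder broker address
            props.put("group.id", "my-consumer-group");       // placeholder group id
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            // "earliest": jump to the smallest available offset instead of throwing
            // OffsetOutOfRangeException; "latest": jump to the end of the log.
            props.put("auto.offset.reset", "earliest");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("test"));
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }

    In newer client versions (0.10.1 and later) you can also fetch the smallest available offsets programmatically with KafkaConsumer#beginningOffsets, which returns the same information as the GetOffsetShell command above.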
    

    Hope this helps!

    Thanks buddy, I tried this but it doesn't work. auto.offset.reset should be latest, earliest or none – basit raza May 23, 2016 at 7:17

    Was getting this error while auto.offset.reset=latest. Had to configure a new group.id to clean Kafka's offset state; then the consumer started working. – CᴴᴀZ Nov 25, 2019 at 9:30

    @BdEngineer retention_period has no relation to group.id. Setting a new group.id refreshes the metadata (at the broker) for the consumer group. Since this is an edge case, there is no permanent (configurable?) solution. – CᴴᴀZ Apr 16, 2020 at 3:09

    I hit this SO question when running a Kafka Streams state store whose changelog topic had a specific config:

  • cleanup.policy=compact,delete
  • retention of 4 days

    If Kafka Streams still has a snapshot file pointing to an offset that doesn't exist anymore, the restore consumer is configured to fail; it does not fall back to the earliest offset. This scenario can happen when very little data comes in or when the application is down: in both cases, when there's no commit within the changelog retention period, the snapshot file won't be updated. (This is on a per-partition basis.)

    The easiest way to resolve this issue is to stop your Kafka Streams application, remove its local state directory, and restart the application.
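
    A programmatic alternative is KafkaStreams#cleanUp(), which deletes the instance's local state directory and can only be called before start() or after close(). A minimal sketch, using the 1.0-era Streams API; the application id, bootstrap servers, and the trivial topology are placeholder assumptions, not from the original answer:

    import java.util.Properties;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;

    public class StateCleanupExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");     // placeholder
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-ip:9092");  // placeholder

            StreamsBuilder builder = new StreamsBuilder();
            builder.stream("test"); // placeholder topology: consume one topic, do nothing

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            // cleanUp() removes this instance's local state directory (state.dir),
            // so the next start() restores the state stores from the changelog topics.
            streams.cleanUp();
            streams.start();
        }
    }

    Note that cleanUp() throws IllegalStateException if the application is running, hence stopping it first as described above.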

    Check the state.dir config setting of the Kafka Streams application; kafka.apache.org/10/documentation/streams/developer-guide/… – Tim Van Laer Oct 30, 2019 at 17:29
