Note: If you are willing to accept downtime, you can simply take all the brokers down, update the code and start all of them. They will start with the new protocol by default.

Note: Bumping the protocol version and restarting can be done any time after the brokers were upgraded. It does not have to be immediately after.

Potential breaking changes in 0.9.0.0

Java 1.6 is no longer supported.
Scala 2.9 is no longer supported.
Broker IDs above 1000 are now reserved by default for automatically assigned broker IDs. If your cluster has existing broker IDs above that threshold, make sure to increase the reserved.broker.max.id broker configuration property accordingly.
Configuration parameter replica.lag.max.messages was removed. Partition leaders will no longer consider the number of lagging messages when deciding which replicas are in sync.
Configuration parameter replica.lag.time.max.ms now refers not just to the time passed since last fetch request from replica, but also to time since the replica last caught up. Replicas that are still fetching messages from leaders but did not catch up to the latest messages in replica.lag.time.max.ms will be considered out of sync.
Compacted topics no longer accept messages without a key, and the producer throws an exception if this is attempted (see the example after this list). In 0.8.x, a message without a key would cause the log compaction thread to subsequently complain and quit (and stop compacting all compacted topics).
MirrorMaker no longer supports multiple target clusters. As a result it will only accept a single --consumer.config parameter. To mirror multiple source clusters, you will need at least one MirrorMaker instance per source cluster, each with its own consumer configuration.
Tools packaged under org.apache.kafka.clients.tools.* have been moved to org.apache.kafka.tools.*. All included scripts will still function as usual, only custom code directly importing these classes will be affected.
The default Kafka JVM performance options (KAFKA_JVM_PERFORMANCE_OPTS) have been changed in kafka-run-class.sh.
The kafka-topics.sh script (kafka.admin.TopicCommand) now exits with non-zero exit code on failure.
The kafka-topics.sh script (kafka.admin.TopicCommand) will now print a warning when topic names risk metric collisions due to the use of a '.' or '_' in the topic name, and error in the case of an actual collision.
The kafka-console-producer.sh script (kafka.tools.ConsoleProducer) will use the Java producer instead of the old Scala producer by default, and users have to specify 'old-producer' to use the old producer.
By default, all command line tools will print all logging messages to stderr instead of stdout.
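To illustrate the compacted-topics change listed above, here is a minimal sketch of sending a keyed record with the Java producer. The bootstrap address, topic name, key, and value are hypothetical placeholders, and the topic is assumed to have cleanup.policy=compact.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class CompactedTopicProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder address
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Records sent to a compacted topic must carry a non-null key;
                // "user-profiles" is a hypothetical topic with cleanup.policy=compact.
                producer.send(new ProducerRecord<>("user-profiles", "user-42", "jane.doe@example.com"));
                producer.flush();
            }
        }
    }

Sending the same record with a null key would now be rejected outright rather than silently breaking the log cleaner as in 0.8.x.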
Notable changes in 0.9.0.1

The new broker id generation feature can be disabled by setting broker.id.generation.enable to false.
Configuration parameter log.cleaner.enable is now true by default. This means topics with a cleanup.policy=compact will now be compacted by default, and 128 MB of heap will be allocated to the cleaner process via log.cleaner.dedupe.buffer.size. You may want to review log.cleaner.dedupe.buffer.size and the other log.cleaner configuration values based on your usage of compacted topics.
The default value of the configuration parameter fetch.min.bytes for the new consumer is now 1.
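As a minimal, hypothetical sketch of the setting above (bootstrap address, group id, and topic name are placeholders), fetch.min.bytes can be raised explicitly if you want the broker to accumulate more data before answering a fetch:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class MinBytesConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            props.put("group.id", "example-group");           // hypothetical group
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());
            // Override the default of 1 so the broker waits for at least 64 KB of data
            // (or until fetch.max.wait.ms elapses) before answering a fetch request.
            props.put("fetch.min.bytes", "65536");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("example-topic")); // hypothetical topic
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                records.forEach(r -> System.out.printf("%s -> %s%n", r.key(), r.value()));
            }
        }
    }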
Deprecations in 0.9.0.0

Altering topic configuration from the kafka-topics.sh script (kafka.admin.TopicCommand) has been deprecated. Going forward, please use the kafka-configs.sh script (kafka.admin.ConfigCommand) for this functionality.
The kafka-consumer-offset-checker.sh (kafka.tools.ConsumerOffsetChecker) has been deprecated. Going forward, please use kafka-consumer-groups.sh (kafka.admin.ConsumerGroupCommand) for this functionality.
The kafka.tools.ProducerPerformance class has been deprecated. Going forward, please use org.apache.kafka.tools.ProducerPerformance for this functionality (kafka-producer-perf-test.sh will also be changed to use the new class).
The producer config block.on.buffer.full has been deprecated and will be removed in a future release. Its default value has been changed to false. The KafkaProducer will no longer throw BufferExhaustedException; instead it will block for up to max.block.ms, after which it will throw a TimeoutException. If the block.on.buffer.full property is set to true explicitly, max.block.ms will be set to Long.MAX_VALUE and metadata.fetch.timeout.ms will not be honoured.
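A minimal sketch of the replacement behavior, assuming a placeholder bootstrap address and topic: bound blocking with max.block.ms and handle the TimeoutException that the producer may throw instead of BufferExhaustedException.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.errors.TimeoutException;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class BoundedBlockingProducer {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());
            // Instead of the deprecated block.on.buffer.full, bound how long send()
            // may block when the buffer is full or metadata is unavailable.
            props.put("max.block.ms", "5000");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                try {
                    producer.send(new ProducerRecord<>("example-topic", "key", "value")).get();
                } catch (TimeoutException e) {
                    // Thrown if the 5 second bound is exceeded before the record can be buffered.
                    System.err.println("send blocked longer than max.block.ms: " + e.getMessage());
                }
            }
        }
    }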
Upgrading from 0.8.1 to 0.8.2

0.8.2 is fully compatible with 0.8.1. The upgrade can be done one broker at a time by simply bringing it down, updating the code, and restarting it.

Upgrading from 0.8.0 to 0.8.1

0.8.1 is fully compatible with 0.8. The upgrade can be done one broker at a time by simply bringing it down, updating the code, and restarting it.

Upgrading from 0.7

Release 0.7 is incompatible with newer releases. Major changes were made to the API, ZooKeeper data structures, protocol, and configuration in order to add replication (which was missing in 0.7). The upgrade from 0.7 to later versions requires a special tool for migration. This migration can be done without downtime.

2. APIs

Kafka includes five core APIs:

The Producer API allows applications to send streams of data to topics in the Kafka cluster.
The Consumer API allows applications to read streams of data from topics in the Kafka cluster.
The Streams API allows transforming streams of data from input topics to output topics.
The Connect API allows implementing connectors that continually pull from some source system or application into Kafka or push from Kafka into some sink system or application.
The AdminClient API allows managing and inspecting topics, brokers, and other Kafka objects.

Kafka exposes all its functionality over a language-independent protocol which has clients available in many programming languages. However, only the Java clients are maintained as part of the main Kafka project; the others are available as independent open source projects. A list of non-Java clients is available here.

2.1 Producer API

The Producer API allows applications to send streams of data to topics in the Kafka cluster.

Examples showing how to use the producer are given in the javadocs.

To use the producer, you can use the following Maven dependency:

    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>fullDotVersion</version>
    </dependency>

2.2 Consumer API

The Consumer API allows applications to read streams of data from topics in the Kafka cluster.

Examples showing how to use the consumer are given in the javadocs.

To use the consumer, you can use the following Maven dependency:

    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>fullDotVersion</version>
    </dependency>

2.3 Streams API

The Streams API allows transforming streams of data from input topics to output topics.

Examples showing how to use this library are given in the javadocs.

Additional documentation on using the Streams API is available here.

To use Kafka Streams you can use the following Maven dependency:

    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-streams</artifactId>
        <version>fullDotVersion</version>
    </dependency>

When using Scala you may optionally include the kafka-streams-scala library. Additional documentation on using the Kafka Streams DSL for Scala is available in the developer guide.

To use the Kafka Streams DSL for Scala for Scala 2.11 you can use the following Maven dependency:

    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-streams-scala_2.11</artifactId>
        <version>fullDotVersion</version>
    </dependency>

2.4 Connect API

The Connect API allows implementing connectors that continually pull from some source data system into Kafka or push from Kafka into some sink data system.

Many users of Connect won't need to use this API directly; they can use pre-built connectors without needing to write any code. Additional information on using Connect is available here.

Those who want to implement custom connectors can see the javadoc.

2.5 AdminClient API

The AdminClient API supports managing and inspecting topics, brokers, ACLs, and other Kafka objects.

To use the AdminClient API, add the following Maven dependency:

    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>fullDotVersion</version>
    </dependency>

For more information about the AdminClient APIs, see the javadoc.
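As a short illustrative sketch (not one of the official examples; the bootstrap address, topic name, partition count, and replication factor are placeholders), the AdminClient API can be used to create and then list topics:

    import java.util.Collections;
    import java.util.Properties;
    import java.util.Set;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.NewTopic;

    public class AdminClientExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder

            try (AdminClient admin = AdminClient.create(props)) {
                // Create a hypothetical topic with 3 partitions and replication factor 1.
                admin.createTopics(Collections.singleton(new NewTopic("example-topic", 3, (short) 1)))
                     .all().get();

                // List the topics visible to this client.
                Set<String> names = admin.listTopics().names().get();
                names.forEach(System.out::println);
            }
        }
    }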
3. Configuration

Kafka uses key-value pairs in the property file format for configuration. These values can be supplied either from a file or programmatically.

3.1 Broker Configs

The essential configurations are the following:

broker.id
log.dirs
zookeeper.connect

Topic-level configurations and defaults are discussed in more detail below.

zookeeper.connect [type: string; importance: high; dynamic update mode: read-only]
    Specifies the ZooKeeper connection string in the form hostname:port where host and port are the host and port of a ZooKeeper server. To allow connecting through other ZooKeeper nodes when that ZooKeeper machine is down you can also specify multiple hosts in the form hostname1:port1,hostname2:port2,hostname3:port3. The server can also have a ZooKeeper chroot path as part of its ZooKeeper connection string which puts its data under some path in the global ZooKeeper namespace. For example, to give a chroot path of /chroot/path you would give the connection string as hostname1:port1,hostname2:port2,hostname3:port3/chroot/path.

advertised.host.name [type: string; default: null; importance: high; dynamic update mode: read-only]
    DEPRECATED: only used when advertised.listeners or listeners are not set. Use advertised.listeners instead. Hostname to publish to ZooKeeper for clients to use. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, it will use the value for host.name if configured. Otherwise it will use the value returned from java.net.InetAddress.getCanonicalHostName().

advertised.listeners [type: string; default: null; importance: high; dynamic update mode: per-broker]
    Listeners to publish to ZooKeeper for clients to use, if different than the listeners config property. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, the value for listeners will be used. Unlike listeners, it is not valid to advertise the 0.0.0.0 meta-address.

advertised.port [type: int; default: null; importance: high; dynamic update mode: read-only]
    DEPRECATED: only used when advertised.listeners or listeners are not set. Use advertised.listeners instead. The port to publish to ZooKeeper for clients to use. In IaaS environments, this may need to be different from the port to which the broker binds. If this is not set, it will publish the same port that the broker binds to.

auto.create.topics.enable [type: boolean; default: true; importance: high; dynamic update mode: read-only]
    Enable auto creation of topic on the server.

auto.leader.rebalance.enable [type: boolean; default: true; importance: high; dynamic update mode: read-only]
    Enables auto leader balancing. A background thread checks and triggers leader balance if required at regular intervals.

background.threads [type: int; default: 10; valid values: [1,...]; importance: high; dynamic update mode: cluster-wide]
    The number of threads to use for various background processing tasks.

broker.id [type: int; default: -1; importance: high; dynamic update mode: read-only]
    The broker id for this server. If unset, a unique broker id will be generated. To avoid conflicts between zookeeper generated broker id's and user configured broker id's, generated broker ids start from reserved.broker.max.id + 1.

compression.type [type: string; default: producer; importance: high; dynamic update mode: cluster-wide]
    Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4', 'zstd'). It additionally accepts 'uncompressed' which is equivalent to no compression, and 'producer' which means retain the original compression codec set by the producer.

delete.topic.enable [type: boolean; default: true; importance: high; dynamic update mode: read-only]
    Enables delete topic. Delete topic through the admin tool will have no effect if this config is turned off.

host.name [type: string; default: ""; importance: high; dynamic update mode: read-only]
    DEPRECATED: only used when listeners is not set. Use listeners instead. Hostname of broker. If this is set, it will only bind to this address. If this is not set, it will bind to all interfaces.

leader.imbalance.check.interval.seconds [type: long; default: 300; importance: high; dynamic update mode: read-only]
    The frequency with which the partition rebalance check is triggered by the controller.

leader.imbalance.per.broker.percentage [type: int; default: 10; importance: high; dynamic update mode: read-only]
    The ratio of leader imbalance allowed per broker. The controller would trigger a leader balance if it goes above this value per broker. The value is specified in percentage.

listeners [type: string; default: null; importance: high; dynamic update mode: per-broker]
    Listener List - Comma-separated list of URIs we will listen on and the listener names. If the listener name is not a security protocol, listener.security.protocol.map must also be set. Specify hostname as 0.0.0.0 to bind to all interfaces. Leave hostname empty to bind to the default interface. Examples of legal listener lists: PLAINTEXT://myhost:9092,SSL://:9091 CLIENT://0.0.0.0:9092,REPLICATION://localhost:9093

log.dir [type: string; default: /tmp/kafka-logs; importance: high; dynamic update mode: read-only]
    The directory in which the log data is kept (supplemental for log.dirs property).

log.dirs [type: string; default: null; importance: high; dynamic update mode: read-only]
    The directories in which the log data is kept. If not set, the value in log.dir is used.

log.flush.interval.messages [type: long; default: 9223372036854775807; valid values: [1,...]; importance: high; dynamic update mode: cluster-wide]
    The number of messages accumulated on a log partition before messages are flushed to disk.

log.flush.interval.ms [type: long; default: null; importance: high; dynamic update mode: cluster-wide]
    The maximum time in ms that a message in any topic is kept in memory before being flushed to disk. If not set, the value in log.flush.scheduler.interval.ms is used.

log.flush.offset.checkpoint.interval.ms [type: int; default: 60000; valid values: [0,...]; importance: high; dynamic update mode: read-only]
    The frequency with which we update the persistent record of the last flush, which acts as the log recovery point.

log.flush.scheduler.interval.ms [type: long; default: 9223372036854775807; importance: high; dynamic update mode: read-only]
    The frequency in ms that the log flusher checks whether any log needs to be flushed to disk.

log.flush.start.offset.checkpoint.interval.ms [type: int; default: 60000; valid values: [0,...]; importance: high; dynamic update mode: read-only]
    The frequency with which we update the persistent record of log start offset.

log.retention.bytes [type: long; default: -1; importance: high; dynamic update mode: cluster-wide]
    The maximum size of the log before deleting it.

log.retention.hours [type: int; default: 168; importance: high; dynamic update mode: read-only]
    The number of hours to keep a log file before deleting it, tertiary to the log.retention.ms property.

log.retention.minutes [type: int; default: null; importance: high; dynamic update mode: read-only]
    The number of minutes to keep a log file before deleting it, secondary to the log.retention.ms property. If not set, the value in log.retention.hours is used.

log.retention.ms [type: long; default: null; importance: high; dynamic update mode: cluster-wide]
    The number of milliseconds to keep a log file before deleting it. If not set, the value in log.retention.minutes is used.

log.roll.hours [type: int; default: 168; valid values: [1,...]; importance: high; dynamic update mode: read-only]
    The maximum time before a new log segment is rolled out (in hours), secondary to the log.roll.ms property.

log.roll.jitter.hours [type: int; default: 0; valid values: [0,...]; importance: high; dynamic update mode: read-only]
    The maximum jitter to subtract from logRollTimeMillis (in hours), secondary to the log.roll.jitter.ms property.

log.roll.jitter.ms [type: long; default: null; importance: high; dynamic update mode: cluster-wide]
    The maximum jitter to subtract from logRollTimeMillis (in milliseconds). If not set, the value in log.roll.jitter.hours is used.

log.roll.ms [type: long; default: null; importance: high; dynamic update mode: cluster-wide]
    The maximum time before a new log segment is rolled out (in milliseconds). If not set, the value in log.roll.hours is used.

log.segment.bytes [type: int; default: 1073741824; valid values: [14,...]; importance: high; dynamic update mode: cluster-wide]
    The maximum size of a single log file.

log.segment.delete.delay.ms [type: long; default: 60000; valid values: [0,...]; importance: high; dynamic update mode: cluster-wide]
    The amount of time to wait before deleting a file from the filesystem.

message.max.bytes
    The largest record batch size allowed by Kafka. If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large.
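Broker configuration is normally supplied via the server properties file, but as a hedged, illustrative sketch the AdminClient API described earlier can also read a broker's effective configuration at runtime; the bootstrap address and broker id "0" are placeholders:

    import java.util.Collections;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.common.config.ConfigResource;

    public class DescribeBrokerConfig {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder

            try (AdminClient admin = AdminClient.create(props)) {
                // Broker id "0" is a placeholder; use the id of the broker you want to inspect.
                ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");
                Map<ConfigResource, Config> configs =
                    admin.describeConfigs(Collections.singleton(broker)).all().get();
                configs.get(broker).entries()
                       .forEach(e -> System.out.println(e.name() + " = " + e.value()));
            }
        }
    }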