This repository has been archived by the owner on Jun 16, 2023. It is now read-only.

Releases: alibaba/jstorm

Release 2.4.0

15 May 08:54

New features

  • Support exactly-once semantics with asynchronous checkpointing via RocksDB and HDFS.
  • Introduce a new window mechanism (see the sketch after this list):
    1. supports tumbling windows and sliding windows.
    2. supports count windows, processing-time windows, event-time windows and session windows.
    3. doesn't hold all data until a window is triggered; computes on data arrival.
  • Support gray upgrade
    1. supports per-worker/component gray upgrade
    2. supports upgrade rollback
  • Add a memory/RocksDB-based KV store.
  • Open-source the HBase metrics plugin.
  • Support multiple metrics uploaders.
  • Add an API in MetricClient to register topology-level metrics.
  • Support component stream metrics, i.e., stream metrics aggregated by component.
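
As a rough illustration of the window styles listed above, here is a minimal sketch written against the Apache Storm-style windowed-bolt API (org.apache.storm.*); JStorm's own window classes and package names may differ, so treat the class and method names below as assumptions and consult the jstorm examples for the exact API.

```java
import java.util.Map;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseWindowedBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;
import org.apache.storm.windowing.TupleWindow;

// Sums the "value" field of every tuple in the current window. The framework
// calls execute() once per window trigger, so the bolt itself does not have to
// buffer tuples until the window closes.
public class WindowSumBolt extends BaseWindowedBolt {
    private OutputCollector collector;

    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(TupleWindow window) {
        long sum = 0;
        for (Tuple t : window.get()) {
            sum += t.getLongByField("value");
        }
        collector.emit(new Values(sum));
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("window_sum"));
    }
}
```

A sliding count window of length 100 tuples that slides every 10 tuples would then be attached with something like `new WindowSumBolt().withWindow(BaseWindowedBolt.Count.of(100), BaseWindowedBolt.Count.of(10))`, and a ten-second tumbling window with `withTumblingWindow(BaseWindowedBolt.Duration.seconds(10))`.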

Improvements

  • Support Kryo deserialization for no-arg classes
  • Add a getValue method to AsmMetric for quick assertions, so that unit/integration tests don't have to fetch
    metrics from Nimbus

Bug Fix

Release 2.2.1

09 Jan 01:49

New features

  • Performance is improved by 200%~300% compared to releases 2.1.1 and 0.9.8.1 in several test scenarios, by
    120%~200% compared to Flink, and by 300%~400% compared to Storm.
    1. Restructure the batch solution
    2. Improve serialization and deserialization to reduce CPU and network cost
    3. Reduce CPU cost on the critical path and in metrics
    4. Improve the Netty client and Netty server strategies
    5. Support consuming from and publishing to the disruptor queue in batch mode
  • Introduce a snapshot-based exactly-once framework
    1. Compared to the Trident solution, the performance of the new framework is several times higher.
    2. The new framework also supports "at least once" mode. Compared to the acker mechanism, it removes the
      acker's bookkeeping and network cost, which improves performance significantly.
  • Support JStorm on YARN
    1. A JStorm cluster can now be deployed and scaled in/out quickly, which improves resource utilization.
  • Redesign the backpressure solution. Flow control is now applied stage by stage.
    1. The solution is simple and effective, and the response to switching backpressure on/off is much faster.
    2. Performance and stability are improved significantly compared to the original solution.
  • Introduce the Window API
    1. Support tumbling windows and sliding windows
    2. Windows support two collection modes: count and duration
    3. Support the watermark mechanism
  • Introduce support for Flux
    1. Flux is a programming framework/component that aims to make it quick to create and deploy JStorm
      topologies.
  • Isolate the dependencies of JStorm and user topologies via the maven-shade-plugin to fix conflict problems.
  • Improve the shuffle grouping solution
    1. Integrate shuffle, localOrShuffle and localFirst. The grouping is auto-adapted according to the topology assignment.
    2. Introduce load awareness in shuffle to ensure load balance among downstream tasks.
  • Support configuring a blacklist in Nimbus to exclude problematic nodes
  • Support batch mode in Trident
  • Supervisors synchronize cluster configuration from the Nimbus master automatically
  • Add buildTs to supervisor info and heartbeats
  • Add an ext module for Nimbus and supervisor to support external plugins
  • Add jstorm-elasticsearch support, thanks to @elloray for the contribution

Improvements

  • Restructure the Nimbus metrics implementation. The topology metrics runnable is now event-driven.
  • Restructure the topology master. The processor in TM is now event-driven.
  • Add some examples to cover more scenarios
  • Disable stream metrics to reduce the cost of sending metrics to Nimbus
  • Support metrics in local mode
  • Improve the gauge implementation by reporting the average of sampled values within each minute instead of an instantaneous value per minute.
  • Introduce an approximate histogram calculation to reduce memory usage of histogram metrics
  • Add Full GC and supervisor network related metrics

Bug Fix

  • Fix message disorder bug
  • Fix the bug that some connections to ZooKeeper were not closed as expected when an exception occurred in the supervisor.
  • Fix deactivate being called by mistake during task init
  • Fix occasional duplicate rootIds, which caused unexpected message failures.
  • Fix a bug in local mode
  • Fix a bug in LogWriter
  • Fix some task metrics (RecvTps, ProcessLatency) not being aggregated correctly.
  • Fix the race condition in AsmCounter during flushing

Misc

Publish 2.1.1 to Maven Central repository

09 Mar 01:55


For Chinese release notes, please refer to https://github.com/alibaba/jstorm/blob/master/history_cn.md

New features

  1. 1.5x~6x performance boost (from worst to best scenarios) compared to JStorm 2.1.0
  2. Add application-level auto-batch
  3. Add an independent control channel to separate control messages from business messages, guaranteeing high priority for control messages
  4. Dramatic performance boost in metrics, see "Improvements" section
  5. Support jdk1.8
  6. Add Nimbus hook and topology hook
  7. Metrics system:
    1. Support disable/enable metrics on the fly
    2. Add jstorm metrics design docs, see JSTORM-METRICS.md
  8. JStorm web UI:
    1. Add zookeeper viewer in web UI, thanks to @dingjun84
    2. Add log search and deep log search, support both backward search and forward search
    3. Support log file download
  9. Support changing log level on the fly
  10. Change error structure in zk, add errorLevel, errorCode and duration.
  11. Add supervisor health check
  12. Add -Dexclude.jars option to enable filtering jars manually

Improvements

  1. Metrics:
    1. use JHistogram/JMeter instead of Histogram/Meter, change internal Clock.tick to System.currentTimeMillis to improve performance (50+% boost in Meter and 25%+ boost in Histogram)
    2. add TupleLifeCycle metric
    3. add supervisor metrics: total_cpu_usage, total_mem_usage, disk_usage
    4. remove some unnecessary metrics like emitTime, etc.
    5. Use HeapByteBuffer instead of List to transmit metric data points, reducing metrics memory usage by 60+%
    6. Change sample rate from 10% to 5% by default
    7. Remove AsmTimer and related code
  2. Log related:
    1. Use logback by default instead of log4j, exclude slf4j-log4j12 dependency
    2. Use jstorm.log.dir property instead of ${jstorm.home}/logs, see jstorm.logback.xml
    3. Change all log4j Logger's to slf4j Logger's
    4. Set default log page size(log.page.size) in defaults.yaml to 128KB (web UI)
    5. Change topology log structure, add ${topology.name} directory, see jstorm.logback.xml
    6. Add timestamp in supervisor/nimbus gc log files; backup worker gc log before launching a new worker;
    7. Set logback/log4j file encoding to UTF-8
  3. Refine the backpressure strategy to avoid over-backpressure
  4. Change acker pending rotating map to single thread to improve performance
  5. Update RefreshConnections to avoid downloading assignments from zk frequently
  6. Change default memory of Supervisor to 1G (previous 512MB)
  7. Use ProcessLauncher to launch processes
  8. Add DefaultUncaughtExceptionHandler for supervisor and nimbus
  9. Change local ports to be different from 0.9.x versions (supervisor.slots.ports.base, nimbus.thrift.port,
    nimbus.deamon.logview.port, supervisor.deamon.logview.port)
  10. Change highcharts to echarts to avoid potential license violation
  11. Dependency upgrades:
    1. Upgrade kryo to 2.23.0
    2. Upgrade disruptor to 3.2.2

Bug fix

  1. Fix deadlock when starting workers
  2. Fix the bug that when localstate file is empty, supervisor can't start
  3. Fix kryo serialization for HeapByteBuffer in metrics
  4. Fix total memory usage calculation
  5. Fix the bug that an empty worker is assigned when the configured worker number is bigger than the actual number for a user-defined scheduler
  6. Fix UI log home directory
  7. Fix XSS security bug in web UI
  8. Don't start TopologyMetricsRunnable thread in local mode, thanks to @L-Donne
  9. Fix JSTORM-141, JSTORM-188 that TopologyMetricsRunnable consumes too much CPU
  10. Remove the MaxTenuringThreshold JVM option to support JDK 1.8, thanks to @249550148
  11. Fix possible NPE in MkLocalShuffer

Deploy and scripts

  1. Add cleanup for core dumps
  2. Add supervisor health check in healthCheck.sh
  3. Change jstorm.py to terminate the original python process when starting nimbus/supervisor

Upgrade guide

  1. JStorm 2.1.1 is mostly compatible with 2.1.0, but it's better to restart your topologies to finish the upgrade.
  2. If you're using log4j, note that we have switched the default logging system to logback. If you still want to use log4j, please add "user.defined.log4j.conf: jstorm.log4j.properties" to your conf/storm.yaml.
  3. If you're using slf4j-api + log4j, please add the slf4j-log4j12 dependency to your pom config.

Release 2.1.0

12 Nov 10:02

This version is for Alibaba Global Shopping Festival, November 11th 2015.

New features

  1. Totally redesigned Web UI
    1. Make the UI more beautiful
    2. Greatly improve Web UI speed.
    3. Add cluster/topology level summarized metrics for the recent 30 minutes.
    4. Add a DAG view in the Web UI; support user interaction to get key information such as emit count, tuple lifecycle and tps
  2. Redesigned metrics/monitor system
    1. New metrics core: supports sampling with more metrics, avoids noise, and merges metrics automatically for users.
    2. No metrics are stored in ZK
    3. Support metrics HA
    4. Add more useful metrics, such as tuple lifecycle time, netty metrics, disk space, etc.; accurately obtain worker memory
    5. Support external storage plugins to store metrics.
  3. Implement smart backpressure
    1. Smart backpressure makes the dataflow more stable and avoids noisy triggering
    2. Easy to control backpressure manually
  4. Implement TopologyMaster
    1. Redesign the heartbeat mechanism to easily support 6000+ tasks
    2. Collect all tasks' metrics and merge them, relieving pressure on Nimbus.
    3. Central control coordinator that issues control commands
  5. Redesign ZK usage: one ZK ensemble supports 2000+ hardware nodes.
    1. No dynamic data in ZK, such as heartbeats, metrics, or monitor status.
    2. Nimbus visits ZK less frequently when serving the thrift API.
    3. Reduce ZK visits by merging some task-level ZK nodes.
    4. Reduce ZK visits by removing useless ZK nodes, such as empty taskerror nodes
    5. Tune the ZK cache
    6. Optimize the ZK reconnect mechanism
  6. Tune executor batch performance
    1. Add smart batch size setting
    2. Remove memory copies
    3. Directly issue tuples without batching for internal channels
    4. Set the default serialize/deserialize method to Kryo
  7. Set the default serialize/deserialize method to Kryo to improve performance.
  8. Support dynamic reload of binaries/configuration
  9. Tune localShuffle performance: set a 3-level priority (local worker, local node, other node); dynamically check queue status and connection status.
  10. Optimize Nimbus HA: only the highest-priority nimbuses can be promoted to master

Improvement

  1. The supervisor automatically dumps worker jstack/jmap when the worker's status is invalid.
  2. The supervisor can generate more ports according to memory.
  3. The supervisor can retry downloading the binary more times.
  4. Support setting logdir in configuration
  5. Add configuration "nimbus.host.start.supervisor"
  6. Add supervisor/nimbus/drpc gc logs
  7. Adjust JVM parameters: (1) set -Xmn to 1/2 of heap memory; (2) set PermSize to 1/32 and MaxPermSize to 1/16 of heap memory; (3) set -Xms by "worker.memory.min.size".
  8. Refine the ZK error schema; when a worker is dead, the UI will report an error
  9. Add functions to the zktool utility: support removing all topology znodes, support listing them
  10. Optimize the netty client.
  11. Dynamically update connected task status via the network connection, not via ZK znodes.
  12. Add configuration "topology.enable.metrics".
  13. Classify all topology logs into one directory per topology name.

Bug fix

  1. Skip downloading the same binary when the assignment has changed.
  2. Skip starting the worker when the binary is invalid.
  3. Use the correct configuration map in many worker threads
  4. Nimbus now checks whether the topology name exists as the first step of topology submission
  5. Support fieldGrouping for Object[]
  6. Make DRPC a single instance under one configuration
  7. In the client topologyNameExists interface, directly use the thrift API
  8. Fix restart failure caused by contention with the topology cleanup thread

Deploy and scripts

  1. Optimize cleandisk.sh to avoid deleting useful worker logs

Merge for Apache

05 Aug 12:57

Release 2.0.4-SNAPSHOT

New features

  1. Redesign the metrics/monitor system with new RollingWindow/Metrics/NettyMetrics; all data is sent/received through thrift
  2. Redesign the Web UI; the new Web UI code is clear and clean
  3. Add a NimbusCache layer using RocksDB and TimeCacheWindow
  4. Refactor all ZK structures and ZK operations
  5. Refactor all thrift structures
  6. Merge the 3 modules jstorm-client/jstorm-client-extension/jstorm-core into jstorm-core
  7. Set dependency versions to be the same as Storm's
  8. Sync all java code from apache-storm-0.10.0-beta1
  9. Switch the log system to logback
  10. Upgrade thrift to apache thrift 0.9.2
  11. Performance tuning for huge topologies with more than 600 workers or 2000 tasks
  12. Require JDK 7 or higher

Release 0.9.7.1

New Features

  1. Batch tuples whose target task is the same before sending them out (task.batch.tuple=true, task.msg.batch.size=4); see the sketch after this list.
  2. Update LocalFirst grouping: if all local tasks are busy, tasks on other nodes will be chosen as target tasks instead of waiting on the busy local tasks.
  3. Allow users to reload the application config while the topology is running.
  4. Allow users to define the task heartbeat timeout and task cleanup timeout per topology.
  5. Update the wait strategy of the disruptor queue to the non-blocking mode "TimeoutBlockingWaitStrategy"
  6. Allow users to define the timeout for discarding messages that have been pending too long in the netty buffer.
  7. Update the message processing structure: the virtualPortDispatch and drainer threads are removed to reduce unnecessary CPU cost and tuple transfers
  8. Add the "--include-jars" parameter to topology submission; these jars are added to the classpath
  9. Nimbus or Supervisor exits when the local IP is 127.0.0.0
  10. Add a user-defined scheduler example
  11. Merge the Supervisor's syncSupervisor and syncProcess
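
A minimal sketch of turning the batch settings from item 1 on when submitting a topology, assuming the standard backtype.storm client API shipped with JStorm; the key names are taken from the item above, and the empty builder stands in for your real spouts and bolts.

```java
import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.topology.TopologyBuilder;

public class BatchConfigExample {
    public static void main(String[] args) throws Exception {
        Config conf = new Config();
        conf.put("task.batch.tuple", true);  // batch tuples that share the same target task
        conf.put("task.msg.batch.size", 4);  // flush a batch once it reaches 4 messages

        TopologyBuilder builder = new TopologyBuilder();
        // ... declare your spouts and bolts here ...

        StormSubmitter.submitTopology("batch-demo", conf, builder.createTopology());
    }
}
```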

Bug Fix

  1. Improve the GC setting.
  2. Fix the bug that task heartbeats might not be updated in time in some scenarios.
  3. Fix the bug that the reconnection operation might be stuck for an unexpected period when the connection to a remote worker is shut down and some messages are buffered in netty.
  4. Reuse the thrift client when submitting a topology
  5. Avoid repeatedly downloading the binary when the worker fails to start.

Changed setting

  1. Change task's heartbeat timeout to 4 minutes
  2. Set the netty client thread pool(clientScheduleService) size as 5

Deploy and scripts

  1. Improve cleandisk.sh to avoid deleting the current directory and /tmp/hsperfdata_admin
  2. Add the executable attribute to the scripts under example
  3. Add a parameter to stat.sh to control whether the supervisor is started. This is useful in virtualized environments.

Release 0.9.7

New Features

  1. Support dynamic scale-out/scale-in of workers, spouts, bolts or ackers without stopping the topology's service.
  2. When cgroup is enabled, support an upper limit on CPU core usage. The default setting is 3 CPU cores.
  3. Update the task heartbeat mechanism so heartbeats correctly track the status of spout/bolt execute threads.
  4. Support adding JStorm prefix info (clusterName, topologyName, ip:port, componentName, taskId, taskIndex) to worker/task logs
  5. Check supervisor heartbeats during topology assignment to ensure no worker is assigned to a dead supervisor
  6. Add an API to query task/worker metric info, e.g. task queue load status, worker cpu usage, worker mem usage...
  7. Try to re-download jars when starting a worker fails several times, to avoid potential jar corruption
  8. Add a Nimbus ZK cache to accelerate Nimbus's ZK reads
  9. Add the thrift API getVersion, which is used to check the client JStorm version against the server JStorm version.
  10. Update the metrics structure sent to Alimonitor
  11. Add an exclude-jar parameter to jstorm.py, which avoids class conflicts when submitting a topology

Bug Fix

  1. Fix the supervisor process becoming unresponsive when a large number of topologies are submitted in a short time
  2. Fix the bug that when submitting two or more topologies at the same time, the later one might fail.
  3. TickTuple does not need to be acked; fix the incorrect failure message count.
  4. Fix a potential incorrect assignment when use.old.assignment=true
  5. Fix failure to remove some zk nodes when killing a topology
  6. Fix failure to restart a topology while Nimbus is doing the assignment job.
  7. Fix NPE when registering metrics
  8. Fix failure to read the ZK monitor znode through zktool
  9. Fix an exception when the classloader is enabled in local mode
  10. Fix duplicate logs when user-defined logback is enabled in local mode

Changed Setting

  1. Set Nimbus JVM memory size to 4G
  2. Change the supervisor-to-nimbus heartbeat timeout from 60s to 180s
  3. In order to avoid OOM, set storm.messaging.netty.max.pending to 4
  4. Set the task queue size to 1024 and the worker's total send/receive queue size to 2048

Deploy and scripts

  1. Add rpm build spec
  2. Add deploy files of jstorm for rpm package building
  3. Enable the cleandisk cronjob to run every hour, keeping core dumps for only one hour.

0.9.6.3

16 Feb 10:09

New features

  1. Implement tick tuple
  2. Support logback
  3. Support loading a user-defined log4j configuration file
  4. Enable the display of user-defined metrics in the web UI
  5. Add a "topologyName" parameter to the "jstorm list" command
  6. Support the use of IP and hostname at the same time for user-defined scheduling
  7. Support JUnit tests in local mode
  8. Enable client commands (e.g. jstorm jar) to load a self-defined storm.yaml

Bug fix

  1. Add activate and deactivate APIs to the spout, which are used in the nextTuple prepare phase
  2. Update multi-language support
  3. Check the worker's heartbeat asynchronously to speed up worker launch
  4. Check the worker's pid to speed up detection of dead workers
  5. Fix the high cpu load of the disruptor producer when the disruptor queue is full
  6. Remove the confusing exception reported by the disruptor queue when killing a worker
  7. Fix the failure of the "jstorm restart" client command
  8. Report an error when a user submits a jar built on an incompatible jstorm release
  9. Fix the problem that a log line is printed twice when a user defines a log4j or logback configuration in local mode
  10. Fix a potential exception when killing a topology in local mode
  11. Forbid users from changing the log level of the jstorm log
  12. Add a logback configuration template
  13. Fix the problem that an uploaded lib jar was processed as an application jar
  14. Make sure the ZK nodes of a removed topology are cleaned up
  15. Add the topology name to the information dumped when a java core dump occurs
  16. Fix the incorrect value of -XX:MaxTenuringThreshold. Currently, the jstorm default is 20, but the max value in JDK8 is 15.
  17. Fix a potential failure to read the cpu core number, which could cause the supervisor slots to be set to 0
  18. Fix the "Address family not supported by protocol family" error in local mode
  19. Do not start the logview http server in local mode
  20. Create the log dir in the supervisor alive-checking script
  21. Check the correctness of the ip specified in the configuration file before starting nimbus
  22. Check the correctness of the env variables $JAVA_HOME/$JSTORM_HOME/$JSTORM_CONF_DIR before starting the jstorm service
  23. Specify the log dir for rpm installation
  24. Add read permission on /home/admin/jstorm and /home/admin/logs for all users after rpm installation
  25. Configure local temporary ports during rpm installation
  26. Add noarch rpm package

0.9.6.2

01 Dec 07:23
  1. Add an option to switch between BlockingQueue and Disruptor
  2. Fix the bug that, under sync netty mode, the client failed to send messages to the server
  3. Fix the bug so the web UI can display 0.9.6.1 clusters
  4. Fix the bug that a topology could be submitted without the main jar but with a lot of little jars
  5. Fix a bug in the restart command
  6. Fix a trident bug
  7. Add validation of topology names, component names, etc. Only A-Z, a-z, 0-9, '_', '-', '.' are valid now (see the sketch after this list).
  8. Fix a bug when closing the thrift client
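
For illustration only, a client-side pre-check matching the character rule in item 7 could look like the following; this is a hypothetical helper, not JStorm's internal validator.

```java
import java.util.regex.Pattern;

public class NameCheck {
    // Allowed characters per the note above: A-Z, a-z, 0-9, '_', '-', '.'
    private static final Pattern VALID_NAME = Pattern.compile("^[A-Za-z0-9._-]+$");

    public static boolean isValidName(String name) {
        return name != null && !name.isEmpty() && VALID_NAME.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidName("word-count_v1.2")); // true
        System.out.println(isValidName("bad name!"));       // false
    }
}
```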

0.9.6.2-rc

17 Nov 02:42
  1. Improve the Web UI user experience
    1. Add a jstack link
    2. Add a worker log link in the supervisor page
    3. Add a Web UI log encoding setting, "gbk" or "utf-8"
    4. Show starting tasks in the component page
    5. Show dead tasks' information in the UI
    6. Fix the bug that error info cannot be displayed in the UI when a task is restarting
  2. Add a restart command; with this command, users can reload configuration and reset worker/task parallelism
  3. Upgrade curator/disruptor/guava versions
  4. Revert the json lib to google simple json; wrap all json operations into two utility methods
  5. Add a new storm submit API, supporting topology submission from java
  6. Enable launching processes in the background
  7. Set the "spout.pending.full.sleep" default value to true
  8. Fix the bug that a user-defined scheduler did not support a list of workers
  9. Add disruptor/JStormUtils junit tests
  10. Enable users to configure the monitor name for Alimonitor
  11. Add the tcp option "reuseAddress" in the netty framework
  12. Fix the bug that when a spout does not implement the ICommitterTrident interface, MasterCoordinatorSpout will get stuck in the commit phase.

0.9.6.1

11 Oct 12:54
  1. Add management of multiple clusters in the web UI.
  2. Merge the trident part from storm-0.9.3
  3. Use fastjson to replace gson
  4. Reorganize the code that generates metrics json
  5. Get the jstorm version from $JSTORM_HOME/RELEASE instead of hardcoding it
  6. Change the task deserialize thread's SingleThreadDisruptorQueue to MultiThreadDisruptorQueue
  7. Fix the web UI displaying the wrong number of workers on the Supervisor page
  8. Fix task heartbeat thread contention when accessing the task map
  9. Fix a null pointer exception when killing a worker and reading the worker's heartbeat object
  10. The netty client connects to the server only in the NettyClient module.
  11. Break the loop when the netty client connection is closed
  12. Fix the bug that the topology warning flag on the cluster page is not consistent with the error information on the topology page
  13. Add recovery when task error information data is corrupted
  14. Fix the bug that metric data cannot be uploaded to Alimonitor when upgrading from pre-0.9.6 to 0.9.6 and executing pkill java without restarting the topology
  15. Fix the bug that zeroMq failed to receive data
  16. Add an interface to easily set worker memory
  17. Set the default value of topology.alimonitor.metrics.post to false
  18. Only start NETTY_SERVER_DECODE_TIME for the netty server
  19. Keep compatibility with Storm for local mode
  20. Print the rootId when a tuple fails
  21. In order to keep compatibility with Storm, add the submitTopologyWithProgressBar interface
  22. Upgrade the netty version from 3.2.7 to 3.9.0
  23. Support assigning topologies to user-defined supervisors

0.9.6 release

23 Sep 03:18
  1. Update UI
    • Display the metrics information of tasks and workers
    • Show a warning flag when errors occur for a topology
    • Add a link from the supervisor page to the task page
  2. Send metrics data to Alimonitor
  3. Add a metrics interface for users
  4. Add the task.cleanup.timeout.sec setting to let tasks clean up gracefully
  5. Set the worker's log name as topologyName-worker-port.log
  6. Add the setting "worker.redirect.output.file", so a worker can redirect System.out/System.err to a specified file
  7. Add the storm list command
  8. Add a closing-channel check in the netty client to avoid double close
  9. Add a connecting check in the netty client to avoid connecting to one server twice at the same time