Openjob is a distributed task scheduling framework based on Akka. It supports a variety of cron jobs, delayed jobs, and workflows, uses a consistent sharding algorithm, and supports unlimited horizontal scaling.

Openjob not only supports basic cron jobs, but also provides delayed jobs, distributed computing, and workflows:
- Cron job: supports Unix crontab expressions
- Second-level: execution cycle of less than 60 seconds
- Fixed rate: executes tasks at a fixed frequency, in minutes
- Delay task: distributed, high-performance delayed tasks based on Redis, with rich reports and statistics
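The fixed-rate mode above has the same semantics as the JDK's own fixed-frequency scheduling. A minimal, framework-independent sketch of that behavior (plain `java.util.concurrent`, not Openjob's API):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class FixedRateDemo {
    public static void main(String[] args) throws InterruptedException {
        // Fire three runs, 100 ms apart, measured from each run's *start* --
        // the "fixed frequency" semantics a fixed-rate job uses.
        CountDownLatch latch = new CountDownLatch(3);
        ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();
        pool.scheduleAtFixedRate(() -> {
            System.out.println("tick " + latch.getCount());
            latch.countDown();
        }, 0, 100, TimeUnit.MILLISECONDS);

        boolean done = latch.await(2, TimeUnit.SECONDS);
        pool.shutdownNow();
        System.out.println(done ? "completed 3 runs" : "timed out");
    }
}
```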
- Standalone: executes on a single worker client
- Broadcast: executes on all worker clients
- Map: a map function distributes big data across multiple machines for execution, like Hadoop's map
- MapReduce: an extension of Map; after all map sub-tasks complete, the Reduce method runs, where the results and data of the task execution can be processed
- Sharding: like the Elastic-Job model; configure the number of shards in the management console, and shards are scheduled to different clients; supports multiple languages
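The Map and MapReduce models above can be pictured as: a map step splits one big task into sub-tasks that run on many workers, and a reduce step aggregates their results once all sub-tasks finish. A conceptual, single-process sketch (hypothetical helper names, not Openjob's processor API):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class MapReduceSketch {
    // "Map": split the full ID range into sub-tasks, one slice per shard/worker.
    static List<List<Integer>> map(List<Integer> ids, int shards) {
        return IntStream.range(0, shards)
                .mapToObj(s -> ids.stream()
                        .filter(id -> id % shards == s)
                        .collect(Collectors.toList()))
                .collect(Collectors.toList());
    }

    // Each sub-task processes its own slice (here it just sums it).
    static int processSubTask(List<Integer> slice) {
        return slice.stream().mapToInt(Integer::intValue).sum();
    }

    // "Reduce": runs after all sub-tasks finish, combining their results.
    static int reduce(List<Integer> subResults) {
        return subResults.stream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) {
        List<Integer> ids = IntStream.rangeClosed(1, 100).boxed().collect(Collectors.toList());
        List<Integer> subResults = map(ids, 4).stream()
                .map(MapReduceSketch::processSubTask)
                .collect(Collectors.toList());
        System.out.println("reduced = " + reduce(subResults)); // prints "reduced = 5050"
    }
}
```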
- Processor: executes a function or class (supports Java/Golang/PHP)
- HTTP: periodically issues an HTTP request
- Shell: executes a shell script
- Dashboard: rich task statistics and reports
- Task history: task execution history records
- Task log: complete task logs, with storage support (H2/MySQL/Elasticsearch)
- Task running stack: detailed records of task execution stack information
- Provides task event monitoring alarms, detailed alarm histories, and notifications via WeChat, Feishu, and webhook triggers.
- Designed with namespaces, supporting button-level permissions and easy management of complex projects.
- Java: native support for Java and its frameworks.
- Go: Golang is supported.
- PHP: supported via the Golang agent, which executes tasks in shell mode; Swoole frameworks support Composer install.
- Python: supported via the Golang agent, which executes tasks in shell mode.
Openjob is well-suited for business scenarios that need scheduled and delayed tasks, such as daily data cleaning and report generation. It also fits lightweight computing, while Map/MapReduce can handle big-data computing. Complex task flows can be designed as workflows in the UI.
The table below compares Openjob with common Java schedulers:

|  | Quartz | Elastic-Job | XXL-Job | Openjob |
| --- | --- | --- | --- | --- |
| Schedule type | Cron | Cron | Cron | Cron<br/>Second<br/>Fixed rate |
| Delay task | No | No | No | Distributed, high-performance delay tasks based on Redis |
| Workflow | No | No | No | Workflow design with UI |
| Distributed computing | No | Sharding | Sharding | Broadcast<br/>Map<br/>MapReduce<br/>Sharding |
| Multiple languages | Java | Java | Java | Java<br/>Go<br/>PHP<br/>Python |
| Visualization | No | Weak | Task history<br/>Task log (no storage) | Task history<br/>Task log (H2/MySQL/Elasticsearch storage)<br/>Full permissions<br/>Task running stack |
| Manageable | No | Enable/disable task | Enable/disable task<br/>Execute once | Enable/disable task<br/>Execute once |
| Alarms | No | Custom event | Email | Event alarms with detailed histories; notifications via WeChat, Feishu, and webhooks |
| Performance | Every task schedule acquires a lock through the database, putting high pressure on the database | ZooKeeper is the performance bottleneck | Task scheduling is done only by the master, putting high pressure on the master | Consistent sharding lets each node schedule without locks; supports unlimited horizontal scaling and large-scale task scheduling |
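A hedged sketch of the lock-free idea behind that performance claim: with consistent hashing, every node can independently compute which node owns a given job, so no per-schedule database lock is needed. The class and method names here are illustrative, not Openjob's internals:

```java
import java.util.SortedMap;
import java.util.TreeMap;
import java.util.zip.CRC32;

public class ConsistentShardingSketch {
    private final TreeMap<Long, String> ring = new TreeMap<>();
    private static final int VIRTUAL_NODES = 100;

    static long hash(String key) {
        CRC32 crc = new CRC32();
        crc.update(key.getBytes());
        return crc.getValue();
    }

    // Place each node on the hash ring at many virtual positions for balance.
    void addNode(String node) {
        for (int i = 0; i < VIRTUAL_NODES; i++) {
            ring.put(hash(node + "#" + i), node);
        }
    }

    // Every scheduler can evaluate this locally, so no lock is required:
    // the job's hash deterministically picks the owning node (first node
    // clockwise on the ring).
    String ownerOf(String jobId) {
        SortedMap<Long, String> tail = ring.tailMap(hash(jobId));
        return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
    }

    public static void main(String[] args) {
        ConsistentShardingSketch sharder = new ConsistentShardingSketch();
        sharder.addNode("worker-1");
        sharder.addNode("worker-2");
        sharder.addNode("worker-3");
        String owner = sharder.ownerOf("job-42");
        System.out.println("job-42 -> " + owner);
        // Deterministic: repeated lookups (on any node) agree.
        System.out.println("stable = " + owner.equals(sharder.ownerOf("job-42")));
    }
}
```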