
Supports 100K-Scale Scheduling! A Fresh SnailJob Performance Benchmark Report

Enterprise business systems today are complex: requirements such as task scheduling, failed-task retry, security control, and monitoring/alerting keep emerging, and many traditional solutions suffer from pain points like complicated integration, high scaling costs, and simplistic retry mechanisms.


SnailJob was created precisely to solve these problems.

Platform Overview

SnailJob is a platform dedicated to distributed task scheduling and retries. Its partition-and-bucket architecture gives it high scalability and fault tolerance; it achieves second-level scheduling and complex retry strategies without depending on external middleware, and it ships with a modern UI plus complete permission and alerting mechanisms.
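The partition-and-bucket idea can be sketched in a few lines. This is a hypothetical illustration only: the bucket count, hash function, and assignment rule below are assumptions for clarity, not SnailJob's actual implementation.

```python
# Minimal sketch of a partition/bucket scheduling model (illustrative only).
BUCKET_TOTAL = 128  # assumed number of buckets

def bucket_of(task_id: int) -> int:
    """Map a task to a fixed bucket by hashing its id."""
    return task_id % BUCKET_TOTAL

def buckets_for_node(node_index: int, node_count: int) -> list:
    """Evenly divide the buckets among the alive server nodes."""
    return [b for b in range(BUCKET_TOTAL) if b % node_count == node_index]

# With 2 server nodes, each node owns half the buckets and only scans
# tasks whose bucket falls in its own set; adding or removing a node
# just redistributes buckets rather than individual tasks.
node0 = buckets_for_node(0, 2)
node1 = buckets_for_node(1, 2)
assert len(node0) + len(node1) == BUCKET_TOTAL
```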

SnailJob Performance Benchmark Report

  • Report date: 2025-08-25
  • Version: 1.7.2
  • Author: rpei

Test Objective

The goal of this benchmark is to verify the maximum number of timed tasks a single SnailJob server node can support under stable conditions, and to evaluate the system's overall performance under high-concurrency task scheduling.

Test Environment

Database

  • Type: Alibaba Cloud RDS MySQL 8.0
  • Instance spec: mysql.n2.xlarge.1 (8 vCPU, 16 GB RAM)
  • Storage: 100 GB, InnoDB engine
  • Version: MySQL_InnoDB_8.0_Default

Application Deployment

  • Server: Alibaba Cloud ECS g6.4xlarge
  • SnailJob Server: single instance (4 vCPU, 8 GB RAM)
  • SnailJob Client: 16 instances (1 vCPU, 1 GB RAM each)

Server Configuration

Pekko configuration (snail-job-server-starter/src/main/resources/snailjob.conf)

pekko {
  actor {
    common-log-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 16
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    common-scan-task-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 64
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    netty-receive-request-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 128
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    retry-task-executor-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 32
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    retry-task-executor-call-client-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 32
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }

    retry-task-executor-result-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 32
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    job-task-prepare-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 128
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    job-task-executor-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 160
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    job-task-executor-call-client-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 160
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    job-task-executor-result-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 160
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    workflow-task-prepare-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 4
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    workflow-task-executor-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 4
        core-pool-size-factor = 1.0
        core-pool-size-max = 512
      }
      throughput = 10
    }
  }
}
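A note on reading these dispatcher blocks: per Pekko's documented sizing rule for thread-pool-executor, the core pool size is ceil(available processors × core-pool-size-factor), clamped to [core-pool-size-min, core-pool-size-max]. With factor = 1.0 on the 4 vCPU server used here, the scaled value is only 4, so core-pool-size-min is what actually determines each pool's size. A small sketch of that rule:

```python
import math

def pekko_pool_size(processors: int, factor: float,
                    size_min: int, size_max: int) -> int:
    """Pekko thread-pool-executor sizing: ceil(processors * factor),
    clamped to [core-pool-size-min, core-pool-size-max]."""
    scaled = math.ceil(processors * factor)
    return min(max(scaled, size_min), size_max)

# On the 4 vCPU server above, factor 1.0 scales to only 4 threads,
# so core-pool-size-min effectively sets the pool size:
assert pekko_pool_size(4, 1.0, 160, 256) == 160  # job-task-executor-dispatcher
assert pekko_pool_size(4, 1.0, 16, 256) == 16    # common-log-dispatcher
```

This is why the config raises core-pool-size-min per dispatcher instead of tuning the factor: on a small host the factor term never dominates.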

系統(tǒng)配置文件(snail-job-server-starter/src/main/resources/application.yml)

server:
  port: 8080
  servlet:
    context-path: /snail-job


spring:
  main:
    banner-mode: off
  profiles:
    active: dev
  datasource:
    name: snail_job
    ## mysql
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://ex-snailjob-mysql-svc:3306/snail_job?useSSL=false&characterEncoding=utf8&useUnicode=true
    username: root
    password: Ab1234567
    type: com.zaxxer.hikari.HikariDataSource
    hikari:
      connection-timeout: 30000
      minimum-idle: 16
      maximum-pool-size: 256
      auto-commit: true
      idle-timeout: 30000
      pool-name: snail_job
      max-lifetime: 1800000
  web:
    resources:
      static-locations: classpath:admin/


mybatis-plus:
  typeAliasesPackage: com.aizuda.snailjob.template.datasource.persistence.po
  global-config:
    db-config:
      where-strategy: NOT_EMPTY
      capital-mode: false
      logic-delete-value: 1
      logic-not-delete-value: 0
  configuration:
    map-underscore-to-camel-case: true
    cache-enabled: true
logging:
  config: /usr/snailjob/config/logback.xml
snail-job:
  retry-pull-page-size: 2000 # batch size per pull of retry data
  job-pull-page-size: 2000 # batch size per pull of job data
  server-port: 17888  # server port
  log-storage: 7 # log retention period (unit: days)
  rpc-type: grpc
  summary-day: 0
  server-rpc:
    keep-alive-time: 45s                # keep-alive interval: 45 seconds
    keep-alive-timeout: 15s             # keep-alive timeout: 15 seconds
    permit-keep-alive-time: 30s         # permitted keep-alive interval: 30 seconds
    dispatcher-tp:                      # dispatch thread pool
      core-pool-size: 100
      maximum-pool-size: 100


  client-rpc:
    keep-alive-time: 45s                # keep-alive interval: 45 seconds
    keep-alive-timeout: 15s             # keep-alive timeout: 15 seconds
    client-tp:                          # client thread pool
      core-pool-size: 100
      maximum-pool-size: 100
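The keep-alive values above are related to each other: in gRPC, a server sends GOAWAY ("too_many_pings") to clients that ping more often than its permitted minimum, so the client-side keep-alive interval must not be shorter than the server's permit-keep-alive-time. A tiny sanity check (illustrative; the parameter names mirror the config above, the semantics are standard gRPC keepalive behavior):

```python
def keepalive_compatible(client_keep_alive_s: int, server_permit_s: int) -> bool:
    """A gRPC server rejects pings arriving more often than its
    permit-keep-alive-time, so the client's interval must be >= it."""
    return client_keep_alive_s >= server_permit_s

# 45s client interval vs 30s permitted minimum: compatible.
assert keepalive_compatible(45, 30)
# A 20s client interval against the same server would be rejected.
assert not keepalive_compatible(20, 30)
```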

Test Scenario

  • Execution period of each timed task: 60 seconds
  • Average execution time per task: 200 milliseconds
  • Test goal: measure how many timed tasks a single SnailJob Server node can schedule stably

Test Results

In a single-node (4C/8G) environment, SnailJob Server stably carried 30,000 timed tasks while keeping every task executing on time within its 60-second cycle. Database load sat at only about 20%, indicating plenty of headroom. By horizontally scaling server nodes, the system can in theory comfortably support 100,000+ scheduled tasks, covering the vast majority of enterprise scenarios. In addition, SnailJob Pro introduces a Redis cache redesign and log offloading (backed by MongoDB storage), further improving scheduling capacity and stability.
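These results can be sanity-checked with back-of-envelope arithmetic (simple math on the numbers above, ignoring database round-trips and scheduling overhead):

```python
tasks = 30_000      # timed tasks carried by the single server node
period_s = 60       # each task fires once every 60 seconds
exec_s = 0.2        # average execution time: 200 ms

dispatch_rate = tasks / period_s        # task triggers per second
in_flight = dispatch_rate * exec_s      # Little's law: L = lambda * W

assert dispatch_rate == 500.0           # 500 dispatches every second
assert abs(in_flight - 100) < 1e-9      # ~100 executions in flight at once
```

Roughly 500 dispatches per second and 100 concurrent executions help explain why the 160-thread job dispatcher pools configured above are sufficient on this node.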

Resource consumption (due to company confidentiality, screenshots cannot be published; only the benchmark result figures are shared here):

  • SnailJob server CPU usage: average 71%, peak 82%
  • SnailJob server memory usage: 32%
  • Database instance IOPS usage: peak 40% (5-second sampling interval), peak 50% (30-second sampling interval)
  • Database instance CPU usage: 20%
  • Database instance memory usage: 55%

Summary

SnailJob's performance bottleneck lies mainly in database storage. Scheduling generates a large volume of task-batch and log writes, which puts significant pressure on database IOPS. When deploying SnailJob, we therefore recommend:

  • Deploy the database on a dedicated instance rather than sharing it with other business services;
  • Prefer high-performance disks to improve write throughput;
  • Enable asynchronous disk writes to further reduce database write latency.
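The asynchronous-write recommendation boils down to batching: enqueue records immediately and flush them to storage in groups. The sketch below is hypothetical code illustrating the technique, not SnailJob's implementation; `flush`, `batch_size`, and `interval_s` are assumed names.

```python
import queue
import threading
import time

class AsyncBatchWriter:
    """Callers enqueue records and return at once; one background thread
    flushes them in batches, turning many small writes into few IOPS."""

    def __init__(self, flush, batch_size: int = 100, interval_s: float = 0.5):
        self._q: queue.Queue = queue.Queue()
        self._flush = flush            # e.g. a bulk INSERT or bulk log append
        self._batch_size = batch_size
        self._interval = interval_s
        threading.Thread(target=self._loop, daemon=True).start()

    def write(self, record) -> None:
        self._q.put(record)            # no storage I/O on the caller's path

    def _loop(self) -> None:
        buf = []
        deadline = time.monotonic() + self._interval
        while True:
            timeout = max(0.0, deadline - time.monotonic())
            try:
                buf.append(self._q.get(timeout=timeout))
            except queue.Empty:
                pass
            # Flush when the batch is full or the time window has elapsed.
            if len(buf) >= self._batch_size or time.monotonic() >= deadline:
                if buf:
                    self._flush(buf)   # one storage round-trip for many records
                    buf = []
                deadline = time.monotonic() + self._interval
```

Here `batch_size` and `interval_s` trade a small amount of write latency against a large reduction in storage round-trips, which is exactly the pressure the IOPS figures above point to.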
Editor: 武曉燕 | Source: 程序員wayn