How Do You Delete Data from a Table with Tens of Millions of Rows?
Introduction
Today let's talk about a topic that gives many DBAs and developers a headache: deleting data from tables with tens of millions of rows.
Some engineers freeze up the moment a large-table deletion lands on their desk: either they fire off a bare DELETE and grind the database to a halt, or they are too scared to touch it at all.
I have seen plenty of disasters caused by botched large-table deletions: tables locked for ages, business systems down, even serious primary-replica lag.
In this article we will work through the options for deleting data from very large tables. I hope it helps.
I. Why Is Deleting from a Large Table So Hard?
Before diving into the technical solutions, let's understand why deleting data from a table with tens of millions of rows is so difficult.
Some of you may be thinking: "It's just a DELETE statement, what's hard about it?"
There is actually a lot going on under the hood.
How a delete works under the covers
To make the process easier to visualize, here is the low-level flow of a delete operation:
(Figure: low-level flow of a DELETE operation)
As the diagram shows, a seemingly simple DELETE statement hides a surprising amount of work.
Let's examine the challenges at each step:
1. Transactions and locking
-- A seemingly simple delete
DELETE FROM user_operation_log
WHERE create_time < '2023-01-01';
-- What MySQL actually does:
-- 1. Acquire the necessary write locks
-- 2. Scan through 10,000,000 rows one by one
-- 3. For every matching row:
--    - write undo log (for rollback)
--    - write redo log (for crash recovery)
--    - update every related index
--    - mark the row as deleted
-- 4. Space is only reclaimed after the transaction commits
2. Resource consumption
- Disk I/O: massive writes to the undo log, redo log, data files, and index files
- CPU: index maintenance, predicate evaluation, transaction management
- Memory: Buffer Pool churn, lock bookkeeping
- Network: a huge volume of data to replicate to the replicas
3. Business impact risks
- Lock wait timeouts: other queries get blocked
- Replication lag: replicas cannot keep up
- Disk space: undo log growth can fill the disk
- Performance degradation: overall database performance suffers
You might ask: "We run on a cloud database; do these problems still exist?"
In my experience, a cloud database only reduces operational complexity; the underlying principles and limits are all still there.
II. Option 1: Batched Deletes (Most Common)
Batched deletion is the most basic and most widely used approach. The core idea is divide and conquer: split one big operation into many small ones.
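Before writing any SQL, it helps to put numbers on what "divide and conquer" costs. The sketch below (plain Java, no database required; the row count, batch size, and sleep interval are illustrative assumptions) computes how many batches a deletion needs and how much time the inter-batch sleeps alone will add:

```java
public class BatchPlan {
    // Number of batches needed to delete totalRows in chunks of batchSize.
    static long batchCount(long totalRows, long batchSize) {
        return (totalRows + batchSize - 1) / batchSize; // ceiling division
    }

    // Total sleep overhead in seconds if we pause sleepMs between batches.
    static double sleepOverheadSeconds(long batches, long sleepMs) {
        return batches * sleepMs / 1000.0;
    }

    public static void main(String[] args) {
        long rows = 10_000_000L; // assumed table size
        long batch = 1_000L;     // assumed batch size
        long sleepMs = 100L;     // assumed pause between batches

        long batches = BatchPlan.batchCount(rows, batch);
        System.out.println(batches);                                    // 10000 batches
        System.out.println(BatchPlan.sleepOverheadSeconds(batches, sleepMs)); // 1000.0 s of sleep alone
    }
}
```

Ten thousand batches and fifteen-plus minutes of pure sleep overhead is the price of not blocking the database; the numbers make it clear why batch size and sleep interval are tuning knobs rather than constants.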
How it works
(Figure: batched delete flow)
Implementation
Approach 1: batch by primary key
-- Batched delete implemented as a stored procedure
DELIMITER $$
CREATE PROCEDURE batch_delete_by_id()
BEGIN
    DECLARE done INT DEFAULT FALSE;
    DECLARE batch_size INT DEFAULT 1000;
    DECLARE max_id BIGINT;
    DECLARE min_id BIGINT;
    DECLARE current_id BIGINT DEFAULT 0;

    -- Find the id range that needs deleting
    SELECT MIN(id), MAX(id) INTO min_id, max_id
    FROM user_operation_log
    WHERE create_time < '2023-01-01';

    -- Start from the smallest matching id, not from 0
    SET current_id = min_id;

    WHILE current_id < max_id DO
        -- Delete one batch
        DELETE FROM user_operation_log
        WHERE id BETWEEN current_id AND current_id + batch_size - 1
          AND create_time < '2023-01-01';

        -- Commit to release locks
        COMMIT;

        -- Sleep briefly to give the database some breathing room
        DO SLEEP(0.1);

        -- Advance the cursor
        SET current_id = current_id + batch_size;

        -- Progress logging (optional)
        INSERT INTO delete_progress_log
        VALUES (NOW(), current_id, batch_size);
    END WHILE;
END$$
DELIMITER ;
Approach 2: batch by time range
// Java implementation of time-based batched deletion
@Service
@Slf4j
public class BatchDeleteService {
    @Autowired
    private JdbcTemplate jdbcTemplate;

    /**
     * Batched deletion over a time range
     */
    public void batchDeleteByTime(String tableName, String timeColumn,
                                  Date startTime, Date endTime,
                                  int batchDays) {
        Calendar calendar = Calendar.getInstance();
        calendar.setTime(startTime);
        int totalDeleted = 0;
        long startMs = System.currentTimeMillis();
        while (calendar.getTime().before(endTime)) {
            Date batchStart = calendar.getTime();
            calendar.add(Calendar.DAY_OF_YEAR, batchDays);
            Date batchEnd = calendar.getTime();
            // Never go past the overall end time
            if (batchEnd.after(endTime)) {
                batchEnd = endTime;
            }
            String sql = String.format(
                "DELETE FROM %s WHERE %s BETWEEN ? AND ? LIMIT 1000",
                tableName, timeColumn
            );
            // Keep deleting inside this window until it is empty; a single
            // LIMIT 1000 pass would leave the rest of the window behind
            int deleted;
            do {
                deleted = jdbcTemplate.update(sql, batchStart, batchEnd);
                totalDeleted += deleted;
                log.info("Batch done: {} - {}, deleted {} rows, {} in total",
                        batchStart, batchEnd, deleted, totalDeleted);
                // Throttle deletion to avoid overloading the database
                if (deleted > 0) {
                    try {
                        Thread.sleep(500); // sleep 500 ms
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                // Log progress roughly every 10,000 deleted rows
                if (totalDeleted > 0 && totalDeleted % 10000 == 0) {
                    logProgress(totalDeleted, startMs);
                }
            } while (deleted > 0);
        }
        log.info("Deletion finished! {} rows deleted in {} seconds",
                totalDeleted, (System.currentTimeMillis() - startMs) / 1000);
    }

    private void logProgress(int totalDeleted, long startMs) {
        long costMs = System.currentTimeMillis() - startMs;
        double recordsPerSecond = totalDeleted * 1000.0 / costMs;
        log.info("Progress: {} rows, rate: {}/s, elapsed: {}s",
                totalDeleted, String.format("%.2f", recordsPerSecond), costMs / 1000);
    }
}
Approach 3: LIMIT-based batched deletion
-- Simple LIMIT-based batched delete
DELIMITER $$
CREATE PROCEDURE batch_delete_with_limit()
BEGIN
    DECLARE done INT DEFAULT 0;
    DECLARE batch_size INT DEFAULT 1000;
    DECLARE total_deleted INT DEFAULT 0;
    DECLARE batch_deleted INT DEFAULT 0;

    WHILE done = 0 DO
        -- Delete up to 1000 rows per pass
        DELETE FROM user_operation_log
        WHERE create_time < '2023-01-01'
        LIMIT batch_size;

        -- Capture ROW_COUNT() once; reading it after another
        -- statement would return that statement's result instead
        SET batch_deleted = ROW_COUNT();
        SET total_deleted = total_deleted + batch_deleted;
        SET done = (batch_deleted = 0);

        -- Commit to release locks
        COMMIT;

        -- Throttle with a short sleep
        DO SLEEP(0.1);

        -- Print progress every 10000 rows
        IF total_deleted % 10000 = 0 THEN
            SELECT CONCAT('Deleted: ', total_deleted, ' rows') AS progress;
        END IF;
    END WHILE;

    SELECT CONCAT('Done! Total: ', total_deleted, ' rows') AS result;
END$$
DELIMITER ;
Best practices for batched deletion
- Choosing the batch size
  - Small tables: 1000-5000 rows per batch
  - Large tables: 100-1000 rows per batch
  - Tune according to your workload
- Controlling the sleep interval
  - Business peak hours: sleep 1-2 seconds
  - Off-peak hours: sleep 100-500 milliseconds
  - Overnight maintenance windows: little or no sleep
- Monitoring and tuning
  - Watch overall database load
  - Watch replication lag
  - Adjust parameters dynamically as conditions change
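The tuning advice above can be automated instead of done by hand. A minimal sketch (plain Java; the bounds, thresholds, and target latency are illustrative assumptions, not values from the article) that adjusts the next batch size based on how long the previous batch took:

```java
public class BatchTuner {
    static final int MIN_BATCH = 100;
    static final int MAX_BATCH = 5000;

    // Grow the batch when the last one was fast, shrink it when slow.
    // targetMs is the latency budget we allow a single batch to consume.
    static int nextBatchSize(int current, long lastBatchMs, long targetMs) {
        if (lastBatchMs > targetMs * 2) {
            return Math.max(MIN_BATCH, current / 2); // far too slow: halve
        }
        if (lastBatchMs < targetMs / 2) {
            return Math.min(MAX_BATCH, current * 2); // plenty of headroom: double
        }
        return current;                              // within budget: keep as-is
    }

    public static void main(String[] args) {
        int batch = 1000;
        batch = BatchTuner.nextBatchSize(batch, 50, 200);  // fast batch -> grows to 2000
        batch = BatchTuner.nextBatchSize(batch, 900, 200); // slow batch -> shrinks back to 1000
        System.out.println(batch);                         // prints 1000
    }
}
```

The same feedback idea applies to the sleep interval: lengthen it when replication lag grows, shorten it when the database is idle.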
III. Option 2: Create a New Table and Rename
When you need to delete most of a table's data, creating a new table and renaming it into place is usually far more efficient.
How it works
(Figure: rebuild-and-rename flow)
Implementation
-- Step 1: create the new table with the same structure
CREATE TABLE user_operation_log_new LIKE user_operation_log;
-- Step 2: copy over the rows to keep
INSERT INTO user_operation_log_new
SELECT * FROM user_operation_log
WHERE create_time >= '2023-01-01';
-- Step 3: create indexes (building them after the load is faster)
ALTER TABLE user_operation_log_new ADD INDEX idx_create_time(create_time);
ALTER TABLE user_operation_log_new ADD INDEX idx_user_id(user_id);
-- Step 4: validate the data
SELECT
  (SELECT COUNT(*) FROM user_operation_log_new) as new_count,
  (SELECT COUNT(*) FROM user_operation_log WHERE create_time >= '2023-01-01') as expected_count;
-- Step 5: atomic switch (needs only a very short table lock)
RENAME TABLE
  user_operation_log TO user_operation_log_old,
  user_operation_log_new TO user_operation_log;
-- Step 6: drop the old table (immediately, or later)
DROP TABLE user_operation_log_old;
Helper implementation in Java
@Service
@Slf4j
public class TableRebuildService {
    @Autowired
    private JdbcTemplate jdbcTemplate;

    /**
     * Delete data by rebuilding the table
     */
    public void rebuildTableForDeletion(String sourceTable, String condition) {
        String newTable = sourceTable + "_new";
        String oldTable = sourceTable + "_old";
        try {
            // 1. Create the new table
            log.info("Creating new table: {}", newTable);
            jdbcTemplate.execute("CREATE TABLE " + newTable + " LIKE " + sourceTable);
            // 2. Copy over the rows to keep
            log.info("Copying rows to keep");
            String insertSql = String.format(
                "INSERT INTO %s SELECT * FROM %s WHERE %s",
                newTable, sourceTable, condition
            );
            int keptCount = jdbcTemplate.update(insertSql);
            log.info("Copied {} rows to keep", keptCount);
            // 3. Create indexes (optional; building them after the load is faster)
            log.info("Creating indexes");
            createIndexes(newTable);
            // 4. Validate the data
            log.info("Validating data");
            if (!validateData(sourceTable, newTable, condition)) {
                throw new RuntimeException("Data validation failed");
            }
            // 5. Atomic switch
            log.info("Switching tables");
            switchTables(sourceTable, newTable, oldTable);
            // 6. Drop the old table (immediately, or later)
            log.info("Dropping old table");
            dropTableSafely(oldTable);
            log.info("Rebuild-based deletion finished!");
        } catch (Exception e) {
            log.error("Table rebuild failed", e);
            // Clean up the temporary table
            cleanupTempTable(newTable);
            throw e;
        }
    }

    private void createIndexes(String tableName) {
        // Create whatever indexes the business needs
        String[] indexes = {
            "CREATE INDEX idx_create_time ON " + tableName + "(create_time)",
            "CREATE INDEX idx_user_id ON " + tableName + "(user_id)"
        };
        for (String sql : indexes) {
            jdbcTemplate.execute(sql);
        }
    }

    private boolean validateData(String sourceTable, String newTable, String condition) {
        // Check that the new table holds exactly the rows we meant to keep
        Integer newCount = jdbcTemplate.queryForObject(
            "SELECT COUNT(*) FROM " + newTable, Integer.class);
        Integer expectedCount = jdbcTemplate.queryForObject(
            "SELECT COUNT(*) FROM " + sourceTable + " WHERE " + condition, Integer.class);
        return newCount.equals(expectedCount);
    }

    private void switchTables(String sourceTable, String newTable, String oldTable) {
        // Atomic table rename
        String sql = String.format(
            "RENAME TABLE %s TO %s, %s TO %s",
            sourceTable, oldTable, newTable, sourceTable
        );
        jdbcTemplate.execute(sql);
    }

    private void dropTableSafely(String tableName) {
        try {
            jdbcTemplate.execute("DROP TABLE " + tableName);
        } catch (Exception e) {
            log.warn("Failed to drop table: {}, manual cleanup required", tableName, e);
        }
    }

    private void cleanupTempTable(String tableName) {
        try {
            jdbcTemplate.execute("DROP TABLE IF EXISTS " + tableName);
        } catch (Exception e) {
            log.warn("Failed to clean up temp table: {}", tableName, e);
        }
    }
}
When to use this approach
- You need to delete more than 50% of the table
- The business can tolerate a brief write pause (during the rename)
- There is enough disk space to hold both the old and the new table
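The disk-space bullet above is the one most often underestimated. A quick sketch (plain Java; the table size and keep ratio are illustrative assumptions) of the extra space a rebuild needs while the old and new tables coexist:

```java
public class RebuildSpaceCheck {
    // Rough free-disk requirement for a rebuild: the new table holds the
    // kept fraction of the data, and old + new coexist until the rename.
    static long requiredFreeBytes(long tableBytes, double keepRatio) {
        return (long) (tableBytes * keepRatio);
    }

    public static void main(String[] args) {
        long tableBytes = 50L * 1024 * 1024 * 1024; // assumed 50 GiB table
        double keepRatio = 0.3;                     // assumed: keeping 30% of the rows
        // Prints the byte count (about 15 GiB) that must be free before starting
        System.out.println(RebuildSpaceCheck.requiredFreeBytes(tableBytes, keepRatio));
    }
}
```

In practice you should also leave headroom for binlogs and the temporary sort space used when building indexes after the load.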
IV. Option 3: Dropping Partitions
If the table is already partitioned, or can be converted into a partitioned table, deleting data becomes very simple.
How it works
(Figure: partition drop flow)
Implementation
Approach 1: use an existing partitioned table
-- Inspect the table's partitions
SELECT table_name, partition_name, table_rows
FROM information_schema.partitions
WHERE table_name = 'user_operation_log';
-- Drop whole partitions directly (completes in seconds)
ALTER TABLE user_operation_log DROP PARTITION p202201, p202202;
-- Stored procedure that drops expired partitions on a schedule
DELIMITER $$
CREATE PROCEDURE auto_drop_expired_partitions()
BEGIN
    DECLARE expired_partition VARCHAR(64);
    DECLARE done INT DEFAULT FALSE;
    -- Find the partitions to drop (keep the last 12 months)
    DECLARE cur CURSOR FOR
        SELECT partition_name
        FROM information_schema.partitions
        WHERE table_name = 'user_operation_log'
          AND partition_name LIKE 'p%'
          AND STR_TO_DATE(REPLACE(partition_name, 'p', ''), '%Y%m') < DATE_SUB(NOW(), INTERVAL 12 MONTH);
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
    OPEN cur;
    read_loop: LOOP
        FETCH cur INTO expired_partition;
        IF done THEN
            LEAVE read_loop;
        END IF;
        -- Drop the expired partition
        SET @sql = CONCAT('ALTER TABLE user_operation_log DROP PARTITION ', expired_partition);
        PREPARE stmt FROM @sql;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;
        -- Audit log
        INSERT INTO partition_clean_log
        VALUES (NOW(), expired_partition, 'DROPPED');
    END LOOP;
    CLOSE cur;
END$$
DELIMITER ;
Approach 2: convert a regular table into a partitioned table
-- Convert a regular table into a partitioned one
-- Step 1: create the partitioned table
CREATE TABLE user_operation_log_partitioned (
    id BIGINT AUTO_INCREMENT,
    user_id BIGINT,
    operation VARCHAR(100),
    create_time DATETIME,
    PRIMARY KEY (id, create_time) -- the partition key must be part of the primary key
) PARTITION BY RANGE (YEAR(create_time)*100 + MONTH(create_time)) (
    PARTITION p202201 VALUES LESS THAN (202202),
    PARTITION p202202 VALUES LESS THAN (202203),
    PARTITION p202203 VALUES LESS THAN (202204),
    PARTITION p202204 VALUES LESS THAN (202205),
    PARTITION pfuture VALUES LESS THAN MAXVALUE
);
-- Step 2: copy the data
INSERT INTO user_operation_log_partitioned
SELECT * FROM user_operation_log;
-- Step 3: switch the tables
RENAME TABLE
    user_operation_log TO user_operation_log_old,
    user_operation_log_partitioned TO user_operation_log;
-- Step 4: routine maintenance: add new partitions
ALTER TABLE user_operation_log REORGANIZE PARTITION pfuture INTO (
    PARTITION p202205 VALUES LESS THAN (202206),
    PARTITION p202206 VALUES LESS THAN (202207),
    PARTITION pfuture VALUES LESS THAN MAXVALUE
);
Managing partitions from Java
@Service
@Slf4j
public class PartitionManagerService {
    @Autowired
    private JdbcTemplate jdbcTemplate;

    /**
     * Automatic partition maintenance
     */
    @Scheduled(cron = "0 0 2 * * ?") // run daily at 2 AM
    public void autoManagePartitions() {
        log.info("Starting partition maintenance");
        try {
            // 1. Drop expired partitions (keep the last 12 months)
            dropExpiredPartitions();
            // 2. Create upcoming partitions
            createFuturePartitions();
            log.info("Partition maintenance finished");
        } catch (Exception e) {
            log.error("Partition maintenance failed", e);
        }
    }

    private void dropExpiredPartitions() {
        String sql = "SELECT partition_name " +
                "FROM information_schema.partitions " +
                "WHERE table_name = 'user_operation_log' " +
                "AND partition_name LIKE 'p%' " +
                "AND STR_TO_DATE(REPLACE(partition_name, 'p', ''), '%Y%m') < DATE_SUB(NOW(), INTERVAL 12 MONTH)";
        List<String> expiredPartitions = jdbcTemplate.queryForList(sql, String.class);
        for (String partition : expiredPartitions) {
            try {
                jdbcTemplate.execute("ALTER TABLE user_operation_log DROP PARTITION " + partition);
                log.info("Dropped partition: {}", partition);
                // Audit log
                logPartitionOperation("DROP", partition, "SUCCESS");
            } catch (Exception e) {
                log.error("Failed to drop partition: {}", partition, e);
                logPartitionOperation("DROP", partition, "FAILED: " + e.getMessage());
            }
        }
    }

    private void createFuturePartitions() {
        // Create partitions for the next 3 months
        for (int i = 1; i <= 3; i++) {
            LocalDate futureDate = LocalDate.now().plusMonths(i);
            String partitionName = "p" + futureDate.format(DateTimeFormatter.ofPattern("yyyyMM"));
            // Derive the upper bound from the following month; simply adding 1
            // would produce an invalid value at year end (e.g. 202313)
            LocalDate nextMonth = futureDate.plusMonths(1);
            int nextPartitionValue = nextMonth.getYear() * 100 + nextMonth.getMonthValue();
            try {
                String sql = String.format(
                    "ALTER TABLE user_operation_log REORGANIZE PARTITION pfuture INTO (" +
                    "PARTITION %s VALUES LESS THAN (%d), " +
                    "PARTITION pfuture VALUES LESS THAN MAXVALUE)",
                    partitionName, nextPartitionValue
                );
                jdbcTemplate.execute(sql);
                log.info("Created partition: {}", partitionName);
                logPartitionOperation("CREATE", partitionName, "SUCCESS");
            } catch (Exception e) {
                log.warn("Failed to create partition (it may already exist): {}", partitionName, e);
            }
        }
    }

    private void logPartitionOperation(String operation, String partition, String status) {
        jdbcTemplate.update(
            "INSERT INTO partition_operation_log(operation, partition_name, status, create_time) VALUES (?, ?, ?, NOW())",
            operation, partition, status
        );
    }
}
Advantages of partitioned tables
- Extremely fast deletes: the partition's files are dropped directly
- No business impact: no table-locking risk
- Easy management: can be fully automated
- Better queries: partition pruning improves query performance
V. Option 4: Temporary-Table Synchronization
For scenarios that require online deletion without stopping the service, a temporary-table synchronization approach can be used.
How it works
(Figure: online migration flow)
Implementation
@Service
@Slf4j
public class OnlineTableMigrationService {
    @Autowired
    private JdbcTemplate jdbcTemplate;

    /**
     * Online migration-based deletion
     */
    public void onlineMigrationDelete(String sourceTable, String condition) {
        String newTable = sourceTable + "_new";
        String tempTable = sourceTable + "_temp";
        try {
            // Phase 1: preparation
            log.info("=== Phase 1: preparation ===");
            prepareMigration(sourceTable, newTable, tempTable);
            // Phase 2: double-write
            log.info("=== Phase 2: double-write ===");
            enableDoubleWrite(sourceTable, newTable);
            // Phase 3: data synchronization
            log.info("=== Phase 3: data synchronization ===");
            syncExistingData(sourceTable, newTable, condition);
            // Phase 4: validation
            log.info("=== Phase 4: validation ===");
            if (!validateDataSync(sourceTable, newTable)) {
                throw new RuntimeException("Data sync validation failed");
            }
            // Phase 5: switch-over
            log.info("=== Phase 5: switch-over ===");
            switchToNewTable(sourceTable, newTable, tempTable);
            // Phase 6: cleanup
            log.info("=== Phase 6: cleanup ===");
            cleanupAfterSwitch(sourceTable, tempTable);
            log.info("Online migration deletion finished!");
        } catch (Exception e) {
            log.error("Online migration failed", e);
            // Roll back the double-write
            disableDoubleWrite();
            throw e;
        }
    }

    private void prepareMigration(String sourceTable, String newTable, String tempTable) {
        // Back up the original table
        jdbcTemplate.execute("CREATE TABLE " + tempTable + " LIKE " + sourceTable);
        jdbcTemplate.execute("INSERT INTO " + tempTable + " SELECT * FROM " + sourceTable);
        // Create the new table
        jdbcTemplate.execute("CREATE TABLE " + newTable + " LIKE " + sourceTable);
    }

    private void enableDoubleWrite(String sourceTable, String newTable) {
        // Double-writing has to be implemented in the application layer,
        // or with database triggers (not recommended: hurts performance)
        log.info("Please enable application-level double-write to {} and {}", sourceTable, newTable);
        // Wait for the double-write configuration to take effect
        sleep(5000);
    }

    private void syncExistingData(String sourceTable, String newTable, String condition) {
        log.info("Syncing existing data");
        // Copy the rows to keep into the new table
        String syncSql = String.format(
            "INSERT IGNORE INTO %s SELECT * FROM %s WHERE %s",
            newTable, sourceTable, condition
        );
        int syncedCount = jdbcTemplate.update(syncSql);
        log.info("Existing data synced: {} rows", syncedCount);
        // Let the double-write catch up with incremental changes
        log.info("Waiting for incremental data to catch up...");
        sleep(30000); // wait 30 seconds; tune for your workload
        // Check data consistency
        checkDataConsistency(sourceTable, newTable);
    }

    private void checkDataConsistency(String sourceTable, String newTable) {
        // Consistency check on key business data
        Integer sourceCount = jdbcTemplate.queryForObject(
            "SELECT COUNT(*) FROM " + sourceTable, Integer.class);
        Integer newCount = jdbcTemplate.queryForObject(
            "SELECT COUNT(*) FROM " + newTable, Integer.class);
        log.info("Consistency check: source table {} rows, new table {} rows", sourceCount, newCount);
        // More detailed consistency checks can be added here
    }

    private boolean validateDataSync(String sourceTable, String newTable) {
        // Verify the correctness of the sync
        // More sophisticated validation logic can go here
        log.info("Data sync validation passed");
        return true;
    }

    private void switchToNewTable(String sourceTable, String newTable, String tempTable) {
        // Briefly pause writes (may be unnecessary, depending on the business)
        log.info("Pausing writes for the switch...");
        sleep(5000); // pause writes for 5 seconds
        // Atomic switch
        jdbcTemplate.execute("RENAME TABLE " +
            sourceTable + " TO " + sourceTable + "_backup, " +
            newTable + " TO " + sourceTable);
        log.info("Table switch complete");
    }

    private void cleanupAfterSwitch(String sourceTable, String tempTable) {
        // Disable the double-write
        disableDoubleWrite();
        // Keep the backup table around for a while before deleting it
        log.info("Backup table retained: {}_backup", sourceTable);
        jdbcTemplate.execute("DROP TABLE " + tempTable);
        log.info("Temp table dropped: {}", tempTable);
    }

    private void disableDoubleWrite() {
        log.info("Please disable the application-level double-write");
    }

    private void sleep(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
VI. Option 5: Use Dedicated Tools
For extremely large tables or complex deletion requirements, consider dedicated database tools.
1. pt-archiver (Percona Toolkit)
# Install Percona Toolkit
# Ubuntu/Debian:
sudo apt-get install percona-toolkit
# Archive-delete data with pt-archiver
pt-archiver \
  --source h=localhost,D=test,t=user_operation_log \
  --where "create_time < '2023-01-01'" \
  --limit 1000 \
  --commit-each \
  --sleep 0.1 \
  --statistics \
  --progress 10000 \
  --why-not \
  --dry-run # dry run first; remove this flag once you are satisfied
# Actually perform the deletion
pt-archiver \
  --source h=localhost,D=test,t=user_operation_log \
  --where "create_time < '2023-01-01'" \
  --limit 1000 \
  --commit-each \
  --sleep 0.1 \
  --purge
2. A custom tool class
@Component
@Slf4j
public class SmartDeleteTool {
    @Autowired
    private JdbcTemplate jdbcTemplate;

    /**
     * Pick and run the best deletion strategy automatically
     */
    public void smartDelete(String tableName, String condition) {
        try {
            // 1. Analyze the table
            TableAnalysisResult analysis = analyzeTable(tableName, condition);
            // 2. Choose the best strategy based on the analysis
            DeleteStrategy strategy = chooseBestStrategy(analysis);
            // 3. Execute the deletion
            executeDelete(strategy, tableName, condition);
        } catch (Exception e) {
            log.error("Smart delete failed", e);
            throw e;
        }
    }

    private TableAnalysisResult analyzeTable(String tableName, String condition) {
        TableAnalysisResult result = new TableAnalysisResult();
        // Table size
        result.setTotalRows(getTableRowCount(tableName));
        result.setDeleteRows(getDeleteRowCount(tableName, condition));
        result.setDeleteRatio(result.getDeleteRows() * 1.0 / result.getTotalRows());
        // Table structure
        result.setHasPartition(isTablePartitioned(tableName));
        result.setHasPrimaryKey(hasPrimaryKey(tableName));
        result.setIndexCount(getIndexCount(tableName));
        // System load
        result.setSystemLoad(getSystemLoad());
        return result;
    }

    private DeleteStrategy chooseBestStrategy(TableAnalysisResult analysis) {
        if (analysis.isHasPartition() && analysis.getDeleteRatio() > 0.3) {
            return DeleteStrategy.PARTITION_DROP;
        }
        if (analysis.getDeleteRatio() > 0.5) {
            return DeleteStrategy.TABLE_REBUILD;
        }
        if (analysis.getTotalRows() > 10_000_000) {
            return DeleteStrategy.BATCH_DELETE_WITH_PAUSE;
        }
        return DeleteStrategy.BATCH_DELETE;
    }

    private void executeDelete(DeleteStrategy strategy, String tableName, String condition) {
        switch (strategy) {
            case PARTITION_DROP:
                executePartitionDrop(tableName, condition);
                break;
            case TABLE_REBUILD:
                executeTableRebuild(tableName, condition);
                break;
            case BATCH_DELETE_WITH_PAUSE:
                executeBatchDeleteWithPause(tableName, condition);
                break;
            default:
                executeBatchDelete(tableName, condition);
        }
    }

    // Concrete implementations of each strategy...

    private long getTableRowCount(String tableName) {
        String sql = "SELECT COUNT(*) FROM " + tableName;
        return jdbcTemplate.queryForObject(sql, Long.class);
    }

    private long getDeleteRowCount(String tableName, String condition) {
        String sql = "SELECT COUNT(*) FROM " + tableName + " WHERE " + condition;
        return jdbcTemplate.queryForObject(sql, Long.class);
    }

    private boolean isTablePartitioned(String tableName) {
        String sql = "SELECT COUNT(*) FROM information_schema.partitions " +
                "WHERE table_name = ? AND partition_name IS NOT NULL";
        Integer count = jdbcTemplate.queryForObject(sql, Integer.class, tableName);
        return count != null && count > 0;
    }

    // Other analysis helpers...
}
enum DeleteStrategy {
    BATCH_DELETE,             // plain batched delete
    BATCH_DELETE_WITH_PAUSE,  // batched delete with sleeps
    TABLE_REBUILD,            // rebuild the table
    PARTITION_DROP,           // drop partitions
    ONLINE_MIGRATION          // online migration
}
@Data
class TableAnalysisResult {
    private long totalRows;
    private long deleteRows;
    private double deleteRatio;
    private boolean hasPartition;
    private boolean hasPrimaryKey;
    private int indexCount;
    private double systemLoad;
}
VII. Comparison and Selection Guide
To help you choose the right option, here is a detailed comparison:
Comparison matrix
| Option | Best for | Pros | Cons | Risk |
| Batched delete | Smaller-scale deletions | Simple to implement | Long running time | Medium |
| Table rebuild | Deleting >50% of rows | Very fast | Needs a write pause | High |
| Partition drop | Table is (or can be) partitioned | Completes in seconds | Needs upfront planning | Low |
| Online sync | Zero-downtime requirement | Invisible to the business | Complex to implement | Medium |
| Dedicated tools | Complex scenarios | Powerful | Learning curve | Medium |
Decision flow
(Figure: decision flow chart)
Practical advice
- Rehearse in a test environment: validate any deletion plan in a test environment first
- Back up first: always back up the data before deleting
- Pick off-peak hours: run deletions during business low-traffic windows
- Monitor and alert: watch database health in real time and set alert thresholds
- Prepare a rollback plan: have a complete rollback procedure ready
Summary
With the analysis above behind us, let's recap the key points of deleting data from a table with tens of millions of rows.
Core principles
- Safety first: every deletion must keep the data safe
- Minimal impact: disturb the business as little as possible
- Efficiency matters: choose the most efficient option that fits
- Observability: the whole process must be monitorable and controllable
A selection rule of thumb
From years of hands-on experience, a simple mnemonic:
Check for partitions, judge the ratio, then pick the plan
- Partitioned: dropping partitions is fastest
- Deleting a little: batched deletes are steadiest
- Deleting a lot: rebuilding the table is most efficient
- Can't stop: online sync is safest
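The mnemonic maps directly to code. A minimal sketch (plain Java; the enum mirrors the DeleteStrategy shown earlier, the 50% threshold and the argument names are illustrative assumptions):

```java
public class StrategyPicker {
    enum Strategy { PARTITION_DROP, BATCH_DELETE, TABLE_REBUILD, ONLINE_MIGRATION }

    // Encode the mnemonic: partitions first, then the zero-downtime
    // constraint, then the delete ratio decides rebuild vs batching.
    static Strategy pick(boolean partitioned, boolean zeroDowntime, double deleteRatio) {
        if (partitioned) return Strategy.PARTITION_DROP;      // partitioned: drop partitions
        if (zeroDowntime) return Strategy.ONLINE_MIGRATION;   // can't stop: online sync
        if (deleteRatio > 0.5) return Strategy.TABLE_REBUILD; // deleting a lot: rebuild
        return Strategy.BATCH_DELETE;                         // deleting a little: batch
    }

    public static void main(String[] args) {
        System.out.println(StrategyPicker.pick(true, false, 0.9));  // PARTITION_DROP
        System.out.println(StrategyPicker.pick(false, true, 0.9));  // ONLINE_MIGRATION
        System.out.println(StrategyPicker.pick(false, false, 0.7)); // TABLE_REBUILD
        System.out.println(StrategyPicker.pick(false, false, 0.1)); // BATCH_DELETE
    }
}
```

A real decision would also weigh disk headroom, replication lag, and maintenance windows, as the comparison matrix above suggests.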
Final advice
Deleting data from a large table is an operation that deserves real caution. My advice:
- Prevention beats cure: manage the data lifecycle and purge regularly
- Design it in: plan the cleanup strategy at the architecture stage
- Know your tools: be fluent with the various deletion tools
- Accumulate experience: write down lessons learned after every operation
Remember: there is no best option, only the most suitable one.