No more waiting! Spring Boot 3.3 nails chunked large-file uploads + instant file upload (秒傳), blazing fast!
Author: 編程疏影
In modern file upload scenarios, users routinely run into large files, network interruptions, and bandwidth wasted on duplicate uploads. To tackle these problems, this article uses Spring Boot 3.3 to build a high-performance, extensible file upload system that supports:

- Instant upload (秒傳, detected via MD5)
- Chunked upload (supports resuming large-file transfers)
- Chunk merging (performed on the server side)

With complete front-end and back-end examples, we will build a practical upload solution from scratch, so that business systems of all kinds can take on large-file handling efficiently.
Build the Spring Boot 3.3 project
The key dependencies in pom.xml are as follows:
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>cn.hutool</groupId>
        <artifactId>hutool-all</artifactId>
        <version>5.8.25</version>
    </dependency>
</dependencies>
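By default Spring Boot limits multipart uploads to 1 MB per file, which would reject the 2 MB chunks used later in this article, so the limit should be raised, either via the spring.servlet.multipart.max-file-size / max-request-size properties or with a small configuration class. Below is a minimal sketch; the UploadConfig class name and the 20 MB / 25 MB values are arbitrary example choices, not part of the original project:
package com.icoderoad.config;
import jakarta.servlet.MultipartConfigElement;
import org.springframework.boot.web.servlet.MultipartConfigFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.util.unit.DataSize;
@Configuration
public class UploadConfig {
    // Example limits only: they just need to comfortably exceed the 2 MB chunk size used by the front end
    @Bean
    public MultipartConfigElement multipartConfigElement() {
        MultipartConfigFactory factory = new MultipartConfigFactory();
        factory.setMaxFileSize(DataSize.ofMegabytes(20));
        factory.setMaxRequestSize(DataSize.ofMegabytes(25));
        return factory.createMultipartConfig();
    }
}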
Define the file info entity class FileInfo
package com.icoderoad.model;
import cn.hutool.core.util.IdUtil;
public class FileInfo {
    private String id = IdUtil.fastUUID();
    private String fileName;
    private String fileMd5;
    private Long fileSize;
    private String filePath;
    public FileInfo(String fileName, String fileMd5, Long fileSize, String filePath) {
        this.fileName = fileName;
        this.fileMd5 = fileMd5;
        this.fileSize = fileSize;
        this.filePath = filePath;
    }
    // Getters are required so the object can be serialized to JSON in responses
    public String getId() { return id; }
    public String getFileName() { return fileName; }
    public String getFileMd5() { return fileMd5; }
    public Long getFileSize() { return fileSize; }
    public String getFilePath() { return filePath; }
}
Core service class FileService
package com.icoderoad.service;
import cn.hutool.crypto.digest.DigestUtil;
import com.icoderoad.model.FileInfo;
import org.springframework.stereotype.Service;
import org.springframework.web.multipart.MultipartFile;
import java.io.*;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
@Service
public class FileService {
    private final Map<String, FileInfo> fileStore = new ConcurrentHashMap<>();
    private final String tempDir = System.getProperty("java.io.tmpdir") + File.separator + "chunks";
    public FileInfo findByMd5(String md5) {
        return fileStore.get(md5);
    }
    public FileInfo saveFile(String fileName, String fileMd5, Long fileSize, String filePath) {
        FileInfo info = new FileInfo(fileName, fileMd5, fileSize, filePath);
        fileStore.put(fileMd5, info);
        return info;
    }
    public String calculateMD5(MultipartFile file) throws IOException {
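        // Hutool's DigestUtil.md5Hex(InputStream) digests the stream through a buffer,
        // so the whole file does not have to be loaded into memory at once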
        return DigestUtil.md5Hex(file.getInputStream());
    }
    public void saveChunk(MultipartFile chunk, String identifier, int index) throws IOException {
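        // Each chunk is written as <index>.part inside a directory named after the upload identifier,
        // so the merge step can read the parts back in order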
        File dir = new File(tempDir + File.separator + identifier);
        if (!dir.exists()) dir.mkdirs();
        chunk.transferTo(new File(dir, index + ".part"));
    }
    public File mergeChunks(String identifier, int totalChunks, String fileName) throws IOException {
        File dir = new File(tempDir + File.separator + identifier);
        File merged = new File(System.getProperty("java.io.tmpdir"), fileName);
        try (FileOutputStream out = new FileOutputStream(merged)) {
            for (int i = 0; i < totalChunks; i++) {
                File chunk = new File(dir, i + ".part");
                try (FileInputStream in = new FileInputStream(chunk)) {
                    byte[] buffer = new byte[1024 * 1024];
                    int len;
                    while ((len = in.read(buffer)) > 0) {
                        out.write(buffer, 0, len);
                    }
                }
            }
        }
        return merged;
    }
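    // Optional cleanup helper (a hypothetical sketch, not used by the endpoints below):
    // mergeChunks leaves the .part files behind, so something like this could be
    // called after a successful merge to reclaim temp space.
    public void deleteChunks(String identifier) {
        File dir = new File(tempDir + File.separator + identifier);
        File[] parts = dir.listFiles();
        if (parts != null) {
            for (File part : parts) {
                part.delete();
            }
        }
        dir.delete();
    }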
}
Generic response wrapper Result
package com.icoderoad.common;
public class Result {
    private boolean success;
    private Object data;
    private String message;
    public Result(boolean success, Object data, String message) {
        this.success = success;
        this.data = data;
        this.message = message;
    }
    public static Result success(Object data) {
        return new Result(true, data, "OK");
    }
    public static Result error(String message) {
        return new Result(false, null, message);
    }
    // Getters are required for JSON serialization of the response
    public boolean isSuccess() { return success; }
    public Object getData() { return data; }
    public String getMessage() { return message; }
}
Controller FileController
package com.icoderoad.controller;
import cn.hutool.crypto.digest.DigestUtil;
import com.icoderoad.common.Result;
import com.icoderoad.model.FileInfo;
import com.icoderoad.service.FileService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;
import org.springframework.web.multipart.MultipartFile;
import java.io.File;
@RestController
@RequestMapping("/api/file")
public class FileController {
    @Autowired
    private FileService fileService;
    @PostMapping("/check")
    public Result check(@RequestParam("md5") String md5) {
        FileInfo exist = fileService.findByMd5(md5);
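        // A null result simply means the server has not seen this MD5 yet, so the client must upload the file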
        return Result.success(exist);
    }
    @PostMapping("/upload")
    public Result upload(@RequestParam("file") MultipartFile file) {
        try {
            String md5 = fileService.calculateMD5(file);
            FileInfo exist = fileService.findByMd5(md5);
            if (exist != null) return Result.success(exist);
            String path = System.getProperty("java.io.tmpdir") + File.separator + file.getOriginalFilename();
            file.transferTo(new File(path));
            FileInfo saved = fileService.saveFile(file.getOriginalFilename(), md5, file.getSize(), path);
            return Result.success(saved);
        } catch (Exception e) {
            return Result.error("Upload failed: " + e.getMessage());
        }
    }
    @PostMapping("/chunk")
    public Result uploadChunk(@RequestParam("chunk") MultipartFile chunk,
                              @RequestParam("identifier") String identifier,
                              @RequestParam("index") int index) {
        try {
            fileService.saveChunk(chunk, identifier, index);
            return Result.success("Chunk uploaded successfully");
        } catch (Exception e) {
            return Result.error("Failed to upload chunk: " + e.getMessage());
        }
    }
    @PostMapping("/merge")
    public Result mergeChunks(@RequestParam("identifier") String identifier,
                               @RequestParam("total") int total,
                               @RequestParam("fileName") String fileName) {
        try {
            File merged = fileService.mergeChunks(identifier, total, fileName);
            String md5 = DigestUtil.md5Hex(merged);
            FileInfo info = fileService.saveFile(fileName, md5, merged.length(), merged.getAbsolutePath());
            return Result.success(info);
        } catch (Exception e) {
            return Result.error("Merge failed: " + e.getMessage());
        }
    }
}
Front end: chunked upload and instant upload with Bootstrap + SparkMD5 + Axios
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Large File Upload Demo</title>
<!-- Bootstrap 5 stylesheet -->
<link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.3/dist/css/bootstrap.min.css" rel="stylesheet">
<script src="https://cdn.jsdelivr.net/npm/axios/dist/axios.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/spark-md5/spark-md5.min.js"></script>
</head>
<body class="container py-5">
<div class="card shadow-lg">
    <div class="card-header bg-primary text-white">
      <h4>Chunked Large File Upload + Instant Upload Demo</h4>
    </div>
    <div class="card-body">
      <div class="mb-3">
        <input type="file" class="form-control" id="fileInput">
      </div>
      <button class="btn btn-success" onclick="upload()">Start Upload</button>
      <div class="mt-3">
        <div class="progress">
          <div id="progressBar" class="progress-bar" role="progressbar" style="width: 0%">0%</div>
        </div>
      </div>
    </div>
</div>
<script>
    const CHUNK_SIZE = 2 * 1024 * 1024; // 2 MB per chunk
    let file = null;

    document.getElementById('fileInput').addEventListener('change', function (e) {
      file = e.target.files[0];
    });

    async function upload() {
      if (!file) return alert('Please choose a file first');
      const fileMD5 = await calculateMD5(file);
      const chunkCount = Math.ceil(file.size / CHUNK_SIZE);

      // Instant-upload check: if the server already knows this MD5, we are done
      const checkRes = await axios.post('/api/file/check', null, {
        params: { md5: fileMD5 }
      });
      if (checkRes.data.success && checkRes.data.data) {
        alert('The file already exists on the server - instant upload!');
        updateProgressBar(100);
        return;
      }

      // Upload the file chunk by chunk, using the MD5 as the upload identifier
      for (let i = 0; i < chunkCount; i++) {
        const start = i * CHUNK_SIZE;
        const end = Math.min(file.size, start + CHUNK_SIZE);
        const chunk = file.slice(start, end);

        const formData = new FormData();
        formData.append('chunk', chunk);
        formData.append('identifier', fileMD5);
        formData.append('index', i);

        await axios.post('/api/file/chunk', formData);
        updateProgressBar(Math.round(((i + 1) / chunkCount) * 100));
      }

      // Ask the server to merge the chunks into the final file
      await axios.post('/api/file/merge', null, {
        params: { identifier: fileMD5, total: chunkCount, fileName: file.name }
      });
      alert('Upload and merge finished!');
    }

    async function calculateMD5(file) {
      return new Promise((resolve, reject) => {
        const chunkSize = CHUNK_SIZE;
        const chunks = Math.ceil(file.size / chunkSize);
        let currentChunk = 0;
        const spark = new SparkMD5.ArrayBuffer();
        const fileReader = new FileReader();

        fileReader.onload = e => {
          spark.append(e.target.result);
          currentChunk++;
          if (currentChunk < chunks) {
            loadNext();
          } else {
            resolve(spark.end());
          }
        };
        fileReader.onerror = () => reject('Failed to read the file');

        function loadNext() {
          const start = currentChunk * chunkSize;
          const end = Math.min(start + chunkSize, file.size);
          fileReader.readAsArrayBuffer(file.slice(start, end));
        }
        loadNext();
      });
    }

    function updateProgressBar(percent) {
      const bar = document.getElementById('progressBar');
      bar.style.width = percent + '%';
      bar.innerText = percent + '%';
    }
  </script>
</body>
</html>
Conclusion
In this article we built a complete large-file upload system that is efficient, stable, and extensible, suitable for document upload, video management, asset collection, and similar scenarios in enterprise systems. Its core strengths are:

- Instant upload: avoids duplicate uploads and saves bandwidth and storage
- Chunked transfer: handles large files and makes uploads more robust
- Extensibility: components such as Redis or a message queue can be added later to improve concurrent processing
 
Editor in charge: 武曉燕
Source: 路條編程