File Upload: Chunked, Multi-Threaded Upload of Large Files with Spring Boot, with an Upload Progress Dialog on the Frontend

Author: 猴君

一、Project Setup

  1. Create a Spring Boot project: create a new Spring Boot project and add the Web dependency.

  2. Add dependencies: add the following dependencies to pom.xml:

<dependency>
    <groupId>commons-fileupload</groupId>
    <artifactId>commons-fileupload</artifactId>
    <version>1.4</version>
</dependency>
<dependency>
    <groupId>commons-io</groupId>
    <artifactId>commons-io</artifactId>
    <version>2.11.0</version>
</dependency>

二、Backend Implementation

  1. Configure MultipartResolver: add the following code to a Spring Boot configuration class:

@Configuration
public class MyWebAppConfigurer implements WebMvcConfigurer {

    @Bean
    public MultipartResolver multipartResolver() {
        CommonsMultipartResolver multipartResolver = new CommonsMultipartResolver();
        multipartResolver.setMaxUploadSize(-1); // maximum upload size; -1 means no limit
        return multipartResolver;
    }
}
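If you would rather stay on Spring Boot's built-in servlet multipart support instead of commons-fileupload, the size limits can be raised through a MultipartConfigElement bean (Spring Boot 2.1+). This is only a minimal sketch: the 10 MB / 20 MB limits are placeholder assumptions sized for 4 MB chunks, and the same values can also be set with the spring.servlet.multipart.* properties.

@Bean
public MultipartConfigElement multipartConfigElement() {
    // org.springframework.boot.web.servlet.MultipartConfigFactory
    MultipartConfigFactory factory = new MultipartConfigFactory();
    factory.setMaxFileSize(DataSize.ofMegabytes(10));    // per-chunk limit (assumed value)
    factory.setMaxRequestSize(DataSize.ofMegabytes(20)); // whole multipart request limit (assumed value)
    return factory.createMultipartConfig();
}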
  2. Create FileUploadService: create a service class that handles the file upload logic:

@Service
public class FileUploadService {

    private String uploadDir = "upload/"; // upload directory

    public String initUpload(String fileName, long fileSize, int chunkSize) {
        // 1. Generate a task ID (UUID). For resumable uploads the same file must produce the
        //    same identifier, so that its upload progress can be looked up later.
        String fileId = UUID.randomUUID().toString();
        // 2. Create a temporary directory: uploadDir/fileId
        File dir = new File(uploadDir, fileId);
        if (!dir.exists()) {
            dir.mkdirs();
        }
        // 3. Return the fileId
        return fileId;
    }

    public String uploadChunk(String fileId, int chunkIndex, int totalChunks, MultipartFile file) throws IOException {
        String fileUrl = "";
        // 1. Save the chunk into the temporary directory: uploadDir/fileId/chunkIndex
        File chunkFile = new File(uploadDir, fileId + "/" + chunkIndex);
        file.transferTo(chunkFile);

        // 2. Check whether all chunks have been uploaded
        if (allChunksUploaded(fileId, totalChunks)) {
            // The target name is a date path plus the fileId; to keep the original file name,
            // add a fileName parameter to the controller's upload endpoint instead.
            String targetName = datePath() + "/" + fileId;
            // Merge the chunks
            fileUrl = mergeChunks(fileId, targetName);
        }
        // 3. Verify the chunk's MD5 (optional)
        return fileUrl;
    }

    // Check whether every chunk has already been uploaded
    private boolean allChunksUploaded(String fileId, int totalChunks) {
        for (int i = 0; i < totalChunks; i++) {
            File chunkFile = new File(uploadDir, fileId + "/" + i);
            if (!chunkFile.exists()) {
                return false;
            }
        }
        return true;
    }

    public String mergeChunks(String fileId, String fileName) throws IOException {
        // 1. Collect all chunk files and sort them by chunk index
        File dir = new File(uploadDir, fileId);
        File[] chunkFiles = dir.listFiles();
        Arrays.sort(chunkFiles, Comparator.comparingInt((File f) -> Integer.parseInt(f.getName())));
        // 2. Merge the chunks in order
        File mergedFile = new File(uploadDir, fileName);
        mergedFile.getParentFile().mkdirs();
        try (FileOutputStream fos = new FileOutputStream(mergedFile, true)) {
            for (File chunkFile : chunkFiles) {
                try (FileInputStream fis = new FileInputStream(chunkFile)) {
                    IOUtils.copy(fis, fos);
                }
            }
        }
        // 3. Delete the temporary directory
        FileUtils.deleteDirectory(dir);
        // 4. Verify the merged file's MD5 (optional)
        // 5. Return the storage path of the file
        return uploadDir + fileName;
    }

    /**
     * Date path, i.e. year/month/day, e.g. 2018/08/08
     * (DateFormatUtils is org.apache.commons.lang3.time.DateFormatUtils; add commons-lang3 if it is not on the classpath)
     */
    public static final String datePath() {
        Date now = new Date();
        return DateFormatUtils.format(now, "yyyy/MM/dd");
    }
}
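The "verify MD5 (optional)" steps above are left open. As a minimal sketch, assuming the client sends the expected digest along with the chunk (the helper name and the expectedMd5 parameter are assumptions, not part of the original code), the check could use Spring's DigestUtils:

// Hedged sketch: compare a stored chunk (or the merged file) against an expected MD5 digest.
// Uses org.springframework.util.DigestUtils.
private boolean verifyMd5(File file, String expectedMd5) throws IOException {
    try (InputStream in = new FileInputStream(file)) {
        String actualMd5 = DigestUtils.md5DigestAsHex(in);
        return actualMd5.equalsIgnoreCase(expectedMd5);
    }
}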
  3. Create FileUploadController: create a controller class that handles the file upload requests:

        To build resumable uploads on top of chunked uploads, the server must record each file's upload progress and return the already-uploaded chunk information when the client starts uploading.

  • Record each file's upload progress in a database or another storage mechanism (a sketch of such a record follows this list).
  • An upload task can be identified by:
    • fileId: a globally unique identifier, e.g. a UUID
    • The same file must always map to the same identifier, so that a later upload can continue from the previous progress.
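As a minimal sketch of what such a persisted progress record might look like (every name here is an illustrative assumption, not part of the original code):

import java.util.HashSet;
import java.util.Set;

// Hypothetical progress record: one row per upload task.
public class UploadTask {
    private String fileId;       // stable identifier of the file (same file -> same id)
    private String fileName;     // original file name
    private long fileSize;       // total size in bytes
    private int chunkSize;       // size of each chunk in bytes
    private int totalChunks;     // ceil(fileSize / chunkSize)
    private Set<Integer> uploadedChunks = new HashSet<>(); // chunk indices already received
    // ... getters and setters ...
}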

        

@RestController
public class FileUploadController {

    @Autowired
    private FileUploadService fileUploadService;
    // Thread pool for handling chunk uploads; the pool size can be made configurable
    private final ExecutorService executorService = Executors.newFixedThreadPool(5);
    // In-memory upload progress keyed by fileId; use a database (or Redis) in a real application
    private final Map<String, Set<Integer>> uploadProgress = new ConcurrentHashMap<>();

    @PostMapping("/upload/init")
    public ResponseEntity<String> initUpload(@RequestParam("fileName") String fileName,
                                             @RequestParam("fileSize") long fileSize,
                                             @RequestParam("chunkSize") int chunkSize) {
        String fileId = fileUploadService.initUpload(fileName, fileSize, chunkSize);
        return ResponseEntity.ok(fileId);
    }

    @PostMapping("/upload/chunk")
    public ResponseEntity<?> uploadChunk(@RequestParam("fileId") String fileId,
                                         @RequestParam("chunkIndex") int chunkIndex,
                                         @RequestParam("totalChunks") int totalChunks,
                                         @RequestParam("file") MultipartFile file) {
        try {
            // Get or create the progress record for this file
            Set<Integer> uploadedChunks =
                    uploadProgress.computeIfAbsent(fileId, k -> ConcurrentHashMap.newKeySet());

            // Skip chunks that have already been uploaded
            if (uploadedChunks.contains(chunkIndex)) {
                return ResponseEntity.ok(new UploadResponse(uploadedChunks));
            }

            // Hand each chunk to the thread pool. The merged file path is only known after the
            // last chunk has been processed, so it is not returned from this request.
            executorService.execute(() -> {
                try {
                    fileUploadService.uploadChunk(fileId, chunkIndex, totalChunks, file);
                    uploadedChunks.add(chunkIndex);
                } catch (IOException e) {
                    // Handle the exception, e.g. log it or mark the chunk as failed
                    e.printStackTrace();
                }
            });

            return ResponseEntity.status(HttpStatus.ACCEPTED).body(new UploadResponse(uploadedChunks));
        } catch (Exception e) {
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body("Error uploading chunk.");
        }
    }
}

// Response body carrying the already-uploaded chunk information
class UploadResponse {
    Set<Integer> uploadedChunks;
    String filePath;

    public UploadResponse(Set<Integer> uploadedChunks, String filePath) {
        this.uploadedChunks = uploadedChunks;
        this.filePath = filePath;
    }

    public UploadResponse(Set<Integer> uploadedChunks) {
        this.uploadedChunks = uploadedChunks;
    }

    // ... getters and setters ...
}
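The comment above notes that the thread pool size can be made configurable. One hedged way to do that (the property name upload.thread-pool-size and its default of 5 are assumptions) is to inject the size through the constructor and shut the pool down cleanly when the application stops:

// Sketch: configurable pool size plus a clean shutdown hook.
@RestController
public class FileUploadController {

    private final ExecutorService executorService;

    public FileUploadController(@Value("${upload.thread-pool-size:5}") int poolSize) {
        this.executorService = Executors.newFixedThreadPool(poolSize);
    }

    // javax.annotation.PreDestroy (jakarta.annotation.PreDestroy on Spring Boot 3)
    @PreDestroy
    public void shutdownPool() {
        executorService.shutdown(); // stop accepting new tasks; let running uploads finish
    }

    // ... upload endpoints as above ...
}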

三、Frontend Implementation

  1. HTML page: create a simple HTML page with a file picker, an upload progress bar, and an area for related information.

  2. JavaScript code: use JavaScript to split the file, upload the chunks, request the merge, and display the upload progress.

// Pick the file
const fileInput = document.getElementById('fileInput');
fileInput.addEventListener('change', (event) => {
    const file = event.target.files[0];
    uploadFile(file); // split the file and upload the chunks
});

async function uploadFile(file) {
    // Split the file
    const chunkSize = 4 * 1024 * 1024; // 4MB
    const chunks = sliceFile(file, chunkSize);
    const totalChunks = Math.ceil(file.size / chunkSize);

    // Initialize the upload (initUpload POSTs to /upload/init and returns the fileId;
    // see the full HTML example below)
    const fileId = await initUpload(file.name, file.size, chunkSize);

    // Upload the chunks concurrently
    const uploadPromises = chunks.map((chunk, index) =>
        uploadChunk(fileId, index, totalChunks, chunk)
    );

    // Once every chunk is uploaded, the server merges them and the file URL becomes available
    await Promise.all(uploadPromises);
}

async function uploadChunk(fileId, chunkIndex, totalChunks, chunk) {
    // Build the FormData
    const formData = new FormData();
    formData.append('file', chunk);
    formData.append('chunkIndex', chunkIndex);
    formData.append('totalChunks', totalChunks);
    formData.append('fileId', fileId); // include the fileId parameter

    const response = await fetch('/upload/chunk', {
        method: 'POST',
        body: formData,
    });
    // ... handle the response data ...
}

function sliceFile(file, chunkSize) {
    const chunks = [];
    const count = Math.ceil(file.size / chunkSize);
    for (let i = 0; i < count; i++) {
        const offset = i * chunkSize;
        chunks.push(file.slice(offset, offset + chunkSize));
    }
    return chunks;
}

四、Displaying Upload Progress

  1. Backend: add a method to FileUploadService that returns the number of uploaded chunks for a given fileId, or computes the upload progress percentage (a sketch follows this list).

  2. Frontend: poll the backend with setInterval to fetch the upload progress and update the progress bar.
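For item 1, this is a minimal sketch of such a method, assuming chunks are stored under uploadDir/fileId/ exactly as in the FileUploadService above; the method names are assumptions:

// Hedged sketch: progress derived from the chunk files already on disk.
public int countUploadedChunks(String fileId) {
    File[] chunkFiles = new File(uploadDir, fileId).listFiles();
    return chunkFiles == null ? 0 : chunkFiles.length;
}

public double uploadProgressPercent(String fileId, int totalChunks) {
    return totalChunks == 0 ? 0.0 : 100.0 * countUploadedChunks(fileId) / totalChunks;
}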

Example frontend HTML page:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>File Upload</title>
</head>
<body>
<h1>Chunked Upload of Large Files</h1>
<input type="file" id="fileInput">
<button id="uploadBtn">Upload</button>
<div>Progress: <progress id="progressBar" value="0" max="100"></progress> <span id="progressText">0%</span></div>
<script>
    const fileInput = document.getElementById('fileInput');
    const uploadBtn = document.getElementById('uploadBtn');
    const progressBar = document.getElementById('progressBar');
    const progressText = document.getElementById('progressText');
    uploadBtn.addEventListener('click', uploadFile);

    async function uploadFile() {
        const file = fileInput.files[0];
        if (!file) {
            alert('Please choose a file');
            return;
        }
        const chunkSize = 4 * 1024 * 1024; // 4MB
        const fileId = await initUpload(file.name, file.size, chunkSize);
        const chunks = sliceFile(file, chunkSize);
        let uploadedChunks = 0;
        const uploadPromises = chunks.map((chunk, index) => {
            return uploadChunk(fileId, index, chunks.length, chunk)
                .then(() => {
                    uploadedChunks++;
                    updateProgress(uploadedChunks / chunks.length);
                });
        });
        // The backend merges the chunks automatically once the last one has been uploaded
        await Promise.all(uploadPromises);
        alert('Upload complete!');
    }

    function sliceFile(file, chunkSize) {
        const chunks = [];
        let offset = 0;
        while (offset < file.size) {
            chunks.push(file.slice(offset, offset + chunkSize));
            offset += chunkSize;
        }
        return chunks;
    }

    async function initUpload(fileName, fileSize, chunkSize) {
        // The backend reads these values as request parameters, so send them as form data
        const params = new URLSearchParams({ fileName, fileSize, chunkSize });
        const response = await fetch('/upload/init', {
            method: 'POST',
            headers: {
                'Content-Type': 'application/x-www-form-urlencoded'
            },
            body: params
        });
        return await response.text();
    }

    async function uploadChunk(fileId, chunkIndex, totalChunks, chunk) {
        const formData = new FormData();
        formData.append('fileId', fileId);
        formData.append('chunkIndex', chunkIndex);
        formData.append('totalChunks', totalChunks);
        formData.append('file', chunk);
        await fetch('/upload/chunk', {
            method: 'POST',
            body: formData
        });
    }

    function updateProgress(progress) {
        progressBar.value = progress * 100;
        progressText.textContent = Math.round(progress * 100) + '%';
    }
</script>
</body>
</html>

五、Storing Upload Progress

Using Redis to store upload progress

  1. Redis data structure:

    • Store each file's upload progress in a Hash: the Redis key is the fileId, each hash field is a chunkIndex, and the field value is true once that chunk has been uploaded.
  2. Code implementation:

  • Inject RedisTemplate:
@Autowired
private RedisTemplate<String, String> redisTemplate;
  • Modify the uploadChunk method:
private void uploadChunk(String fileId, int chunkIndex, int totalChunks, MultipartFile file) throws IOException {
    // ... save the chunk file ...

    // Record this chunk as uploaded in the Redis hash for this file
    redisTemplate.opsForHash().put(fileId, String.valueOf(chunkIndex), "true");

    // Check whether all chunks have been uploaded
    if (redisTemplate.opsForHash().size(fileId) == totalChunks) {
        // ... merge the chunks ...
        // Clear the recorded progress
        redisTemplate.delete(fileId);
    }
}
  • Add an /upload/progress endpoint:
@GetMapping("/upload/progress") public ResponseEntity<UploadResponse> getUploadProgress(         @RequestParam("identifier") String identifier ) {     Set<String> uploadedChunks = redisTemplate.keys(identifier + ":*");     Set<Integer> uploadedChunkIndices = uploadedChunks.stream()             .map(s -> Integer.parseInt(s.substring((identifier + ":").length())))             .collect(Collectors.toSet());     return ResponseEntity.ok(new UploadResponse(uploadedChunkIndices)); } 
  • Frontend JavaScript usage:
// ... other code ...

// Pick the file
const fileInput = document.getElementById('fileInput');
fileInput.addEventListener('change', async (event) => {
    const file = event.target.files[0];
    // Split the file
    const chunkSize = 4 * 1024 * 1024; // 4MB
    const chunks = sliceFile(file, chunkSize);
    const totalChunks = Math.ceil(file.size / chunkSize);
    // Initialize the upload; for resumable uploads the same file must produce the same fileId
    const fileId = await initUpload(file.name, file.size, chunkSize);
    await uploadFile(file, fileId, chunks, totalChunks);
});

async function uploadFile(file, fileId, chunks, totalChunks) {
    // Fetch the chunk indices that have already been uploaded
    const uploadedChunks = await getUploadedChunks(fileId, file.name);

    // Skip the chunks that are already on the server and upload the rest concurrently
    const uploadPromises = chunks
        .map((chunk, index) => ({ chunk, index }))
        .filter(({ index }) => !uploadedChunks.includes(index))
        .map(({ chunk, index }) => uploadChunk(fileId, index, totalChunks, chunk));
    await Promise.all(uploadPromises);
}

async function getUploadedChunks(fileId, fileName) {
    const response = await fetch(`/upload/progress?identifier=${fileId}&fileName=${fileName}`);
    const data = await response.json();
    return data.uploadedChunks || [];
}

async function uploadChunk(fileId, chunkIndex, totalChunks, chunk) {
    // Build the FormData
    const formData = new FormData();
    formData.append('file', chunk);
    formData.append('chunkIndex', chunkIndex);
    formData.append('totalChunks', totalChunks);
    formData.append('fileId', fileId); // include the fileId parameter

    const response = await fetch('/upload/chunk', {
        method: 'POST',
        body: formData,
    });
    // ... handle the response data ...
}

function sliceFile(file, chunkSize) {
    const chunks = [];
    const count = Math.ceil(file.size / chunkSize);
    for (let i = 0; i < count; i++) {
        const offset = i * chunkSize;
        chunks.push(file.slice(offset, offset + chunkSize));
    }
    return chunks;
}

Fetching progress when re-uploading:

  • When the user selects the same file to upload again, the same fileId must be generated (see the sketch after this list).
  • Before uploading, the frontend calls the /upload/progress endpoint with the fileId to get the already-uploaded chunk information.
  • Based on the returned information, skip the chunks that have already been uploaded and continue with the remaining ones.
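One hedged way to keep the fileId stable across sessions is to derive it on the server from the file's name and size instead of a random UUID; a content hash computed on the client is more robust, but as an illustrative sketch (this derivation is an assumption, not the original code):

// Variant of initUpload that derives a stable fileId, so the same file resumes the same task.
// Uses org.springframework.util.DigestUtils and java.nio.charset.StandardCharsets.
public String initUpload(String fileName, long fileSize, int chunkSize) {
    String fileId = DigestUtils.md5DigestAsHex((fileName + ":" + fileSize).getBytes(StandardCharsets.UTF_8));
    File dir = new File(uploadDir, fileId);
    if (!dir.exists()) {
        dir.mkdirs();
    }
    return fileId;
}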

六、Notes

  • The code samples above omit some details, such as exception handling and MD5 verification; complete them according to your actual requirements.
  • The frontend code needs to be adapted to the JavaScript framework you are using.
  • It is recommended to first get familiar with Spring Boot file upload, JavaScript file handling, and AJAX.

Hopefully these more detailed steps and code snippets help you understand and implement resumable, multi-threaded chunked uploads with Spring Boot. If you have further questions, feel free to ask.
