AWS S3 + MinIO: Multipart Upload, Resumable Upload, Instant Upload, Multipart Download, and Pausable Download

Author: 筋斗云


Preface

Amazon Simple Storage Service (S3) is a public cloud storage service. Web application developers can use it to store digital assets such as images, videos, music, and documents. S3 exposes a RESTful API for interacting with the service programmatically, and most mainstream storage vendors today support the S3-compatible interface.

This article is adapted from the article and code by 风希落: https://www.cnblogs.com/jsonq/p/18186340.

The project uses a front-end/back-end separated architecture:
Front end: Vue 3 + Element Plus + axios + spark-md5
Back end: Spring Boot 3.x + MinIO + aws-s3 + Redis + MySQL + MyBatis-Plus

All of the code in this article has been uploaded to Gitee: https://gitee.com/luzhiyong_erfou/learning-notes/tree/master/aws-s3-upload

1. Features

Upload features

  • Multipart upload of large files
  • Instant upload (秒传: skip the upload when the file already exists)
  • Resumable upload
  • Upload progress

Download features

  • Multipart (ranged) download
  • Pausable download
  • Download progress

Demo

[Screenshot of the upload and download demo omitted]

2. Design and Flow

Upload flow

Uploading a file involves three requests to the back end (the chunks themselves are uploaded directly to object storage):

  • When the user clicks upload, the front end calls the <check file MD5> endpoint to determine the file's status (already uploaded, not uploaded, or partially uploaded)
  • Based on that status, it calls <initialize multipart upload> to obtain the presigned upload URL for each chunk
  • The front end pairs each chunk with its presigned URL and uploads the chunks directly to object storage (a minimal front-end sketch follows this list)
  • Once every chunk is uploaded, it calls the <merge file> endpoint, which merges the chunks and persists the file record to the database
[Upload flow diagram omitted]
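
For reference, here is a minimal front-end sketch of the chunk-upload step: each chunk is PUT directly to its presigned URL with axios. The 5 MB chunk size, the urls array, and the skipParts parameter are assumptions for illustration only, not part of the back-end contract above.

// Hypothetical sketch: upload each chunk straight to object storage via its presigned URL.
// `urls` is assumed to be UploadUrlsVO.urls returned by the init endpoint, one URL per part.
import axios from "axios";

const CHUNK_SIZE = 5 * 1024 * 1024; // assumed chunk size; must match the chunkCount sent to the back end

export async function uploadChunks(file: File, urls: string[], skipParts: number[] = []): Promise<void> {
  for (let i = 0; i < urls.length; i++) {
    if (skipParts.includes(i + 1)) continue; // part already uploaded (resume case)
    const start = i * CHUNK_SIZE;
    const chunk = file.slice(start, Math.min(start + CHUNK_SIZE, file.size));
    // The presigned URL already carries uploadId/partNumber as query parameters,
    // so a plain PUT of the raw bytes is enough.
    await axios.put(urls[i], chunk, {
      headers: { "Content-Type": "application/octet-stream" },
    });
  }
}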

Overall steps:

  • The front end computes the file's MD5 and requests the file-status check (a sketch of the whole front-end orchestration follows this list)
  • If the file has already been uploaded, the back end returns success immediately along with the file URL (instant upload)
  • If the file has not been uploaded, the front end calls the multipart-initialization endpoint, which returns the presigned upload URLs; each chunk is then paired with one URL
  • If the file is partially uploaded, the back end returns the file's uploadId (MinIO's identifier for the upload) and listParts (the indexes of the chunks already uploaded); the front end calls the initialization endpoint again, the back end regenerates the upload URLs, and the front end filters out the already-uploaded chunks and pairs the remaining ones with their URLs
  • The front end uploads each chunk to its presigned URL
  • When all chunks are uploaded, the front end calls the merge endpoint
  • The back end checks whether the file was uploaded as a single chunk or as multiple chunks: a single chunk needs no merge and its record is simply persisted; multiple chunks are merged first and then persisted. The file's entry in Redis is deleted and the file URL is returned.
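
The front-end orchestration is not shown in the original code, so here is a minimal sketch under stated assumptions: the endpoint paths (/upload/check/{md5}, /upload/init, /upload/merge/{md5}) and the response field names are placeholders and may differ from the real project; only the spark-md5 usage and the overall sequence follow the steps above.

// Hypothetical orchestration sketch; endpoint paths and response shapes are assumptions.
import axios from "axios";
import SparkMD5 from "spark-md5";

// Compute the file MD5 incrementally so large files are not read into memory at once.
async function computeMd5(file: File, chunkSize = 5 * 1024 * 1024): Promise<string> {
  const spark = new SparkMD5.ArrayBuffer();
  for (let offset = 0; offset < file.size; offset += chunkSize) {
    const buffer = await file.slice(offset, offset + chunkSize).arrayBuffer();
    spark.append(buffer);
  }
  return spark.end();
}

export async function upload(file: File): Promise<string> {
  const md5 = await computeMd5(file);
  const chunkCount = Math.ceil(file.size / (5 * 1024 * 1024));

  // 1. Check the file's status; the response shape (code/data) is an assumption here.
  const check = (await axios.get(`/upload/check/${md5}`)).data;
  if (check.code === "UPLOAD_SUCCESS") return check.data.url; // instant upload

  // 2. Initialize the multipart upload and receive one presigned URL per chunk.
  const init = (await axios.post("/upload/init", {
    md5, chunkCount, originFileName: file.name, contentType: file.type, size: file.size,
  })).data;
  // On resume, check.data.listParts holds the part numbers that are already uploaded.
  await uploadChunks(file, init.data.urls, check.data?.listParts ?? []); // uploadChunks: see the sketch above

  // 3. Merge the chunks and persist the file record; the back end returns the final URL.
  return (await axios.post(`/upload/merge/${md5}`)).data.data;
}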

Download flow

Overall steps:

  • The front end works out how many ranged requests are needed and the byte offset of each request (see the sketch below)
  • It calls the back-end download endpoint in a loop
  • The back end checks whether the file metadata is cached, fetches it if necessary, and uses the offset and chunk size sent by the front end to read the corresponding byte range from object storage and stream it back
  • The front end keeps the Blob of every chunk
  • When all chunks have arrived, they are combined into one Blob and the file is downloaded from it

[Download flow diagram omitted]
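
A minimal front-end sketch of that ranged-download loop follows. The endpoint path /file/download/{id} and the 10 MB chunk size are assumptions; the Range header and Blob handling match the back-end code shown later.

// Hypothetical ranged-download sketch; endpoint path and chunk size are assumptions.
import axios from "axios";

const DOWNLOAD_CHUNK_SIZE = 10 * 1024 * 1024; // 10 MB per request

export async function downloadInChunks(id: number, fileName: string, totalSize: number): Promise<void> {
  const chunkCount = Math.ceil(totalSize / DOWNLOAD_CHUNK_SIZE);
  const parts: Blob[] = [];

  for (let i = 0; i < chunkCount; i++) {
    const start = i * DOWNLOAD_CHUNK_SIZE;
    const end = Math.min(start + DOWNLOAD_CHUNK_SIZE - 1, totalSize - 1);
    // The back end reads the Range header and streams only the requested bytes (HTTP 206).
    const res = await axios.get(`/file/download/${id}`, {
      headers: { Range: `bytes=${start}-${end}` },
      responseType: "blob",
    });
    parts.push(res.data); // keep each chunk's Blob; pausing simply means stopping this loop
  }

  // Stitch the chunk Blobs together and trigger the browser download.
  const url = URL.createObjectURL(new Blob(parts));
  const a = document.createElement("a");
  a.href = url;
  a.download = fileName;
  a.click();
  URL.revokeObjectURL(url);
}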

3. Code Examples

service

import cn.hutool.core.bean.BeanUtil;
import cn.hutool.core.date.DateUtil;
import cn.hutool.core.io.FileUtil;
import cn.hutool.core.util.StrUtil;
import cn.hutool.json.JSONUtil;
import cn.superlu.s3uploadservice.common.R;
import cn.superlu.s3uploadservice.config.FileProperties;
import cn.superlu.s3uploadservice.constant.FileHttpCodeEnum;
import cn.superlu.s3uploadservice.mapper.SysFileUploadMapper;
import cn.superlu.s3uploadservice.model.bo.FileUploadInfo;
import cn.superlu.s3uploadservice.model.entity.SysFileUpload;
import cn.superlu.s3uploadservice.model.vo.BaseFileVo;
import cn.superlu.s3uploadservice.model.vo.UploadUrlsVO;
import cn.superlu.s3uploadservice.service.SysFileUploadService;
import cn.superlu.s3uploadservice.utils.AmazonS3Util;
import cn.superlu.s3uploadservice.utils.MinioUtil;
import cn.superlu.s3uploadservice.utils.RedisUtil;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectInputStream;
import com.baomidou.mybatisplus.core.conditions.query.LambdaQueryWrapper;
import com.baomidou.mybatisplus.extension.service.impl.ServiceImpl;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Service;

import java.io.BufferedOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.time.LocalDateTime;
import java.util.List;
import java.util.concurrent.TimeUnit;

@Service
@Slf4j
@RequiredArgsConstructor
public class SysFileUploadServiceImpl extends ServiceImpl<SysFileUploadMapper, SysFileUpload> implements SysFileUploadService {

    private static final Integer BUFFER_SIZE = 1024 * 64; // 64KB

    private final RedisUtil redisUtil;
    private final MinioUtil minioUtil;
    private final AmazonS3Util amazonS3Util;
    private final FileProperties fileProperties;

    /**
     * Check whether the file already exists (by MD5).
     * @param md5
     * @return
     */
    @Override
    public R<BaseFileVo<FileUploadInfo>> checkFileByMd5(String md5) {
        log.info("Checking whether md5 <{}> exists in Redis", md5);
        FileUploadInfo fileUploadInfo = (FileUploadInfo) redisUtil.get(md5);

        if (fileUploadInfo != null) {
            log.info("md5 found in Redis: {}", JSONUtil.toJsonStr(fileUploadInfo));
            if (fileUploadInfo.getChunkCount() == 1) {
                return R.ok(BaseFileVo.builder(FileHttpCodeEnum.NOT_UPLOADED, null));
            } else {
                List<Integer> listParts = minioUtil.getListParts(fileUploadInfo.getObject(), fileUploadInfo.getUploadId());
//              List<Integer> listParts = amazonS3Util.getListParts(fileUploadInfo.getObject(), fileUploadInfo.getUploadId());
                fileUploadInfo.setListParts(listParts);
                return R.ok(BaseFileVo.builder(FileHttpCodeEnum.UPLOADING, fileUploadInfo));
            }
        }
        log.info("md5 <{}> not found in Redis, checking MySQL", md5);
        SysFileUpload file = baseMapper.selectOne(new LambdaQueryWrapper<SysFileUpload>().eq(SysFileUpload::getMd5, md5));
        if (file != null) {
            log.info("md5 <{}> found in MySQL; the file is already in MinIO, instant upload succeeds", md5);
            FileUploadInfo dbFileInfo = BeanUtil.toBean(file, FileUploadInfo.class);
            return R.ok(BaseFileVo.builder(FileHttpCodeEnum.UPLOAD_SUCCESS, dbFileInfo));
        }
        return R.ok(BaseFileVo.builder(FileHttpCodeEnum.NOT_UPLOADED, null));
    }

    /**
     * Initialize the multipart upload: generate the presigned chunk URLs and related data.
     * @param fileUploadInfo
     * @return
     */
    @Override
    public R<BaseFileVo<UploadUrlsVO>> initMultipartUpload(FileUploadInfo fileUploadInfo) {
        log.info("Checking whether md5 <{}> exists in Redis", fileUploadInfo.getMd5());
        FileUploadInfo redisFileUploadInfo = (FileUploadInfo) redisUtil.get(fileUploadInfo.getMd5());
        // If Redis already has a record for this md5, the Redis record wins
        String object;
        if (redisFileUploadInfo != null) {
            fileUploadInfo = redisFileUploadInfo;
            object = redisFileUploadInfo.getObject();
        } else {
            String originFileName = fileUploadInfo.getOriginFileName();
            String suffix = FileUtil.extName(originFileName);
            String fileName = FileUtil.mainName(originFileName);
            // Rename the file and store it under a year/month/day folder structure
            String nestFile = DateUtil.format(LocalDateTime.now(), "yyyy/MM/dd");
            object = nestFile + "/" + fileName + "_" + fileUploadInfo.getMd5() + "." + suffix;
            fileUploadInfo.setObject(object).setType(suffix);
        }
        UploadUrlsVO urlsVO;
        // Single-file upload
        if (fileUploadInfo.getChunkCount() == 1) {
            log.info("Chunk count <{}>: single-file upload", fileUploadInfo.getChunkCount());
//            urlsVO = minioUtil.getUploadObjectUrl(fileUploadInfo.getContentType(), object);
            urlsVO = amazonS3Util.getUploadObjectUrl(fileUploadInfo.getContentType(), object);
        } else {
            // Multipart upload
            log.info("Chunk count <{}>: multipart upload", fileUploadInfo.getChunkCount());
//            urlsVO = minioUtil.initMultiPartUpload(fileUploadInfo, object);
            urlsVO = amazonS3Util.initMultiPartUpload(fileUploadInfo, object);
        }
        fileUploadInfo.setUploadId(urlsVO.getUploadId());
        // Store in Redis (the only reason to store single-chunk uploads in Redis is so they can also be
        // persisted later; a single chunk is one request, so problems are unlikely)
        redisUtil.set(fileUploadInfo.getMd5(), fileUploadInfo, fileProperties.getOss().getBreakpointTime(), TimeUnit.DAYS);
        return R.ok(BaseFileVo.builder(FileHttpCodeEnum.SUCCESS, urlsVO));
    }

    /**
     * Merge the uploaded chunks.
     * @param md5
     * @return
     */
    @Override
    public R<BaseFileVo<String>> mergeMultipartUpload(String md5) {
        FileUploadInfo redisFileUploadInfo = (FileUploadInfo) redisUtil.get(md5);

        String url = StrUtil.format("{}/{}/{}", fileProperties.getOss().getEndpoint(), fileProperties.getBucketName(), redisFileUploadInfo.getObject());
        SysFileUpload files = BeanUtil.toBean(redisFileUploadInfo, SysFileUpload.class);
        files.setUrl(url)
                .setBucket(fileProperties.getBucketName())
                .setCreateTime(LocalDateTime.now());

        Integer chunkCount = redisFileUploadInfo.getChunkCount();
        // A single chunk needs no merge; otherwise merge and check whether it returns true or false
        boolean isSuccess = chunkCount == 1 || minioUtil.mergeMultipartUpload(redisFileUploadInfo.getObject(), redisFileUploadInfo.getUploadId());
//        boolean isSuccess = chunkCount == 1 || amazonS3Util.mergeMultipartUpload(redisFileUploadInfo.getObject(), redisFileUploadInfo.getUploadId());
        if (isSuccess) {
            baseMapper.insert(files);
            redisUtil.del(md5);
            return R.ok(BaseFileVo.builder(FileHttpCodeEnum.SUCCESS, url));
        }
        return R.ok(BaseFileVo.builder(FileHttpCodeEnum.UPLOAD_FILE_FAILED, null));
    }

    /**
     * Ranged (multipart) download.
     * @param id
     * @param request
     * @param response
     * @return
     * @throws IOException
     */
    @Override
    public ResponseEntity<byte[]> downloadMultipartFile(Long id, HttpServletRequest request, HttpServletResponse response) throws IOException {
        // Cache the file info in Redis to avoid querying the database for every ranged request
        SysFileUpload file = null;
        SysFileUpload redisFile = (SysFileUpload) redisUtil.get(String.valueOf(id));
        if (redisFile == null) {
            SysFileUpload dbFile = baseMapper.selectById(id);
            if (dbFile == null) {
                return null;
            } else {
                file = dbFile;
                redisUtil.set(String.valueOf(id), file, 1, TimeUnit.DAYS);
            }
        } else {
            file = redisFile;
        }

        String range = request.getHeader("Range");
        String fileName = file.getOriginFileName();
        log.info("Downloading object <{}>", file.getObject());
        // Fetch the object's metadata from the bucket; throws if the object does not exist
//        StatObjectResponse objectResponse = minioUtil.statObject(file.getObject());
        S3Object s3Object = amazonS3Util.statObject(file.getObject());
        long startByte = 0; // download start position
//        long fileSize = objectResponse.size();
        long fileSize = s3Object.getObjectMetadata().getContentLength();
        long endByte = fileSize - 1; // download end position
        log.info("Total file size: {}, current range: {}", fileSize, range);

        BufferedOutputStream os = null; // buffered output stream
//        GetObjectResponse stream = null; // minio object stream

        // If a Range header is present, serve only the requested byte range (partial download)
        // e.g. Range: bytes=0-52428800
        if (range != null && range.contains("bytes=") && range.contains("-")) {
            range = range.substring(range.lastIndexOf("=") + 1).trim(); // 0-52428800
            String[] ranges = range.split("-");
            // Determine which form the range takes
            if (ranges.length == 1) {
                // Form 1: bytes=-2343, treated as 0-2343
                if (range.startsWith("-")) endByte = Long.parseLong(ranges[0]);
                // Form 2: bytes=2343-, treated as 2343 to the end of the file
                if (range.endsWith("-")) startByte = Long.parseLong(ranges[0]);
            } else if (ranges.length == 2) { // Form 3: bytes=22-2343
                startByte = Long.parseLong(ranges[0]);
                endByte = Long.parseLong(ranges[1]);
            }
        }

        // Number of bytes to send; make sure contentLength never exceeds what is actually left of the file
        long contentLength = Math.min(endByte - startByte + 1, fileSize - startByte);
        // Content type
        String contentType = request.getServletContext().getMimeType(fileName);

        // Avoid garbled file names in the download
        byte[] fileNameBytes = fileName.getBytes(StandardCharsets.UTF_8);
        fileName = new String(fileNameBytes, 0, fileNameBytes.length, StandardCharsets.ISO_8859_1);

        // Response headers ------------------------------------------------------------------------------------
        // Advertise support for ranged requests (resumable download)
        response.setHeader("Accept-Ranges", "bytes");
        // HTTP status 206 (SC_PARTIAL_CONTENT) for partial content; switch to SC_OK if a browser does not support it
        response.setStatus(HttpServletResponse.SC_PARTIAL_CONTENT);
        response.setContentType(contentType);
//        response.setHeader("Last-Modified", objectResponse.lastModified().toString());
        response.setHeader("Last-Modified", s3Object.getObjectMetadata().getLastModified().toString());
        response.setHeader("Content-Disposition", "attachment;filename=" + fileName);
        response.setHeader("Content-Length", String.valueOf(contentLength));
        // Content-Range format: [start]-[end]/[total file size]
        response.setHeader("Content-Range", "bytes " + startByte + "-" + endByte + "/" + fileSize);
//        response.setHeader("ETag", "\"".concat(objectResponse.etag()).concat("\""));
        response.setHeader("ETag", "\"".concat(s3Object.getObjectMetadata().getETag()).concat("\""));
        response.setContentType("application/octet-stream;charset=UTF-8");

        S3ObjectInputStream objectInputStream = null;
        try {
            // Fetch the object stream for the requested range
            String object = s3Object.getKey();
            S3Object currentObject = amazonS3Util.getObject(object, startByte, contentLength);
            objectInputStream = currentObject.getObjectContent();
//            stream = minioUtil.getObject(objectResponse.object(), startByte, contentLength);
            os = new BufferedOutputStream(response.getOutputStream());
            // Copy the object stream into the response OutputStream
            byte[] bytes = new byte[BUFFER_SIZE];
            long bytesWritten = 0;
            int bytesRead = -1;
            while ((bytesRead = objectInputStream.read(bytes)) != -1) {
//            while ((bytesRead = stream.read(bytes)) != -1) {
                if (bytesWritten + bytesRead >= contentLength) {
                    os.write(bytes, 0, (int) (contentLength - bytesWritten));
                    break;
                } else {
                    os.write(bytes, 0, bytesRead);
                    bytesWritten += bytesRead;
                }
            }
            os.flush();
            response.flushBuffer();
            // Return the corresponding HTTP status
            return new ResponseEntity<>(bytes, HttpStatus.OK);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (os != null) os.close();
//            if (stream != null) stream.close();
            if (objectInputStream != null) objectInputStream.close();
        }
        return null;
    }

    @Override
    public R<List<SysFileUpload>> getFileList() {
        List<SysFileUpload> filesList = this.list();
        return R.ok(filesList);
    }

}

AmazonS3Util

import cn.hutool.core.util.IdUtil;
import cn.superlu.s3uploadservice.config.FileProperties;
import cn.superlu.s3uploadservice.constant.FileHttpCodeEnum;
import cn.superlu.s3uploadservice.model.bo.FileUploadInfo;
import cn.superlu.s3uploadservice.model.vo.UploadUrlsVO;
import com.amazonaws.ClientConfiguration;
import com.amazonaws.HttpMethod;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;
import com.google.common.collect.HashMultimap;
import io.minio.GetObjectArgs;
import io.minio.GetObjectResponse;
import io.minio.StatObjectArgs;
import io.minio.StatObjectResponse;
import jakarta.annotation.PostConstruct;
import jakarta.annotation.Resource;
import lombok.SneakyThrows;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Component;

import java.net.URL;
import java.util.*;
import java.util.stream.Collectors;

@Slf4j
@Component
public class AmazonS3Util {

    @Resource
    private FileProperties fileProperties;

    private AmazonS3 amazonS3;

    // Built in @PostConstruct because automatic Spring injection of the client fails
    @PostConstruct
    public void init() {
        ClientConfiguration clientConfiguration = new ClientConfiguration();
        clientConfiguration.setMaxConnections(100);
        AwsClientBuilder.EndpointConfiguration endpointConfiguration = new AwsClientBuilder.EndpointConfiguration(
                fileProperties.getOss().getEndpoint(), fileProperties.getOss().getRegion());
        AWSCredentials awsCredentials = new BasicAWSCredentials(fileProperties.getOss().getAccessKey(),
                fileProperties.getOss().getSecretKey());
        AWSCredentialsProvider awsCredentialsProvider = new AWSStaticCredentialsProvider(awsCredentials);
        this.amazonS3 = AmazonS3ClientBuilder.standard()
                .withEndpointConfiguration(endpointConfiguration)
                .withClientConfiguration(clientConfiguration)
                .withCredentials(awsCredentialsProvider)
                .disableChunkedEncoding()
                .withPathStyleAccessEnabled(true)
                .build();
    }

    /**
     * List the parts of this upload that are already stored in MinIO.
     * @param object   object name
     * @param uploadId upload id (generated by MinIO)
     * @return List<Integer>
     */
    @SneakyThrows
    public List<Integer> getListParts(String object, String uploadId) {
        ListPartsRequest listPartsRequest = new ListPartsRequest(fileProperties.getBucketName(), object, uploadId);
        PartListing listParts = amazonS3.listParts(listPartsRequest);
        return listParts.getParts().stream().map(PartSummary::getPartNumber).collect(Collectors.toList());
    }

    /**
     * Presigned upload for a single file.
     * @param object object name (uuid-style)
     * @return UploadUrlsVO
     */
    public UploadUrlsVO getUploadObjectUrl(String contentType, String object) {
        try {
            log.info("<{}> starting single-file upload", object);
            UploadUrlsVO urlsVO = new UploadUrlsVO();
            List<String> urlList = new ArrayList<>();
            // Mainly for images: to view them directly in the browser instead of downloading,
            // the corresponding content-type must be set
            HashMultimap<String, String> headers = HashMultimap.create();
            if (contentType == null || contentType.equals("")) {
                contentType = "application/octet-stream";
            }
            headers.put("Content-Type", contentType);

            String uploadId = IdUtil.simpleUUID();
            Map<String, String> reqParams = new HashMap<>();
            reqParams.put("uploadId", uploadId);
            // Generate the presigned URL
            GeneratePresignedUrlRequest generatePresignedUrlRequest = new GeneratePresignedUrlRequest(
                    fileProperties.getBucketName(), object, HttpMethod.PUT);
            generatePresignedUrlRequest.addRequestParameter("uploadId", uploadId);
            URL url = amazonS3.generatePresignedUrl(generatePresignedUrlRequest);
            urlList.add(url.toString());
            urlsVO.setUploadId(uploadId).setUrls(urlList);
            return urlsVO;
        } catch (Exception e) {
            log.error("Single-file upload failed: {}", e.getMessage());
            throw new RuntimeException(FileHttpCodeEnum.UPLOAD_FILE_FAILED.getMsg());
        }
    }

    /**
     * Initialize a multipart upload.
     * @param fileUploadInfo file info from the front end
     * @param object         object
     * @return UploadUrlsVO
     */
    public UploadUrlsVO initMultiPartUpload(FileUploadInfo fileUploadInfo, String object) {
        Integer chunkCount = fileUploadInfo.getChunkCount();
        String contentType = fileUploadInfo.getContentType();
        String uploadId = fileUploadInfo.getUploadId();

        log.info("File <{}> - chunks <{}>: initializing multipart upload, content type {}", object, chunkCount, contentType);
        UploadUrlsVO urlsVO = new UploadUrlsVO();
        try {
            // If an uploadId was passed in, this is a resumed upload: do not generate a new uploadId
            if (uploadId == null || uploadId.equals("")) {
                // Step 1: initialize, announcing that a multipart upload follows
                // Set the content type
                ObjectMetadata metadata = new ObjectMetadata();
                metadata.setContentType(contentType);
                InitiateMultipartUploadRequest initRequest = new InitiateMultipartUploadRequest(
                        fileProperties.getBucketName(), object, metadata);
                uploadId = amazonS3.initiateMultipartUpload(initRequest).getUploadId();
                log.info("No uploadId supplied, generated a new one: {}", uploadId);
            }
            urlsVO.setUploadId(uploadId);

            List<String> partList = new ArrayList<>();
            for (int i = 1; i <= chunkCount; i++) {
                // Generate a presigned URL with an expiration time, e.g. one hour from now
                Date expiration = new Date(System.currentTimeMillis() + 3600 * 1000);
                GeneratePresignedUrlRequest generatePresignedUrlRequest =
                        new GeneratePresignedUrlRequest(fileProperties.getBucketName(), object, HttpMethod.PUT)
                                .withExpiration(expiration);
                generatePresignedUrlRequest.addRequestParameter("uploadId", uploadId);
                generatePresignedUrlRequest.addRequestParameter("partNumber", String.valueOf(i));
                URL url = amazonS3.generatePresignedUrl(generatePresignedUrlRequest);
                partList.add(url.toString());
            }
            log.info("Multipart upload initialized successfully");
            urlsVO.setUrls(partList);
            return urlsVO;
        } catch (Exception e) {
            log.error("Failed to initialize multipart upload: {}", e.getMessage());
            // Report the upload failure
            throw new RuntimeException(FileHttpCodeEnum.UPLOAD_FILE_FAILED.getMsg());
        }
    }

    /**
     * Merge the uploaded parts.
     * @param object   object
     * @param uploadId uploadId
     */
    @SneakyThrows
    public boolean mergeMultipartUpload(String object, String uploadId) {
        log.info("Merging multipart upload via <{}-{}-{}>", object, uploadId, fileProperties.getBucketName());
        // Build the request that lists the uploaded parts
        ListPartsRequest listPartsRequest = new ListPartsRequest(
                fileProperties.getBucketName(),
                object,
                uploadId);
        listPartsRequest.setMaxParts(1000);
        listPartsRequest.setPartNumberMarker(0);
        // Query the parts
        PartListing partList = amazonS3.listParts(listPartsRequest);
        List<PartSummary> parts = partList.getParts();
        if (parts == null || parts.isEmpty()) {
            // The number of uploaded parts does not match the record, so the merge cannot proceed
            throw new RuntimeException("Parts are missing, please upload again");
        }
        // Complete (merge) the multipart upload
        CompleteMultipartUploadRequest compRequest = new CompleteMultipartUploadRequest(
                fileProperties.getBucketName(),
                object,
                uploadId,
                parts.stream().map(partSummary -> new PartETag(partSummary.getPartNumber(), partSummary.getETag())).collect(Collectors.toList()));
        amazonS3.completeMultipartUpload(compRequest);
        return true;
    }

    /**
     * Get the object's content and metadata; throws if the object does not exist.
     * @param object object
     * @return S3Object
     */
    @SneakyThrows
    public S3Object statObject(String object) {
        return amazonS3.getObject(fileProperties.getBucketName(), object);
    }

    @SneakyThrows
    public S3Object getObject(String object, Long offset, Long contentLength) {
        GetObjectRequest request = new GetObjectRequest(fileProperties.getBucketName(), object);
        request.setRange(offset, offset + contentLength - 1); // set offset and length
        return amazonS3.getObject(request);
    }
}

MinioUtil

import cn.hutool.core.util.IdUtil;
import cn.superlu.s3uploadservice.config.CustomMinioClient;
import cn.superlu.s3uploadservice.config.FileProperties;
import cn.superlu.s3uploadservice.constant.FileHttpCodeEnum;
import cn.superlu.s3uploadservice.model.bo.FileUploadInfo;
import cn.superlu.s3uploadservice.model.vo.UploadUrlsVO;
import com.google.common.collect.HashMultimap;
import io.minio.*;
import io.minio.http.Method;
import io.minio.messages.Part;
import jakarta.annotation.PostConstruct;
import jakarta.annotation.Resource;
import lombok.SneakyThrows;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Component;

import java.util.*;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;

@Slf4j
@Component
public class MinioUtil {

    private CustomMinioClient customMinioClient;

    @Resource
    private FileProperties fileProperties;

    // Built in @PostConstruct because automatic Spring injection of the client fails
    @PostConstruct
    public void init() {
        MinioAsyncClient minioClient = MinioAsyncClient.builder()
                .endpoint(fileProperties.getOss().getEndpoint())
                .credentials(fileProperties.getOss().getAccessKey(), fileProperties.getOss().getSecretKey())
                .build();
        customMinioClient = new CustomMinioClient(minioClient);
    }

    /**
     * List the parts of this upload that are already stored in MinIO.
     * @param object   object name
     * @param uploadId upload id (generated by MinIO)
     * @return List<Integer>
     */
    @SneakyThrows
    public List<Integer> getListParts(String object, String uploadId) {
        ListPartsResponse partResult = customMinioClient.listMultipart(fileProperties.getBucketName(), null, object, 1000, 0, uploadId, null, null);
        return partResult.result().partList().stream()
                .map(Part::partNumber)
                .collect(Collectors.toList());
    }

    /**
     * Presigned upload for a single file.
     * @param object object name (uuid-style)
     * @return UploadUrlsVO
     */
    public UploadUrlsVO getUploadObjectUrl(String contentType, String object) {
        try {
            log.info("<{}> starting single-file upload <minio>", object);
            UploadUrlsVO urlsVO = new UploadUrlsVO();
            List<String> urlList = new ArrayList<>();
            // Mainly for images: to view them directly in the browser instead of downloading,
            // the corresponding content-type must be set
            HashMultimap<String, String> headers = HashMultimap.create();
            if (contentType == null || contentType.equals("")) {
                contentType = "application/octet-stream";
            }
            headers.put("Content-Type", contentType);

            String uploadId = IdUtil.simpleUUID();
            Map<String, String> reqParams = new HashMap<>();
            reqParams.put("uploadId", uploadId);
            String url = customMinioClient.getPresignedObjectUrl(GetPresignedObjectUrlArgs.builder()
                    .method(Method.PUT)
                    .bucket(fileProperties.getBucketName())
                    .object(object)
                    .extraHeaders(headers)
                    .extraQueryParams(reqParams)
                    .expiry(fileProperties.getOss().getExpiry(), TimeUnit.DAYS)
                    .build());
            urlList.add(url);
            urlsVO.setUploadId(uploadId).setUrls(urlList);
            return urlsVO;
        } catch (Exception e) {
            log.error("Single-file upload failed: {}", e.getMessage());
            throw new RuntimeException(FileHttpCodeEnum.UPLOAD_FILE_FAILED.getMsg());
        }
    }

    /**
     * Initialize a multipart upload.
     * @param fileUploadInfo file info from the front end
     * @param object         object
     * @return UploadUrlsVO
     */
    public UploadUrlsVO initMultiPartUpload(FileUploadInfo fileUploadInfo, String object) {
        Integer chunkCount = fileUploadInfo.getChunkCount();
        String contentType = fileUploadInfo.getContentType();
        String uploadId = fileUploadInfo.getUploadId();

        log.info("File <{}> - chunks <{}>: initializing multipart upload, content type {}", object, chunkCount, contentType);
        UploadUrlsVO urlsVO = new UploadUrlsVO();
        try {
            HashMultimap<String, String> headers = HashMultimap.create();
            if (contentType == null || contentType.equals("")) {
                contentType = "application/octet-stream";
            }
            headers.put("Content-Type", contentType);

            // If an uploadId was passed in, this is a resumed upload: do not generate a new uploadId
            if (fileUploadInfo.getUploadId() == null || fileUploadInfo.getUploadId().equals("")) {
                uploadId = customMinioClient.initMultiPartUpload(fileProperties.getBucketName(), null, object, headers, null);
            }
            urlsVO.setUploadId(uploadId);

            List<String> partList = new ArrayList<>();
            Map<String, String> reqParams = new HashMap<>();
            reqParams.put("uploadId", uploadId);
            for (int i = 1; i <= chunkCount; i++) {
                reqParams.put("partNumber", String.valueOf(i));
                String uploadUrl = customMinioClient.getPresignedObjectUrl(GetPresignedObjectUrlArgs.builder()
                        .method(Method.PUT)
                        .bucket(fileProperties.getBucketName())
                        .object(object)
                        .expiry(1, TimeUnit.DAYS)
                        .extraQueryParams(reqParams)
                        .build());
                partList.add(uploadUrl);
            }

            log.info("Multipart upload initialized successfully");
            urlsVO.setUrls(partList);
            return urlsVO;
        } catch (Exception e) {
            log.error("Failed to initialize multipart upload: {}", e.getMessage());
            // Report the upload failure
            throw new RuntimeException(FileHttpCodeEnum.UPLOAD_FILE_FAILED.getMsg());
        }
    }

    /**
     * Merge the uploaded parts.
     * @param object   object
     * @param uploadId uploadId
     */
    @SneakyThrows
    public boolean mergeMultipartUpload(String object, String uploadId) {
        log.info("Merging multipart upload via <{}-{}-{}>", object, uploadId, fileProperties.getBucketName());
        // Currently limited to a maximum of 1000 parts
        Part[] parts = new Part[1000];
        // Query the uploaded parts
        ListPartsResponse partResult = customMinioClient.listMultipart(fileProperties.getBucketName(), null, object, 1000, 0, uploadId, null, null);
        int partNumber = 1;
        for (Part part : partResult.result().partList()) {
            parts[partNumber - 1] = new Part(partNumber, part.etag());
            partNumber++;
        }
        // Complete (merge) the multipart upload
        customMinioClient.mergeMultipartUpload(fileProperties.getBucketName(), null, object, uploadId, parts, null, null);
        return true;
    }

    /**
     * Get the object's metadata; throws if the object does not exist.
     * @param object object
     * @return StatObjectResponse
     */
    @SneakyThrows
    public StatObjectResponse statObject(String object) {
        return customMinioClient.statObject(StatObjectArgs.builder()
                .bucket(fileProperties.getBucketName())
                .object(object)
                .build())
                .get();
    }

    @SneakyThrows
    public GetObjectResponse getObject(String object, Long offset, Long contentLength) {
        return customMinioClient.getObject(GetObjectArgs.builder()
                .bucket(fileProperties.getBucketName())
                .object(object)
                .offset(offset)
                .length(contentLength)
                .build())
                .get();
    }
}

4. Open Question

One problem came up when I used aws-s3 for everything, and I still have not been able to solve it: I can only list the uploaded parts through the MinIO client library.

After the parts are uploaded, calling amazonS3.listParts() always times out.

I found someone with the same problem at
https://gitee.com/Gary2016/minio-upload/issues/I8H8GM

If you have solved this, please share the solution in the comments.
