Class BlobAsyncClient
This client is instantiated through BlobClientBuilder or retrieved via getBlobAsyncClient.
For operations on a specific blob type (i.e. append, block, or page), use getAppendBlobAsyncClient, getBlockBlobAsyncClient, or getPageBlobAsyncClient to construct a client that allows blob-specific operations.
Please refer to the Azure Docs for more information.
-
Field Summary
- static final int BLOB_DEFAULT_HTBB_UPLOAD_BLOCK_SIZE: If a blob is known to be greater than 100 MB, using a larger block size will trigger some server-side optimizations.
- static final int BLOB_DEFAULT_NUMBER_OF_BUFFERS: The number of buffers to use if none is specified on the buffered upload method.
- static final int BLOB_DEFAULT_UPLOAD_BLOCK_SIZE: The block size to use if none is specified in parallel operations.

Fields inherited from class com.azure.storage.blob.specialized.BlobAsyncClientBase:
accountName, azureBlobStorage, blobName, containerName, encryptionScope, serviceVersion
-
Constructor Summary
Constructors (all protected, for use by BlobClientBuilder):
- BlobAsyncClient(com.azure.core.http.HttpPipeline pipeline, String url, BlobServiceVersion serviceVersion, String accountName, String containerName, String blobName, String snapshot, CpkInfo customerProvidedKey)
- BlobAsyncClient(com.azure.core.http.HttpPipeline pipeline, String url, BlobServiceVersion serviceVersion, String accountName, String containerName, String blobName, String snapshot, CpkInfo customerProvidedKey, com.azure.storage.blob.implementation.models.EncryptionScope encryptionScope)
- BlobAsyncClient(com.azure.core.http.HttpPipeline pipeline, String url, BlobServiceVersion serviceVersion, String accountName, String containerName, String blobName, String snapshot, CpkInfo customerProvidedKey, com.azure.storage.blob.implementation.models.EncryptionScope encryptionScope, String versionId)
-
Method Summary
- AppendBlobAsyncClient getAppendBlobAsyncClient(): Creates a new AppendBlobAsyncClient associated with this blob.
- BlockBlobAsyncClient getBlockBlobAsyncClient(): Creates a new BlockBlobAsyncClient associated with this blob.
- BlobAsyncClient getCustomerProvidedKeyAsyncClient(CustomerProvidedKey customerProvidedKey): Creates a new BlobAsyncClient with the specified customerProvidedKey.
- BlobAsyncClient getEncryptionScopeAsyncClient(String encryptionScope): Creates a new BlobAsyncClient with the specified encryptionScope.
- PageBlobAsyncClient getPageBlobAsyncClient(): Creates a new PageBlobAsyncClient associated with this blob.
- BlobAsyncClient getSnapshotClient(String snapshot): Creates a new BlobAsyncClient linked to the snapshot of this blob resource.
- BlobAsyncClient getVersionClient(String versionId): Creates a new BlobAsyncClient linked to the versionId of this blob resource.
- Mono<BlockBlobItem> upload(com.azure.core.util.BinaryData data): Creates a new block blob.
- Mono<BlockBlobItem> upload(com.azure.core.util.BinaryData data, boolean overwrite): Creates a new block blob, or updates the content of an existing block blob.
- Mono<BlockBlobItem> upload(Flux<ByteBuffer> data, ParallelTransferOptions parallelTransferOptions): Creates a new block blob.
- Mono<BlockBlobItem> upload(Flux<ByteBuffer> data, ParallelTransferOptions parallelTransferOptions, boolean overwrite): Creates a new block blob, or updates the content of an existing block blob.
- protected AsynchronousFileChannel uploadFileResourceSupplier(String filePath): Deprecated due to refactoring code to be in the common storage library.
- Mono<Void> uploadFromFile(String filePath): Creates a new block blob with the content of the specified file.
- Mono<Void> uploadFromFile(String filePath, boolean overwrite): Creates a new block blob, or updates the content of an existing block blob, with the content of the specified file.
- Mono<Void> uploadFromFile(String filePath, ParallelTransferOptions parallelTransferOptions, BlobHttpHeaders headers, Map<String, String> metadata, AccessTier tier, BlobRequestConditions requestConditions): Creates a new block blob, or updates the content of an existing block blob, with the content of the specified file.
- Mono<com.azure.core.http.rest.Response<BlockBlobItem>> uploadFromFileWithResponse(BlobUploadFromFileOptions options): Creates a new block blob, or updates the content of an existing block blob, with the content of the specified file.
- Mono<com.azure.core.http.rest.Response<BlockBlobItem>> uploadWithResponse(BlobParallelUploadOptions options): Creates a new block blob, or updates the content of an existing block blob.
- Mono<com.azure.core.http.rest.Response<BlockBlobItem>> uploadWithResponse(Flux<ByteBuffer> data, ParallelTransferOptions parallelTransferOptions, BlobHttpHeaders headers, Map<String, String> metadata, AccessTier tier, BlobRequestConditions requestConditions): Creates a new block blob, or updates the content of an existing block blob.

Methods inherited from class com.azure.storage.blob.specialized.BlobAsyncClientBase:
abortCopyFromUrl, abortCopyFromUrlWithResponse, beginCopy, beginCopy, beginCopy, copyFromUrl, copyFromUrlWithResponse, copyFromUrlWithResponse, createSnapshot, createSnapshotWithResponse, delete, deleteIfExists, deleteIfExistsWithResponse, deleteImmutabilityPolicy, deleteImmutabilityPolicyWithResponse, deleteWithResponse, download, downloadContent, downloadContentWithResponse, downloadStream, downloadStreamWithResponse, downloadToFile, downloadToFile, downloadToFileWithResponse, downloadToFileWithResponse, downloadToFileWithResponse, downloadWithResponse, exists, existsWithResponse, generateSas, generateSas, generateSas, generateUserDelegationSas, generateUserDelegationSas, generateUserDelegationSas, getAccountInfo, getAccountInfoWithResponse, getAccountName, getAccountUrl, getBlobName, getBlobUrl, getContainerAsyncClient, getContainerName, getCustomerProvidedKey, getEncryptionScope, getHttpPipeline, getProperties, getPropertiesWithResponse, getServiceVersion, getSnapshotId, getTags, getTagsWithResponse, getVersionId, isSnapshot, query, queryWithResponse, setAccessTier, setAccessTierWithResponse, setAccessTierWithResponse, setHttpHeaders, setHttpHeadersWithResponse, setImmutabilityPolicy, setImmutabilityPolicyWithResponse, setLegalHold, setLegalHoldWithResponse, setMetadata, setMetadataWithResponse, setTags, setTagsWithResponse, undelete, undeleteWithResponse
-
Field Details
-
BLOB_DEFAULT_UPLOAD_BLOCK_SIZE
public static final int BLOB_DEFAULT_UPLOAD_BLOCK_SIZE

The block size to use if none is specified in parallel operations.
-
BLOB_DEFAULT_NUMBER_OF_BUFFERS
public static final int BLOB_DEFAULT_NUMBER_OF_BUFFERS

The number of buffers to use if none is specified on the buffered upload method.
-
BLOB_DEFAULT_HTBB_UPLOAD_BLOCK_SIZE
public static final int BLOB_DEFAULT_HTBB_UPLOAD_BLOCK_SIZE

If a blob is known to be greater than 100 MB, using a larger block size will trigger some server-side optimizations. If the block size is not set and the size of the blob is known to be greater than 100 MB, this value will be used.
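The selection rule this field describes can be sketched as follows. This is an illustrative helper, not SDK code: the constant values below are placeholder assumptions, and the real values are defined on BlobAsyncClient.

```java
// Hypothetical sketch of the "use a larger block size for blobs known to
// exceed 100 MB" rule described above. Constant values are illustrative
// assumptions, not the SDK's actual constants.
public class BlockSizeChooser {
    static final long DEFAULT_BLOCK_SIZE = 4L * 1024 * 1024;     // stand-in for BLOB_DEFAULT_UPLOAD_BLOCK_SIZE
    static final long LARGE_BLOB_BLOCK_SIZE = 8L * 1024 * 1024;  // stand-in for BLOB_DEFAULT_HTBB_UPLOAD_BLOCK_SIZE
    static final long LARGE_BLOB_THRESHOLD = 100L * 1024 * 1024; // 100 MB, per the description above

    /** Returns the caller's block size if set, otherwise picks a default by known blob size. */
    public static long chooseBlockSize(Long userBlockSize, long knownBlobSizeBytes) {
        if (userBlockSize != null) {
            return userBlockSize;
        }
        return knownBlobSizeBytes > LARGE_BLOB_THRESHOLD ? LARGE_BLOB_BLOCK_SIZE : DEFAULT_BLOCK_SIZE;
    }
}
```

An explicitly configured block size always wins; the larger default only applies when the caller left the block size unset and the blob's size is known up front.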
-
-
Constructor Details
-
BlobAsyncClient
protected BlobAsyncClient(com.azure.core.http.HttpPipeline pipeline, String url, BlobServiceVersion serviceVersion, String accountName, String containerName, String blobName, String snapshot, CpkInfo customerProvidedKey)

Protected constructor for use by BlobClientBuilder.
- Parameters:
  - pipeline - The pipeline used to send and receive service requests.
  - url - The endpoint where to send service requests.
  - serviceVersion - The version of the service to receive requests.
  - accountName - The storage account name.
  - containerName - The container name.
  - blobName - The blob name.
  - snapshot - The snapshot identifier for the blob, pass null to interact with the blob directly.
  - customerProvidedKey - Customer provided key used during encryption of the blob's data on the server, pass null to allow the service to use its own encryption.
-
BlobAsyncClient
protected BlobAsyncClient(com.azure.core.http.HttpPipeline pipeline, String url, BlobServiceVersion serviceVersion, String accountName, String containerName, String blobName, String snapshot, CpkInfo customerProvidedKey, com.azure.storage.blob.implementation.models.EncryptionScope encryptionScope)

Protected constructor for use by BlobClientBuilder.
- Parameters:
  - pipeline - The pipeline used to send and receive service requests.
  - url - The endpoint where to send service requests.
  - serviceVersion - The version of the service to receive requests.
  - accountName - The storage account name.
  - containerName - The container name.
  - blobName - The blob name.
  - snapshot - The snapshot identifier for the blob, pass null to interact with the blob directly.
  - customerProvidedKey - Customer provided key used during encryption of the blob's data on the server, pass null to allow the service to use its own encryption.
  - encryptionScope - Encryption scope used during encryption of the blob's data on the server, pass null to allow the service to use its own encryption.
-
BlobAsyncClient
protected BlobAsyncClient(com.azure.core.http.HttpPipeline pipeline, String url, BlobServiceVersion serviceVersion, String accountName, String containerName, String blobName, String snapshot, CpkInfo customerProvidedKey, com.azure.storage.blob.implementation.models.EncryptionScope encryptionScope, String versionId)

Protected constructor for use by BlobClientBuilder.
- Parameters:
  - pipeline - The pipeline used to send and receive service requests.
  - url - The endpoint where to send service requests.
  - serviceVersion - The version of the service to receive requests.
  - accountName - The storage account name.
  - containerName - The container name.
  - blobName - The blob name.
  - snapshot - The snapshot identifier for the blob, pass null to interact with the blob directly.
  - customerProvidedKey - Customer provided key used during encryption of the blob's data on the server, pass null to allow the service to use its own encryption.
  - encryptionScope - Encryption scope used during encryption of the blob's data on the server, pass null to allow the service to use its own encryption.
  - versionId - The version identifier for the blob, pass null to interact with the latest blob version.
-
-
Method Details
-
getSnapshotClient
Creates a new BlobAsyncClient linked to the snapshot of this blob resource.
- Overrides: getSnapshotClient in class BlobAsyncClientBase
- Parameters: snapshot - the identifier for a specific snapshot of this blob
- Returns: A BlobAsyncClient used to interact with the specific snapshot.
-
getVersionClient
Creates a new BlobAsyncClient linked to the versionId of this blob resource.
- Overrides: getVersionClient in class BlobAsyncClientBase
- Parameters: versionId - the identifier for a specific version of this blob, pass null to interact with the latest blob version.
- Returns: A BlobAsyncClient used to interact with the specific version.
-
getEncryptionScopeAsyncClient
Creates a new BlobAsyncClient with the specified encryptionScope.
- Overrides: getEncryptionScopeAsyncClient in class BlobAsyncClientBase
- Parameters: encryptionScope - the encryption scope for the blob, pass null to use no encryption scope.
- Returns: a BlobAsyncClient with the specified encryptionScope.
-
getCustomerProvidedKeyAsyncClient
Creates a new BlobAsyncClient with the specified customerProvidedKey.
- Overrides: getCustomerProvidedKeyAsyncClient in class BlobAsyncClientBase
- Parameters: customerProvidedKey - the CustomerProvidedKey for the blob, pass null to use no customer provided key.
- Returns: a BlobAsyncClient with the specified customerProvidedKey.
-
getAppendBlobAsyncClient
Creates a new AppendBlobAsyncClient associated with this blob.
- Returns: An AppendBlobAsyncClient associated with this blob.
-
getBlockBlobAsyncClient
Creates a new BlockBlobAsyncClient associated with this blob.
- Returns: A BlockBlobAsyncClient associated with this blob.
-
getPageBlobAsyncClient
Creates a new PageBlobAsyncClient associated with this blob.
- Returns: A PageBlobAsyncClient associated with this blob.
-
upload
public Mono<BlockBlobItem> upload(Flux<ByteBuffer> data, ParallelTransferOptions parallelTransferOptions)

Creates a new block blob. By default, this method will not overwrite an existing blob.

Updating an existing block blob overwrites any existing metadata on the blob. Partial updates are not supported with this method; the content of the existing blob is overwritten with the new content. To perform a partial update of a block blob, use stageBlock and commitBlockList. For more information, see the Azure Docs for Put Block and the Azure Docs for Put Block List.

The data passed need not support multiple subscriptions/be replayable as is required in other upload methods when retries are enabled, and the length of the data need not be known in advance. Therefore, this method does support uploading any arbitrary data source, including network streams. This behavior is possible because this method will perform some internal buffering as configured by the blockSize and numBuffers parameters, so while this method may offer additional convenience, it will not be as performant as other options, which should be preferred when possible.

Typically, the greater the number of buffers used, the greater the possible parallelism when transferring the data. Larger buffers mean fewer blocks must be staged and therefore fewer I/O operations are required. The trade-offs between these values are context-dependent, so some experimentation may be required to optimize inputs for a given scenario.

Code Samples

    ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions()
        .setBlockSizeLong(blockSize)
        .setMaxConcurrency(maxConcurrency);

    client.upload(data, parallelTransferOptions).subscribe(response ->
        System.out.printf("Uploaded BlockBlob MD5 is %s%n",
            Base64.getEncoder().encodeToString(response.getContentMd5())));

- Parameters:
  - data - The data to write to the blob. Unlike other upload methods, this method does not require that the Flux be replayable. In other words, it does not have to support multiple subscribers and is not expected to produce the same values across subscriptions.
  - parallelTransferOptions - ParallelTransferOptions used to configure buffered uploading.
- Returns: A reactive response containing the information of the uploaded block blob.
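The block-size/buffer trade-off described above can be estimated with simple arithmetic. The helper below is a hypothetical illustration, not part of the SDK: fewer, larger blocks mean fewer staging calls, while more concurrent buffers mean more memory held at once.

```java
// Hypothetical helper quantifying the trade-off discussed above; these are
// simple estimates, not the SDK's internal accounting.
public class UploadTuning {
    /** Number of blocks staged for a payload of the given size (ceiling division). */
    public static long blockCount(long dataSizeBytes, long blockSizeBytes) {
        return (dataSizeBytes + blockSizeBytes - 1) / blockSizeBytes;
    }

    /** Rough upper bound on memory buffered at once for a given concurrency. */
    public static long peakBufferBytes(long blockSizeBytes, int maxConcurrency) {
        return blockSizeBytes * (long) maxConcurrency;
    }
}
```

For example, a 1 GiB upload with 4 MiB blocks stages 256 blocks; doubling the block size halves the staging calls but doubles the memory held by each concurrent buffer.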
-
upload
public Mono<BlockBlobItem> upload(Flux<ByteBuffer> data, ParallelTransferOptions parallelTransferOptions, boolean overwrite)

Creates a new block blob, or updates the content of an existing block blob.

Updating an existing block blob overwrites any existing metadata on the blob. Partial updates are not supported with this method; the content of the existing blob is overwritten with the new content. To perform a partial update of a block blob, use stageBlock and commitBlockList. For more information, see the Azure Docs for Put Block and the Azure Docs for Put Block List.

The data passed need not support multiple subscriptions/be replayable as is required in other upload methods when retries are enabled, and the length of the data need not be known in advance. Therefore, this method does support uploading any arbitrary data source, including network streams. This behavior is possible because this method will perform some internal buffering as configured by the blockSize and numBuffers parameters, so while this method may offer additional convenience, it will not be as performant as other options, which should be preferred when possible.

Typically, the greater the number of buffers used, the greater the possible parallelism when transferring the data. Larger buffers mean fewer blocks must be staged and therefore fewer I/O operations are required. The trade-offs between these values are context-dependent, so some experimentation may be required to optimize inputs for a given scenario.

Code Samples

    ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions()
        .setBlockSizeLong(blockSize)
        .setMaxConcurrency(maxConcurrency);
    boolean overwrite = false; // Default behavior

    client.upload(data, parallelTransferOptions, overwrite).subscribe(response ->
        System.out.printf("Uploaded BlockBlob MD5 is %s%n",
            Base64.getEncoder().encodeToString(response.getContentMd5())));

- Parameters:
  - data - The data to write to the blob. Unlike other upload methods, this method does not require that the Flux be replayable. In other words, it does not have to support multiple subscribers and is not expected to produce the same values across subscriptions.
  - parallelTransferOptions - ParallelTransferOptions used to configure buffered uploading.
  - overwrite - Whether to overwrite, should the blob already exist.
- Returns: A reactive response containing the information of the uploaded block blob.
-
upload
public Mono<BlockBlobItem> upload(com.azure.core.util.BinaryData data)

Creates a new block blob. By default, this method will not overwrite an existing blob.

Code Samples

    client.upload(BinaryData.fromString("Data!")).subscribe(response ->
        System.out.printf("Uploaded BlockBlob MD5 is %s%n",
            Base64.getEncoder().encodeToString(response.getContentMd5())));

- Parameters: data - The data to write to the blob.
- Returns: A reactive response containing the information of the uploaded block blob.
-
upload
public Mono<BlockBlobItem> upload(com.azure.core.util.BinaryData data, boolean overwrite)

Creates a new block blob, or updates the content of an existing block blob.

Code Samples

    boolean overwrite = false; // Default behavior
    client.upload(BinaryData.fromString("Data!"), overwrite).subscribe(response ->
        System.out.printf("Uploaded BlockBlob MD5 is %s%n",
            Base64.getEncoder().encodeToString(response.getContentMd5())));

- Parameters:
  - data - The data to write to the blob.
  - overwrite - Whether to overwrite, should the blob already exist.
- Returns: A reactive response containing the information of the uploaded block blob.
-
uploadWithResponse
public Mono<com.azure.core.http.rest.Response<BlockBlobItem>> uploadWithResponse(Flux<ByteBuffer> data, ParallelTransferOptions parallelTransferOptions, BlobHttpHeaders headers, Map<String, String> metadata, AccessTier tier, BlobRequestConditions requestConditions)

Creates a new block blob, or updates the content of an existing block blob.

Updating an existing block blob overwrites any existing metadata on the blob. Partial updates are not supported with this method; the content of the existing blob is overwritten with the new content. To perform a partial update of a block blob, use stageBlock and commitBlockList. For more information, see the Azure Docs for Put Block and the Azure Docs for Put Block List.

The data passed need not support multiple subscriptions/be replayable as is required in other upload methods when retries are enabled, and the length of the data need not be known in advance. Therefore, this method does support uploading any arbitrary data source, including network streams. This behavior is possible because this method will perform some internal buffering as configured by the blockSize and numBuffers parameters, so while this method may offer additional convenience, it will not be as performant as other options, which should be preferred when possible.

Typically, the greater the number of buffers used, the greater the possible parallelism when transferring the data. Larger buffers mean fewer blocks must be staged and therefore fewer I/O operations are required. The trade-offs between these values are context-dependent, so some experimentation may be required to optimize inputs for a given scenario.

To avoid overwriting, pass "*" to BlobRequestConditions.setIfNoneMatch(String).

Code Samples

    BlobHttpHeaders headers = new BlobHttpHeaders()
        .setContentMd5("data".getBytes(StandardCharsets.UTF_8))
        .setContentLanguage("en-US")
        .setContentType("binary");

    Map<String, String> metadata = Collections.singletonMap("metadata", "value");
    BlobRequestConditions requestConditions = new BlobRequestConditions()
        .setLeaseId(leaseId)
        .setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
    ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions()
        .setBlockSizeLong(blockSize)
        .setMaxConcurrency(maxConcurrency);

    client.uploadWithResponse(data, parallelTransferOptions, headers, metadata, AccessTier.HOT, requestConditions)
        .subscribe(response -> System.out.printf("Uploaded BlockBlob MD5 is %s%n",
            Base64.getEncoder().encodeToString(response.getValue().getContentMd5())));

Using Progress Reporting

    BlobHttpHeaders headers = new BlobHttpHeaders()
        .setContentMd5("data".getBytes(StandardCharsets.UTF_8))
        .setContentLanguage("en-US")
        .setContentType("binary");

    Map<String, String> metadata = Collections.singletonMap("metadata", "value");
    BlobRequestConditions requestConditions = new BlobRequestConditions()
        .setLeaseId(leaseId)
        .setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
    ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions()
        .setBlockSizeLong(blockSize)
        .setMaxConcurrency(maxConcurrency)
        .setProgressListener(bytesTransferred ->
            System.out.printf("Upload progress: %s bytes sent", bytesTransferred));

    client.uploadWithResponse(data, parallelTransferOptions, headers, metadata, AccessTier.HOT, requestConditions)
        .subscribe(response -> System.out.printf("Uploaded BlockBlob MD5 is %s%n",
            Base64.getEncoder().encodeToString(response.getValue().getContentMd5())));

- Parameters:
  - data - The data to write to the blob. Unlike other upload methods, this method does not require that the Flux be replayable. In other words, it does not have to support multiple subscribers and is not expected to produce the same values across subscriptions.
  - parallelTransferOptions - ParallelTransferOptions used to configure buffered uploading.
  - headers - BlobHttpHeaders
  - metadata - Metadata to associate with the blob. If there is leading or trailing whitespace in any metadata key or value, it must be removed or encoded.
  - tier - AccessTier for the destination blob.
  - requestConditions - BlobRequestConditions
- Returns: A reactive response containing the information of the uploaded block blob.
-
uploadWithResponse
public Mono<com.azure.core.http.rest.Response<BlockBlobItem>> uploadWithResponse(BlobParallelUploadOptions options)

Creates a new block blob, or updates the content of an existing block blob.

Updating an existing block blob overwrites any existing metadata on the blob. Partial updates are not supported with this method; the content of the existing blob is overwritten with the new content. To perform a partial update of a block blob, use stageBlock and commitBlockList. For more information, see the Azure Docs for Put Block and the Azure Docs for Put Block List.

The data passed need not support multiple subscriptions/be replayable as is required in other upload methods when retries are enabled, and the length of the data need not be known in advance. Therefore, this method does support uploading any arbitrary data source, including network streams. This behavior is possible because this method will perform some internal buffering as configured by the blockSize and numBuffers parameters, so while this method may offer additional convenience, it will not be as performant as other options, which should be preferred when possible.

Typically, the greater the number of buffers used, the greater the possible parallelism when transferring the data. Larger buffers mean fewer blocks must be staged and therefore fewer I/O operations are required. The trade-offs between these values are context-dependent, so some experimentation may be required to optimize inputs for a given scenario.

To avoid overwriting, pass "*" to BlobRequestConditions.setIfNoneMatch(String).

Code Samples

    BlobHttpHeaders headers = new BlobHttpHeaders()
        .setContentMd5("data".getBytes(StandardCharsets.UTF_8))
        .setContentLanguage("en-US")
        .setContentType("binary");

    Map<String, String> metadata = Collections.singletonMap("metadata", "value");
    Map<String, String> tags = Collections.singletonMap("tag", "value");
    BlobRequestConditions requestConditions = new BlobRequestConditions()
        .setLeaseId(leaseId)
        .setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
    ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions()
        .setBlockSizeLong(blockSize)
        .setMaxConcurrency(maxConcurrency)
        .setProgressListener(bytesTransferred ->
            System.out.printf("Upload progress: %s bytes sent", bytesTransferred));

    client.uploadWithResponse(new BlobParallelUploadOptions(data)
            .setParallelTransferOptions(parallelTransferOptions).setHeaders(headers).setMetadata(metadata).setTags(tags)
            .setTier(AccessTier.HOT).setRequestConditions(requestConditions))
        .subscribe(response -> System.out.printf("Uploaded BlockBlob MD5 is %s%n",
            Base64.getEncoder().encodeToString(response.getValue().getContentMd5())));

- Parameters:
  - options - BlobParallelUploadOptions. Unlike other upload methods, this method does not require that the Flux be replayable. In other words, it does not have to support multiple subscribers and is not expected to produce the same values across subscriptions.
- Returns: A reactive response containing the information of the uploaded block blob.
-
uploadFromFile
public Mono<Void> uploadFromFile(String filePath)

Creates a new block blob with the content of the specified file. By default, this method will not overwrite an existing blob.

Code Samples

    client.uploadFromFile(filePath)
        .doOnError(throwable -> System.err.printf("Failed to upload from file %s%n", throwable.getMessage()))
        .subscribe(completion -> System.out.println("Upload from file succeeded"));

- Parameters: filePath - Path to the upload file
- Returns: An empty response
- Throws: UncheckedIOException - If an I/O error occurs
-
uploadFromFile
public Mono<Void> uploadFromFile(String filePath, boolean overwrite)

Creates a new block blob, or updates the content of an existing block blob, with the content of the specified file.

Code Samples

    boolean overwrite = false; // Default behavior
    client.uploadFromFile(filePath, overwrite)
        .doOnError(throwable -> System.err.printf("Failed to upload from file %s%n", throwable.getMessage()))
        .subscribe(completion -> System.out.println("Upload from file succeeded"));

- Parameters:
  - filePath - Path to the upload file
  - overwrite - Whether to overwrite, should the blob already exist.
- Returns: An empty response
- Throws: UncheckedIOException - If an I/O error occurs
-
uploadFromFile
public Mono<Void> uploadFromFile(String filePath, ParallelTransferOptions parallelTransferOptions, BlobHttpHeaders headers, Map<String, String> metadata, AccessTier tier, BlobRequestConditions requestConditions)

Creates a new block blob, or updates the content of an existing block blob, with the content of the specified file.

To avoid overwriting, pass "*" to BlobRequestConditions.setIfNoneMatch(String).

Code Samples

    BlobHttpHeaders headers = new BlobHttpHeaders()
        .setContentMd5("data".getBytes(StandardCharsets.UTF_8))
        .setContentLanguage("en-US")
        .setContentType("binary");

    Map<String, String> metadata = Collections.singletonMap("metadata", "value");
    BlobRequestConditions requestConditions = new BlobRequestConditions()
        .setLeaseId(leaseId)
        .setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));

    client.uploadFromFile(filePath,
            new ParallelTransferOptions().setBlockSizeLong(BlockBlobClient.MAX_STAGE_BLOCK_BYTES_LONG),
            headers, metadata, AccessTier.HOT, requestConditions)
        .doOnError(throwable -> System.err.printf("Failed to upload from file %s%n", throwable.getMessage()))
        .subscribe(completion -> System.out.println("Upload from file succeeded"));

- Parameters:
  - filePath - Path to the upload file
  - parallelTransferOptions - ParallelTransferOptions to use to upload from file. Number of parallel transfers parameter is ignored.
  - headers - BlobHttpHeaders
  - metadata - Metadata to associate with the blob. If there is leading or trailing whitespace in any metadata key or value, it must be removed or encoded.
  - tier - AccessTier for the destination blob.
  - requestConditions - BlobRequestConditions
- Returns: An empty response
- Throws: UncheckedIOException - If an I/O error occurs
-
uploadFromFileWithResponse
public Mono<com.azure.core.http.rest.Response<BlockBlobItem>> uploadFromFileWithResponse(BlobUploadFromFileOptions options)

Creates a new block blob, or updates the content of an existing block blob, with the content of the specified file.

To avoid overwriting, pass "*" to BlobRequestConditions.setIfNoneMatch(String).

Code Samples

    BlobHttpHeaders headers = new BlobHttpHeaders()
        .setContentMd5("data".getBytes(StandardCharsets.UTF_8))
        .setContentLanguage("en-US")
        .setContentType("binary");

    Map<String, String> metadata = Collections.singletonMap("metadata", "value");
    Map<String, String> tags = Collections.singletonMap("tag", "value");
    BlobRequestConditions requestConditions = new BlobRequestConditions()
        .setLeaseId(leaseId)
        .setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
    Long blockSize = 100 * 1024 * 1024L; // 100 MB

    client.uploadFromFileWithResponse(new BlobUploadFromFileOptions(filePath)
            .setParallelTransferOptions(new ParallelTransferOptions().setBlockSizeLong(blockSize))
            .setHeaders(headers).setMetadata(metadata).setTags(tags).setTier(AccessTier.HOT)
            .setRequestConditions(requestConditions))
        .doOnError(throwable -> System.err.printf("Failed to upload from file %s%n", throwable.getMessage()))
        .subscribe(completion -> System.out.println("Upload from file succeeded"));

- Parameters: options - BlobUploadFromFileOptions
- Returns: A reactive response containing the information of the uploaded block blob.
- Throws: UncheckedIOException - If an I/O error occurs
-
uploadFileResourceSupplier
protected AsynchronousFileChannel uploadFileResourceSupplier(String filePath)

Deprecated due to refactoring code to be in the common storage library.

RESERVED FOR INTERNAL USE. Resource supplier for UploadFile.
- Parameters: filePath - The path for the file
- Returns: AsynchronousFileChannel
- Throws: UncheckedIOException - If an input/output exception occurs.
-