Class DataLakeFileAsyncClient
This client is instantiated through DataLakePathClientBuilder or retrieved via DataLakeFileSystemAsyncClient.getFileAsyncClient(String). Please refer to the Azure Docs for more information.
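As a hedged sketch of client construction via the builder (the endpoint, credential, file system, and path names below are placeholder assumptions, not values from this page):

```java
import com.azure.storage.file.datalake.DataLakeFileAsyncClient;
import com.azure.storage.file.datalake.DataLakePathClientBuilder;

public class BuilderExample {
    public static void main(String[] args) {
        // All endpoint/credential/name values are hypothetical placeholders.
        DataLakeFileAsyncClient fileClient = new DataLakePathClientBuilder()
            .endpoint("https://<account>.dfs.core.windows.net")
            .sasToken("<sas-token>")
            .fileSystemName("my-file-system")
            .pathName("dir/file.txt")
            .buildFileAsyncClient();

        // The client is now ready for async operations such as read() or upload().
        System.out.println(fileClient.getFileUrl());
    }
}
```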
-
Method Summary
append(com.azure.core.util.BinaryData data, long fileOffset)
    Appends data to the specified resource to later be flushed (written) by a call to flush.
append(Flux<ByteBuffer> data, long fileOffset, long length)
    Appends data to the specified resource to later be flushed (written) by a call to flush.
appendWithResponse(com.azure.core.util.BinaryData data, long fileOffset, byte[] contentMd5, String leaseId)
    Appends data to the specified resource to later be flushed (written) by a call to flush.
appendWithResponse(com.azure.core.util.BinaryData data, long fileOffset, DataLakeFileAppendOptions appendOptions)
    Appends data to the specified resource to later be flushed (written) by a call to flush.
appendWithResponse(Flux<ByteBuffer> data, long fileOffset, long length, byte[] contentMd5, String leaseId)
    Appends data to the specified resource to later be flushed (written) by a call to flush.
appendWithResponse(Flux<ByteBuffer> data, long fileOffset, long length, DataLakeFileAppendOptions appendOptions)
    Appends data to the specified resource to later be flushed (written) by a call to flush.
delete()
    Deletes a file.
deleteIfExists()
    Deletes a file if it exists.
deleteIfExistsWithResponse(DataLakePathDeleteOptions options)
    Deletes a file if it exists.
deleteWithResponse(DataLakeRequestConditions requestConditions)
    Deletes a file.
flush(long position)
    Deprecated.
flush(long position, boolean overwrite)
    Flushes (writes) data previously appended to the file through a call to append.
flushWithResponse(long position, boolean retainUncommittedData, boolean close, PathHttpHeaders httpHeaders, DataLakeRequestConditions requestConditions)
    Flushes (writes) data previously appended to the file through a call to append.
flushWithResponse(long position, DataLakeFileFlushOptions flushOptions)
    Flushes (writes) data previously appended to the file through a call to append.
getCustomerProvidedKeyAsyncClient(CustomerProvidedKey customerProvidedKey)
    Creates a new DataLakeFileAsyncClient with the specified customerProvidedKey.
getFileName()
    Gets the name of this file, not including its full path.
getFilePath()
    Gets the path of this file, not including the name of the resource itself.
getFileUrl()
    Gets the URL of the file represented by this client on the Data Lake service.
query(String expression)
    Queries the entire file.
queryWithResponse(FileQueryOptions queryOptions)
    Queries the entire file.
read()
    Reads the entire file.
readToFile(ReadToFileOptions options)
    Reads the entire file into a file specified by the path.
readToFile(String filePath)
    Reads the entire file into a file specified by the path.
readToFile(String filePath, boolean overwrite)
    Reads the entire file into a file specified by the path.
readToFileWithResponse(ReadToFileOptions options)
    Reads the entire file into a file specified by the path.
readToFileWithResponse(String filePath, FileRange range, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions, DownloadRetryOptions options, DataLakeRequestConditions requestConditions, boolean rangeGetContentMd5, Set<OpenOption> openOptions)
    Reads the entire file into a file specified by the path.
readWithResponse(FileRange range, DownloadRetryOptions options, DataLakeRequestConditions requestConditions, boolean getRangeContentMd5)
    Reads a range of bytes from a file.
rename(String destinationFileSystem, String destinationPath)
    Moves the file to another location within the file system.
renameWithResponse(String destinationFileSystem, String destinationPath, DataLakeRequestConditions sourceRequestConditions, DataLakeRequestConditions destinationRequestConditions)
    Moves the file to another location within the file system.
scheduleDeletion(FileScheduleDeletionOptions options)
    Schedules the file for deletion.
scheduleDeletionWithResponse(FileScheduleDeletionOptions options)
    Schedules the file for deletion.
upload(com.azure.core.util.BinaryData data, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions)
    Creates a new file and uploads content.
upload(com.azure.core.util.BinaryData data, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions, boolean overwrite)
    Creates a new file and uploads content.
upload(Flux<ByteBuffer> data, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions)
    Creates a new file and uploads content.
upload(Flux<ByteBuffer> data, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions, boolean overwrite)
    Creates a new file and uploads content.
uploadFromFile(String filePath)
    Creates a new file, with the content of the specified file.
uploadFromFile(String filePath, boolean overwrite)
    Creates a new file, with the content of the specified file.
uploadFromFile(String filePath, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions, PathHttpHeaders headers, Map<String, String> metadata, DataLakeRequestConditions requestConditions)
    Creates a new file, with the content of the specified file.
uploadFromFileWithResponse(String filePath, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions, PathHttpHeaders headers, Map<String, String> metadata, DataLakeRequestConditions requestConditions)
    Creates a new file, with the content of the specified file.
uploadWithResponse(FileParallelUploadOptions options)
    Creates a new file.
uploadWithResponse(Flux<ByteBuffer> data, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions, PathHttpHeaders headers, Map<String, String> metadata, DataLakeRequestConditions requestConditions)
    Creates a new file.
Methods inherited from class com.azure.storage.file.datalake.DataLakePathAsyncClient
create, create, createIfNotExists, createIfNotExistsWithResponse, createWithResponse, createWithResponse, exists, existsWithResponse, generateSas, generateSas, generateSas, generateUserDelegationSas, generateUserDelegationSas, generateUserDelegationSas, getAccessControl, getAccessControlWithResponse, getAccountName, getCustomerProvidedKey, getFileSystemName, getHttpPipeline, getProperties, getProperties, getPropertiesWithResponse, getServiceVersion, removeAccessControlRecursive, removeAccessControlRecursiveWithResponse, setAccessControlList, setAccessControlListWithResponse, setAccessControlRecursive, setAccessControlRecursiveWithResponse, setHttpHeaders, setHttpHeadersWithResponse, setMetadata, setMetadataWithResponse, setPermissions, setPermissionsWithResponse, updateAccessControlRecursive, updateAccessControlRecursiveWithResponse
-
Method Details
-
getFileUrl
Gets the URL of the file represented by this client on the Data Lake service.
- Returns:
- The URL of the file.
-
getFilePath
Gets the path of this file, not including the name of the resource itself.
- Returns:
- The path of the file.
-
getFileName
Gets the name of this file, not including its full path.
- Returns:
- The name of the file.
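A hedged illustration of these three accessors (the builder setup, endpoint, credential, and names below are hypothetical assumptions, not values from this page):

```java
import com.azure.storage.file.datalake.DataLakeFileAsyncClient;
import com.azure.storage.file.datalake.DataLakePathClientBuilder;

public class FileAccessorsExample {
    public static void main(String[] args) {
        // Hypothetical setup; endpoint, token, and names are placeholders.
        DataLakeFileAsyncClient client = new DataLakePathClientBuilder()
            .endpoint("https://<account>.dfs.core.windows.net")
            .sasToken("<sas-token>")
            .fileSystemName("my-file-system")
            .pathName("dir/subdir/file.txt")
            .buildFileAsyncClient();

        System.out.println(client.getFileUrl());  // URL of the file on the service
        System.out.println(client.getFilePath()); // path, per the description above
        System.out.println(client.getFileName()); // last path segment, e.g. "file.txt"
    }
}
```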
-
getCustomerProvidedKeyAsyncClient
public DataLakeFileAsyncClient getCustomerProvidedKeyAsyncClient(CustomerProvidedKey customerProvidedKey)

Creates a new DataLakeFileAsyncClient with the specified customerProvidedKey.
- Overrides:
getCustomerProvidedKeyAsyncClient in class DataLakePathAsyncClient
- Parameters:
customerProvidedKey - the CustomerProvidedKey for the file, pass null to use no customer provided key.
- Returns:
- a DataLakeFileAsyncClient with the specified customerProvidedKey.
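No code sample accompanies this method on this page; the following is a hedged sketch of swapping a customer-provided key in and out (the client setup and the randomly generated key are illustrative assumptions, not real key material or values from this page):

```java
import com.azure.storage.file.datalake.DataLakeFileAsyncClient;
import com.azure.storage.file.datalake.DataLakePathClientBuilder;
import com.azure.storage.file.datalake.models.CustomerProvidedKey;

import java.security.SecureRandom;
import java.util.Base64;

public class CpkExample {
    public static void main(String[] args) {
        // Hypothetical client setup; endpoint, token, and names are placeholders.
        DataLakeFileAsyncClient client = new DataLakePathClientBuilder()
            .endpoint("https://<account>.dfs.core.windows.net")
            .sasToken("<sas-token>")
            .fileSystemName("my-file-system")
            .pathName("dir/file.txt")
            .buildFileAsyncClient();

        // Illustrative 256-bit key; real key material would be managed securely.
        byte[] keyBytes = new byte[32];
        new SecureRandom().nextBytes(keyBytes);
        CustomerProvidedKey cpk =
            new CustomerProvidedKey(Base64.getEncoder().encodeToString(keyBytes));

        // New client that sends the key with each request; pass null to use no key.
        DataLakeFileAsyncClient cpkClient = client.getCustomerProvidedKeyAsyncClient(cpk);
        DataLakeFileAsyncClient noKeyClient = cpkClient.getCustomerProvidedKeyAsyncClient(null);
    }
}
```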
-
delete
Deletes a file.

Code Samples

client.delete().subscribe(response -> System.out.println("Delete request completed"));
For more information see the Azure Docs
- Returns:
- A reactive response signalling completion.
-
deleteWithResponse
public Mono<com.azure.core.http.rest.Response<Void>> deleteWithResponse(DataLakeRequestConditions requestConditions)

Deletes a file.

Code Samples

DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
    .setLeaseId(leaseId);
client.deleteWithResponse(requestConditions)
    .subscribe(response -> System.out.println("Delete request completed"));
For more information see the Azure Docs
- Parameters:
requestConditions
-DataLakeRequestConditions
- Returns:
- A reactive response signalling completion.
-
deleteIfExists
Deletes a file if it exists.

Code Samples

client.deleteIfExists().subscribe(deleted -> {
    if (deleted) {
        System.out.println("Successfully deleted.");
    } else {
        System.out.println("Does not exist.");
    }
});
For more information see the Azure Docs
- Overrides:
deleteIfExists
in classDataLakePathAsyncClient
- Returns:
- a reactive response signaling completion.
true indicates that the file was successfully deleted, false indicates that the file did not exist.
-
deleteIfExistsWithResponse
public Mono<com.azure.core.http.rest.Response<Boolean>> deleteIfExistsWithResponse(DataLakePathDeleteOptions options)

Deletes a file if it exists.

Code Samples

DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
    .setLeaseId(leaseId);
DataLakePathDeleteOptions options = new DataLakePathDeleteOptions().setIsRecursive(false)
    .setRequestConditions(requestConditions);
client.deleteIfExistsWithResponse(options).subscribe(response -> {
    if (response.getStatusCode() == 404) {
        System.out.println("Does not exist.");
    } else {
        System.out.println("Successfully deleted.");
    }
});
For more information see the Azure Docs
- Overrides:
deleteIfExistsWithResponse
in classDataLakePathAsyncClient
- Parameters:
options
-DataLakePathDeleteOptions
- Returns:
- A reactive response signaling completion. If Response's status code is 200, the file was successfully deleted. If status code is 404, the file does not exist.
-
upload
public Mono<PathInfo> upload(Flux<ByteBuffer> data, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions)

Creates a new file and uploads content.

Code Samples

client.upload(data, parallelTransferOptions)
    .doOnError(throwable -> System.err.printf("Failed to upload %s%n", throwable.getMessage()))
    .subscribe(completion -> System.out.println("Upload succeeded"));
- Parameters:
data - The data to write to the file. Unlike other upload methods, this method does not require that the Flux be replayable. In other words, it does not have to support multiple subscribers and is not expected to produce the same values across subscriptions.
parallelTransferOptions - ParallelTransferOptions used to configure buffered uploading.
- Returns:
- A reactive response containing the information of the uploaded file.
-
upload
public Mono<PathInfo> upload(com.azure.core.util.BinaryData data, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions)

Creates a new file and uploads content.

Code Samples

Long blockSize = 100L * 1024L * 1024L; // 100 MB
ParallelTransferOptions pto = new ParallelTransferOptions()
    .setBlockSizeLong(blockSize)
    .setProgressListener(bytesTransferred ->
        System.out.printf("Upload progress: %s bytes sent", bytesTransferred));
BinaryData.fromFlux(data, length, false)
    .flatMap(binaryData -> client.upload(binaryData, pto))
    .doOnError(throwable -> System.err.printf("Failed to upload %s%n", throwable.getMessage()))
    .subscribe(completion -> System.out.println("Upload succeeded"));
- Parameters:
data - The data to write to the file. Unlike other upload methods, this method does not require that the BinaryData be replayable. In other words, it does not have to support multiple subscribers and is not expected to produce the same values across subscriptions.
parallelTransferOptions - ParallelTransferOptions used to configure buffered uploading.
- Returns:
- A reactive response containing the information of the uploaded file.
-
upload
public Mono<PathInfo> upload(Flux<ByteBuffer> data, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions, boolean overwrite)

Creates a new file and uploads content.

Code Samples

boolean overwrite = false; // Default behavior
client.upload(data, parallelTransferOptions, overwrite)
    .doOnError(throwable -> System.err.printf("Failed to upload %s%n", throwable.getMessage()))
    .subscribe(completion -> System.out.println("Upload succeeded"));
- Parameters:
data - The data to write to the file. Unlike other upload methods, this method does not require that the Flux be replayable. In other words, it does not have to support multiple subscribers and is not expected to produce the same values across subscriptions.
parallelTransferOptions - ParallelTransferOptions used to configure buffered uploading.
overwrite - Whether to overwrite, should the file already exist.
- Returns:
- A reactive response containing the information of the uploaded file.
-
upload
public Mono<PathInfo> upload(com.azure.core.util.BinaryData data, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions, boolean overwrite)

Creates a new file and uploads content.

Code Samples

Long blockSize = 100L * 1024L * 1024L; // 100 MB
ParallelTransferOptions pto = new ParallelTransferOptions()
    .setBlockSizeLong(blockSize)
    .setProgressListener(bytesTransferred ->
        System.out.printf("Upload progress: %s bytes sent", bytesTransferred));
BinaryData.fromFlux(data, length, false)
    .flatMap(binaryData -> client.upload(binaryData, pto, true))
    .doOnError(throwable -> System.err.printf("Failed to upload %s%n", throwable.getMessage()))
    .subscribe(completion -> System.out.println("Upload succeeded"));
- Parameters:
data - The data to write to the file. Unlike other upload methods, this method does not require that the BinaryData be replayable. In other words, it does not have to support multiple subscribers and is not expected to produce the same values across subscriptions.
parallelTransferOptions - ParallelTransferOptions used to configure buffered uploading.
overwrite - Whether to overwrite, should the file already exist.
- Returns:
- A reactive response containing the information of the uploaded file.
-
uploadWithResponse
public Mono<com.azure.core.http.rest.Response<PathInfo>> uploadWithResponse(Flux<ByteBuffer> data, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions, PathHttpHeaders headers, Map<String, String> metadata, DataLakeRequestConditions requestConditions)

Creates a new file. To avoid overwriting, pass "*" to DataLakeRequestConditions.setIfNoneMatch(String).

Code Samples

PathHttpHeaders headers = new PathHttpHeaders()
    .setContentMd5("data".getBytes(StandardCharsets.UTF_8))
    .setContentLanguage("en-US")
    .setContentType("binary");
Map<String, String> metadata = Collections.singletonMap("metadata", "value");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
    .setLeaseId(leaseId)
    .setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
Long blockSize = 100L * 1024L * 1024L; // 100 MB
ParallelTransferOptions parallelTransferOptions =
    new ParallelTransferOptions().setBlockSizeLong(blockSize);

client.uploadWithResponse(data, parallelTransferOptions, headers, metadata, requestConditions)
    .subscribe(response -> System.out.println("Uploaded file"));

Using Progress Reporting

PathHttpHeaders httpHeaders = new PathHttpHeaders()
    .setContentMd5("data".getBytes(StandardCharsets.UTF_8))
    .setContentLanguage("en-US")
    .setContentType("binary");
Map<String, String> metadataMap = Collections.singletonMap("metadata", "value");
DataLakeRequestConditions conditions = new DataLakeRequestConditions()
    .setLeaseId(leaseId)
    .setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
Long blockSize = 100L * 1024L * 1024L; // 100 MB
ParallelTransferOptions pto = new ParallelTransferOptions()
    .setBlockSizeLong(blockSize)
    .setProgressListener(bytesTransferred ->
        System.out.printf("Upload progress: %s bytes sent", bytesTransferred));

client.uploadWithResponse(data, pto, httpHeaders, metadataMap, conditions)
    .subscribe(response -> System.out.println("Uploaded file"));
- Parameters:
data - The data to write to the file. Unlike other upload methods, this method does not require that the Flux be replayable. In other words, it does not have to support multiple subscribers and is not expected to produce the same values across subscriptions.
parallelTransferOptions - ParallelTransferOptions used to configure buffered uploading.
headers - PathHttpHeaders
metadata - Metadata to associate with the resource. If there is leading or trailing whitespace in any metadata key or value, it must be removed or encoded.
requestConditions - DataLakeRequestConditions
- Returns:
- Returns:
- A reactive response containing the information of the uploaded file.
-
uploadWithResponse
public Mono<com.azure.core.http.rest.Response<PathInfo>> uploadWithResponse(FileParallelUploadOptions options)

Creates a new file. To avoid overwriting, pass "*" to DataLakeRequestConditions.setIfNoneMatch(String).

Code Samples

PathHttpHeaders headers = new PathHttpHeaders()
    .setContentMd5("data".getBytes(StandardCharsets.UTF_8))
    .setContentLanguage("en-US")
    .setContentType("binary");
Map<String, String> metadata = Collections.singletonMap("metadata", "value");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
    .setLeaseId(leaseId)
    .setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
Long blockSize = 100L * 1024L * 1024L; // 100 MB
ParallelTransferOptions parallelTransferOptions =
    new ParallelTransferOptions().setBlockSizeLong(blockSize);

client.uploadWithResponse(new FileParallelUploadOptions(data)
    .setParallelTransferOptions(parallelTransferOptions).setHeaders(headers)
    .setMetadata(metadata).setRequestConditions(requestConditions)
    .setPermissions("permissions").setUmask("umask"))
    .subscribe(response -> System.out.println("Uploaded file"));

Using Progress Reporting

PathHttpHeaders httpHeaders = new PathHttpHeaders()
    .setContentMd5("data".getBytes(StandardCharsets.UTF_8))
    .setContentLanguage("en-US")
    .setContentType("binary");
Map<String, String> metadataMap = Collections.singletonMap("metadata", "value");
DataLakeRequestConditions conditions = new DataLakeRequestConditions()
    .setLeaseId(leaseId)
    .setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
Long blockSize = 100L * 1024L * 1024L; // 100 MB
ParallelTransferOptions pto = new ParallelTransferOptions()
    .setBlockSizeLong(blockSize)
    .setProgressListener(bytesTransferred ->
        System.out.printf("Upload progress: %s bytes sent", bytesTransferred));

client.uploadWithResponse(new FileParallelUploadOptions(data)
    .setParallelTransferOptions(pto).setHeaders(httpHeaders)
    .setMetadata(metadataMap).setRequestConditions(conditions)
    .setPermissions("permissions").setUmask("umask"))
    .subscribe(response -> System.out.println("Uploaded file"));
- Parameters:
options - FileParallelUploadOptions
- Returns:
- A reactive response containing the information of the uploaded file.
-
uploadFromFile
Creates a new file, with the content of the specified file. By default, this method will not overwrite an existing file.

Code Samples

client.uploadFromFile(filePath)
    .doOnError(throwable -> System.err.printf("Failed to upload from file %s%n", throwable.getMessage()))
    .subscribe(completion -> System.out.println("Upload from file succeeded"));
- Parameters:
filePath - Path to the upload file
- Returns:
- An empty response
- Throws:
UncheckedIOException
- If an I/O error occurs
-
uploadFromFile
Creates a new file, with the content of the specified file.

Code Samples

boolean overwrite = false; // Default behavior
client.uploadFromFile(filePath, overwrite)
    .doOnError(throwable -> System.err.printf("Failed to upload from file %s%n", throwable.getMessage()))
    .subscribe(completion -> System.out.println("Upload from file succeeded"));
- Parameters:
filePath - Path to the upload file
overwrite - Whether to overwrite, should the file already exist.
- Returns:
- An empty response
- Throws:
UncheckedIOException
- If an I/O error occurs
-
uploadFromFile
public Mono<Void> uploadFromFile(String filePath, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions, PathHttpHeaders headers, Map<String, String> metadata, DataLakeRequestConditions requestConditions)

Creates a new file, with the content of the specified file. To avoid overwriting, pass "*" to DataLakeRequestConditions.setIfNoneMatch(String).

Code Samples

PathHttpHeaders headers = new PathHttpHeaders()
    .setContentMd5("data".getBytes(StandardCharsets.UTF_8))
    .setContentLanguage("en-US")
    .setContentType("binary");
Map<String, String> metadata = Collections.singletonMap("metadata", "value");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
    .setLeaseId(leaseId)
    .setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
Long blockSize = 100L * 1024L * 1024L; // 100 MB
ParallelTransferOptions parallelTransferOptions =
    new ParallelTransferOptions().setBlockSizeLong(blockSize);

client.uploadFromFile(filePath, parallelTransferOptions, headers, metadata, requestConditions)
    .doOnError(throwable -> System.err.printf("Failed to upload from file %s%n", throwable.getMessage()))
    .subscribe(completion -> System.out.println("Upload from file succeeded"));
- Parameters:
filePath - Path to the upload file
parallelTransferOptions - ParallelTransferOptions to use to upload from file. Number of parallel transfers parameter is ignored.
headers - PathHttpHeaders
metadata - Metadata to associate with the resource. If there is leading or trailing whitespace in any metadata key or value, it must be removed or encoded.
requestConditions - DataLakeRequestConditions
- Returns:
- Returns:
- An empty response
- Throws:
UncheckedIOException
- If an I/O error occurs
-
uploadFromFileWithResponse
public Mono<com.azure.core.http.rest.Response<PathInfo>> uploadFromFileWithResponse(String filePath, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions, PathHttpHeaders headers, Map<String, String> metadata, DataLakeRequestConditions requestConditions)

Creates a new file, with the content of the specified file. To avoid overwriting, pass "*" to DataLakeRequestConditions.setIfNoneMatch(String).

Code Samples

PathHttpHeaders headers = new PathHttpHeaders()
    .setContentMd5("data".getBytes(StandardCharsets.UTF_8))
    .setContentLanguage("en-US")
    .setContentType("binary");
Map<String, String> metadata = Collections.singletonMap("metadata", "value");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
    .setLeaseId(leaseId)
    .setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
Long blockSize = 100L * 1024L * 1024L; // 100 MB
ParallelTransferOptions parallelTransferOptions =
    new ParallelTransferOptions().setBlockSizeLong(blockSize);

client.uploadFromFileWithResponse(filePath, parallelTransferOptions, headers, metadata, requestConditions)
    .doOnError(throwable -> System.err.printf("Failed to upload from file %s%n", throwable.getMessage()))
    .subscribe(completion -> System.out.println("Upload from file succeeded at: "
        + completion.getValue().getLastModified()));
- Parameters:
filePath - Path to the upload file
parallelTransferOptions - ParallelTransferOptions to use to upload from file. Number of parallel transfers parameter is ignored.
headers - PathHttpHeaders
metadata - Metadata to associate with the resource. If there is leading or trailing whitespace in any metadata key or value, it must be removed or encoded.
requestConditions - DataLakeRequestConditions
- Returns:
- Returns:
- A reactive response containing the information of the uploaded file.
- Throws:
UncheckedIOException
- If an I/O error occurs
-
append
Appends data to the specified resource to later be flushed (written) by a call to flush.

Code Samples

client.append(data, offset, length)
    .subscribe(
        response -> System.out.println("Append data completed"),
        error -> System.out.printf("Error when calling append data: %s", error));
For more information, see the Azure Docs
- Parameters:
data - The data to write to the file.
fileOffset - The position where the data is to be appended.
length - The exact length of the data. It is important that this value match precisely the length of the data emitted by the Flux.
- Returns:
- A reactive response signalling completion.
-
append
Appends data to the specified resource to later be flushed (written) by a call to flush.

Code Samples

client.append(data, offset)
    .subscribe(
        response -> System.out.println("Append data completed"),
        error -> System.out.printf("Error when calling append data: %s", error));
For more information, see the Azure Docs
- Parameters:
data - The data to write to the file.
fileOffset - The position where the data is to be appended.
- Returns:
- A reactive response signalling completion.
-
appendWithResponse
public Mono<com.azure.core.http.rest.Response<Void>> appendWithResponse(Flux<ByteBuffer> data, long fileOffset, long length, byte[] contentMd5, String leaseId)

Appends data to the specified resource to later be flushed (written) by a call to flush.

Code Samples

byte[] contentMd5 = new byte[0]; // Replace with valid md5
client.appendWithResponse(data, offset, length, contentMd5, leaseId).subscribe(response ->
    System.out.printf("Append data completed with status %d%n", response.getStatusCode()));
For more information, see the Azure Docs
- Parameters:
data - The data to write to the file.
fileOffset - The position where the data is to be appended.
length - The exact length of the data. It is important that this value match precisely the length of the data emitted by the Flux.
contentMd5 - An MD5 hash of the content of the data. If specified, the service will calculate the MD5 of the received data and fail the request if it does not match the provided MD5.
leaseId - By setting lease id, requests will fail if the provided lease does not match the active lease on the file.
- Returns:
- A reactive response signalling completion.
-
appendWithResponse
public Mono<com.azure.core.http.rest.Response<Void>> appendWithResponse(Flux<ByteBuffer> data, long fileOffset, long length, DataLakeFileAppendOptions appendOptions)

Appends data to the specified resource to later be flushed (written) by a call to flush.

Code Samples

byte[] contentMd5 = new byte[0]; // Replace with valid md5
DataLakeFileAppendOptions appendOptions = new DataLakeFileAppendOptions()
    .setLeaseId(leaseId)
    .setContentHash(contentMd5)
    .setFlush(true);
client.appendWithResponse(data, offset, length, appendOptions).subscribe(response ->
    System.out.printf("Append data completed with status %d%n", response.getStatusCode()));
For more information, see the Azure Docs
- Parameters:
data - The data to write to the file.
fileOffset - The position where the data is to be appended.
length - The exact length of the data. It is important that this value match precisely the length of the data emitted by the Flux.
appendOptions - DataLakeFileAppendOptions
- Returns:
- A reactive response signalling completion.
-
appendWithResponse
public Mono<com.azure.core.http.rest.Response<Void>> appendWithResponse(com.azure.core.util.BinaryData data, long fileOffset, byte[] contentMd5, String leaseId)

Appends data to the specified resource to later be flushed (written) by a call to flush.

Code Samples

byte[] contentMd5 = new byte[0]; // Replace with valid md5
BinaryData data = BinaryData.fromString("Data!");
client.appendWithResponse(data, offset, contentMd5, leaseId).subscribe(response ->
    System.out.printf("Append data completed with status %d%n", response.getStatusCode()));
For more information, see the Azure Docs
- Parameters:
data - The data to write to the file.
fileOffset - The position where the data is to be appended.
contentMd5 - An MD5 hash of the content of the data. If specified, the service will calculate the MD5 of the received data and fail the request if it does not match the provided MD5.
leaseId - By setting lease id, requests will fail if the provided lease does not match the active lease on the file.
- Returns:
- A reactive response signalling completion.
-
appendWithResponse
public Mono<com.azure.core.http.rest.Response<Void>> appendWithResponse(com.azure.core.util.BinaryData data, long fileOffset, DataLakeFileAppendOptions appendOptions)

Appends data to the specified resource to later be flushed (written) by a call to flush.

Code Samples

byte[] contentMd5 = new byte[0]; // Replace with valid md5
DataLakeFileAppendOptions appendOptions = new DataLakeFileAppendOptions()
    .setLeaseId(leaseId)
    .setContentHash(contentMd5)
    .setFlush(true);
BinaryData data = BinaryData.fromString("Data!");
client.appendWithResponse(data, offset, appendOptions).subscribe(response ->
    System.out.printf("Append data completed with status %d%n", response.getStatusCode()));
For more information, see the Azure Docs
- Parameters:
data - The data to write to the file.
fileOffset - The position where the data is to be appended.
appendOptions - DataLakeFileAppendOptions
- Returns:
- A reactive response signalling completion.
-
flush
Deprecated. See flush(long, boolean) instead.

Flushes (writes) data previously appended to the file through a call to append. The previously uploaded data must be contiguous. By default this method will not overwrite existing data.
Code Samples
client.flush(position).subscribe(response -> System.out.println("Flush data completed"));
For more information, see the Azure Docs
- Parameters:
position - The length of the file after all data has been written.
- Returns:
- A reactive response containing the information of the created resource.
-
flush
Flushes (writes) data previously appended to the file through a call to append. The previously uploaded data must be contiguous.

Code Samples

boolean overwrite = true;
client.flush(position, overwrite).subscribe(response ->
    System.out.println("Flush data completed"));
For more information, see the Azure Docs
- Parameters:
position - The length of the file after all data has been written.
overwrite - Whether to overwrite, should data exist on the file.
- Returns:
- A reactive response containing the information of the created resource.
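The append-then-flush pattern described above can be sketched end to end as follows (a hedged sketch: the client setup, file name, and payload are hypothetical assumptions, not values from this page):

```java
import com.azure.core.util.BinaryData;
import com.azure.storage.file.datalake.DataLakeFileAsyncClient;
import com.azure.storage.file.datalake.DataLakePathClientBuilder;

public class AppendFlushExample {
    public static void main(String[] args) {
        // Hypothetical setup; endpoint, token, and names are placeholders.
        DataLakeFileAsyncClient client = new DataLakePathClientBuilder()
            .endpoint("https://<account>.dfs.core.windows.net")
            .sasToken("<sas-token>")
            .fileSystemName("my-file-system")
            .pathName("logs/app.log")
            .buildFileAsyncClient();

        BinaryData chunk1 = BinaryData.fromString("first half;");
        BinaryData chunk2 = BinaryData.fromString("second half");
        long offset1 = 0;
        long offset2 = chunk1.getLength();        // appends must be contiguous
        long fileLength = offset2 + chunk2.getLength();

        // Append both chunks, then flush the full length to commit the data.
        client.append(chunk1, offset1)
            .then(client.append(chunk2, offset2))
            .then(client.flush(fileLength, true)) // overwrite = true
            .subscribe(info -> System.out.println("Flushed at " + info.getLastModified()));
    }
}
```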
-
flushWithResponse
public Mono<com.azure.core.http.rest.Response<PathInfo>> flushWithResponse(long position, boolean retainUncommittedData, boolean close, PathHttpHeaders httpHeaders, DataLakeRequestConditions requestConditions)

Flushes (writes) data previously appended to the file through a call to append. The previously uploaded data must be contiguous.

Code Samples

boolean retainUncommittedData = false;
boolean close = false;
PathHttpHeaders httpHeaders = new PathHttpHeaders()
    .setContentLanguage("en-US")
    .setContentType("binary");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
    .setLeaseId(leaseId);

client.flushWithResponse(position, retainUncommittedData, close, httpHeaders,
    requestConditions).subscribe(response ->
    System.out.printf("Flush data completed with status %d%n", response.getStatusCode()));
For more information, see the Azure Docs
- Parameters:
position - The length of the file after all data has been written.
retainUncommittedData - Whether uncommitted data is to be retained after the operation.
close - Whether a file changed event raised indicates completion (true) or modification (false).
httpHeaders - PathHttpHeaders
requestConditions - DataLakeRequestConditions
- Returns:
- A reactive response containing the information of the created resource.
-
flushWithResponse
public Mono<com.azure.core.http.rest.Response<PathInfo>> flushWithResponse(long position, DataLakeFileFlushOptions flushOptions)

Flushes (writes) data previously appended to the file through a call to append. The previously uploaded data must be contiguous.

Code Samples

boolean retainUncommittedData = false;
boolean close = false;
PathHttpHeaders httpHeaders = new PathHttpHeaders()
    .setContentLanguage("en-US")
    .setContentType("binary");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
    .setLeaseId(leaseId);
Integer leaseDuration = 15;

DataLakeFileFlushOptions flushOptions = new DataLakeFileFlushOptions()
    .setUncommittedDataRetained(retainUncommittedData)
    .setClose(close)
    .setPathHttpHeaders(httpHeaders)
    .setRequestConditions(requestConditions)
    .setLeaseAction(LeaseAction.ACQUIRE)
    .setLeaseDuration(leaseDuration)
    .setProposedLeaseId(leaseId);

client.flushWithResponse(position, flushOptions).subscribe(response ->
    System.out.printf("Flush data completed with status %d%n", response.getStatusCode()));
For more information, see the Azure Docs
- Parameters:
position
- The length of the file after all data has been written.
flushOptions
- DataLakeFileFlushOptions for the request.
- Returns:
- A reactive response containing the information of the created resource.
-
read
Reads the entire file.
Code Samples
ByteArrayOutputStream downloadData = new ByteArrayOutputStream();
client.read().subscribe(piece -> {
    try {
        downloadData.write(piece.array());
    } catch (IOException ex) {
        throw new UncheckedIOException(ex);
    }
});
For more information, see the Azure Docs
- Returns:
- A reactive response containing the file data.
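The samples here collect data with ByteBuffer.array(), which assumes each emitted buffer is heap-backed and that its backing array holds exactly the readable bytes. A defensive alternative, shown here as a standalone sketch with an illustrative sliced buffer, copies only the region between position and limit:

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

public class ByteBufferDrainDemo {
    // Copies only the readable region of the buffer; ByteBuffer.array() would
    // expose the entire backing array regardless of position and limit.
    static void drain(ByteBuffer piece, ByteArrayOutputStream out) {
        byte[] chunk = new byte[piece.remaining()];
        piece.get(chunk);
        out.write(chunk, 0, chunk.length);
    }

    public static void main(String[] args) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // A buffer whose backing array is larger than its readable region.
        ByteBuffer backing = ByteBuffer.wrap("xxhelloxx".getBytes());
        backing.position(2);
        backing.limit(7);
        drain(backing, out);
        System.out.println(out.toString()); // prints "hello"
    }
}
```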
-
readWithResponse
public Mono<FileReadAsyncResponse> readWithResponse(FileRange range, DownloadRetryOptions options, DataLakeRequestConditions requestConditions, boolean getRangeContentMd5)
Reads a range of bytes from a file.
Code Samples
FileRange range = new FileRange(1024, 2048L);
DownloadRetryOptions options = new DownloadRetryOptions().setMaxRetryRequests(5);

client.readWithResponse(range, options, null, false).subscribe(response -> {
    ByteArrayOutputStream readData = new ByteArrayOutputStream();
    response.getValue().subscribe(piece -> {
        try {
            readData.write(piece.array());
        } catch (IOException ex) {
            throw new UncheckedIOException(ex);
        }
    });
});
For more information, see the Azure Docs
- Parameters:
range
- FileRange specifying the range of bytes to read.
options
- DownloadRetryOptions for the request.
requestConditions
- DataLakeRequestConditions to apply to the request.
getRangeContentMd5
- Whether the contentMD5 for the specified file range should be returned.
- Returns:
- A reactive response containing the file data.
-
readToFile
Reads the entire file into a file specified by the path. The destination file will be created and must not already exist; if it does, a
FileAlreadyExistsException
will be thrown.
Code Samples
client.readToFile(file)
    .subscribe(response -> System.out.println("Completed download to file"));
For more information, see the Azure Docs
- Parameters:
filePath
- A String representing the path where the downloaded data will be written.
- Returns:
- A reactive response containing the file properties and metadata.
-
readToFile
Reads the entire file into a file specified by the path. The destination file will be created and must not already exist; if it does, a
FileAlreadyExistsException
will be thrown.
Code Samples
client.readToFile(new ReadToFileOptions(file))
    .subscribe(response -> System.out.println("Completed download to file"));
For more information, see the Azure Docs
- Parameters:
options
- ReadToFileOptions for the request.
- Returns:
- A reactive response containing the file properties and metadata.
-
readToFile
Reads the entire file into a file specified by the path. If overwrite is set to false, the destination file will be created and must not already exist; if it does, a
FileAlreadyExistsException
will be thrown.
Code Samples
boolean overwrite = false; // Default value
client.readToFile(file, overwrite)
    .subscribe(response -> System.out.println("Completed download to file"));
For more information, see the Azure Docs
- Parameters:
filePath
- A String representing the path where the downloaded data will be written.
overwrite
- Whether to overwrite the file, should the file exist.
- Returns:
- A reactive response containing the file properties and metadata.
-
readToFileWithResponse
public Mono<com.azure.core.http.rest.Response<PathProperties>> readToFileWithResponse(String filePath, FileRange range, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions, DownloadRetryOptions options, DataLakeRequestConditions requestConditions, boolean rangeGetContentMd5, Set<OpenOption> openOptions)
Reads the entire file into a file specified by the path. By default, the destination file will be created and must not already exist; if it does, a
FileAlreadyExistsException
will be thrown. To override this behavior, provide appropriate
OpenOptions
.
Code Samples
FileRange fileRange = new FileRange(1024, 2048L);
DownloadRetryOptions downloadRetryOptions = new DownloadRetryOptions().setMaxRetryRequests(5);
Set<OpenOption> openOptions = new HashSet<>(Arrays.asList(StandardOpenOption.CREATE_NEW,
    StandardOpenOption.WRITE, StandardOpenOption.READ)); // Default options

client.readToFileWithResponse(file, fileRange, null, downloadRetryOptions, null, false, openOptions)
    .subscribe(response -> System.out.println("Completed download to file"));
For more information, see the Azure Docs
- Parameters:
filePath
- A String representing the path where the downloaded data will be written.
range
- FileRange specifying the range of bytes to read.
parallelTransferOptions
- ParallelTransferOptions to use to download to file. The number-of-parallel-transfers parameter is ignored.
options
- DownloadRetryOptions for the request.
requestConditions
- DataLakeRequestConditions to apply to the request.
rangeGetContentMd5
- Whether the contentMD5 for the specified file range should be returned.
openOptions
- OpenOptions to use to configure how to open or create the file.
- Returns:
- A reactive response containing the file properties and metadata.
- Throws:
IllegalArgumentException
- If blockSize is less than 0 or greater than 100MB.
UncheckedIOException
- If an I/O error occurs.
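The default open options (CREATE_NEW, WRITE, READ) make the download fail when the destination file already exists. As a sketch of why, using plain java.nio rather than the Data Lake client (the temp-file name is illustrative): CREATE_NEW rejects an existing file, while CREATE combined with TRUNCATE_EXISTING overwrites it.

```java
import java.io.IOException;
import java.nio.channels.SeekableByteChannel;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.OpenOption;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.Set;

public class OpenOptionsDemo {
    public static void main(String[] args) throws IOException {
        Path target = Files.createTempFile("readToFile", ".bin"); // target already exists

        // CREATE_NEW fails when the file exists, mirroring the client's default behavior.
        Set<OpenOption> createNew = Set.of(StandardOpenOption.CREATE_NEW, StandardOpenOption.WRITE);
        try (SeekableByteChannel ch = Files.newByteChannel(target, createNew)) {
            System.out.println("opened");
        } catch (FileAlreadyExistsException e) {
            System.out.println("CREATE_NEW rejected existing file"); // this branch runs
        }

        // CREATE + TRUNCATE_EXISTING overwrites the existing file instead.
        Set<OpenOption> overwrite = Set.of(StandardOpenOption.CREATE,
            StandardOpenOption.TRUNCATE_EXISTING, StandardOpenOption.WRITE);
        try (SeekableByteChannel ch = Files.newByteChannel(target, overwrite)) {
            System.out.println("overwrite open succeeded");
        }
        Files.deleteIfExists(target);
    }
}
```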
-
readToFileWithResponse
public Mono<com.azure.core.http.rest.Response<PathProperties>> readToFileWithResponse(ReadToFileOptions options)
Reads the entire file into a file specified by the path. By default, the destination file will be created and must not already exist; if it does, a
FileAlreadyExistsException
will be thrown. To override this behavior, provide appropriate
OpenOptions
.
Code Samples
ReadToFileOptions options = new ReadToFileOptions(file);
options.setRange(new FileRange(1024, 2048L));
options.setDownloadRetryOptions(new DownloadRetryOptions().setMaxRetryRequests(5));
options.setOpenOptions(new HashSet<>(Arrays.asList(StandardOpenOption.CREATE_NEW,
    StandardOpenOption.WRITE, StandardOpenOption.READ))); // Default options
options.setParallelTransferOptions(new ParallelTransferOptions().setBlockSizeLong(4L * Constants.MB));
options.setDataLakeRequestConditions(null);
options.setRangeGetContentMd5(false);

client.readToFileWithResponse(options)
    .subscribe(response -> System.out.println("Completed download to file"));
For more information, see the Azure Docs
- Parameters:
options
- ReadToFileOptions for the request.
- Returns:
- A reactive response containing the file properties and metadata.
- Throws:
IllegalArgumentException
- If blockSize is less than 0 or greater than 100MB.
UncheckedIOException
- If an I/O error occurs.
-
rename
Moves the file to another location within the file system. For more information, see the Azure Docs.
Code Samples
DataLakeFileAsyncClient renamedClient = client.rename(fileSystemName, destinationPath).block();
System.out.println("File Client has been renamed");
- Parameters:
destinationFileSystem
- The file system of the destination within the account. Pass null for the current file system.
destinationPath
- Relative path from the file system to rename the file to; excludes the file system name. For example, to move a file with fileSystem = "myfilesystem" and path = "mydir/hello.txt" to another path in myfilesystem (e.g. newdir/hi.txt), set destinationPath = "newdir/hi.txt".
- Returns:
- A Mono containing a DataLakeFileAsyncClient used to interact with the new file created.
-
renameWithResponse
public Mono<com.azure.core.http.rest.Response<DataLakeFileAsyncClient>> renameWithResponse(String destinationFileSystem, String destinationPath, DataLakeRequestConditions sourceRequestConditions, DataLakeRequestConditions destinationRequestConditions)
Moves the file to another location within the file system. For more information, see the Azure Docs.
Code Samples
DataLakeRequestConditions sourceRequestConditions = new DataLakeRequestConditions()
    .setLeaseId(leaseId);
DataLakeRequestConditions destinationRequestConditions = new DataLakeRequestConditions();

DataLakeFileAsyncClient newRenamedClient = client.renameWithResponse(fileSystemName, destinationPath,
    sourceRequestConditions, destinationRequestConditions).block().getValue();
System.out.println("File Client has been renamed");
- Parameters:
destinationFileSystem
- The file system of the destination within the account. Pass null for the current file system.
destinationPath
- Relative path from the file system to rename the file to; excludes the file system name. For example, to move a file with fileSystem = "myfilesystem" and path = "mydir/hello.txt" to another path in myfilesystem (e.g. newdir/hi.txt), set destinationPath = "newdir/hi.txt".
sourceRequestConditions
- DataLakeRequestConditions against the source.
destinationRequestConditions
- DataLakeRequestConditions against the destination.
- Returns:
- A Mono containing a Response whose value contains a DataLakeFileAsyncClient used to interact with the file created.
-
query
Queries the entire file. For more information, see the Azure Docs.
Code Samples
ByteArrayOutputStream queryData = new ByteArrayOutputStream();
String expression = "SELECT * from BlobStorage";
client.query(expression).subscribe(piece -> {
    try {
        queryData.write(piece.array());
    } catch (IOException ex) {
        throw new UncheckedIOException(ex);
    }
});
- Parameters:
expression
- The query expression.
- Returns:
- A reactive response containing the queried data.
-
queryWithResponse
Queries the entire file. For more information, see the Azure Docs.
Code Samples
String expression = "SELECT * from BlobStorage";
FileQueryJsonSerialization input = new FileQueryJsonSerialization()
    .setRecordSeparator('\n');
FileQueryDelimitedSerialization output = new FileQueryDelimitedSerialization()
    .setEscapeChar('\0')
    .setColumnSeparator(',')
    .setRecordSeparator('\n')
    .setFieldQuote('\'')
    .setHeadersPresent(true);
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions().setLeaseId(leaseId);
Consumer<FileQueryError> errorConsumer = System.out::println;
Consumer<FileQueryProgress> progressConsumer = progress ->
    System.out.println("total file bytes read: " + progress.getBytesScanned());
FileQueryOptions queryOptions = new FileQueryOptions(expression)
    .setInputSerialization(input)
    .setOutputSerialization(output)
    .setRequestConditions(requestConditions)
    .setErrorConsumer(errorConsumer)
    .setProgressConsumer(progressConsumer);

client.queryWithResponse(queryOptions)
    .subscribe(response -> {
        ByteArrayOutputStream queryData = new ByteArrayOutputStream();
        response.getValue().subscribe(piece -> {
            try {
                queryData.write(piece.array());
            } catch (IOException ex) {
                throw new UncheckedIOException(ex);
            }
        });
    });
- Parameters:
queryOptions
- The query options.
- Returns:
- A reactive response containing the queried data.
-
scheduleDeletion
Schedules the file for deletion.
Code Samples
FileScheduleDeletionOptions options = new FileScheduleDeletionOptions(OffsetDateTime.now().plusDays(1));
client.scheduleDeletion(options)
    .subscribe(r -> System.out.println("File deletion has been scheduled"));
- Parameters:
options
- Schedule deletion parameters.
- Returns:
- A reactive response signalling completion.
-
scheduleDeletionWithResponse
public Mono<com.azure.core.http.rest.Response<Void>> scheduleDeletionWithResponse(FileScheduleDeletionOptions options)
Schedules the file for deletion.
Code Samples
FileScheduleDeletionOptions options = new FileScheduleDeletionOptions(OffsetDateTime.now().plusDays(1));
client.scheduleDeletionWithResponse(options)
    .subscribe(r -> System.out.println("File deletion has been scheduled"));
- Parameters:
options
- Schedule deletion parameters.
- Returns:
- A reactive response signalling completion.
-
Deprecated. Use
flush(long, boolean)
instead.