Class DataLakeFileClient
This client is instantiated through DataLakePathClientBuilder or retrieved via
getFileClient.
Please refer to the Azure Docs for more information.
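As a minimal instantiation sketch (not from this document): the endpoint, file system, and path below are placeholders, and the credential comes from the separate azure-identity package; substitute whichever credential type your application uses.

```java
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.storage.file.datalake.DataLakeFileClient;
import com.azure.storage.file.datalake.DataLakePathClientBuilder;

public class CreateFileClient {
    public static void main(String[] args) {
        // Hypothetical account endpoint, file system, and file path.
        DataLakeFileClient fileClient = new DataLakePathClientBuilder()
            .endpoint("https://<account>.dfs.core.windows.net")
            .credential(new DefaultAzureCredentialBuilder().build())
            .fileSystemName("my-file-system")
            .pathName("dir/file.txt")
            .buildFileClient();

        // The client is now bound to one file path on the service.
        System.out.println(fileClient.getFileUrl());
    }
}
```

The same builder can also produce a directory client via buildDirectoryClient; only buildFileClient returns a DataLakeFileClient.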
-
Method Summary
void append(com.azure.core.util.BinaryData data, long fileOffset)
    Appends data to the specified resource to later be flushed (written) by a call to flush.
void append(InputStream data, long fileOffset, long length)
    Appends data to the specified resource to later be flushed (written) by a call to flush.
com.azure.core.http.rest.Response<Void> appendWithResponse(com.azure.core.util.BinaryData data, long fileOffset, byte[] contentMd5, String leaseId, Duration timeout, com.azure.core.util.Context context)
    Appends data to the specified resource to later be flushed (written) by a call to flush.
com.azure.core.http.rest.Response<Void> appendWithResponse(com.azure.core.util.BinaryData data, long fileOffset, DataLakeFileAppendOptions appendOptions, Duration timeout, com.azure.core.util.Context context)
    Appends data to the specified resource to later be flushed (written) by a call to flush.
com.azure.core.http.rest.Response<Void> appendWithResponse(InputStream data, long fileOffset, long length, byte[] contentMd5, String leaseId, Duration timeout, com.azure.core.util.Context context)
    Appends data to the specified resource to later be flushed (written) by a call to flush.
com.azure.core.http.rest.Response<Void> appendWithResponse(InputStream data, long fileOffset, long length, DataLakeFileAppendOptions appendOptions, Duration timeout, com.azure.core.util.Context context)
    Appends data to the specified resource to later be flushed (written) by a call to flush.
void delete()
    Deletes a file.
boolean deleteIfExists()
    Deletes a file if it exists.
com.azure.core.http.rest.Response<Boolean> deleteIfExistsWithResponse(DataLakePathDeleteOptions options, Duration timeout, com.azure.core.util.Context context)
    Deletes a file if it exists.
com.azure.core.http.rest.Response<Void> deleteWithResponse(DataLakeRequestConditions requestConditions, Duration timeout, com.azure.core.util.Context context)
    Deletes a file.
PathInfo flush(long position)
    Deprecated. See flush(long, boolean) instead.
PathInfo flush(long position, boolean overwrite)
    Flushes (writes) data previously appended to the file through a call to append.
com.azure.core.http.rest.Response<PathInfo> flushWithResponse(long position, boolean retainUncommittedData, boolean close, PathHttpHeaders httpHeaders, DataLakeRequestConditions requestConditions, Duration timeout, com.azure.core.util.Context context)
    Flushes (writes) data previously appended to the file through a call to append.
com.azure.core.http.rest.Response<PathInfo> flushWithResponse(long position, DataLakeFileFlushOptions flushOptions, Duration timeout, com.azure.core.util.Context context)
    Flushes (writes) data previously appended to the file through a call to append.
DataLakeFileClient getCustomerProvidedKeyClient(CustomerProvidedKey customerProvidedKey)
    Creates a new DataLakeFileClient with the specified customerProvidedKey.
String getFileName()
    Gets the name of this file, not including its full path.
String getFilePath()
    Gets the path of this file, not including the name of the resource itself.
String getFileUrl()
    Gets the URL of the file represented by this client on the Data Lake service.
OutputStream getOutputStream()
    Creates and opens an output stream to write data to the file.
OutputStream getOutputStream(DataLakeFileOutputStreamOptions options)
    Creates and opens an output stream to write data to the file.
OutputStream getOutputStream(DataLakeFileOutputStreamOptions options, com.azure.core.util.Context context)
    Creates and opens an output stream to write data to the file.
DataLakeFileOpenInputStreamResult openInputStream()
    Opens a file input stream to download the file.
DataLakeFileOpenInputStreamResult openInputStream(DataLakeFileInputStreamOptions options)
    Opens a file input stream to download the specified range of the file.
DataLakeFileOpenInputStreamResult openInputStream(DataLakeFileInputStreamOptions options, com.azure.core.util.Context context)
    Opens a file input stream to download the specified range of the file.
InputStream openQueryInputStream(String expression)
    Opens an input stream to query the file.
com.azure.core.http.rest.Response<InputStream> openQueryInputStreamWithResponse(FileQueryOptions queryOptions)
    Opens an input stream to query the file.
void query(OutputStream stream, String expression)
    Queries an entire file into an output stream.
FileQueryResponse queryWithResponse(FileQueryOptions queryOptions, Duration timeout, com.azure.core.util.Context context)
    Queries an entire file into an output stream.
void read(OutputStream stream)
    Reads the entire file into an output stream.
PathProperties readToFile(ReadToFileOptions options)
    Reads the entire file into a file specified by the path.
PathProperties readToFile(String filePath)
    Reads the entire file into a file specified by the path.
PathProperties readToFile(String filePath, boolean overwrite)
    Reads the entire file into a file specified by the path.
com.azure.core.http.rest.Response<PathProperties> readToFileWithResponse(ReadToFileOptions options, Duration timeout, com.azure.core.util.Context context)
    Reads the entire file into a file specified by the path.
com.azure.core.http.rest.Response<PathProperties> readToFileWithResponse(String filePath, FileRange range, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions, DownloadRetryOptions downloadRetryOptions, DataLakeRequestConditions requestConditions, boolean rangeGetContentMd5, Set<OpenOption> openOptions, Duration timeout, com.azure.core.util.Context context)
    Reads the entire file into a file specified by the path.
FileReadResponse readWithResponse(OutputStream stream, FileRange range, DownloadRetryOptions options, DataLakeRequestConditions requestConditions, boolean getRangeContentMd5, Duration timeout, com.azure.core.util.Context context)
    Reads a range of bytes from a file into an output stream.
DataLakeFileClient rename(String destinationFileSystem, String destinationPath)
    Moves the file to another location within the file system.
com.azure.core.http.rest.Response<DataLakeFileClient> renameWithResponse(String destinationFileSystem, String destinationPath, DataLakeRequestConditions sourceRequestConditions, DataLakeRequestConditions destinationRequestConditions, Duration timeout, com.azure.core.util.Context context)
    Moves the file to another location within the file system.
void scheduleDeletion(FileScheduleDeletionOptions options)
    Schedules the file for deletion.
com.azure.core.http.rest.Response<Void> scheduleDeletionWithResponse(FileScheduleDeletionOptions options, Duration timeout, com.azure.core.util.Context context)
    Schedules the file for deletion.
PathInfo upload(com.azure.core.util.BinaryData data)
    Creates a new file.
PathInfo upload(com.azure.core.util.BinaryData data, boolean overwrite)
    Creates a new file, or updates the content of an existing file.
PathInfo upload(InputStream data, long length)
    Creates a new file.
PathInfo upload(InputStream data, long length, boolean overwrite)
    Creates a new file, or updates the content of an existing file.
void uploadFromFile(String filePath)
    Creates a file, with the content of the specified file.
void uploadFromFile(String filePath, boolean overwrite)
    Creates a file, with the content of the specified file.
void uploadFromFile(String filePath, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions, PathHttpHeaders headers, Map<String, String> metadata, DataLakeRequestConditions requestConditions, Duration timeout)
    Creates a file, with the content of the specified file.
com.azure.core.http.rest.Response<PathInfo> uploadFromFileWithResponse(String filePath, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions, PathHttpHeaders headers, Map<String, String> metadata, DataLakeRequestConditions requestConditions, Duration timeout, com.azure.core.util.Context context)
    Creates a file, with the content of the specified file.
com.azure.core.http.rest.Response<PathInfo> uploadWithResponse(FileParallelUploadOptions options, Duration timeout, com.azure.core.util.Context context)
    Creates a new file.

Methods inherited from class com.azure.storage.file.datalake.DataLakePathClient
create, create, createIfNotExists, createIfNotExistsWithResponse, createWithResponse, createWithResponse, exists, existsWithResponse, generateSas, generateSas, generateSas, generateUserDelegationSas, generateUserDelegationSas, generateUserDelegationSas, getAccessControl, getAccessControlWithResponse, getAccountName, getCustomerProvidedKey, getFileSystemName, getHttpPipeline, getProperties, getProperties, getPropertiesWithResponse, getServiceVersion, removeAccessControlRecursive, removeAccessControlRecursiveWithResponse, setAccessControlList, setAccessControlListWithResponse, setAccessControlRecursive, setAccessControlRecursiveWithResponse, setHttpHeaders, setHttpHeadersWithResponse, setMetadata, setMetadataWithResponse, setPermissions, setPermissionsWithResponse, updateAccessControlRecursive, updateAccessControlRecursiveWithResponse
-
Method Details
-
getFileUrl
Gets the URL of the file represented by this client on the Data Lake service.
- Returns:
  The URL of the file.
-
getFilePath
Gets the path of this file, not including the name of the resource itself.
- Returns:
  The path of the file.
-
getFileName
Gets the name of this file, not including its full path.
- Returns:
  The name of the file.
-
getCustomerProvidedKeyClient
Creates a new DataLakeFileClient with the specified customerProvidedKey.
- Overrides:
  getCustomerProvidedKeyClient in class DataLakePathClient
- Parameters:
  customerProvidedKey - the CustomerProvidedKey for the blob; pass null to use no customer-provided key.
- Returns:
  A DataLakeFileClient with the specified customerProvidedKey.
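As a sketch of how this might be used, following the style of the other samples here (an existing `client` is assumed, and the base64 key string is a placeholder, not a real key):

```java
import com.azure.storage.file.datalake.DataLakeFileClient;
import com.azure.storage.file.datalake.models.CustomerProvidedKey;

// Placeholder key: in practice this is a base64-encoded AES-256 key you manage.
CustomerProvidedKey key = new CustomerProvidedKey("<base64-encoded-AES-256-key>");

// The returned client targets the same file; operations through it send the key
// with each request so the service can encrypt/decrypt with it.
DataLakeFileClient cpkClient = client.getCustomerProvidedKeyClient(key);
```

Passing null instead of a CustomerProvidedKey yields a client that uses no customer-provided key.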
-
delete
public void delete()
Deletes a file.
Code Samples
client.delete();
System.out.println("Delete request completed");
For more information see the Azure Docs.
-
deleteWithResponse
public com.azure.core.http.rest.Response<Void> deleteWithResponse(DataLakeRequestConditions requestConditions, Duration timeout, com.azure.core.util.Context context)
Deletes a file.
Code Samples
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
    .setLeaseId(leaseId);

client.deleteWithResponse(requestConditions, timeout, new Context(key1, value1));
System.out.println("Delete request completed");
For more information see the Azure Docs.
- Parameters:
  requestConditions - DataLakeRequestConditions
  timeout - An optional timeout value beyond which a RuntimeException will be raised.
  context - Additional context that is passed through the HTTP pipeline during the service call.
- Returns:
  A response containing status code and HTTP headers.
-
deleteIfExists
public boolean deleteIfExists()
Deletes a file if it exists.
Code Samples
client.deleteIfExists();
System.out.println("Delete request completed");
For more information see the Azure Docs.
- Overrides:
  deleteIfExists in class DataLakePathClient
- Returns:
  true if the file is successfully deleted, false if the file does not exist.
-
deleteIfExistsWithResponse
public com.azure.core.http.rest.Response<Boolean> deleteIfExistsWithResponse(DataLakePathDeleteOptions options, Duration timeout, com.azure.core.util.Context context)
Deletes a file if it exists.
Code Samples
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
    .setLeaseId(leaseId);
DataLakePathDeleteOptions options = new DataLakePathDeleteOptions().setIsRecursive(false)
    .setRequestConditions(requestConditions);

Response<Boolean> response = client.deleteIfExistsWithResponse(options, timeout, new Context(key1, value1));
if (response.getStatusCode() == 404) {
    System.out.println("Does not exist.");
} else {
    System.out.printf("Delete completed with status %d%n", response.getStatusCode());
}
For more information see the Azure Docs.
- Overrides:
  deleteIfExistsWithResponse in class DataLakePathClient
- Parameters:
  options - DataLakePathDeleteOptions
  timeout - An optional timeout value beyond which a RuntimeException will be raised.
  context - Additional context that is passed through the HTTP pipeline during the service call.
- Returns:
  A response containing status code and HTTP headers. If the Response's status code is 200, the file was successfully deleted. If the status code is 404, the file does not exist.
-
upload
public PathInfo upload(InputStream data, long length)
Creates a new file. By default, this method will not overwrite an existing file.
Code Samples
try {
    client.upload(data, length);
    System.out.println("Upload succeeded");
} catch (UncheckedIOException ex) {
    System.err.printf("Failed to upload %s%n", ex.getMessage());
}
- Parameters:
  data - The data to write to the blob. The data must be markable in order to support retries. If the data is not markable, consider wrapping your data source in a BufferedInputStream to add mark support.
  length - The exact length of the data. It is important that this value match precisely the length of the data provided in the InputStream.
- Returns:
  Information about the uploaded path.
-
upload
public PathInfo upload(com.azure.core.util.BinaryData data)
Creates a new file. By default, this method will not overwrite an existing file.
Code Samples
try {
    client.upload(binaryData);
    System.out.println("Upload succeeded");
} catch (UncheckedIOException ex) {
    System.err.printf("Failed to upload %s%n", ex.getMessage());
}
- Parameters:
  data - The data to write to the blob. The data must be markable in order to support retries. If the data is not markable, consider wrapping your data source in a BufferedInputStream to add mark support.
- Returns:
  Information about the uploaded path.
-
upload
public PathInfo upload(InputStream data, long length, boolean overwrite)
Creates a new file, or updates the content of an existing file.
Code Samples
try {
    boolean overwrite = false;
    client.upload(data, length, overwrite);
    System.out.println("Upload succeeded");
} catch (UncheckedIOException ex) {
    System.err.printf("Failed to upload %s%n", ex.getMessage());
}
- Parameters:
  data - The data to write to the blob. The data must be markable in order to support retries. If the data is not markable, consider wrapping your data source in a BufferedInputStream to add mark support.
  length - The exact length of the data. It is important that this value match precisely the length of the data provided in the InputStream.
  overwrite - Whether to overwrite, should data exist on the file.
- Returns:
  Information about the uploaded path.
-
upload
public PathInfo upload(com.azure.core.util.BinaryData data, boolean overwrite)
Creates a new file, or updates the content of an existing file.
Code Samples
try {
    boolean overwrite = false;
    client.upload(binaryData, overwrite);
    System.out.println("Upload succeeded");
} catch (UncheckedIOException ex) {
    System.err.printf("Failed to upload %s%n", ex.getMessage());
}
- Parameters:
  data - The data to write to the blob. The data must be markable in order to support retries. If the data is not markable, consider wrapping your data source in a BufferedInputStream to add mark support.
  overwrite - Whether to overwrite, should data exist on the file.
- Returns:
  Information about the uploaded path.
-
uploadWithResponse
public com.azure.core.http.rest.Response<PathInfo> uploadWithResponse(FileParallelUploadOptions options, Duration timeout, com.azure.core.util.Context context)
Creates a new file. To avoid overwriting, pass "*" to DataLakeRequestConditions.setIfNoneMatch(String).
Code Samples
PathHttpHeaders headers = new PathHttpHeaders()
    .setContentMd5("data".getBytes(StandardCharsets.UTF_8))
    .setContentLanguage("en-US")
    .setContentType("binary");

Map<String, String> metadata = Collections.singletonMap("metadata", "value");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
    .setLeaseId(leaseId)
    .setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
Long blockSize = 100L * 1024L * 1024L; // 100 MB
ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions().setBlockSizeLong(blockSize);

try {
    client.uploadWithResponse(new FileParallelUploadOptions(data, length)
        .setParallelTransferOptions(parallelTransferOptions).setHeaders(headers)
        .setMetadata(metadata).setRequestConditions(requestConditions)
        .setPermissions("permissions").setUmask("umask"), timeout, new Context("key", "value"));
    System.out.println("Upload succeeded");
} catch (UncheckedIOException ex) {
    System.err.printf("Failed to upload %s%n", ex.getMessage());
}
- Parameters:
  options - FileParallelUploadOptions
  timeout - An optional timeout value beyond which a RuntimeException will be raised.
  context - Additional context that is passed through the HTTP pipeline during the service call.
- Returns:
  Information about the uploaded path.
-
uploadFromFile
public void uploadFromFile(String filePath)
Creates a file, with the content of the specified file. By default, this method will not overwrite an existing file.
Code Samples
try {
    client.uploadFromFile(filePath);
    System.out.println("Upload from file succeeded");
} catch (UncheckedIOException ex) {
    System.err.printf("Failed to upload from file %s%n", ex.getMessage());
}
- Parameters:
  filePath - Path of the file to upload
- Throws:
  UncheckedIOException - If an I/O error occurs
-
uploadFromFile
public void uploadFromFile(String filePath, boolean overwrite)
Creates a file, with the content of the specified file.
Code Samples
try {
    boolean overwrite = false;
    client.uploadFromFile(filePath, overwrite);
    System.out.println("Upload from file succeeded");
} catch (UncheckedIOException ex) {
    System.err.printf("Failed to upload from file %s%n", ex.getMessage());
}
- Parameters:
  filePath - Path of the file to upload
  overwrite - Whether to overwrite, should the file already exist
- Throws:
  UncheckedIOException - If an I/O error occurs
-
uploadFromFile
public void uploadFromFile(String filePath, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions, PathHttpHeaders headers, Map<String, String> metadata, DataLakeRequestConditions requestConditions, Duration timeout)
Creates a file, with the content of the specified file. To avoid overwriting, pass "*" to DataLakeRequestConditions.setIfNoneMatch(String).
Code Samples
PathHttpHeaders headers = new PathHttpHeaders()
    .setContentMd5("data".getBytes(StandardCharsets.UTF_8))
    .setContentLanguage("en-US")
    .setContentType("binary");

Map<String, String> metadata = Collections.singletonMap("metadata", "value");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
    .setLeaseId(leaseId)
    .setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
Long blockSize = 100L * 1024L * 1024L; // 100 MB
ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions().setBlockSizeLong(blockSize);

try {
    client.uploadFromFile(filePath, parallelTransferOptions, headers, metadata, requestConditions, timeout);
    System.out.println("Upload from file succeeded");
} catch (UncheckedIOException ex) {
    System.err.printf("Failed to upload from file %s%n", ex.getMessage());
}
- Parameters:
  filePath - Path of the file to upload
  parallelTransferOptions - ParallelTransferOptions used to configure buffered uploading.
  headers - PathHttpHeaders
  metadata - Metadata to associate with the resource. If there is leading or trailing whitespace in any metadata key or value, it must be removed or encoded.
  requestConditions - DataLakeRequestConditions
  timeout - An optional timeout value beyond which a RuntimeException will be raised.
- Throws:
  UncheckedIOException - If an I/O error occurs
-
uploadFromFileWithResponse
public com.azure.core.http.rest.Response<PathInfo> uploadFromFileWithResponse(String filePath, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions, PathHttpHeaders headers, Map<String, String> metadata, DataLakeRequestConditions requestConditions, Duration timeout, com.azure.core.util.Context context)
Creates a file, with the content of the specified file. To avoid overwriting, pass "*" to DataLakeRequestConditions.setIfNoneMatch(String).
Code Samples
PathHttpHeaders headers = new PathHttpHeaders()
    .setContentMd5("data".getBytes(StandardCharsets.UTF_8))
    .setContentLanguage("en-US")
    .setContentType("binary");

Map<String, String> metadata = Collections.singletonMap("metadata", "value");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
    .setLeaseId(leaseId)
    .setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
Long blockSize = 100L * 1024L * 1024L; // 100 MB
ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions().setBlockSizeLong(blockSize);

try {
    Response<PathInfo> response = client.uploadFromFileWithResponse(filePath, parallelTransferOptions, headers,
        metadata, requestConditions, timeout, new Context("key", "value"));
    System.out.printf("Upload from file succeeded with status %d%n", response.getStatusCode());
} catch (UncheckedIOException ex) {
    System.err.printf("Failed to upload from file %s%n", ex.getMessage());
}
- Parameters:
  filePath - Path of the file to upload
  parallelTransferOptions - ParallelTransferOptions used to configure buffered uploading.
  headers - PathHttpHeaders
  metadata - Metadata to associate with the resource. If there is leading or trailing whitespace in any metadata key or value, it must be removed or encoded.
  requestConditions - DataLakeRequestConditions
  timeout - An optional timeout value beyond which a RuntimeException will be raised.
  context - Additional context that is passed through the HTTP pipeline during the service call.
- Returns:
  Response containing information about the uploaded path.
- Throws:
  UncheckedIOException - If an I/O error occurs
-
append
public void append(InputStream data, long fileOffset, long length)
Appends data to the specified resource to later be flushed (written) by a call to flush.
Code Samples
client.append(data, offset, length);
System.out.println("Append data completed");
For more information, see the Azure Docs.
- Parameters:
  data - The data to write to the file.
  fileOffset - The position where the data is to be appended.
  length - The exact length of the data.
-
append
public void append(com.azure.core.util.BinaryData data, long fileOffset)
Appends data to the specified resource to later be flushed (written) by a call to flush.
Code Samples
client.append(binaryData, offset);
System.out.println("Append data completed");
For more information, see the Azure Docs.
- Parameters:
  data - The data to write to the file.
  fileOffset - The position where the data is to be appended.
-
appendWithResponse
public com.azure.core.http.rest.Response<Void> appendWithResponse(InputStream data, long fileOffset, long length, byte[] contentMd5, String leaseId, Duration timeout, com.azure.core.util.Context context)
Appends data to the specified resource to later be flushed (written) by a call to flush.
Code Samples
byte[] contentMd5 = new byte[0]; // Replace with valid md5

Response<Void> response = client.appendWithResponse(data, offset, length, contentMd5, leaseId, timeout,
    new Context(key1, value1));
System.out.printf("Append data completed with status %d%n", response.getStatusCode());
For more information, see the Azure Docs.
- Parameters:
  data - The data to write to the file.
  fileOffset - The position where the data is to be appended.
  length - The exact length of the data.
  contentMd5 - An MD5 hash of the content of the data. If specified, the service will calculate the MD5 of the received data and fail the request if it does not match the provided MD5.
  leaseId - By setting lease id, requests will fail if the provided lease does not match the active lease on the file.
  timeout - An optional timeout value beyond which a RuntimeException will be raised.
  context - Additional context that is passed through the HTTP pipeline during the service call.
- Returns:
  A response signalling completion.
-
appendWithResponse
public com.azure.core.http.rest.Response<Void> appendWithResponse(InputStream data, long fileOffset, long length, DataLakeFileAppendOptions appendOptions, Duration timeout, com.azure.core.util.Context context)
Appends data to the specified resource to later be flushed (written) by a call to flush.
Code Samples
byte[] contentMd5 = new byte[0]; // Replace with valid md5
DataLakeFileAppendOptions appendOptions = new DataLakeFileAppendOptions()
    .setLeaseId(leaseId)
    .setContentHash(contentMd5)
    .setFlush(true);

Response<Void> response = client.appendWithResponse(data, offset, length, appendOptions, timeout,
    new Context(key1, value1));
System.out.printf("Append data completed with status %d%n", response.getStatusCode());
For more information, see the Azure Docs.
- Parameters:
  data - The data to write to the file.
  fileOffset - The position where the data is to be appended.
  length - The exact length of the data.
  appendOptions - DataLakeFileAppendOptions
  timeout - An optional timeout value beyond which a RuntimeException will be raised.
  context - Additional context that is passed through the HTTP pipeline during the service call.
- Returns:
  A response signalling completion.
-
appendWithResponse
public com.azure.core.http.rest.Response<Void> appendWithResponse(com.azure.core.util.BinaryData data, long fileOffset, byte[] contentMd5, String leaseId, Duration timeout, com.azure.core.util.Context context)
Appends data to the specified resource to later be flushed (written) by a call to flush.
Code Samples
byte[] contentMd5 = new byte[0]; // Replace with valid md5

Response<Void> response = client.appendWithResponse(binaryData, offset, contentMd5, leaseId, timeout,
    new Context(key1, value1));
System.out.printf("Append data completed with status %d%n", response.getStatusCode());
For more information, see the Azure Docs.
- Parameters:
  data - The data to write to the file.
  fileOffset - The position where the data is to be appended.
  contentMd5 - An MD5 hash of the content of the data. If specified, the service will calculate the MD5 of the received data and fail the request if it does not match the provided MD5.
  leaseId - By setting lease id, requests will fail if the provided lease does not match the active lease on the file.
  timeout - An optional timeout value beyond which a RuntimeException will be raised.
  context - Additional context that is passed through the HTTP pipeline during the service call.
- Returns:
  A response signalling completion.
-
appendWithResponse
public com.azure.core.http.rest.Response<Void> appendWithResponse(com.azure.core.util.BinaryData data, long fileOffset, DataLakeFileAppendOptions appendOptions, Duration timeout, com.azure.core.util.Context context)
Appends data to the specified resource to later be flushed (written) by a call to flush.
Code Samples
BinaryData binaryData = BinaryData.fromStream(data, length);
byte[] contentMd5 = new byte[0]; // Replace with valid md5
DataLakeFileAppendOptions appendOptions = new DataLakeFileAppendOptions()
    .setLeaseId(leaseId)
    .setContentHash(contentMd5)
    .setFlush(true);

Response<Void> response = client.appendWithResponse(binaryData, offset, appendOptions, timeout,
    new Context(key1, value1));
System.out.printf("Append data completed with status %d%n", response.getStatusCode());
For more information, see the Azure Docs.
- Parameters:
  data - The data to write to the file.
  fileOffset - The position where the data is to be appended.
  appendOptions - DataLakeFileAppendOptions
  timeout - An optional timeout value beyond which a RuntimeException will be raised.
  context - Additional context that is passed through the HTTP pipeline during the service call.
- Returns:
  A response signalling completion.
-
flush
public PathInfo flush(long position)
Deprecated. See flush(long, boolean) instead.
Flushes (writes) data previously appended to the file through a call to append. The previously uploaded data must be contiguous. By default this method will not overwrite existing data.
Code Samples
client.flush(position);
System.out.println("Flush data completed");
For more information, see the Azure Docs.
- Parameters:
  position - The length of the file after all data has been written.
- Returns:
  Information about the created resource.
-
flush
public PathInfo flush(long position, boolean overwrite)
Flushes (writes) data previously appended to the file through a call to append. The previously uploaded data must be contiguous.
Code Samples
boolean overwrite = true;
client.flush(position, overwrite);
System.out.println("Flush data completed");
For more information, see the Azure Docs.
- Parameters:
  position - The length of the file after all data has been written.
  overwrite - Whether to overwrite, should data exist on the file.
- Returns:
  Information about the created resource.
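Taken together, append and flush form a two-step write: stage one or more contiguous ranges with append, then commit them with a single flush whose position equals the final file length. A minimal sketch, under the same assumptions as the samples above (an existing `client` and in-memory data):

```java
byte[] bytes = "hello, data lake".getBytes(java.nio.charset.StandardCharsets.UTF_8);

// Stage the bytes at offset 0; nothing is visible in the file yet.
client.append(new java.io.ByteArrayInputStream(bytes), 0, bytes.length);

// Commit everything staged so far. position is the total file length;
// overwrite=true allows replacing data already committed at these offsets.
PathInfo info = client.flush(bytes.length, true);
System.out.println("ETag after flush: " + info.getETag());
```

A second range would be appended at fileOffset = bytes.length before the flush, since staged ranges must be contiguous.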
-
flushWithResponse
public com.azure.core.http.rest.Response<PathInfo> flushWithResponse(long position, boolean retainUncommittedData, boolean close, PathHttpHeaders httpHeaders, DataLakeRequestConditions requestConditions, Duration timeout, com.azure.core.util.Context context)
Flushes (writes) data previously appended to the file through a call to append. The previously uploaded data must be contiguous.
Code Samples
boolean retainUncommittedData = false;
boolean close = false;
PathHttpHeaders httpHeaders = new PathHttpHeaders()
    .setContentLanguage("en-US")
    .setContentType("binary");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
    .setLeaseId(leaseId);

Response<PathInfo> response = client.flushWithResponse(position, retainUncommittedData, close, httpHeaders,
    requestConditions, timeout, new Context(key1, value1));
System.out.printf("Flush data completed with status %d%n", response.getStatusCode());
For more information, see the Azure Docs.
- Parameters:
  position - The length of the file after all data has been written.
  retainUncommittedData - Whether uncommitted data is to be retained after the operation.
  close - Whether a file changed event raised indicates completion (true) or modification (false).
  httpHeaders - PathHttpHeaders
  requestConditions - DataLakeRequestConditions
  timeout - An optional timeout value beyond which a RuntimeException will be raised.
  context - Additional context that is passed through the HTTP pipeline during the service call.
- Returns:
  A response containing the information of the created resource.
-
flushWithResponse
public com.azure.core.http.rest.Response<PathInfo> flushWithResponse(long position, DataLakeFileFlushOptions flushOptions, Duration timeout, com.azure.core.util.Context context)
Flushes (writes) data previously appended to the file through a call to append. The previously uploaded data must be contiguous.
Code Samples
boolean retainUncommittedData = false;
boolean close = false;
PathHttpHeaders httpHeaders = new PathHttpHeaders()
    .setContentLanguage("en-US")
    .setContentType("binary");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
    .setLeaseId(leaseId);
Integer leaseDuration = 15;

DataLakeFileFlushOptions flushOptions = new DataLakeFileFlushOptions()
    .setUncommittedDataRetained(retainUncommittedData)
    .setClose(close)
    .setPathHttpHeaders(httpHeaders)
    .setRequestConditions(requestConditions)
    .setLeaseAction(LeaseAction.ACQUIRE)
    .setLeaseDuration(leaseDuration)
    .setProposedLeaseId(leaseId);

Response<PathInfo> response = client.flushWithResponse(position, flushOptions, timeout,
    new Context(key1, value1));
System.out.printf("Flush data completed with status %d%n", response.getStatusCode());
For more information, see the Azure Docs.
- Parameters:
  position - The length of the file after all data has been written.
  flushOptions - DataLakeFileFlushOptions
  timeout - An optional timeout value beyond which a RuntimeException will be raised.
  context - Additional context that is passed through the HTTP pipeline during the service call.
- Returns:
  A response containing the information of the created resource.
-
read
public void read(OutputStream stream)
Reads the entire file into an output stream.
Code Samples
client.read(new ByteArrayOutputStream());
System.out.println("Download completed.");
For more information, see the Azure Docs.
- Parameters:
  stream - A non-null OutputStream instance where the downloaded data will be written.
- Throws:
  UncheckedIOException - If an I/O error occurs.
  NullPointerException - if stream is null
-
readWithResponse
public FileReadResponse readWithResponse(OutputStream stream, FileRange range, DownloadRetryOptions options, DataLakeRequestConditions requestConditions, boolean getRangeContentMd5, Duration timeout, com.azure.core.util.Context context) Reads a range of bytes from a file into an output stream.Code Samples
FileRange range = new FileRange(1024, 2048L); DownloadRetryOptions options = new DownloadRetryOptions().setMaxRetryRequests(5); System.out.printf("Download completed with status %d%n", client.readWithResponse(new ByteArrayOutputStream(), range, options, null, false, timeout, new Context(key2, value2)).getStatusCode());For more information, see the Azure Docs
- Parameters:
  - stream - A non-null OutputStream instance where the downloaded data will be written.
  - range - FileRange
  - options - DownloadRetryOptions
  - requestConditions - DataLakeRequestConditions
  - getRangeContentMd5 - Whether the contentMD5 for the specified file range should be returned.
  - timeout - An optional timeout value beyond which a RuntimeException will be raised.
  - context - Additional context that is passed through the Http pipeline during the service call.
- Returns:
- A response containing status code and HTTP headers.
- Throws:
  - UncheckedIOException - If an I/O error occurs.
  - NullPointerException - If stream is null.
-
openInputStream
Opens a file input stream to download the file. Locks on ETags.

DataLakeFileOpenInputStreamResult inputStream = client.openInputStream();
- Returns:
  - An InputStream object that represents the stream to use for reading from the file.
- Throws:
  - DataLakeStorageException - If a storage service error occurred.
-
openInputStream
Opens a file input stream to download the specified range of the file. Defaults to ETag locking if the option is not specified.

DataLakeFileInputStreamOptions options = new DataLakeFileInputStreamOptions().setBlockSize(1024)
    .setRequestConditions(new DataLakeRequestConditions());
DataLakeFileOpenInputStreamResult streamResult = client.openInputStream(options);

- Parameters:
  - options - DataLakeFileInputStreamOptions
- Returns:
  - A DataLakeFileOpenInputStreamResult object that contains the stream to use for reading from the file.
- Throws:
  - DataLakeStorageException - If a storage service error occurred.
-
openInputStream
public DataLakeFileOpenInputStreamResult openInputStream(DataLakeFileInputStreamOptions options, com.azure.core.util.Context context)

Opens a file input stream to download the specified range of the file. Defaults to ETag locking if the option is not specified.

DataLakeFileInputStreamOptions options = new DataLakeFileInputStreamOptions().setBlockSize(1024)
    .setRequestConditions(new DataLakeRequestConditions());
DataLakeFileOpenInputStreamResult stream = client.openInputStream(options, new Context(key1, value1));

- Parameters:
  - options - DataLakeFileInputStreamOptions
  - context - Additional context that is passed through the Http pipeline during the service call.
- Returns:
  - A DataLakeFileOpenInputStreamResult object that contains the stream to use for reading from the file.
- Throws:
  - DataLakeStorageException - If a storage service error occurred.
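A common way to consume the returned stream is to drain it into a local buffer with the JDK's InputStream.transferTo. A plain-JDK sketch of that copy, using a ByteArrayInputStream as a stand-in for the stream carried by DataLakeFileOpenInputStreamResult:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class ConsumeStream {
    // Drains any InputStream into a byte array, closing the stream when done.
    static byte[] drain(InputStream in) throws IOException {
        try (in) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            in.transferTo(out);
            return out.toByteArray();
        }
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for the stream returned by the openInputStream result.
        InputStream fakeServiceStream =
            new ByteArrayInputStream("chunked file data".getBytes(StandardCharsets.UTF_8));
        byte[] bytes = drain(fakeServiceStream);
        System.out.println(bytes.length); // prints 17
    }
}
```

Closing the stream in a try-with-resources block (as above) releases the underlying connection even if the copy fails partway.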
-
getOutputStream
Creates and opens an output stream to write data to the file. If the file already exists on the service, it will be overwritten.

- Returns:
  - The OutputStream that can be used to write to the file.
- Throws:
  - DataLakeStorageException - If a storage service error occurred.
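This overload ships without a code sample; the returned stream follows the standard OutputStream contract, so the usual write-and-close pattern applies (closing flushes any buffered data). A plain-JDK sketch, with a ByteArrayOutputStream standing in for the stream returned by getOutputStream():

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class WritePattern {
    // Writes data and closes the stream via try-with-resources, so the stream
    // is flushed and released even if the write throws.
    static void write(OutputStream target, byte[] data) throws IOException {
        try (OutputStream out = target) {
            out.write(data);
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        write(sink, "new file contents".getBytes(StandardCharsets.UTF_8));
        System.out.println(sink.size()); // prints 17
    }
}
```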
-
getOutputStream
Creates and opens an output stream to write data to the file. If the file already exists on the service, it will be overwritten. To avoid overwriting, pass "*" to DataLakeRequestConditions.setIfNoneMatch(String).

- Parameters:
  - options - DataLakeFileOutputStreamOptions
- Returns:
  - The OutputStream that can be used to write to the file.
- Throws:
  - DataLakeStorageException - If a storage service error occurred.
-
getOutputStream
public OutputStream getOutputStream(DataLakeFileOutputStreamOptions options, com.azure.core.util.Context context)

Creates and opens an output stream to write data to the file. If the file already exists on the service, it will be overwritten. To avoid overwriting, pass "*" to DataLakeRequestConditions.setIfNoneMatch(String).

- Parameters:
  - options - DataLakeFileOutputStreamOptions
  - context - Additional context that is passed through the Http pipeline during the service call.
- Returns:
  - The OutputStream that can be used to write to the file.
- Throws:
  - DataLakeStorageException - If a storage service error occurred.
-
readToFile
Reads the entire file into a file specified by the path. The destination file will be created and must not already exist; if it does, a FileAlreadyExistsException will be thrown.

Code Samples
client.readToFile(file);
System.out.println("Completed download to file");

For more information, see the Azure Docs.
- Parameters:
  - filePath - A String representing the filePath where the downloaded data will be written.
- Returns:
- The file properties and metadata.
- Throws:
  - UncheckedIOException - If an I/O error occurs.
-
readToFile
Reads the entire file into a file specified by the path. The destination file will be created and must not already exist; if it does, a FileAlreadyExistsException will be thrown.

Code Samples
client.readToFile(new ReadToFileOptions(file));
System.out.println("Completed download to file");

For more information, see the Azure Docs.
- Parameters:
  - options - ReadToFileOptions
- Returns:
- The file properties and metadata.
- Throws:
  - UncheckedIOException - If an I/O error occurs.
-
readToFile
Reads the entire file into a file specified by the path. If overwrite is set to false, the destination file will be created and must not already exist; if it does, a FileAlreadyExistsException will be thrown.

Code Samples
boolean overwrite = false; // Default value
client.readToFile(file, overwrite);
System.out.println("Completed download to file");

For more information, see the Azure Docs.
- Parameters:
  - filePath - A String representing the filePath where the downloaded data will be written.
  - overwrite - Whether to overwrite the file, should the file exist.
- Returns:
- The file properties and metadata.
- Throws:
  - UncheckedIOException - If an I/O error occurs.
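The must-not-exist behavior mirrors java.nio's StandardOpenOption.CREATE_NEW (which the readToFileWithResponse samples below pass explicitly as the default). A plain-JDK sketch of the two modes, using local temp files only:

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class OverwriteSemantics {
    public static void main(String[] args) throws IOException {
        Path target = Files.createTempFile("download", ".bin"); // file now exists

        // overwrite = true behaves like CREATE + TRUNCATE_EXISTING: succeeds.
        Files.write(target, new byte[] {1, 2, 3},
            StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING);

        // overwrite = false behaves like CREATE_NEW: fails when the file exists.
        try {
            Files.write(target, new byte[] {4}, StandardOpenOption.CREATE_NEW);
        } catch (FileAlreadyExistsException e) {
            System.out.println("FileAlreadyExistsException, as readToFile(filePath) would throw");
        }

        Files.delete(target);
    }
}
```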
-
readToFileWithResponse
public com.azure.core.http.rest.Response<PathProperties> readToFileWithResponse(String filePath, FileRange range, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions, DownloadRetryOptions downloadRetryOptions, DataLakeRequestConditions requestConditions, boolean rangeGetContentMd5, Set<OpenOption> openOptions, Duration timeout, com.azure.core.util.Context context)

Reads the entire file into a file specified by the path. By default the destination file will be created and must not already exist; if it does, a FileAlreadyExistsException will be thrown. To override this behavior, provide appropriate OpenOptions.

Code Samples
FileRange fileRange = new FileRange(1024, 2048L);
DownloadRetryOptions downloadRetryOptions = new DownloadRetryOptions().setMaxRetryRequests(5);
Set<OpenOption> openOptions = new HashSet<>(Arrays.asList(StandardOpenOption.CREATE_NEW,
    StandardOpenOption.WRITE, StandardOpenOption.READ)); // Default options

client.readToFileWithResponse(file, fileRange,
    new ParallelTransferOptions().setBlockSizeLong(4L * Constants.MB),
    downloadRetryOptions, null, false, openOptions, timeout, new Context(key2, value2));
System.out.println("Completed download to file");

For more information, see the Azure Docs.
- Parameters:
  - filePath - A String representing the filePath where the downloaded data will be written.
  - range - FileRange
  - parallelTransferOptions - ParallelTransferOptions to use to download to file. Number of parallel transfers parameter is ignored.
  - downloadRetryOptions - DownloadRetryOptions
  - requestConditions - DataLakeRequestConditions
  - rangeGetContentMd5 - Whether the contentMD5 for the specified file range should be returned.
  - openOptions - OpenOptions to use to configure how to open or create the file.
  - timeout - An optional timeout value beyond which a RuntimeException will be raised.
  - context - Additional context that is passed through the Http pipeline during the service call.
- Returns:
- A response containing the file properties and metadata.
- Throws:
UncheckedIOException- If an I/O error occurs.
-
readToFileWithResponse
public com.azure.core.http.rest.Response<PathProperties> readToFileWithResponse(ReadToFileOptions options, Duration timeout, com.azure.core.util.Context context)

Reads the entire file into a file specified by the path. By default the destination file will be created and must not already exist; if it does, a FileAlreadyExistsException will be thrown. To override this behavior, provide appropriate OpenOptions.

Code Samples
ReadToFileOptions options = new ReadToFileOptions(file);
options.setRange(new FileRange(1024, 2048L));
options.setDownloadRetryOptions(new DownloadRetryOptions().setMaxRetryRequests(5));
options.setOpenOptions(new HashSet<>(Arrays.asList(StandardOpenOption.CREATE_NEW,
    StandardOpenOption.WRITE, StandardOpenOption.READ))); // Default options
options.setParallelTransferOptions(new ParallelTransferOptions().setBlockSizeLong(4L * Constants.MB));
options.setDataLakeRequestConditions(null);
options.setRangeGetContentMd5(false);

client.readToFileWithResponse(options, timeout, new Context(key2, value2));
System.out.println("Completed download to file");

- Parameters:
  - options - ReadToFileOptions
  - timeout - An optional timeout value beyond which a RuntimeException will be raised.
  - context - Additional context that is passed through the Http pipeline during the service call.
- Returns:
- A response containing the file properties and metadata.
- Throws:
UncheckedIOException- If an I/O error occurs.
-
rename
Moves the file to another location within the file system. For more information, see the Azure Docs.

Code Samples
DataLakeFileClient renamedClient = client.rename(fileSystemName, destinationPath);
System.out.println("File Client has been renamed");

- Parameters:
  - destinationFileSystem - The file system of the destination within the account. null for the current file system.
  - destinationPath - Relative path from the file system to rename the file to; excludes the file system name. For example, to move a file with fileSystem = "myfilesystem", path = "mydir/hello.txt" to another path in myfilesystem (e.g. newdir/hi.txt), set destinationPath = "newdir/hi.txt".
- Returns:
  - A DataLakeFileClient used to interact with the new file created.
-
renameWithResponse
public com.azure.core.http.rest.Response<DataLakeFileClient> renameWithResponse(String destinationFileSystem, String destinationPath, DataLakeRequestConditions sourceRequestConditions, DataLakeRequestConditions destinationRequestConditions, Duration timeout, com.azure.core.util.Context context)

Moves the file to another location within the file system. For more information, see the Azure Docs.

Code Samples
DataLakeRequestConditions sourceRequestConditions = new DataLakeRequestConditions()
    .setLeaseId(leaseId);
DataLakeRequestConditions destinationRequestConditions = new DataLakeRequestConditions();

DataLakeFileClient newRenamedClient = client.renameWithResponse(fileSystemName, destinationPath,
    sourceRequestConditions, destinationRequestConditions, timeout,
    new Context(key1, value1)).getValue();
System.out.println("File Client has been renamed");

- Parameters:
  - destinationFileSystem - The file system of the destination within the account. null for the current file system.
  - destinationPath - Relative path from the file system to rename the file to; excludes the file system name. For example, to move a file with fileSystem = "myfilesystem", path = "mydir/hello.txt" to another path in myfilesystem (e.g. newdir/hi.txt), set destinationPath = "newdir/hi.txt".
  - sourceRequestConditions - DataLakeRequestConditions against the source.
  - destinationRequestConditions - DataLakeRequestConditions against the destination.
  - timeout - An optional timeout value beyond which a RuntimeException will be raised.
  - context - Additional context that is passed through the Http pipeline during the service call.
- Returns:
  - A Response whose value contains a DataLakeFileClient used to interact with the file created.
-
openQueryInputStream
Opens an input stream to query the file. For more information, see the Azure Docs.
Code Samples
String expression = "SELECT * from BlobStorage";
InputStream inputStream = client.openQueryInputStream(expression);
// Now you can read from the input stream like you would normally.
- Parameters:
  - expression - The query expression.
- Returns:
  - An InputStream object that represents the stream to use for reading the query response.
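The query response is a plain InputStream, so line-oriented output (e.g. the CSV or JSON records produced by the serialization options below) can be consumed with a BufferedReader. A plain-JDK sketch, with an in-memory stream standing in for the query response:

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class ReadQueryResult {
    // Collects the record-separated query output into a list of lines,
    // closing the stream when finished.
    static List<String> lines(InputStream queryResponse) throws IOException {
        List<String> result = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(queryResponse, StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                result.add(line);
            }
        }
        return result;
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for client.openQueryInputStream(expression).
        InputStream fake = new ByteArrayInputStream(
            "row1,a\nrow2,b\n".getBytes(StandardCharsets.UTF_8));
        System.out.println(lines(fake).size()); // prints 2
    }
}
```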
-
openQueryInputStreamWithResponse
public com.azure.core.http.rest.Response<InputStream> openQueryInputStreamWithResponse(FileQueryOptions queryOptions)

Opens an input stream to query the file. For more information, see the Azure Docs.
Code Samples
String expression = "SELECT * from BlobStorage";
FileQuerySerialization input = new FileQueryDelimitedSerialization()
    .setColumnSeparator(',')
    .setEscapeChar('\n')
    .setRecordSeparator('\n')
    .setHeadersPresent(true)
    .setFieldQuote('"');
FileQuerySerialization output = new FileQueryJsonSerialization()
    .setRecordSeparator('\n');
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
    .setLeaseId("leaseId");
Consumer<FileQueryError> errorConsumer = System.out::println;
Consumer<FileQueryProgress> progressConsumer = progress ->
    System.out.println("total file bytes read: " + progress.getBytesScanned());
FileQueryOptions queryOptions = new FileQueryOptions(expression)
    .setInputSerialization(input)
    .setOutputSerialization(output)
    .setRequestConditions(requestConditions)
    .setErrorConsumer(errorConsumer)
    .setProgressConsumer(progressConsumer);

InputStream inputStream = client.openQueryInputStreamWithResponse(queryOptions).getValue();
// Now you can read from the input stream like you would normally.

- Parameters:
  - queryOptions - The query options.
- Returns:
  - A response containing status code and HTTP headers, including an InputStream object that represents the stream to use for reading the query response.
-
query
Queries an entire file into an output stream. For more information, see the Azure Docs.
Code Samples
ByteArrayOutputStream queryData = new ByteArrayOutputStream();
String expression = "SELECT * from BlobStorage";
client.query(queryData, expression);
System.out.println("Query completed.");

- Parameters:
  - stream - A non-null OutputStream instance where the downloaded data will be written.
  - expression - The query expression.
- Throws:
  - UncheckedIOException - If an I/O error occurs.
  - NullPointerException - If stream is null.
-
queryWithResponse
public FileQueryResponse queryWithResponse(FileQueryOptions queryOptions, Duration timeout, com.azure.core.util.Context context)

Queries an entire file into an output stream. For more information, see the Azure Docs.
Code Samples
ByteArrayOutputStream queryData = new ByteArrayOutputStream();
String expression = "SELECT * from BlobStorage";
FileQueryJsonSerialization input = new FileQueryJsonSerialization()
    .setRecordSeparator('\n');
FileQueryDelimitedSerialization output = new FileQueryDelimitedSerialization()
    .setEscapeChar('\0')
    .setColumnSeparator(',')
    .setRecordSeparator('\n')
    .setFieldQuote('\'')
    .setHeadersPresent(true);
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions().setLeaseId(leaseId);
Consumer<FileQueryError> errorConsumer = System.out::println;
Consumer<FileQueryProgress> progressConsumer = progress ->
    System.out.println("total file bytes read: " + progress.getBytesScanned());
FileQueryOptions queryOptions = new FileQueryOptions(expression, queryData)
    .setInputSerialization(input)
    .setOutputSerialization(output)
    .setRequestConditions(requestConditions)
    .setErrorConsumer(errorConsumer)
    .setProgressConsumer(progressConsumer);

System.out.printf("Query completed with status %d%n",
    client.queryWithResponse(queryOptions, timeout, new Context(key1, value1))
        .getStatusCode());

- Parameters:
  - queryOptions - The query options.
  - timeout - An optional timeout value beyond which a RuntimeException will be raised.
  - context - Additional context that is passed through the Http pipeline during the service call.
- Returns:
- A response containing status code and HTTP headers.
- Throws:
  - UncheckedIOException - If an I/O error occurs.
  - NullPointerException - If the output stream carried by queryOptions is null.
-
scheduleDeletion
Schedules the file for deletion.

Code Samples
FileScheduleDeletionOptions options = new FileScheduleDeletionOptions(OffsetDateTime.now().plusDays(1));
client.scheduleDeletion(options);
System.out.println("File deletion has been scheduled");

- Parameters:
  - options - Schedule deletion parameters.
-
scheduleDeletionWithResponse
public com.azure.core.http.rest.Response<Void> scheduleDeletionWithResponse(FileScheduleDeletionOptions options, Duration timeout, com.azure.core.util.Context context)

Schedules the file for deletion.

Code Samples
FileScheduleDeletionOptions options = new FileScheduleDeletionOptions(OffsetDateTime.now().plusDays(1));
Context context = new Context("key", "value");

client.scheduleDeletionWithResponse(options, timeout, context);
System.out.println("File deletion has been scheduled");

- Parameters:
  - options - Schedule deletion parameters.
  - timeout - An optional timeout value beyond which a RuntimeException will be raised.
  - context - Additional context that is passed through the Http pipeline during the service call.
- Returns:
- A response containing status code and HTTP headers.
-
Deprecated. Use flush(long, boolean) instead.