Uses of Class
org.apache.drill.exec.store.parquet.ParquetReaderConfig
Packages that use ParquetReaderConfig:
- org.apache.drill.exec.metastore.store.parquet
- org.apache.drill.exec.store.delta
- org.apache.drill.exec.store.hive
- org.apache.drill.exec.store.parquet
- org.apache.drill.exec.store.parquet.metadata
Uses of ParquetReaderConfig in org.apache.drill.exec.metastore.store.parquet
Methods in org.apache.drill.exec.metastore.store.parquet with parameters of type ParquetReaderConfig:
- MetastoreParquetTableMetadataProvider.Builder.withReaderConfig(ParquetReaderConfig readerConfig)
- ParquetMetadataProviderBuilder.withReaderConfig(ParquetReaderConfig readerConfig)
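The withReaderConfig methods above follow the usual builder pattern: a ParquetReaderConfig is attached to the metadata provider's builder before the provider is built. A minimal sketch, not runnable outside a Drill classpath; the builder variable is a hypothetical instance obtained elsewhere:

```java
import org.apache.drill.exec.store.parquet.ParquetReaderConfig;

// A default reader configuration; getDefaultInstance() is listed under
// org.apache.drill.exec.store.parquet on this page.
ParquetReaderConfig readerConfig = ParquetReaderConfig.getDefaultInstance();

// 'builder' is a hypothetical MetastoreParquetTableMetadataProvider.Builder;
// withReaderConfig attaches the configuration before the provider is built.
builder.withReaderConfig(readerConfig);
```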
Uses of ParquetReaderConfig in org.apache.drill.exec.store.delta
Methods in org.apache.drill.exec.store.delta with parameters of type ParquetReaderConfig:
- DeltaGroupScan.DeltaGroupScanBuilder.readerConfig(ParquetReaderConfig readerConfig)

Constructors in org.apache.drill.exec.store.delta with parameters of type ParquetReaderConfig:
- DeltaGroupScan(String userName, List<ReadEntryWithPath> entries, StoragePluginConfig storageConfig, FormatPluginConfig formatConfig, List<SchemaPath> columns, TupleMetadata schema, String path, ParquetReaderConfig readerConfig, LogicalExpression condition, Integer limit, Map<org.apache.hadoop.fs.Path, Map<String, String>> partitionHolder, StoragePluginRegistry pluginRegistry)
- DeltaRowGroupScan(String userName, DeltaFormatPlugin formatPlugin, List<RowGroupReadEntry> rowGroupReadEntries, List<SchemaPath> columns, Map<org.apache.hadoop.fs.Path, Map<String, String>> partitions, ParquetReaderConfig readerConfig, LogicalExpression filter, TupleMetadata schema)
- DeltaRowGroupScan(StoragePluginRegistry registry, String userName, StoragePluginConfig storageConfig, FormatPluginConfig formatPluginConfig, List<RowGroupReadEntry> rowGroupReadEntries, List<SchemaPath> columns, Map<org.apache.hadoop.fs.Path, Map<String, String>> partitions, ParquetReaderConfig readerConfig, LogicalExpression filter, TupleMetadata schema)
Uses of ParquetReaderConfig in org.apache.drill.exec.store.hive
Constructors in org.apache.drill.exec.store.hive with parameters of type ParquetReaderConfig:
- HiveDrillNativeParquetRowGroupScan(String userName, HiveStoragePlugin hiveStoragePlugin, List<RowGroupReadEntry> rowGroupReadEntries, List<SchemaPath> columns, HivePartitionHolder hivePartitionHolder, Map<String, String> confProperties, ParquetReaderConfig readerConfig, LogicalExpression filter, TupleMetadata schema)
- HiveDrillNativeParquetRowGroupScan(StoragePluginRegistry registry, String userName, HiveStoragePluginConfig hiveStoragePluginConfig, List<RowGroupReadEntry> rowGroupReadEntries, List<SchemaPath> columns, HivePartitionHolder hivePartitionHolder, Map<String, String> confProperties, ParquetReaderConfig readerConfig, LogicalExpression filter, TupleMetadata schema)
- HiveDrillNativeParquetScan(String userName, List<SchemaPath> columns, HiveStoragePlugin hiveStoragePlugin, List<HiveMetadataProvider.LogicalInputSplit> logicalInputSplits, Map<String, String> confProperties, ParquetReaderConfig readerConfig)
- HiveDrillNativeParquetScan(String userName, List<SchemaPath> columns, HiveStoragePlugin hiveStoragePlugin, List<HiveMetadataProvider.LogicalInputSplit> logicalInputSplits, Map<String, String> confProperties, ParquetReaderConfig readerConfig, LogicalExpression filter)
- HiveDrillNativeParquetScan(StoragePluginRegistry engineRegistry, String userName, HiveStoragePluginConfig hiveStoragePluginConfig, List<SchemaPath> columns, List<ReadEntryWithPath> entries, HivePartitionHolder hivePartitionHolder, Map<String, String> confProperties, ParquetReaderConfig readerConfig, LogicalExpression filter, TupleMetadata schema)
Uses of ParquetReaderConfig in org.apache.drill.exec.store.parquet
Fields in org.apache.drill.exec.store.parquet declared as ParquetReaderConfig:
- protected ParquetReaderConfig AbstractParquetGroupScan.readerConfig
- protected final ParquetReaderConfig AbstractParquetRowGroupScan.readerConfig
- protected final ParquetReaderConfig BaseParquetMetadataProvider.readerConfig

Methods in org.apache.drill.exec.store.parquet that return ParquetReaderConfig:
- ParquetReaderConfig.Builder.build()
- static ParquetReaderConfig ParquetReaderConfig.getDefaultInstance()
- AbstractParquetGroupScan.getReaderConfig()
- AbstractParquetRowGroupScan.getReaderConfig()
- AbstractParquetGroupScan.getReaderConfigForSerialization()
- AbstractParquetRowGroupScan.getReaderConfigForSerialization()

Methods in org.apache.drill.exec.store.parquet with parameters of type ParquetReaderConfig:
- static void ParquetReaderUtility.transformBinaryInMetadataCache(MetadataBase.ParquetTableMetadataBase parquetTableMetadata, ParquetReaderConfig readerConfig): Transforms values for min/max binary statistics to byte arrays.
- BaseParquetMetadataProvider.Builder.withReaderConfig(ParquetReaderConfig readerConfig)

Constructors in org.apache.drill.exec.store.parquet with parameters of type ParquetReaderConfig:
- protected AbstractParquetGroupScan(String userName, List<SchemaPath> columns, List<ReadEntryWithPath> entries, ParquetReaderConfig readerConfig, LogicalExpression filter)
- protected AbstractParquetRowGroupScan(String userName, List<RowGroupReadEntry> rowGroupReadEntries, List<SchemaPath> columns, ParquetReaderConfig readerConfig, LogicalExpression filter, org.apache.hadoop.fs.Path selectionRoot, TupleMetadata schema)
- ParquetGroupScan(String userName, FileSelection selection, ParquetFormatPlugin formatPlugin, List<SchemaPath> columns, ParquetReaderConfig readerConfig, LogicalExpression filter, MetadataProviderManager metadataProviderManager)
- ParquetGroupScan(String userName, FileSelection selection, ParquetFormatPlugin formatPlugin, List<SchemaPath> columns, ParquetReaderConfig readerConfig, MetadataProviderManager metadataProviderManager)
- ParquetGroupScan(StoragePluginRegistry engineRegistry, String userName, List<ReadEntryWithPath> entries, StoragePluginConfig storageConfig, FormatPluginConfig formatConfig, List<SchemaPath> columns, org.apache.hadoop.fs.Path selectionRoot, org.apache.hadoop.fs.Path cacheFileRoot, ParquetReaderConfig readerConfig, LogicalExpression filter, TupleMetadata schema)
- ParquetRowGroupScan(String userName, ParquetFormatPlugin formatPlugin, List<RowGroupReadEntry> rowGroupReadEntries, List<SchemaPath> columns, ParquetReaderConfig readerConfig, org.apache.hadoop.fs.Path selectionRoot, LogicalExpression filter, TupleMetadata schema)
- ParquetRowGroupScan(StoragePluginRegistry registry, String userName, StoragePluginConfig storageConfig, FormatPluginConfig formatConfig, LinkedList<RowGroupReadEntry> rowGroupReadEntries, List<SchemaPath> columns, ParquetReaderConfig readerConfig, org.apache.hadoop.fs.Path selectionRoot, LogicalExpression filter, TupleMetadata schema)
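The methods above give two ways to obtain a ParquetReaderConfig: the shared default via getDefaultInstance(), or a custom instance via the Builder. A sketch, not runnable outside a Drill classpath; the builder() factory method is an assumption, since only Builder.build() appears on this page:

```java
import org.apache.drill.exec.store.parquet.ParquetReaderConfig;

// Option 1: the shared default configuration.
ParquetReaderConfig defaults = ParquetReaderConfig.getDefaultInstance();

// Option 2: a custom configuration via the Builder listed above.
// ParquetReaderConfig.builder() as the factory is assumed, not confirmed here.
ParquetReaderConfig custom = ParquetReaderConfig.builder().build();

// Scans expose their configuration back via getReaderConfig(), e.g. on an
// AbstractParquetGroupScan obtained during planning (hypothetical variable):
// ParquetReaderConfig fromScan = groupScan.getReaderConfig();
```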
Uses of ParquetReaderConfig in org.apache.drill.exec.store.parquet.metadata
Methods in org.apache.drill.exec.store.parquet.metadata with parameters of type ParquetReaderConfig:
- static void Metadata.createMeta(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, ParquetReaderConfig readerConfig, boolean allColumnsInteresting, Set<SchemaPath> columnSet): Creates the parquet metadata file for the directory at the given path, and for any subdirectories.
- Metadata.getParquetFileMetadata_v4(Metadata_V4.ParquetTableMetadata_v4 parquetTableMetadata, org.apache.parquet.hadoop.metadata.ParquetMetadata footer, org.apache.hadoop.fs.FileStatus file, org.apache.hadoop.fs.FileSystem fs, boolean allColumnsInteresting, boolean skipNonInteresting, Set<SchemaPath> columnSet, ParquetReaderConfig readerConfig): Gets the file metadata for a single file.
- Metadata.getParquetTableMetadata(Map<org.apache.hadoop.fs.FileStatus, org.apache.hadoop.fs.FileSystem> fileStatusMap, ParquetReaderConfig readerConfig): Gets the parquet metadata for a list of parquet files.
- Metadata.getParquetTableMetadata(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, ParquetReaderConfig readerConfig): Gets the parquet metadata for the parquet files in the given directory, including those in subdirectories.
- static Metadata_V4.MetadataSummary Metadata.getSummary(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path metadataParentDir, boolean autoRefreshTriggered, ParquetReaderConfig readerConfig): Reads the summary from the metadata cache file; if the cache file is stale, recreates the metadata.
- Metadata.readBlockMeta(org.apache.hadoop.fs.FileSystem fs, List<org.apache.hadoop.fs.Path> paths, MetadataContext metaContext, ParquetReaderConfig readerConfig): Gets the parquet metadata for the table by reading the metadata file.
- static ParquetTableMetadataDirs Metadata.readMetadataDirs(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, MetadataContext metaContext, ParquetReaderConfig readerConfig): Gets the parquet metadata for all subdirectories by reading the metadata file.

Constructors in org.apache.drill.exec.store.parquet.metadata with parameters of type ParquetReaderConfig:
- FileMetadataCollector(org.apache.parquet.hadoop.metadata.ParquetMetadata metadata, org.apache.hadoop.fs.FileStatus file, org.apache.hadoop.fs.FileSystem fs, boolean allColumnsInteresting, boolean skipNonInteresting, Set<SchemaPath> columnSet, ParquetReaderConfig readerConfig)
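The Metadata helpers above operate on a Hadoop FileSystem. A hedged sketch of writing the metadata cache for a table directory and reading its summary back, intended for a method body on a Drill classpath; the path, the null columnSet, and the autoRefreshTriggered flag value are illustrative assumptions:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.drill.exec.store.parquet.ParquetReaderConfig;
import org.apache.drill.exec.store.parquet.metadata.Metadata;
import org.apache.drill.exec.store.parquet.metadata.Metadata_V4;

FileSystem fs = FileSystem.get(new Configuration());
Path tableDir = new Path("/data/parquet/table");  // illustrative path
ParquetReaderConfig readerConfig = ParquetReaderConfig.getDefaultInstance();

// Write the metadata cache file for the directory and its subdirectories;
// passing null for columnSet (treat all columns as interesting) is an assumption.
Metadata.createMeta(fs, tableDir, readerConfig, true, null);

// Read the summary back; per the description above, a stale cache file
// causes the metadata to be recreated.
Metadata_V4.MetadataSummary summary =
    Metadata.getSummary(fs, tableDir, false, readerConfig);
```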