public class LTSVFormatPlugin extends EasyFormatPlugin<LTSVFormatPluginConfig>

Nested classes inherited from class EasyFormatPlugin: EasyFormatPlugin.EasyFormatConfig, EasyFormatPlugin.EasyFormatConfigBuilder

Fields inherited from class EasyFormatPlugin: formatConfig

| Constructor and Description |
|---|
| LTSVFormatPlugin(String name, DrillbitContext context, org.apache.hadoop.conf.Configuration fsConf, StoragePluginConfig storageConfig) |
| LTSVFormatPlugin(String name, DrillbitContext context, org.apache.hadoop.conf.Configuration fsConf, StoragePluginConfig config, LTSVFormatPluginConfig formatPluginConfig) |
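In practice this plugin is enabled through a file-system storage plugin's `formats` section rather than by calling the constructors directly. A sketch of what such an entry might look like; the exact keys are assumptions here and depend on what `LTSVFormatPluginConfig` actually accepts:

```json
{
  "formats": {
    "ltsv": {
      "type": "ltsv",
      "extensions": ["ltsv"]
    }
  }
}
```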
| Modifier and Type | Method and Description |
|---|---|
| RecordReader | getRecordReader(FragmentContext context, DrillFileSystem dfs, FileWork fileWork, List<SchemaPath> columns, String userName) Return a record reader for the specific file format, when using the original ScanBatch scanner. |
| org.apache.drill.exec.store.RecordWriter | getRecordWriter(FragmentContext context, EasyWriter writer) |
| String | getWriterOperatorType() |
| DrillStatsTable.TableStatistics | readStatistics(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path statsTablePath) |
| boolean | supportsPushDown() Does this plugin support projection push down? That is, can the reader itself handle the tasks of projecting table columns, creating null columns for missing table columns, and so on? |
| boolean | supportsStatistics() |
| void | writeStatistics(DrillStatsTable.TableStatistics statistics, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path statsTablePath) |
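The record readers this plugin produces consume LTSV (Labeled Tab-Separated Values) input, where each line is a sequence of `label:value` fields separated by tabs. A minimal standalone sketch of that line format, independent of Drill's reader APIs (class and method names here are illustrative, not part of this plugin):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LTSV line parser: each line is tab-separated "label:value" fields.
public class LtsvLineParser {
  public static Map<String, String> parse(String line) {
    Map<String, String> record = new LinkedHashMap<>();
    for (String field : line.split("\t")) {
      int sep = field.indexOf(':');
      if (sep > 0) { // skip malformed fields that have no label
        record.put(field.substring(0, sep), field.substring(sep + 1));
      }
    }
    return record;
  }

  public static void main(String[] args) {
    Map<String, String> rec = parse("host:127.0.0.1\tuser:frank\tstatus:200");
    System.out.println(rec); // {host=127.0.0.1, user=frank, status=200}
  }
}
```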
Methods inherited from class org.apache.drill.exec.store.dfs.easy.EasyFormatPlugin: easyConfig, frameworkBuilder, getConfig, getContext, getFsConf, getGroupScan, getGroupScan, getMatcher, getName, getOptimizerRules, getReaderBatch, getReaderOperatorType, getScanStats, getStatisticsRecordWriter, getStorageConfig, getWriter, getWriterBatch, initScanBuilder, isBlockSplittable, isCompressible, isStatisticsRecordWriter, newBatchReader, supportsAutoPartitioning, supportsFileImplicitColumns, supportsLimitPushdown, supportsRead, supportsWrite, useEnhancedScan

Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface FormatPlugin: getGroupScan, getGroupScan

Constructor Detail

public LTSVFormatPlugin(String name, DrillbitContext context, org.apache.hadoop.conf.Configuration fsConf, StoragePluginConfig storageConfig)
public LTSVFormatPlugin(String name, DrillbitContext context, org.apache.hadoop.conf.Configuration fsConf, StoragePluginConfig config, LTSVFormatPluginConfig formatPluginConfig)
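supportsPushDown() below advertises that the reader itself handles projection: keeping only requested columns and emitting explicit nulls for requested columns a record lacks. A standalone sketch of that behavior under assumed, illustrative names (this is not Drill's actual reader API):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of reader-side projection push-down: keep only the requested
// columns, and emit an explicit null for any requested column the
// record does not contain. All names are illustrative.
public class ProjectionSketch {
  public static Map<String, String> project(Map<String, String> record, List<String> columns) {
    Map<String, String> out = new LinkedHashMap<>();
    for (String col : columns) {
      out.put(col, record.getOrDefault(col, null)); // null column for a missing field
    }
    return out;
  }

  public static void main(String[] args) {
    Map<String, String> record = Map.of("host", "127.0.0.1", "status", "200");
    System.out.println(project(record, List.of("host", "missing")));
    // {host=127.0.0.1, missing=null}
  }
}
```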
Method Detail

public RecordReader getRecordReader(FragmentContext context, DrillFileSystem dfs, FileWork fileWork, List<SchemaPath> columns, String userName)
Description copied from class: EasyFormatPlugin
Return a record reader for the specific file format, when using the original ScanBatch scanner.

Overrides: getRecordReader in class EasyFormatPlugin<LTSVFormatPluginConfig>

Parameters:
context - fragment context
dfs - Drill file system
fileWork - metadata about the file to be scanned
columns - list of projected columns (or may just contain the wildcard)
userName - the name of the user running the query

public String getWriterOperatorType()
Overrides: getWriterOperatorType in class EasyFormatPlugin<LTSVFormatPluginConfig>

public boolean supportsPushDown()

Description copied from class: EasyFormatPlugin
Does this plugin support projection push down? That is, can the reader itself handle the tasks of projecting table columns, creating null columns for missing table columns, and so on?

Overrides: supportsPushDown in class EasyFormatPlugin<LTSVFormatPluginConfig>

Returns: true if the plugin supports projection push-down, false if Drill should do the task by adding a project operator

public org.apache.drill.exec.store.RecordWriter getRecordWriter(FragmentContext context, EasyWriter writer)
Overrides: getRecordWriter in class EasyFormatPlugin<LTSVFormatPluginConfig>

public boolean supportsStatistics()

Specified by: supportsStatistics in interface FormatPlugin
Overrides: supportsStatistics in class EasyFormatPlugin<LTSVFormatPluginConfig>

public DrillStatsTable.TableStatistics readStatistics(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path statsTablePath)

Specified by: readStatistics in interface FormatPlugin
Overrides: readStatistics in class EasyFormatPlugin<LTSVFormatPluginConfig>

public void writeStatistics(DrillStatsTable.TableStatistics statistics, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path statsTablePath)

Specified by: writeStatistics in interface FormatPlugin
Overrides: writeStatistics in class EasyFormatPlugin<LTSVFormatPluginConfig>

Copyright © 2021 The Apache Software Foundation. All rights reserved.