Class JSONFormatPlugin
java.lang.Object
org.apache.drill.exec.store.dfs.easy.EasyFormatPlugin<JSONFormatConfig>
org.apache.drill.exec.store.easy.json.JSONFormatPlugin
- All Implemented Interfaces:
FormatPlugin
-
Nested Class Summary
Nested classes/interfaces inherited from class org.apache.drill.exec.store.dfs.easy.EasyFormatPlugin
EasyFormatPlugin.EasyFormatConfig, EasyFormatPlugin.EasyFormatConfigBuilder, EasyFormatPlugin.ScanFrameworkVersion
-
Field Summary
Fields inherited from class org.apache.drill.exec.store.dfs.easy.EasyFormatPlugin
formatConfig
-
Constructor Summary
JSONFormatPlugin(String name, DrillbitContext context, org.apache.hadoop.conf.Configuration fsConf, StoragePluginConfig storageConfig)
JSONFormatPlugin(String name, DrillbitContext context, org.apache.hadoop.conf.Configuration fsConf, StoragePluginConfig config, JSONFormatConfig formatPluginConfig)
-
Method Summary
protected FileScanFramework.FileScanBuilder frameworkBuilder(EasySubScan scan, OptionSet options)
Create the plugin-specific framework that manages the scan.
RecordReader getRecordReader(FragmentContext context, DrillFileSystem dfs, FileWork fileWork, List<SchemaPath> columns, String userName)
Return a record reader for the specific file format, when using the original ScanBatch scanner.
RecordWriter getRecordWriter(FragmentContext context, EasyWriter writer)
StatisticsRecordWriter getStatisticsRecordWriter(FragmentContext context, EasyWriter writer)
boolean isStatisticsRecordWriter(FragmentContext context, EasyWriter writer)
DrillStatsTable.TableStatistics readStatistics(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path statsTablePath)
protected EasyFormatPlugin.ScanFrameworkVersion scanVersion(OptionSet options)
Choose whether to use the enhanced scan based on the row set and scan framework, or the "traditional" ad-hoc structure based on ScanBatch.
boolean supportsPushDown()
Does this plugin support projection push down? That is, can the reader itself handle the tasks of projecting table columns, creating null columns for missing table columns, and so on?
boolean supportsStatistics()
void writeStatistics(DrillStatsTable.TableStatistics statistics, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path statsTablePath)
Methods inherited from class org.apache.drill.exec.store.dfs.easy.EasyFormatPlugin
configureScan, easyConfig, getConfig, getContext, getFsConf, getGroupScan, getGroupScan, getMatcher, getName, getOptimizerRules, getReaderBatch, getScanStats, getStorageConfig, getWriter, getWriterBatch, initScanBuilder, isBlockSplittable, isCompressible, newBatchReader, supportsAutoPartitioning, supportsFileImplicitColumns, supportsLimitPushdown, supportsRead, supportsWrite
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface org.apache.drill.exec.store.dfs.FormatPlugin
getGroupScan, getGroupScan, getOptimizerRules
-
Field Details
-
PLUGIN_NAME
- See Also:
- Constant Field Values
-
READER_OPERATOR_TYPE
- See Also:
- Constant Field Values
-
WRITER_OPERATOR_TYPE
- See Also:
- Constant Field Values
-
Constructor Details
-
JSONFormatPlugin
public JSONFormatPlugin(String name, DrillbitContext context, org.apache.hadoop.conf.Configuration fsConf, StoragePluginConfig storageConfig)
-
JSONFormatPlugin
public JSONFormatPlugin(String name, DrillbitContext context, org.apache.hadoop.conf.Configuration fsConf, StoragePluginConfig config, JSONFormatConfig formatPluginConfig)
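Example (not part of the Drill sources): Drill's plugin registry normally instantiates format plugins when it loads the "json" format from a dfs storage plugin configuration. The minimal sketch below only shows how the four-argument constructor above could be wired up, assuming a DrillbitContext and the enclosing StoragePluginConfig are already available; the helper method itself is hypothetical.

    import org.apache.drill.common.logical.StoragePluginConfig;
    import org.apache.drill.exec.server.DrillbitContext;
    import org.apache.drill.exec.store.easy.json.JSONFormatPlugin;
    import org.apache.hadoop.conf.Configuration;

    public class JsonPluginWiring {
      // Hypothetical helper; Drill normally performs this wiring itself.
      static JSONFormatPlugin createJsonPlugin(DrillbitContext context,
                                               StoragePluginConfig storageConfig) {
        Configuration fsConf = new Configuration();  // Hadoop file system configuration
        return new JSONFormatPlugin("json", context, fsConf, storageConfig);
      }
    }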
-
-
Method Details
-
getRecordReader
public RecordReader getRecordReader(FragmentContext context, DrillFileSystem dfs, FileWork fileWork, List<SchemaPath> columns, String userName)
Description copied from class: EasyFormatPlugin
Return a record reader for the specific file format, when using the original ScanBatch scanner.
- Overrides:
getRecordReader in class EasyFormatPlugin<JSONFormatConfig>
- Parameters:
context - fragment context
dfs - Drill file system
fileWork - metadata about the file to be scanned
columns - list of projected columns (or may just contain the wildcard)
userName - the name of the user running the query
- Returns:
- a record reader for this format
-
isStatisticsRecordWriter
- Overrides:
isStatisticsRecordWriter in class EasyFormatPlugin<JSONFormatConfig>
-
getStatisticsRecordWriter
- Overrides:
getStatisticsRecordWriter in class EasyFormatPlugin<JSONFormatConfig>
-
getRecordWriter
- Overrides:
getRecordWriter in class EasyFormatPlugin<JSONFormatConfig>
- Throws:
IOException
-
supportsStatistics
public boolean supportsStatistics()
- Specified by:
supportsStatistics in interface FormatPlugin
- Overrides:
supportsStatistics in class EasyFormatPlugin<JSONFormatConfig>
-
readStatistics
public DrillStatsTable.TableStatistics readStatistics(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path statsTablePath) throws IOException
- Specified by:
readStatistics in interface FormatPlugin
- Overrides:
readStatistics in class EasyFormatPlugin<JSONFormatConfig>
- Throws:
IOException
-
writeStatistics
public void writeStatistics(DrillStatsTable.TableStatistics statistics, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path statsTablePath) throws IOException
- Specified by:
writeStatistics in interface FormatPlugin
- Overrides:
writeStatistics in class EasyFormatPlugin<JSONFormatConfig>
- Throws:
IOException
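As a rough usage sketch (not taken from the Drill sources), the two statistics hooks above can be combined into a round trip against a Hadoop FileSystem. The copyStats helper and both paths are hypothetical, and the import location of DrillStatsTable is assumed from the usual Drill package layout; Drill itself manages the statistics file when ANALYZE TABLE ... COMPUTE STATISTICS runs.

    import java.io.IOException;
    import org.apache.drill.exec.planner.common.DrillStatsTable;
    import org.apache.drill.exec.store.easy.json.JSONFormatPlugin;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class JsonStatsSketch {
      // Hypothetical round trip through readStatistics and writeStatistics.
      static void copyStats(JSONFormatPlugin plugin, FileSystem fs,
                            Path source, Path target) throws IOException {
        if (!plugin.supportsStatistics()) {
          return;                                   // plugin opted out of statistics
        }
        DrillStatsTable.TableStatistics stats = plugin.readStatistics(fs, source);
        plugin.writeStatistics(stats, fs, target);  // persist to the target stats path
      }
    }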
-
scanVersion
protected EasyFormatPlugin.ScanFrameworkVersion scanVersion(OptionSet options)
Description copied from class: EasyFormatPlugin
Choose whether to use the enhanced scan based on the row set and scan framework, or the "traditional" ad-hoc structure based on ScanBatch. Normally set as a config option. Override this method if you want to make the choice based on a system/session option.
- Overrides:
scanVersion in class EasyFormatPlugin<JSONFormatConfig>
- Returns:
- the scan framework version to use: the enhanced scan framework based on the row set mechanism, or the "traditional" ScanBatch-based framework
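The description above suggests overriding this method when the choice should follow a system/session option rather than a fixed config value. The sketch below shows that pattern; the option key store.json.enable_v2_reader and the ScanFrameworkVersion constants EVF_V1 and EVF_V2 are assumptions about the surrounding code, not taken from this page.

    import org.apache.drill.exec.server.options.OptionSet;
    import org.apache.drill.exec.store.dfs.easy.EasyFormatPlugin.ScanFrameworkVersion;

    // Sketch of an override inside a subclass of EasyFormatPlugin<JSONFormatConfig>.
    @Override
    protected ScanFrameworkVersion scanVersion(OptionSet options) {
      // Assumed session option key and enum constants; check your Drill version.
      return options.getBoolean("store.json.enable_v2_reader")
          ? ScanFrameworkVersion.EVF_V2
          : ScanFrameworkVersion.EVF_V1;
    }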
-
frameworkBuilder
protected FileScanFramework.FileScanBuilder frameworkBuilder(EasySubScan scan, OptionSet options) throws ExecutionSetupException
Description copied from class: EasyFormatPlugin
Create the plugin-specific framework that manages the scan. The framework creates batch readers one by one for each file or block. It defines semantic rules for projection. It handles "early" or "late" schema readers. A typical framework builds on standardized frameworks for files in general or text files in particular. For EVF V1, to be removed.
- Overrides:
frameworkBuilder in class EasyFormatPlugin<JSONFormatConfig>
- Parameters:
scan - the physical operation definition for the scan operation. Contains one or more files to read. (The Easy format plugin works only for files.)
- Returns:
- the scan framework which orchestrates the scan operation across potentially many files
- Throws:
ExecutionSetupException
- for all setup failures
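For orientation, an EVF V1 override of this method typically creates a FileScanFramework.FileScanBuilder, registers a reader factory, and calls the inherited initScanBuilder. The sketch below is a minimal illustration under that assumption; the reader factory body is left as a placeholder because the concrete ManagedReader implementation is plugin-specific.

    import org.apache.drill.common.exceptions.ExecutionSetupException;
    import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileReaderFactory;
    import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileScanBuilder;
    import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
    import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
    import org.apache.drill.exec.server.options.OptionSet;
    import org.apache.drill.exec.store.dfs.easy.EasySubScan;

    // Sketch of an EVF V1 override inside an EasyFormatPlugin subclass.
    @Override
    protected FileScanBuilder frameworkBuilder(EasySubScan scan, OptionSet options)
        throws ExecutionSetupException {
      FileScanBuilder builder = new FileScanBuilder();
      builder.setReaderFactory(new FileReaderFactory() {
        @Override
        public ManagedReader<? extends FileSchemaNegotiator> newReader() {
          // Return the plugin's batch reader here; a JSON plugin would construct
          // its JSON ManagedReader implementation at this point.
          throw new UnsupportedOperationException("supply a ManagedReader implementation");
        }
      });
      initScanBuilder(builder, scan);  // inherited helper; wires scan-level settings
      return builder;
    }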
-
getReaderOperatorType
- Overrides:
getReaderOperatorType in class EasyFormatPlugin<JSONFormatConfig>
-
getWriterOperatorType
- Overrides:
getWriterOperatorType in class EasyFormatPlugin<JSONFormatConfig>
-
supportsPushDown
public boolean supportsPushDown()
Description copied from class: EasyFormatPlugin
Does this plugin support projection push down? That is, can the reader itself handle the tasks of projecting table columns, creating null columns for missing table columns, and so on?
- Overrides:
supportsPushDown in class EasyFormatPlugin<JSONFormatConfig>
- Returns:
true if the plugin supports projection push-down, false if Drill should do the task by adding a project operator
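If a plugin's readers handle projection themselves, the override is a one-liner, as in this sketch (illustrative, not copied from JSONFormatPlugin):

    // Inside a subclass of EasyFormatPlugin: the reader performs projection,
    // so Drill does not need to add a Project operator above the scan.
    @Override
    public boolean supportsPushDown() {
      return true;
    }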
-