public class JSONRecordReader extends AbstractRecordReader
| Modifier and Type | Field and Description |
|---|---|
| static long | DEFAULT_ROWS_PER_BATCH |

Fields inherited from class AbstractRecordReader: DEFAULT_TEXT_COLS_TO_READ, ALLOCATOR_INITIAL_RESERVATION, ALLOCATOR_MAX_RESERVATION

| Constructor and Description |
|---|
| JSONRecordReader(FragmentContext fragmentContext, com.fasterxml.jackson.databind.JsonNode embeddedContent, DrillFileSystem fileSystem, List&lt;SchemaPath&gt; columns)<br>Create a new JSON Record Reader that uses an in-memory materialized JSON stream. |
| JSONRecordReader(FragmentContext fragmentContext, List&lt;SchemaPath&gt; columns)<br>Create a JSON Record Reader that uses an InputStream directly. |
| JSONRecordReader(FragmentContext fragmentContext, org.apache.hadoop.fs.Path inputPath, DrillFileSystem fileSystem, List&lt;SchemaPath&gt; columns)<br>Create a JSON Record Reader that uses a file-based input stream. |
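The three constructors correspond to three ways of sourcing the JSON data: in-memory materialized content, a raw InputStream (supplied later via setInputStream), or a file path. The following hedged sketch mirrors that choice with plain java.io types only; it has no Drill dependency, and the class and helper names (JsonInputSources, fromEmbedded, fromPath, fromStream) are illustrative, not part of the Drill API.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative stand-in for the three JSONRecordReader input sources.
public class JsonInputSources {

    // cf. JSONRecordReader(fragmentContext, embeddedContent, fileSystem, columns):
    // JSON already materialized in memory
    public static InputStream fromEmbedded(String json) {
        return new ByteArrayInputStream(json.getBytes(StandardCharsets.UTF_8));
    }

    // cf. JSONRecordReader(fragmentContext, inputPath, fileSystem, columns):
    // JSON read from a file path
    public static InputStream fromPath(Path path) throws IOException {
        return Files.newInputStream(path);
    }

    // cf. JSONRecordReader(fragmentContext, columns) followed by setInputStream(in):
    // JSON supplied as an already-open stream
    public static InputStream fromStream(InputStream in) {
        return in;
    }
}
```

Whichever source is used, the reader consumes it the same way once setup has run.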
| Modifier and Type | Method and Description |
|---|---|
| void | close() |
| protected List&lt;SchemaPath&gt; | getDefaultColumnsToRead() |
| protected void | handleAndRaise(String suffix, Exception e) |
| int | next()<br>Increments this record reader forward, writing via the provided output mutator into the output batch. |
| void | setInputStream(InputStream in) |
| void | setup(OperatorContext context, OutputMutator output)<br>Configure the RecordReader with the provided schema and the record batch that should be written to. |
| String | toString() |

Methods inherited from class AbstractRecordReader: allocate, getColumns, hasNext, isSkipQuery, isStarQuery, setColumns, transformColumns

Field Detail

DEFAULT_ROWS_PER_BATCH
public static final long DEFAULT_ROWS_PER_BATCH
Constructor Detail

JSONRecordReader
public JSONRecordReader(FragmentContext fragmentContext, org.apache.hadoop.fs.Path inputPath, DrillFileSystem fileSystem, List&lt;SchemaPath&gt; columns) throws OutOfMemoryException
Parameters:
fragmentContext -
inputPath -
fileSystem -
columns - pathnames of columns/subfields to read
Throws:
OutOfMemoryException

JSONRecordReader
public JSONRecordReader(FragmentContext fragmentContext, com.fasterxml.jackson.databind.JsonNode embeddedContent, DrillFileSystem fileSystem, List&lt;SchemaPath&gt; columns) throws OutOfMemoryException
Parameters:
fragmentContext -
embeddedContent -
fileSystem -
columns - pathnames of columns/subfields to read
Throws:
OutOfMemoryException

JSONRecordReader
public JSONRecordReader(FragmentContext fragmentContext, List&lt;SchemaPath&gt; columns) throws OutOfMemoryException
Parameters:
fragmentContext - The Drill Fragment
inputStream - The InputStream from which data will be received
columns - pathnames of columns/subfields to read
Throws:
OutOfMemoryException

Method Detail

toString
public String toString()
Overrides:
toString in class AbstractRecordReader

setup
public void setup(OperatorContext context, OutputMutator output) throws ExecutionSetupException
Description copied from interface: RecordReader
Parameters:
context - operator context for the reader
output - The place where output for a particular scan should be written. The record reader is responsible for mutating the set of schema values for that particular record.
Throws:
ExecutionSetupException

getDefaultColumnsToRead
protected List&lt;SchemaPath&gt; getDefaultColumnsToRead()
Overrides:
getDefaultColumnsToRead in class AbstractRecordReader

handleAndRaise
protected void handleAndRaise(String suffix, Exception e) throws UserException
Throws:
UserException

next
public int next()
Specified by:
next in interface RecordReader

setInputStream
public void setInputStream(InputStream in)
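The methods above define a simple lifecycle: setup once, call next() repeatedly until it returns 0, then close. The toy class below is a hedged, self-contained sketch of that contract only; ToyJsonReader and its members are hypothetical names with no Drill dependency, and a real reader would write each record through the OutputMutator rather than discard it.

```java
import java.util.Iterator;
import java.util.List;

// Illustrative sketch of the RecordReader lifecycle that JSONRecordReader follows.
public class ToyJsonReader {
    private final Iterator<String> source; // stands in for the JSON input stream
    private final int rowsPerBatch;        // cf. DEFAULT_ROWS_PER_BATCH
    private boolean ready;

    public ToyJsonReader(List<String> jsonLines, int rowsPerBatch) {
        this.source = jsonLines.iterator();
        this.rowsPerBatch = rowsPerBatch;
    }

    // cf. setup(OperatorContext, OutputMutator): prepare output writers before reading
    public void setup() {
        ready = true;
    }

    // cf. next(): fill at most one batch and return the number of rows written;
    // a return value of 0 signals end of input
    public int next() {
        if (!ready) {
            throw new IllegalStateException("setup() was not called");
        }
        int written = 0;
        while (written < rowsPerBatch && source.hasNext()) {
            source.next(); // a real reader writes the record into the output batch here
            written++;
        }
        return written;
    }

    // cf. close(): release buffers and the underlying stream
    public void close() {
        ready = false;
    }
}
```

A typical driver loop constructs the reader, calls setup(), drains batches with next() until it returns 0, and finally calls close().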
Copyright © 2021 The Apache Software Foundation. All rights reserved.