public class MergeJoinIndexer extends LoadFunc

| Constructor and Description |
|---|
| MergeJoinIndexer(String funcSpec, String innerPlan, String serializedPhyPlan, String udfCntxtSignature, String scope, String ignoreNulls) |
| Modifier and Type | Method and Description |
|---|---|
| org.apache.hadoop.mapreduce.InputFormat | getInputFormat(): This will be called during planning on the front end. |
| LoadCaster | getLoadCaster(): This will be called on both the front end and the back end during execution. |
| Tuple | getNext(): Retrieves the next tuple to be processed. |
| void | prepareToRead(org.apache.hadoop.mapreduce.RecordReader reader, PigSplit split): Initializes LoadFunc for reading data. |
| void | setLocation(String location, org.apache.hadoop.mapreduce.Job job): Communicates to the loader the location of the object(s) being loaded. |
Methods inherited from class LoadFunc:
getAbsolutePath, getCacheFiles, getPathStrings, getShipFiles, join, relativeToAbsolutePath, setUDFContextSignature, warn

Constructor Detail

public MergeJoinIndexer(String funcSpec, String innerPlan, String serializedPhyPlan, String udfCntxtSignature, String scope, String ignoreNulls) throws ExecException

Parameters:
funcSpec - Loader specification.
innerPlan - Serialized version of the LR (local rearrange) plan. Only keys are kept in the index file, not the whole tuple, so the LR plan is needed to extract the keys from each sampled tuple.
serializedPhyPlan - Serialized physical plan on the right side.

Throws:
ExecException
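Before the per-method details below, here is a hedged sketch of how these LoadFunc methods typically fit together in a concrete loader. The class name ExampleTextLoader, its field names, and the choice of TextInputFormat are assumptions made for illustration; this is not MergeJoinIndexer's actual implementation.

```java
import java.io.IOException;

import org.apache.hadoop.mapreduce.InputFormat;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.pig.LoadFunc;
import org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigSplit;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;

public class ExampleTextLoader extends LoadFunc {
    private RecordReader reader;  // handed to us by prepareToRead()

    @Override
    public void setLocation(String location, Job job) throws IOException {
        // Called on the front end and back end; must be safe to call repeatedly.
        FileInputFormat.setInputPaths(job, location);
    }

    @Override
    public InputFormat getInputFormat() throws IOException {
        // Called during planning on the front end.
        return new TextInputFormat();
    }

    @Override
    public void prepareToRead(RecordReader reader, PigSplit split) throws IOException {
        // Keep the reader so getNext() can pull records from it.
        this.reader = reader;
    }

    @Override
    public Tuple getNext() throws IOException {
        try {
            if (!reader.nextKeyValue()) {
                return null;  // no more input
            }
            // Wrap the raw line in a single-field tuple.
            return TupleFactory.getInstance().newTuple(reader.getCurrentValue().toString());
        } catch (InterruptedException e) {
            throw new IOException(e);
        }
    }
}
```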
Method Detail

public Tuple getNext() throws IOException

Description copied from class: LoadFunc
Retrieves the next tuple to be processed.

Specified by:
getNext in class LoadFunc

Throws:
IOException - if there is an exception while retrieving the next tuple
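For context, a minimal sketch of how a caller consumes getNext(), treating null as end-of-input. The loader variable is assumed to be an already-prepared LoadFunc such as MergeJoinIndexer; the class and method names are illustrative.

```java
import java.io.IOException;

import org.apache.pig.LoadFunc;
import org.apache.pig.data.Tuple;

public class GetNextLoopSketch {
    // Reads every remaining tuple from an already-prepared loader.
    static void drain(LoadFunc loader) throws IOException {
        Tuple t;
        while ((t = loader.getNext()) != null) {  // null signals end of input
            System.out.println(t);                // for MergeJoinIndexer, an index entry
        }
    }
}
```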
public org.apache.hadoop.mapreduce.InputFormat getInputFormat() throws IOException

Description copied from class: LoadFunc
This will be called during planning on the front end.

Specified by:
getInputFormat in class LoadFunc

Throws:
IOException - if there is an exception during InputFormat construction
public LoadCaster getLoadCaster() throws IOException

Description copied from class: LoadFunc
This will be called on both the front end and the back end during execution.

Specified by:
getLoadCaster in class LoadFunc

Returns:
the LoadCaster associated with this loader. Returning null indicates that casts from byte array are not supported for this loader.

Throws:
IOException - if there is an exception during LoadCaster construction
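As a hedged illustration of that contract (not MergeJoinIndexer's actual code): a loader that cannot cast from bytearray returns null, whereas a UTF-8 text loader would typically return a Utf8StorageConverter. LoadCasterSketch is an illustrative name.

```java
import java.io.IOException;

import org.apache.pig.LoadCaster;
import org.apache.pig.builtin.Utf8StorageConverter;

// Illustrative override as it would appear in a hypothetical LoadFunc subclass.
public class LoadCasterSketch {
    public LoadCaster getLoadCaster() throws IOException {
        // Returning null tells Pig that casts from bytearray are not supported.
        // A text-based loader would instead return a caster, for example:
        // return new Utf8StorageConverter();
        return null;
    }
}
```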
public void prepareToRead(org.apache.hadoop.mapreduce.RecordReader reader, PigSplit split) throws IOException

Description copied from class: LoadFunc
Initializes LoadFunc for reading data.

Specified by:
prepareToRead in class LoadFunc

Parameters:
reader - RecordReader to be used by this instance of the LoadFunc
split - The input PigSplit to process

Throws:
IOException - if there is an exception during initialization
public void setLocation(String location, org.apache.hadoop.mapreduce.Job job) throws IOException

Description copied from class: LoadFunc
Communicate to the loader the location of the object(s) being loaded. The location string passed to the LoadFunc here is the return value of LoadFunc.relativeToAbsolutePath(String, Path). Implementations should use this method to communicate the location (and any other information) to its underlying InputFormat through the Job object.

This method will be called in the frontend and backend multiple times. Implementations should bear in mind that this method is called multiple times and should ensure there are no inconsistent side effects due to the multiple calls.

Specified by:
setLocation in class LoadFunc

Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, Path)
job - the Job object; use it to store or retrieve earlier stored information from the UDFContext

Throws:
IOException - if the location is not valid.
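A hedged sketch of how a file-based LoadFunc commonly satisfies this contract by forwarding the location to its InputFormat through the Job object. This mirrors the usual pattern rather than MergeJoinIndexer's actual code; the class name is illustrative.

```java
import java.io.IOException;

import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class SetLocationSketch {
    // Typical setLocation body for a file-based loader: setInputPaths overwrites
    // (rather than appends to) the job's input paths, so calling it from both the
    // frontend and the backend, possibly several times, leaves consistent state.
    public void setLocation(String location, Job job) throws IOException {
        FileInputFormat.setInputPaths(job, location);
    }
}
```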
Copyright © 2007-2017 The Apache Software Foundation