|
| Packages that use org.apache.hadoop.mapred | |
|---|---|
| org.apache.hadoop.contrib.index.example | |
| org.apache.hadoop.contrib.index.mapred | |
| org.apache.hadoop.contrib.utils.join | |
| org.apache.hadoop.examples | Hadoop example code. |
| org.apache.hadoop.examples.dancing | This package is a distributed implementation of Knuth's dancing links algorithm that can run under Hadoop. |
| org.apache.hadoop.examples.terasort | This package consists of 3 map/reduce applications for Hadoop to compete in the annual terabyte sort competition. |
| org.apache.hadoop.filecache | |
| org.apache.hadoop.mapred | A software framework for easily writing applications which process vast amounts of data (multi-terabyte data-sets) in parallel on large clusters (thousands of nodes) built of commodity hardware in a reliable, fault-tolerant manner. |
| org.apache.hadoop.mapred.jobcontrol | Utilities for managing dependent jobs. |
| org.apache.hadoop.mapred.join | Given a set of sorted datasets keyed with the same class and yielding equal partitions, it is possible to effect a join of those datasets prior to the map. |
| org.apache.hadoop.mapred.lib | Library of generally useful mappers, reducers, and partitioners. |
| org.apache.hadoop.mapred.lib.aggregate | Classes for performing various counting and aggregations. |
| org.apache.hadoop.mapred.lib.db | A library for reading records from a database as input to a map/reduce job, and for writing output records back to a database. |
| org.apache.hadoop.mapred.pipes | Hadoop Pipes allows C++ code to use Hadoop DFS and map/reduce. |
| org.apache.hadoop.mapreduce | |
| org.apache.hadoop.mapreduce.server.jobtracker | |
| org.apache.hadoop.mapreduce.server.tasktracker | |
| org.apache.hadoop.mapreduce.server.tasktracker.userlogs | |
| org.apache.hadoop.mapreduce.split | |
| org.apache.hadoop.streaming | Hadoop Streaming is a utility which allows users to create and run Map-Reduce jobs with any executables (e.g. Unix shell utilities) as the mapper and/or the reducer. |
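
All of the packages above build on the classic org.apache.hadoop.mapred API, in which a job is described by a JobConf and submitted through JobClient. As a point of reference, a minimal driver might look like the following sketch; PassThroughJob is a made-up class name, and the identity mapper/reducer from org.apache.hadoop.mapred.lib are used only so the example is self-contained.

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.IdentityMapper;
import org.apache.hadoop.mapred.lib.IdentityReducer;

public class PassThroughJob {
  public static void main(String[] args) throws Exception {
    // JobConf is the central job description of the old mapred API.
    JobConf conf = new JobConf(PassThroughJob.class);
    conf.setJobName("pass-through");

    // IdentityMapper/IdentityReducer simply forward their input; with the
    // default TextInputFormat the keys are byte offsets and values are lines.
    conf.setMapperClass(IdentityMapper.class);
    conf.setReducerClass(IdentityReducer.class);
    conf.setOutputKeyClass(LongWritable.class);
    conf.setOutputValueClass(Text.class);

    // Input and output locations come from the command line.
    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    // Submit the job and block until it completes.
    JobClient.runJob(conf);
  }
}
```

Compiled against the Hadoop jars, this would be launched with the hadoop jar command, passing an input and an output directory.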
| Classes in org.apache.hadoop.mapred used by org.apache.hadoop.contrib.index.example | |
|---|---|
| FileInputFormat | A base class for file-based InputFormat. |
| FileSplit | A section of an input file. |
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| JobConf | A map/reduce job configuration. |
| JobConfigurable | Something that may be configured. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| OutputCollector | Collects the <key, value> pairs output by Mappers and Reducers. |
| RecordReader | RecordReader reads <key, value> pairs from an InputSplit. |
| Reporter | A facility for Map-Reduce applications to report progress and update counters, status information etc. |
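
Several of the types in this table typically appear together in a single Mapper implementation. The sketch below is illustrative only (LineLengthMapper is not a class from contrib.index.example); it implements the Mapper interface directly, which also means providing configure(JobConf) from JobConfigurable and close() from java.io.Closeable.

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

/** Emits the length of each input line, keyed by the line's byte offset. */
public class LineLengthMapper
    implements Mapper<LongWritable, Text, LongWritable, IntWritable> {

  // configure() comes from JobConfigurable; it receives the job's JobConf.
  @Override
  public void configure(JobConf job) {
    // Read job-specific settings here if needed.
  }

  @Override
  public void map(LongWritable offset, Text line,
                  OutputCollector<LongWritable, IntWritable> output,
                  Reporter reporter) throws IOException {
    // OutputCollector gathers the intermediate <key, value> pairs;
    // Reporter lets the task report liveness and update counters.
    reporter.progress();
    output.collect(offset, new IntWritable(line.getLength()));
  }

  // close() comes from java.io.Closeable via the Mapper interface.
  @Override
  public void close() throws IOException {
  }
}
```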
| Classes in org.apache.hadoop.mapred used by org.apache.hadoop.contrib.index.mapred | |
|---|---|
| FileOutputFormat | A base class for OutputFormat. |
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| JobConf | A map/reduce job configuration. |
| JobConfigurable | Something that may be configured. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| MapReduceBase | Base class for Mapper and Reducer implementations. |
| OutputCollector | Collects the <key, value> pairs output by Mappers and Reducers. |
| OutputFormat | OutputFormat describes the output-specification for a Map-Reduce job. |
| Partitioner | Partitions the key space. |
| RecordWriter | RecordWriter writes the output <key, value> pairs to an output file. |
| Reducer | Reduces a set of intermediate values which share a key to a smaller set of values. |
| Reporter | A facility for Map-Reduce applications to report progress and update counters, status information etc. |
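
On the reduce side, MapReduceBase supplies no-op configure() and close() methods, so a Reducer usually extends it and overrides only reduce(). A minimal, illustrative sketch (the class name IntSumReducer is an assumption, not part of contrib.index.mapred):

```java
import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

/** Sums the IntWritable values seen for each key. */
public class IntSumReducer extends MapReduceBase
    implements Reducer<Text, IntWritable, Text, IntWritable> {

  @Override
  public void reduce(Text key, Iterator<IntWritable> values,
                     OutputCollector<Text, IntWritable> output,
                     Reporter reporter) throws IOException {
    int sum = 0;
    // The iterator yields every intermediate value that shares this key.
    while (values.hasNext()) {
      sum += values.next().get();
    }
    output.collect(key, new IntWritable(sum));
  }
}
```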
| Classes in org.apache.hadoop.mapred used by org.apache.hadoop.contrib.utils.join | |
|---|---|
| JobConf | A map/reduce job configuration. |
| JobConfigurable | Something that may be configured. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| OutputCollector | Collects the <key, value> pairs output by Mappers and Reducers. |
| Reducer | Reduces a set of intermediate values which share a key to a smaller set of values. |
| Reporter | A facility for Map-Reduce applications to report progress and update counters, status information etc. |
| Classes in org.apache.hadoop.mapred used by org.apache.hadoop.examples | |
|---|---|
| FileInputFormat | A base class for file-based InputFormat. |
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| JobConf | A map/reduce job configuration. |
| JobConfigurable | Something that may be configured. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| MapReduceBase | Base class for Mapper and Reducer implementations. |
| MultiFileInputFormat | Deprecated. Use CombineFileInputFormat instead. |
| MultiFileSplit | Deprecated. Use CombineFileSplit instead. |
| OutputCollector | Collects the <key, value> pairs output by Mappers and Reducers. |
| Partitioner | Partitions the key space. |
| RecordReader | RecordReader reads <key, value> pairs from an InputSplit. |
| Reducer | Reduces a set of intermediate values which share a key to a smaller set of values. |
| Reporter | A facility for Map-Reduce applications to report progress and update counters, status information etc. |
| RunningJob | RunningJob is the user-interface to query for details on a running Map-Reduce job. |
| Classes in org.apache.hadoop.mapred used by org.apache.hadoop.examples.dancing | |
|---|---|
| JobConf | A map/reduce job configuration. |
| JobConfigurable | Something that may be configured. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| MapReduceBase | Base class for Mapper and Reducer implementations. |
| OutputCollector | Collects the <key, value> pairs output by Mappers and Reducers. |
| Reporter | A facility for Map-Reduce applications to report progress and update counters, status information etc. |
| Classes in org.apache.hadoop.mapred used by org.apache.hadoop.examples.terasort | |
|---|---|
| FileInputFormat | A base class for file-based InputFormat. |
| FileOutputFormat | A base class for OutputFormat. |
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| JobConf | A map/reduce job configuration. |
| JobConfigurable | Something that may be configured. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| MapReduceBase | Base class for Mapper and Reducer implementations. |
| OutputCollector | Collects the <key, value> pairs output by Mappers and Reducers. |
| OutputFormat | OutputFormat describes the output-specification for a Map-Reduce job. |
| RecordReader | RecordReader reads <key, value> pairs from an InputSplit. |
| RecordWriter | RecordWriter writes the output <key, value> pairs to an output file. |
| Reporter | A facility for Map-Reduce applications to report progress and update counters, status information etc. |
| TextOutputFormat | An OutputFormat that writes plain text files. |
| Classes in org.apache.hadoop.mapred used by org.apache.hadoop.filecache | |
|---|---|
| InvalidJobConfException | This exception is thrown when the jobconf misses some mandatory attributes or the value of some attributes is invalid. |
| TaskController | Controls initialization, finalization and clean up of tasks, and also the launching and killing of task JVMs. |
| Classes in org.apache.hadoop.mapred used by org.apache.hadoop.mapred | |
|---|---|
| AdminOperationsProtocol | Protocol for admin operations. |
| CleanupQueue | |
| ClusterStatus | Status information on the current state of the Map-Reduce cluster. |
| Counters | A set of named counters. |
| Counters.Counter | A counter record, comprising its name and value. |
| Counters.Group | Group of counters, comprising counters from a particular counter Enum class. |
| FileAlreadyExistsException | Used when target file already exists for any operation and is not configured to be overwritten. |
| FileInputFormat | A base class for file-based InputFormat. |
| FileInputFormat.Counter | |
| FileOutputFormat | A base class for OutputFormat. |
| FileOutputFormat.Counter | |
| FileSplit | A section of an input file. |
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| InvalidJobConfException | This exception is thrown when the jobconf misses some mandatory attributes or the value of some attributes is invalid. |
| JobClient.TaskStatusFilter | |
| JobConf | A map/reduce job configuration. |
| JobConfigurable | Something that may be configured. |
| JobContext | |
| JobHistory.JobInfo | Helper class for logging or reading back events related to job start, finish or failure. |
| JobHistory.Keys | Job history files contain key="value" pairs, where keys belong to this enum. |
| JobHistory.Listener | Callback interface for reading back log events from JobHistory. |
| JobHistory.RecordTypes | Record types are identifiers for each line of log in history files. |
| JobHistory.Task | Helper class for logging or reading back events related to a Task's start, finish or failure. |
| JobHistory.TaskAttempt | Base class for Map and Reduce TaskAttempts. |
| JobHistory.Values | This enum contains some of the values commonly used by history log events. |
| JobID | JobID represents the immutable and unique identifier for the job. |
| JobInProgress | JobInProgress maintains all the info for keeping a Job on the straight and narrow. |
| JobInProgress.Counter | |
| JobPriority | Used to describe the priority of the running job. |
| JobProfile | A JobProfile is a MapReduce primitive. |
| JobQueueInfo | Class that contains the information regarding the Job Queues which are maintained by the Hadoop Map/Reduce framework. |
| JobStatus | Describes the current status of a job. |
| JobTracker | JobTracker is the central location for submitting and tracking MR jobs in a network environment. |
| JobTracker.SafeModeAction | JobTracker SafeMode. |
| JobTracker.State | |
| JobTrackerMXBean | The MXBean interface for JobTrackerInfo. |
| JvmTask | |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| MapRunnable | Expert: Generic interface for Mappers. |
| MapTaskCompletionEventsUpdate | A class that represents the communication between the tasktracker and child tasks w.r.t. the map task completion events. |
| Operation | Generic operation that maps to the dependent set of ACLs that drive the authorization of the operation. |
| OutputCollector | Collects the <key, value> pairs output by Mappers and Reducers. |
| OutputCommitter | OutputCommitter describes the commit of task output for a Map-Reduce job. |
| OutputFormat | OutputFormat describes the output-specification for a Map-Reduce job. |
| Partitioner | Partitions the key space. |
| QueueAclsInfo | Class to encapsulate Queue ACLs for a particular user. |
| RawKeyValueIterator | RawKeyValueIterator is an iterator used to iterate over the raw keys and values during sort/merge of intermediate data. |
| RecordReader | RecordReader reads <key, value> pairs from an InputSplit. |
| RecordWriter | RecordWriter writes the output <key, value> pairs to an output file. |
| Reducer | Reduces a set of intermediate values which share a key to a smaller set of values. |
| Reporter | A facility for Map-Reduce applications to report progress and update counters, status information etc. |
| RunningJob | RunningJob is the user-interface to query for details on a running Map-Reduce job. |
| SequenceFileInputFilter.Filter | Filter interface. |
| SequenceFileInputFilter.FilterBase | Base class for Filters. |
| SequenceFileInputFormat | An InputFormat for SequenceFiles. |
| SequenceFileOutputFormat | An OutputFormat that writes SequenceFiles. |
| Task | Base class for tasks. |
| Task.CombinerRunner | |
| Task.Counter | |
| Task.TaskReporter | |
| TaskAttemptContext | |
| TaskAttemptID | TaskAttemptID represents the immutable and unique identifier for a task attempt. |
| TaskCompletionEvent | This is used to track task completion events on the job tracker. |
| TaskCompletionEvent.Status | |
| TaskController | Controls initialization, finalization and clean up of tasks, and also the launching and killing of task JVMs. |
| TaskID | TaskID represents the immutable and unique identifier for a Map or Reduce Task. |
| TaskLog.LogName | The filter for userlogs. |
| TaskReport | A report on the state of a task. |
| TaskStatus | Describes the current status of a task. |
| TaskStatus.Phase | |
| TaskStatus.State | |
| TaskTracker | TaskTracker is a process that starts and tracks MR Tasks in a networked environment. |
| TaskTrackerMXBean | MXBean interface for TaskTracker. |
| TaskTrackerStatus | A TaskTrackerStatus is a MapReduce primitive. |
| TaskUmbilicalProtocol | Protocol that the task child process uses to contact its parent process. |
| TIPStatus | The states of a TaskInProgress as seen by the JobTracker. |
| Utils.OutputFileUtils.OutputLogFilter | This class filters log files from the given directory; it does not accept paths containing _logs. |
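
Many of the entries above are the client-facing monitoring types (RunningJob, Counters, JobID). The sketch below shows one way they fit together, assuming a fully configured JobConf is supplied by the caller; JobClient, used here for submission, is listed under the jobcontrol and streaming tables further down.

```java
import org.apache.hadoop.mapred.Counters;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.RunningJob;

public class JobMonitor {
  public static void monitor(JobConf conf) throws Exception {
    JobClient client = new JobClient(conf);
    // submitJob() returns immediately with a RunningJob handle.
    RunningJob job = client.submitJob(conf);
    JobID id = job.getID();

    while (!job.isComplete()) {
      System.out.printf("%s map %.0f%% reduce %.0f%%%n",
          id, job.mapProgress() * 100, job.reduceProgress() * 100);
      Thread.sleep(5000);
    }

    // Counters aggregate per-task statistics into named groups.
    Counters counters = job.getCounters();
    for (Counters.Group group : counters) {
      for (Counters.Counter counter : group) {
        System.out.println(group.getDisplayName() + "\t"
            + counter.getDisplayName() + "=" + counter.getCounter());
      }
    }
    System.out.println("Succeeded: " + job.isSuccessful());
  }
}
```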
| Classes in org.apache.hadoop.mapred used by org.apache.hadoop.mapred.jobcontrol | |
|---|---|
| JobClient | JobClient is the primary interface for the user-job to interact with the JobTracker. |
| JobConf | A map/reduce job configuration. |
| JobID | JobID represents the immutable and unique identifier for the job. |
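
The jobcontrol package wraps these classes to manage dependent jobs. A hedged sketch, assuming conf1 and conf2 are complete JobConf descriptions of two jobs where the second consumes the output of the first:

```java
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.jobcontrol.Job;
import org.apache.hadoop.mapred.jobcontrol.JobControl;

public class DependentJobs {
  public static void run(JobConf conf1, JobConf conf2) throws Exception {
    // Each jobcontrol Job wraps a JobConf.
    Job first = new Job(conf1);
    Job second = new Job(conf2);
    // second is only submitted once first has completed successfully.
    second.addDependingJob(first);

    JobControl control = new JobControl("my-group");
    control.addJob(first);
    control.addJob(second);

    // JobControl is a Runnable; drive it from its own thread and poll.
    Thread driver = new Thread(control);
    driver.start();
    while (!control.allFinished()) {
      Thread.sleep(1000);
    }
    control.stop();

    System.out.println("Failed jobs: " + control.getFailedJobs().size());
  }
}
```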
| Classes in org.apache.hadoop.mapred used by org.apache.hadoop.mapred.join | |
|---|---|
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| JobConf | A map/reduce job configuration. |
| RecordReader | RecordReader reads <key, value> pairs from an InputSplit. |
| Reporter | A facility for Map-Reduce applications to report progress and update counters, status information etc. |
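
A map-side join built on this package is configured entirely through the JobConf. The sketch below wires two sorted, identically partitioned SequenceFile inputs into an inner join via CompositeInputFormat; treat the concrete input format and path handling as illustrative assumptions.

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.SequenceFileInputFormat;
import org.apache.hadoop.mapred.join.CompositeInputFormat;

public class JoinJobSetup {
  public static void configureJoin(JobConf conf, Path left, Path right) {
    // CompositeInputFormat performs the join before the map phase; both
    // inputs must be sorted and identically partitioned on the key.
    conf.setInputFormat(CompositeInputFormat.class);
    conf.set("mapred.join.expr",
        CompositeInputFormat.compose("inner", SequenceFileInputFormat.class,
            left, right));
    // The mapper then receives the join key and a TupleWritable holding
    // one value per joined source.
  }
}
```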
| Classes in org.apache.hadoop.mapred used by org.apache.hadoop.mapred.lib | |
|---|---|
| FileInputFormat | A base class for file-based InputFormat. |
| FileOutputFormat | A base class for OutputFormat. |
| FileSplit | A section of an input file. |
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| JobConf | A map/reduce job configuration. |
| JobConfigurable | Something that may be configured. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| MapReduceBase | Base class for Mapper and Reducer implementations. |
| MapRunnable | Expert: Generic interface for Mappers. |
| OutputCollector | Collects the <key, value> pairs output by Mappers and Reducers. |
| OutputFormat | OutputFormat describes the output-specification for a Map-Reduce job. |
| Partitioner | Partitions the key space. |
| RecordReader | RecordReader reads <key, value> pairs from an InputSplit. |
| RecordWriter | RecordWriter writes the output <key, value> pairs to an output file. |
| Reducer | Reduces a set of intermediate values which share a key to a smaller set of values. |
| Reporter | A facility for Map-Reduce applications to report progress and update counters, status information etc. |
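
Besides the ready-made mappers, reducers, and partitioners in this package, the same interfaces can be implemented directly. As an illustration, a hand-written Partitioner (FirstCharPartitioner is a made-up name) that routes keys by their first character; it would be registered with JobConf.setPartitionerClass:

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

/** Sends keys starting with the same character to the same reducer. */
public class FirstCharPartitioner implements Partitioner<Text, IntWritable> {

  // Partitioner extends JobConfigurable, so configure(JobConf) is required.
  @Override
  public void configure(JobConf job) {
  }

  @Override
  public int getPartition(Text key, IntWritable value, int numPartitions) {
    if (key.getLength() == 0) {
      return 0;
    }
    // charAt() returns the code point at the given byte offset; mask the
    // sign bit so the partition index is always non-negative.
    return (key.charAt(0) & Integer.MAX_VALUE) % numPartitions;
  }
}
```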
| Classes in org.apache.hadoop.mapred used by org.apache.hadoop.mapred.lib.aggregate | |
|---|---|
| JobConf | A map/reduce job configuration. |
| JobConfigurable | Something that may be configured. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| OutputCollector | Collects the <key, value> pairs output by Mappers and Reducers. |
| Reducer | Reduces a set of intermediate values which share a key to a smaller set of values. |
| Reporter | A facility for Map-Reduce applications to report progress and update counters, status information etc. |
| Classes in org.apache.hadoop.mapred used by org.apache.hadoop.mapred.lib.db | |
|---|---|
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| JobConf | A map/reduce job configuration. |
| JobConfigurable | Something that may be configured. |
| OutputFormat | OutputFormat describes the output-specification for a Map-Reduce job. |
| RecordReader | RecordReader reads <key, value> pairs from an InputSplit. |
| RecordWriter | RecordWriter writes the output <key, value> pairs to an output file. |
| Reporter | A facility for Map-Reduce applications to report progress and update counters, status information etc. |
| Classes in org.apache.hadoop.mapred used by org.apache.hadoop.mapred.pipes | |
|---|---|
| JobConf | A map/reduce job configuration. |
| RunningJob | RunningJob is the user-interface to query for details on a running Map-Reduce job. |
| Classes in org.apache.hadoop.mapred used by org.apache.hadoop.mapreduce | |
|---|---|
| ID | A general identifier, which internally stores the id as an integer. |
| JobClient | JobClient is the primary interface for the user-job to interact with the JobTracker. |
| RawKeyValueIterator | RawKeyValueIterator is an iterator used to iterate over the raw keys and values during sort/merge of intermediate data. |
| TaskCompletionEvent | This is used to track task completion events on the job tracker. |
| Classes in org.apache.hadoop.mapred used by org.apache.hadoop.mapreduce.server.jobtracker | |
|---|---|
| JobInProgress | JobInProgress maintains all the info for keeping a Job on the straight and narrow. |
| TaskTrackerStatus | A TaskTrackerStatus is a MapReduce primitive. |
| Classes in org.apache.hadoop.mapred used by org.apache.hadoop.mapreduce.server.tasktracker | |
|---|---|
| Task | Base class for tasks. |
| Classes in org.apache.hadoop.mapred used by org.apache.hadoop.mapreduce.server.tasktracker.userlogs | |
|---|---|
| TaskController | Controls initialization, finalization and clean up of tasks, and also the launching and killing of task JVMs. |
| UserLogCleaner | This is used only in UserLogManager, to manage cleanup of user logs. |
| Classes in org.apache.hadoop.mapred used by org.apache.hadoop.mapreduce.split | |
|---|---|
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| Classes in org.apache.hadoop.mapred used by org.apache.hadoop.streaming | |
|---|---|
| FileInputFormat | A base class for file-based InputFormat. |
| FileSplit | A section of an input file. |
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| JobClient | JobClient is the primary interface for the user-job to interact with the JobTracker. |
| JobConf | A map/reduce job configuration. |
| JobConfigurable | Something that may be configured. |
| JobID | JobID represents the immutable and unique identifier for the job. |
| KeyValueTextInputFormat | An InputFormat for plain text files. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| MapRunnable | Expert: Generic interface for Mappers. |
| MapRunner | Default MapRunnable implementation. |
| OutputCollector | Collects the <key, value> pairs output by Mappers and Reducers. |
| RecordReader | RecordReader reads <key, value> pairs from an InputSplit. |
| Reducer | Reduces a set of intermediate values which share a key to a smaller set of values. |
| Reporter | A facility for Map-Reduce applications to report progress and update counters, status information etc. |
| RunningJob | RunningJob is the user-interface to query for details on a running Map-Reduce job. |