A B C D E F G H I J K L M N O P Q R S T U V W X Z

A

abandonBlock(Block, String) - Method in class org.apache.hadoop.dfs.NameNode
The client needs to give up on the block.
abandonFileInProgress(String, String) - Method in class org.apache.hadoop.dfs.NameNode
 
abort(Path) - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated. Aborts a single file.
abort() - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated. Aborts file creation; all uncommitted files created by this PhasedFileSystem instance are deleted.
ABSOLUTE - Static variable in class org.apache.hadoop.metrics.spi.MetricValue
 
AbstractMetricsContext - Class in org.apache.hadoop.metrics.spi
The main class of the Service Provider Interface.
AbstractMetricsContext() - Constructor for class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Creates a new instance of AbstractMetricsContext
accept(Path) - Method in interface org.apache.hadoop.fs.PathFilter
Tests whether or not the specified abstract pathname should be included in a pathname list.
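A minimal sketch of implementing this interface; the class name and the ".log" suffix are illustrative:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.PathFilter;

    // Accepts only paths whose final component ends in ".log"; the suffix is illustrative.
    public class LogPathFilter implements PathFilter {
      public boolean accept(Path path) {
        return path.getName().endsWith(".log");
      }
    }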
accept(Writable) - Method in interface org.apache.hadoop.mapred.SequenceFileInputFilter.Filter
Filter function: decide whether a record should be filtered out or not.
accept(Writable) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.MD5Filter
Filtering method: if MD5(key) % frequency == 0, return true; otherwise return false.
accept(Writable) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.PercentFilter
Filtering method: if record# % frequency == 0, return true; otherwise return false.
accept(Writable) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.RegexFilter
Filtering method: if the key matches the regex, return true; otherwise return false.
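A sketch of a custom filter built on the accept(Writable) contract above, assuming the Filter interface extends Configurable (if it does not, the setConf/getConf pair is simply unused):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.Writable;
    import org.apache.hadoop.mapred.SequenceFileInputFilter;

    // Keeps only records whose key's string form contains "ERROR"; the pattern is illustrative.
    public class ErrorKeyFilter implements SequenceFileInputFilter.Filter {
      private Configuration conf;

      public boolean accept(Writable key) {
        return key.toString().indexOf("ERROR") >= 0;
      }

      // Assumed Configurable plumbing; harmless extras if the interface does not require them.
      public void setConf(Configuration conf) { this.conf = conf; }
      public Configuration getConf() { return conf; }
    }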
activateOptions() - Method in class org.apache.hadoop.mapred.TaskLogAppender
 
add(Object) - Method in class org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator
 
add(Object) - Method in interface org.apache.hadoop.contrib.utils.join.ResetableIterator
 
add(DatanodeDescriptor) - Method in class org.apache.hadoop.net.NetworkTopology
Add a data node; update the data node counter and rack counter if necessary.
add_escapes(String) - Method in exception org.apache.hadoop.record.compiler.generated.ParseException
Used to convert raw characters to their escaped versions when those raw versions cannot be used as part of an ASCII string literal.
addArchiveToClassPath(Path, Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Add an archive path to the current set of classpath entries.
addBlock(String, String) - Method in class org.apache.hadoop.dfs.NameNode
 
addCacheArchive(URI, Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Add an archive to be localized to the conf.
addCacheFile(URI, Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Add a file to be localized to the conf
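A short sketch of registering cache entries on a job configuration; the HDFS paths are illustrative and createSymlink is the optional companion call listed further below:

    import java.net.URI;
    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.mapred.JobConf;

    public class CacheSetup {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf();   // JobConf is a Configuration, so it can be passed below
        // The HDFS paths are illustrative.
        DistributedCache.addCacheFile(new URI("/user/demo/lookup.txt"), conf);
        DistributedCache.addCacheArchive(new URI("/user/demo/dictionaries.zip"), conf);
        DistributedCache.createSymlink(conf);   // optionally expose cached files as symlinks in the task directory
      }
    }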
addClass(String, Class, String) - Method in class org.apache.hadoop.util.ProgramDriver
This is the method that adds the class to the repository.
addDefaultResource(String) - Method in class org.apache.hadoop.conf.Configuration
Add a default resource.
addDefaultResource(URL) - Method in class org.apache.hadoop.conf.Configuration
Add a default resource.
addDefaultResource(Path) - Method in class org.apache.hadoop.conf.Configuration
Add a default resource.
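A small sketch combining addDefaultResource with the addFinalResource call listed below; the resource names are illustrative:

    import org.apache.hadoop.conf.Configuration;

    public class ConfDemo {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.addDefaultResource("demo-default.xml");   // resource names are illustrative
        conf.addFinalResource("demo-site.xml");        // final resources override default ones
        System.out.println(conf);                      // prints the list of loaded resources
      }
    }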
addDoubleValue(Object, double) - Method in class org.apache.hadoop.contrib.utils.join.JobBase
Increment the given counter by the given incremental value; if the counter does not exist, one is created with value 0.
addEscapes(String) - Static method in error org.apache.hadoop.record.compiler.generated.TokenMgrError
Replaces unprintable characters with their escaped (or Unicode-escaped) equivalents in the given string.
addFileset(FileSet) - Method in class org.apache.hadoop.record.compiler.ant.RccTask
Adds a fileset that can consist of one or more files
addFileToClassPath(Path, Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Add a file path to the current set of classpath entries; it adds the file to the cache as well.
addFinalResource(String) - Method in class org.apache.hadoop.conf.Configuration
Add a final resource.
addFinalResource(URL) - Method in class org.apache.hadoop.conf.Configuration
Add a final resource.
addFinalResource(Path) - Method in class org.apache.hadoop.conf.Configuration
Add a final resource.
addInputPath(Path) - Method in class org.apache.hadoop.mapred.JobConf
 
additionalConfSpec_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
addJob(Job) - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
Add a new job.
addJobs(Collection<Job>) - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
Add a collection of jobs
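A sketch of chaining two jobs with JobControl, assuming the Job(JobConf, ArrayList) constructor, the JobControl(String) constructor, and that JobControl is Runnable:

    import java.util.ArrayList;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.jobcontrol.Job;
    import org.apache.hadoop.mapred.jobcontrol.JobControl;

    public class ChainDemo {
      public static void main(String[] args) throws Exception {
        JobConf first = new JobConf();    // each JobConf would be fully configured in real use
        JobConf second = new JobConf();

        Job job1 = new Job(first, new ArrayList());   // no dependencies
        ArrayList deps = new ArrayList();
        deps.add(job1);
        Job job2 = new Job(second, deps);             // waits for job1

        JobControl control = new JobControl("demo-group");   // group name is illustrative
        control.addJob(job1);
        control.addJob(job2);

        new Thread(control).start();                  // JobControl is assumed to be Runnable
        while (!control.allFinished()) {
          Thread.sleep(1000);
        }
        System.exit(0);
      }
    }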
addLongValue(Object, long) - Method in class org.apache.hadoop.contrib.utils.join.JobBase
Increment the given counter by the given incremental value; if the counter does not exist, one is created with value 0.
addMissing(String, long) - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Add a missing block name, plus its size.
addName(Class, String) - Static method in class org.apache.hadoop.io.WritableName
Add an alternate name for a class.
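A one-line usage sketch; the legacy alias string is illustrative:

    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.WritableName;

    public class NameAlias {
      public static void main(String[] args) {
        // Lets data written under an old class name still resolve to Text; the alias is illustrative.
        WritableName.addName(Text.class, "org.example.legacy.TextRecord");
      }
    }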
addNextValue(Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
Add a value to the aggregator.
addNextValue(double) - Method in class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
Add a value to the aggregator.
addNextValue(Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
Add a value to the aggregator.
addNextValue(long) - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
Add a value to the aggregator.
addNextValue(Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
Add a value to the aggregator.
addNextValue(long) - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
Add a value to the aggregator.
addNextValue(Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
Add a value to the aggregator.
addNextValue(long) - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
Add a value to the aggregator.
addNextValue(Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
Add a value to the aggregator.
addNextValue(Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
Add a value to the aggregator.
addNextValue(Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
Add a value to the aggregator.
addNextValue(Object) - Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregator
Add a value to the aggregator.
addNextValue(Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
Add the given val to the aggregator.
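A small sketch using LongValueSum; getReport() is assumed to return the aggregate as text and is not listed in this section:

    import org.apache.hadoop.mapred.lib.aggregate.LongValueSum;

    public class SumDemo {
      public static void main(String[] args) {
        LongValueSum sum = new LongValueSum();
        sum.addNextValue(3L);
        sum.addNextValue(4L);
        sum.addNextValue("5");                 // the Object form is assumed to parse numeric strings
        System.out.println(sum.getReport());   // getReport() is an assumption; expected to print the total
      }
    }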
addPhase(String) - Method in class org.apache.hadoop.util.Progress
Adds a named node to the tree.
addPhase() - Method in class org.apache.hadoop.util.Progress
Adds a node to the tree.
addServlet(String, String, Class<T>) - Method in class org.apache.hadoop.mapred.StatusHttpServer
Add a servlet in the server.
addTableFooter(JspWriter) - Method in class org.apache.hadoop.dfs.JspHelper
 
addTableHeader(JspWriter) - Method in class org.apache.hadoop.dfs.JspHelper
 
addTableRow(JspWriter, String[]) - Method in class org.apache.hadoop.dfs.JspHelper
 
addTableRow(JspWriter, String[], int) - Method in class org.apache.hadoop.dfs.JspHelper
 
addTaskEnvironment_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
adjustBeginLineColumn(int, int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
Method to adjust line and column numbers for the start of a token.
adjustTop() - Method in class org.apache.hadoop.util.PriorityQueue
Should be called when the Object at top changes values.
adminState - Variable in class org.apache.hadoop.dfs.DatanodeInfo
 
aggregatorDescriptorList - Variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJobBase
 
allFinished() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
 
AlreadyBeingCreatedException - Exception in org.apache.hadoop.dfs
The exception raised when you ask to create a file that is already being created but is not yet closed.
AlreadyBeingCreatedException(String) - Constructor for exception org.apache.hadoop.dfs.AlreadyBeingCreatedException
 
append(Writable) - Method in class org.apache.hadoop.io.ArrayFile.Writer
Append a value to the file.
append(WritableComparable, Writable) - Method in class org.apache.hadoop.io.MapFile.Writer
Append a key/value pair to the map.
append(Writable, Writable) - Method in class org.apache.hadoop.io.SequenceFile.Writer
Append a key/value pair.
append(WritableComparable) - Method in class org.apache.hadoop.io.SetFile.Writer
Deprecated. Append a key to a set.
append(LoggingEvent) - Method in class org.apache.hadoop.mapred.TaskLogAppender
 
append(byte[], int, int) - Method in class org.apache.hadoop.record.Buffer
Append specified bytes to the buffer.
append(byte[]) - Method in class org.apache.hadoop.record.Buffer
Append specified bytes to the buffer
appendRaw(byte[], int, int, SequenceFile.ValueBytes) - Method in class org.apache.hadoop.io.SequenceFile.Writer
 
archiveURIs - Variable in class org.apache.hadoop.streaming.StreamJob
 
argv_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
ArrayFile - Class in org.apache.hadoop.io
A dense file-based mapping from integers to values.
ArrayFile() - Constructor for class org.apache.hadoop.io.ArrayFile
 
ArrayFile.Reader - Class in org.apache.hadoop.io
Provide access to an existing array file.
ArrayFile.Reader(FileSystem, String, Configuration) - Constructor for class org.apache.hadoop.io.ArrayFile.Reader
Construct an array reader for the named file.
ArrayFile.Writer - Class in org.apache.hadoop.io
Write a new array file.
ArrayFile.Writer(Configuration, FileSystem, String, Class) - Constructor for class org.apache.hadoop.io.ArrayFile.Writer
Create the named file for values of the named class.
ArrayFile.Writer(Configuration, FileSystem, String, Class, SequenceFile.CompressionType, Progressable) - Constructor for class org.apache.hadoop.io.ArrayFile.Writer
Create the named file for values of the named class.
ArrayListBackedIterator - Class in org.apache.hadoop.contrib.utils.join
This class provides an implementation of ResetableIterator.
ArrayListBackedIterator() - Constructor for class org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator
 
ArrayListBackedIterator(ArrayList) - Constructor for class org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator
 
arrayToString(String[]) - Static method in class org.apache.hadoop.util.StringUtils
Given an array of strings, return a comma-separated list of its elements.
ArrayWritable - Class in org.apache.hadoop.io
A Writable for arrays containing instances of a class.
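A brief usage sketch using the ArrayWritable(Class, Writable[]) constructor listed below; the get() accessor is an assumption not listed in this section:

    import org.apache.hadoop.io.ArrayWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.Writable;

    public class ArrayDemo {
      public static void main(String[] args) {
        // Bundle several Text values into one Writable, e.g. as a map output value.
        Text[] parts = { new Text("alpha"), new Text("beta") };
        ArrayWritable bundle = new ArrayWritable(Text.class, parts);
        for (Writable w : bundle.get()) {      // get() returns the wrapped array (assumption)
          System.out.println(w);
        }
      }
    }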
ArrayWritable() - Constructor for class org.apache.hadoop.io.ArrayWritable
 
ArrayWritable(Class) - Constructor for class org.apache.hadoop.io.ArrayWritable
 
ArrayWritable(Class, Writable[]) - Constructor for class org.apache.hadoop.io.ArrayWritable
 
ArrayWritable(String[]) - Constructor for class org.apache.hadoop.io.ArrayWritable
 
available() - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
 

B

backup(int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
beginColumn - Variable in class org.apache.hadoop.record.compiler.generated.Token
beginLine and beginColumn describe the position of the first character of this token; endLine and endColumn describe the position of the last character of this token.
beginLine - Variable in class org.apache.hadoop.record.compiler.generated.Token
beginLine and beginColumn describe the position of the first character of this token; endLine and endColumn describe the position of the last character of this token.
BeginToken() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
bestNode(LocatedBlock) - Method in class org.apache.hadoop.dfs.JspHelper
 
BinaryRecordInput - Class in org.apache.hadoop.record
 
BinaryRecordInput(InputStream) - Constructor for class org.apache.hadoop.record.BinaryRecordInput
Creates a new instance of BinaryRecordInput
BinaryRecordInput(DataInput) - Constructor for class org.apache.hadoop.record.BinaryRecordInput
Creates a new instance of BinaryRecordInput
BinaryRecordOutput - Class in org.apache.hadoop.record
 
BinaryRecordOutput(OutputStream) - Constructor for class org.apache.hadoop.record.BinaryRecordOutput
Creates a new instance of BinaryRecordOutput
BinaryRecordOutput(DataOutput) - Constructor for class org.apache.hadoop.record.BinaryRecordOutput
Creates a new instance of BinaryRecordOutput
Block - Class in org.apache.hadoop.fs.s3
Holds metadata about a block of data being stored in a FileSystemStore.
Block(long, long) - Constructor for class org.apache.hadoop.fs.s3.Block
 
BLOCK_INVALIDATE_CHUNK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
blockExists(long) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
blockReceived(DatanodeRegistration, Block[]) - Method in class org.apache.hadoop.dfs.NameNode
 
blockReport(DatanodeRegistration, Block[]) - Method in class org.apache.hadoop.dfs.NameNode
 
BLOCKREPORT_INTERVAL - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
BOOLEAN_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
BooleanWritable - Class in org.apache.hadoop.io
A WritableComparable for booleans.
BooleanWritable() - Constructor for class org.apache.hadoop.io.BooleanWritable
 
BooleanWritable(boolean) - Constructor for class org.apache.hadoop.io.BooleanWritable
 
BooleanWritable.Comparator - Class in org.apache.hadoop.io
A Comparator optimized for BooleanWritable.
BooleanWritable.Comparator() - Constructor for class org.apache.hadoop.io.BooleanWritable.Comparator
 
bufcolumn - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
Buffer - Class in org.apache.hadoop.record
A byte sequence used as the Java native type for buffers.
Buffer() - Constructor for class org.apache.hadoop.record.Buffer
Create a zero-count sequence.
Buffer(byte[]) - Constructor for class org.apache.hadoop.record.Buffer
Create a Buffer using the byte array as the initial value.
Buffer(byte[], int, int) - Constructor for class org.apache.hadoop.record.Buffer
Create a Buffer using the byte range as the initial value.
buffer - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
BUFFER_SIZE - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
BUFFER_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
bufline - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
bufpos - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
BuiltInZlibDeflater - Class in org.apache.hadoop.io.compress.zlib
A wrapper around java.util.zip.Deflater to make it conform to the org.apache.hadoop.io.compress.Compressor interface.
BuiltInZlibDeflater(int, boolean) - Constructor for class org.apache.hadoop.io.compress.zlib.BuiltInZlibDeflater
 
BuiltInZlibDeflater(int) - Constructor for class org.apache.hadoop.io.compress.zlib.BuiltInZlibDeflater
 
BuiltInZlibDeflater() - Constructor for class org.apache.hadoop.io.compress.zlib.BuiltInZlibDeflater
 
BuiltInZlibInflater - Class in org.apache.hadoop.io.compress.zlib
A wrapper around java.util.zip.Inflater to make it conform to the org.apache.hadoop.io.compress.Decompressor interface.
BuiltInZlibInflater(boolean) - Constructor for class org.apache.hadoop.io.compress.zlib.BuiltInZlibInflater
 
BuiltInZlibInflater() - Constructor for class org.apache.hadoop.io.compress.zlib.BuiltInZlibInflater
 
BYTE_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
byteDesc(long) - Static method in class org.apache.hadoop.fs.FsShell
Return an abbreviated English-language description of the byte length.
bytesToCodePoint(ByteBuffer) - Static method in class org.apache.hadoop.io.Text
Returns the next code point at the current position in the buffer.
BytesWritable - Class in org.apache.hadoop.io
A byte sequence that is usable as a key or value.
BytesWritable() - Constructor for class org.apache.hadoop.io.BytesWritable
Create a zero-size sequence.
BytesWritable(byte[]) - Constructor for class org.apache.hadoop.io.BytesWritable
Create a BytesWritable using the byte array as the initial value.
BytesWritable.Comparator - Class in org.apache.hadoop.io
A Comparator optimized for BytesWritable.
BytesWritable.Comparator() - Constructor for class org.apache.hadoop.io.BytesWritable.Comparator
 
byteToHexString(byte[]) - Static method in class org.apache.hadoop.util.StringUtils
Given an array of bytes, converts them to a hex string representation.

C

cacheArchives - Variable in class org.apache.hadoop.streaming.StreamJob
 
cacheFiles - Variable in class org.apache.hadoop.streaming.StreamJob
 
call(Writable, InetSocketAddress) - Method in class org.apache.hadoop.ipc.Client
Make a call, passing param, to the IPC server running at address, returning the value.
call(Writable[], InetSocketAddress[]) - Method in class org.apache.hadoop.ipc.Client
Makes a set of calls in parallel.
call(Method, Object[][], InetSocketAddress[], Configuration) - Static method in class org.apache.hadoop.ipc.RPC
Expert: Make multiple, parallel calls to a set of servers.
call(Writable) - Method in class org.apache.hadoop.ipc.RPC.Server
 
call(Writable) - Method in class org.apache.hadoop.ipc.Server
Called for each call.
capacity - Variable in class org.apache.hadoop.dfs.DatanodeInfo
 
charAt(int) - Method in class org.apache.hadoop.io.Text
Returns the Unicode Scalar Value (32-bit integer value) for the character at position.
checkDir(File) - Static method in class org.apache.hadoop.util.DiskChecker
 
checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.lib.NullOutputFormat
 
checkOutputSpecs(FileSystem, JobConf) - Method in interface org.apache.hadoop.mapred.OutputFormat
Check whether the output specification for a job is appropriate.
checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.OutputFormatBase
 
checkPath(Path) - Method in class org.apache.hadoop.fs.FileSystem
Check that a Path belongs to this FileSystem.
checkPath(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Check that a Path belongs to this FileSystem.
checkpoint() - Method in class org.apache.hadoop.fs.Trash
Create a trash checkpoint.
checkState() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
Check and update the state of this job.
ChecksumException - Exception in org.apache.hadoop.fs
Thrown for checksum errors.
ChecksumException(String, long) - Constructor for exception org.apache.hadoop.fs.ChecksumException
 
ChecksumFileSystem - Class in org.apache.hadoop.fs
Abstract Checksumed FileSystem.
ChecksumFileSystem(FileSystem) - Constructor for class org.apache.hadoop.fs.ChecksumFileSystem
 
checkURIs(URI[], URI[]) - Static method in class org.apache.hadoop.filecache.DistributedCache
This method checks if there is a conflict in the fragment names of the URIs.
chmod(String, String) - Static method in class org.apache.hadoop.fs.FileUtil
Change the permissions on a filename.
chooseRandom(String) - Method in class org.apache.hadoop.net.NetworkTopology
Randomly choose one node from scope; if scope starts with ~, choose one from all datanodes except those in scope, otherwise choose one from scope.
CHUNKED_ENCODING - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
cleanup() - Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
The default cleanup.
cleanup(Configuration, JobConf, String, String) - Method in class org.apache.hadoop.util.CopyFiles.CopyFilesMapper
Interface to clean up distcp-specific resources.
cleanup(Configuration, JobConf, String, String) - Method in class org.apache.hadoop.util.CopyFiles.FSCopyFilesMapper
 
cleanup(Configuration, JobConf, String, String) - Method in class org.apache.hadoop.util.CopyFiles.HTTPCopyFilesMapper
 
clear() - Method in class org.apache.hadoop.util.PriorityQueue
Removes all entries from the PriorityQueue.
Client - Class in org.apache.hadoop.ipc
A client for an IPC service.
Client(Class, Configuration) - Constructor for class org.apache.hadoop.ipc.Client
Construct an IPC client whose values are of the given Writable class.
clone(Writable, JobConf) - Static method in class org.apache.hadoop.io.WritableUtils
Make a copy of a writable object using serialization to a buffer.
clone() - Method in class org.apache.hadoop.record.Buffer
 
cloneFileAttributes(FileSystem, Path, Path, Progressable) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Deprecated. call #cloneFileAttributes(Path,Path,Progressable) instead
cloneFileAttributes(Path, Path, Progressable) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Clones the attributes (like compression) of the input file and creates a corresponding Writer.
close() - Method in class org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator
 
close() - Method in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
 
close() - Method in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
close() - Method in interface org.apache.hadoop.contrib.utils.join.ResetableIterator
 
close() - Method in class org.apache.hadoop.examples.PiEstimator.PiMapper
 
close() - Method in class org.apache.hadoop.examples.PiEstimator.PiReducer
 
close() - Method in class org.apache.hadoop.fs.FileSystem
No more filesystem operations are needed.
close() - Method in class org.apache.hadoop.fs.FilterFileSystem
 
close() - Method in class org.apache.hadoop.fs.FSDataOutputStream
 
close() - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
close() - Method in interface org.apache.hadoop.io.Closeable
Called after the last call to any other method on this object to free and/or flush resources.
close() - Method in class org.apache.hadoop.io.compress.CompressionInputStream
 
close() - Method in class org.apache.hadoop.io.compress.CompressionOutputStream
 
close() - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
 
close() - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
 
close() - Method in class org.apache.hadoop.io.MapFile.Reader
Close the map.
close() - Method in class org.apache.hadoop.io.MapFile.Writer
Close the map.
close() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Close the file.
close() - Method in interface org.apache.hadoop.io.SequenceFile.Sorter.RawKeyValueIterator
Closes the iterator so that the underlying streams can be closed.
close() - Method in class org.apache.hadoop.io.SequenceFile.Writer
Close the file.
close() - Method in class org.apache.hadoop.mapred.JobClient
 
close() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorCombiner
Do nothing.
close() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJobBase
 
close() - Method in class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
 
close() - Method in class org.apache.hadoop.mapred.LineRecordReader
 
close() - Method in class org.apache.hadoop.mapred.MapReduceBase
Default implementation that does nothing.
close() - Method in interface org.apache.hadoop.mapred.RecordReader
Close this to future operations.
close(Reporter) - Method in interface org.apache.hadoop.mapred.RecordWriter
Close this to future operations.
close() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
 
close() - Method in class org.apache.hadoop.mapred.TaskLogAppender
 
close() - Method in class org.apache.hadoop.mapred.TaskTracker
Close down the TaskTracker and all its components.
close(Reporter) - Method in class org.apache.hadoop.mapred.TextOutputFormat.LineRecordWriter
 
close() - Method in interface org.apache.hadoop.metrics.MetricsContext
Stops monitoring and also frees any buffered data, returning this object to its initial state.
close() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Stops monitoring and frees buffered data, returning this object to its initial state.
close() - Method in class org.apache.hadoop.streaming.PipeMapper
 
close() - Method in class org.apache.hadoop.streaming.PipeReducer
 
close() - Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
Close this to future operations.
close() - Method in class org.apache.hadoop.util.CopyFiles.FSCopyFilesMapper
 
Closeable - Interface in org.apache.hadoop.io
That which can be closed.
closeAll() - Static method in class org.apache.hadoop.fs.FileSystem
Close all cached filesystems.
cluster_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
ClusterStatus - Class in org.apache.hadoop.mapred
Summarizes the size and current state of the cluster.
CodeBuffer - Class in org.apache.hadoop.record.compiler
A wrapper around StringBuffer that automatically does indentation
collate(Object[], String) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
collate(List, String) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
collect(WritableComparable, TaggedMapOutput, OutputCollector, Reporter) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
The subclass can override this method to perform additional filtering and/or other processing logic before a value is collected.
collect(WritableComparable, Writable) - Method in interface org.apache.hadoop.mapred.OutputCollector
Adds a key/value pair to the output.
collected - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
column - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
combine(Object[], Object[]) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
comCmd_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
COMMA_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
commit(Path) - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated. Commits a single file to its final location as passed in create* methods.
commit() - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated. Commits files to their final locations as passed in create* methods.
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.BooleanWritable.Comparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.BytesWritable.Comparator
Compare the buffers in serialized form.
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.FloatWritable.Comparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.IntWritable.Comparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.LongWritable.Comparator
 
compare(WritableComparable, WritableComparable) - Method in class org.apache.hadoop.io.LongWritable.DecreasingComparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.LongWritable.DecreasingComparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.MD5Hash.Comparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.Text.Comparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.UTF8.Comparator
Deprecated.  
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.WritableComparator
Optimization hook.
compare(WritableComparable, WritableComparable) - Method in class org.apache.hadoop.io.WritableComparator
Compare two WritableComparables.
compare(Object, Object) - Method in class org.apache.hadoop.io.WritableComparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.record.RecordComparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.tools.Logalyzer.LogComparator
 
compareBytes(byte[], int, int, byte[], int, int) - Static method in class org.apache.hadoop.io.WritableComparator
Lexicographic order of binary data.
compareBytes(byte[], int, int, byte[], int, int) - Static method in class org.apache.hadoop.record.Utils
Lexicographic order of binary data.
compareTo(Object) - Method in class org.apache.hadoop.dfs.DatanodeID
Comparable.
compareTo(Object) - Method in class org.apache.hadoop.fs.Path
 
compareTo(Object) - Method in class org.apache.hadoop.io.BooleanWritable
 
compareTo(Object) - Method in class org.apache.hadoop.io.BytesWritable
Define the sort order of the BytesWritable.
compareTo(Object) - Method in class org.apache.hadoop.io.FloatWritable
Compares two FloatWritables.
compareTo(Object) - Method in class org.apache.hadoop.io.IntWritable
Compares two IntWritables.
compareTo(Object) - Method in class org.apache.hadoop.io.LongWritable
Compares two LongWritables.
compareTo(Object) - Method in class org.apache.hadoop.io.MD5Hash
Compares this object with the specified object for order.
compareTo(Object) - Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
 
compareTo(Object) - Method in class org.apache.hadoop.io.Text
Compare two Texts bytewise using standard UTF8 ordering.
compareTo(Object) - Method in class org.apache.hadoop.io.UTF8
Deprecated. Compare two UTF8s.
compareTo(Object) - Method in class org.apache.hadoop.io.VIntWritable
Compares two VIntWritables.
compareTo(Object) - Method in class org.apache.hadoop.io.VLongWritable
Compares two VLongWritables.
compareTo(Object) - Method in class org.apache.hadoop.record.Buffer
Define the sort order of the Buffer.
compareTo(Object) - Method in class org.apache.hadoop.record.Record
 
complete(String, String) - Method in class org.apache.hadoop.dfs.NameNode
 
complete() - Method in class org.apache.hadoop.util.Progress
Completes this node, moving the parent node to its next child.
COMPLETE_SUCCESS - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
completedJobs() - Method in class org.apache.hadoop.mapred.JobTracker
 
completeLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
 
completeLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
Called when we're all done writing to the target.
completeLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Called when we're all done writing to the target.
completeLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.InMemoryFileSystem
 
completeLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
completeLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
completeLocalOutput(Path, Path) - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated.  
compress(byte[], int, int) - Method in interface org.apache.hadoop.io.compress.Compressor
Fills specified buffer with compressed data.
compress(byte[], int, int) - Method in class org.apache.hadoop.io.compress.lzo.LzoCompressor
 
compress(byte[], int, int) - Method in class org.apache.hadoop.io.compress.zlib.BuiltInZlibDeflater
 
compress(byte[], int, int) - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
 
CompressedWritable - Class in org.apache.hadoop.io
A base-class for Writables which store themselves compressed and lazily inflate on field access.
CompressedWritable() - Constructor for class org.apache.hadoop.io.CompressedWritable
 
CompressionCodec - Interface in org.apache.hadoop.io.compress
This class encapsulates a streaming compression/decompression pair.
CompressionCodecFactory - Class in org.apache.hadoop.io.compress
A factory that will find the correct codec for a given filename.
CompressionCodecFactory(Configuration) - Constructor for class org.apache.hadoop.io.compress.CompressionCodecFactory
Find the codecs specified in the config value io.compression.codecs and register them.
CompressionInputStream - Class in org.apache.hadoop.io.compress
A compression input stream.
CompressionInputStream(InputStream) - Constructor for class org.apache.hadoop.io.compress.CompressionInputStream
Create a compression input stream that reads the decompressed bytes from the given stream.
CompressionOutputStream - Class in org.apache.hadoop.io.compress
A compression output stream.
CompressionOutputStream(OutputStream) - Constructor for class org.apache.hadoop.io.compress.CompressionOutputStream
Create a compression output stream that writes the compressed bytes to the given stream.
Compressor - Interface in org.apache.hadoop.io.compress
Specification of a stream-based 'compressor' which can be plugged into a CompressionOutputStream to compress data.
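A sketch of the compressor/decompressor stream pair using DefaultCodec; setConf is assumed to be available on the codec, and the file names are illustrative:

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.CompressionInputStream;
    import org.apache.hadoop.io.compress.CompressionOutputStream;
    import org.apache.hadoop.io.compress.DefaultCodec;

    public class CodecDemo {
      public static void main(String[] args) throws Exception {
        DefaultCodec codec = new DefaultCodec();
        codec.setConf(new Configuration());   // setConf is assumed; DefaultCodec is expected to be Configurable

        // Compress a small payload to a local file (file name illustrative).
        CompressionOutputStream out = codec.createOutputStream(new FileOutputStream("demo.deflate"));
        out.write("hello codec\n".getBytes());
        out.close();

        // Read it back through the matching decompressor stream.
        CompressionInputStream in = codec.createInputStream(new FileInputStream("demo.deflate"));
        int b;
        while ((b = in.read()) != -1) {
          System.out.print((char) b);
        }
        in.close();
      }
    }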
conf - Variable in class org.apache.hadoop.mapred.SequenceFileRecordReader
 
conf - Variable in class org.apache.hadoop.util.ToolBase
 
config_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
configPath_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
Configurable - Interface in org.apache.hadoop.conf
Something that may be configured with a Configuration.
Configuration - Class in org.apache.hadoop.conf
Provides access to configuration parameters.
Configuration() - Constructor for class org.apache.hadoop.conf.Configuration
A new configuration.
Configuration(Configuration) - Constructor for class org.apache.hadoop.conf.Configuration
A new configuration with the same settings cloned from another.
configure(JobConf) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
 
configure(JobConf) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
configure(JobConf) - Method in class org.apache.hadoop.contrib.utils.join.JobBase
Initializes a new instance from a JobConf.
configure(JobConf) - Method in class org.apache.hadoop.examples.PiEstimator.PiMapper
Mapper configuration.
configure(JobConf) - Method in class org.apache.hadoop.examples.PiEstimator.PiReducer
Reducer configuration.
configure(JobConf) - Method in interface org.apache.hadoop.mapred.JobConfigurable
Initializes a new instance from a JobConf.
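A sketch of per-task initialization via configure(JobConf); the property name is illustrative and a real class would also implement Mapper or Reducer:

    import java.io.IOException;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.MapReduceBase;

    public class ConfiguredTaskBase extends MapReduceBase {
      protected String separator;

      public void configure(JobConf job) {
        // Called once before any records are processed; "demo.separator" is illustrative.
        separator = job.get("demo.separator", "\t");
      }

      public void close() throws IOException {
        // Called after the last record; release per-task resources here.
      }
    }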
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.aggregate.UserDefinedValueAggregatorDescriptor
Do nothing.
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
Get the input file name.
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorCombiner
The combiner does not need any configuration.
configure(JobConf) - Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorDescriptor
Configure the object
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJobBase
 
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
 
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.HashPartitioner
 
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner
 
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.MultithreadedMapRunner
 
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.RegexMapper
 
configure(JobConf) - Method in class org.apache.hadoop.mapred.MapReduceBase
Default implementation that does nothing.
configure(JobConf) - Method in class org.apache.hadoop.mapred.MapRunner
 
configure(JobConf) - Method in class org.apache.hadoop.mapred.TextInputFormat
 
configure(JobConf) - Method in class org.apache.hadoop.streaming.PipeMapper
 
configure(JobConf) - Method in class org.apache.hadoop.streaming.PipeMapRed
 
configure(JobConf) - Method in class org.apache.hadoop.tools.Logalyzer.LogRegexMapper
 
configure(JobConf) - Method in class org.apache.hadoop.util.CopyFiles.FSCopyFilesMapper
Mapper configuration.
configure(JobConf) - Method in class org.apache.hadoop.util.CopyFiles.HTTPCopyFilesMapper
 
Configured - Class in org.apache.hadoop.conf
Base class for things that may be configured with a Configuration.
Configured(Configuration) - Constructor for class org.apache.hadoop.conf.Configured
Construct a Configured.
contains(DatanodeDescriptor) - Method in class org.apache.hadoop.net.NetworkTopology
Check if the tree contains the given data node.
ContextFactory - Class in org.apache.hadoop.metrics
Factory class for creating MetricsContext objects.
ContextFactory() - Constructor for class org.apache.hadoop.metrics.ContextFactory
Creates a new instance of ContextFactory
copy(FileSystem, Path, FileSystem, Path, boolean, Configuration) - Static method in class org.apache.hadoop.fs.FileUtil
Copy files between FileSystems.
copy(File, FileSystem, Path, boolean, Configuration) - Static method in class org.apache.hadoop.fs.FileUtil
Copy local files to a FileSystem.
copy(FileSystem, Path, File, boolean, Configuration) - Static method in class org.apache.hadoop.fs.FileUtil
Copy FileSystem files to local files.
copy(String, String, Configuration) - Method in class org.apache.hadoop.fs.FsShell
Copy files that match the file pattern srcf to a destination file.
copy(byte[], int, int) - Method in class org.apache.hadoop.record.Buffer
Copy the specified byte array to the Buffer.
copy(Configuration, String, String, boolean, boolean) - Static method in class org.apache.hadoop.util.CopyFiles
Driver to copy srcPath to destPath depending on required protocol.
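A sketch of a single FileUtil.copy call; FileSystem.get and the paths are assumptions used only for illustration:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FileUtil;
    import org.apache.hadoop.fs.Path;

    public class CopyDemo {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);              // FileSystem.get is assumed here
        Path src = new Path("/user/demo/in/part-00000");   // paths are illustrative
        Path dst = new Path("/user/demo/backup/part-00000");
        // The boolean controls whether the source is deleted after a successful copy.
        FileUtil.copy(fs, src, fs, dst, false, conf);
      }
    }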
CopyFiles - Class in org.apache.hadoop.util
A Map-reduce program to recursively copy directories between different file-systems.
CopyFiles() - Constructor for class org.apache.hadoop.util.CopyFiles
 
CopyFiles.CopyFilesMapper - Class in org.apache.hadoop.util
Base-class for all mappers for distcp
CopyFiles.CopyFilesMapper() - Constructor for class org.apache.hadoop.util.CopyFiles.CopyFilesMapper
 
CopyFiles.FSCopyFilesMapper - Class in org.apache.hadoop.util
DFSCopyFilesMapper: The mapper for copying files from the DFS.
CopyFiles.FSCopyFilesMapper() - Constructor for class org.apache.hadoop.util.CopyFiles.FSCopyFilesMapper
 
CopyFiles.HTTPCopyFilesMapper - Class in org.apache.hadoop.util
 
CopyFiles.HTTPCopyFilesMapper() - Constructor for class org.apache.hadoop.util.CopyFiles.HTTPCopyFilesMapper
 
copyFromLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
 
copyFromLocalFile(Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
The src file is on the local disk.
copyFromLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
The src file is on the local disk.
copyFromLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
The src file is on the local disk.
copyFromLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.InMemoryFileSystem
Copy/move operations are not supported.
copyFromLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.LocalFileSystem
 
copyFromLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
copyFromLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
copyFromLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated.  
copyMerge(FileSystem, Path, FileSystem, Path, boolean, Configuration, String) - Static method in class org.apache.hadoop.fs.FileUtil
Copy all files in a directory to one output file (merge).
copyToLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
The src file is under FS, and the dst is on the local disk.
copyToLocalFile(Path, Path, boolean) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
The src file is under FS, and the dst is on the local disk.
copyToLocalFile(Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
The src file is under FS, and the dst is on the local disk.
copyToLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
The src file is under FS, and the dst is on the local disk.
copyToLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
The src file is under FS, and the dst is on the local disk.
copyToLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.InMemoryFileSystem
 
copyToLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.LocalFileSystem
 
copyToLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
copyToLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
copyToLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated.  
Counters - Class in org.apache.hadoop.mapred
A set of named counters.
Counters() - Constructor for class org.apache.hadoop.mapred.Counters
 
Counters.Group - Class in org.apache.hadoop.mapred
Represents a group of counters, comprising the counters from a particular counter enum class.
countNumOfAvailableNodes(String, List<DatanodeDescriptor>) - Method in class org.apache.hadoop.net.NetworkTopology
Return the number of leaves in scope but not in excludedNodes; if scope starts with ~, return the number of datanodes that are not in scope and not in excludedNodes.
create(String, String, boolean, short, long) - Method in class org.apache.hadoop.dfs.NameNode
 
create(Path, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Opens an FSDataOutputStream at the indicated Path with write-progress reporting.
create(Path) - Method in class org.apache.hadoop.fs.FileSystem
Opens an FSDataOutputStream at the indicated Path.
create(Path, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
Create an FSDataOutputStream at the indicated Path with write-progress reporting.
create(Path, short) - Method in class org.apache.hadoop.fs.FileSystem
Opens an FSDataOutputStream at the indicated Path.
create(Path, short, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
Opens an FSDataOutputStream at the indicated Path with write-progress reporting.
create(Path, boolean, int) - Method in class org.apache.hadoop.fs.FileSystem
Opens an FSDataOutputStream at the indicated Path.
create(Path, boolean, int, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
Opens an FSDataOutputStream at the indicated Path with write-progress reporting.
create(Path, boolean, int, short, long) - Method in class org.apache.hadoop.fs.FileSystem
Opens an FSDataOutputStream at the indicated Path.
create(Path, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
Opens an FSDataOutputStream at the indicated Path with write-progress reporting.
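A sketch of the fully specified create overload; FileSystem.get and the path are assumptions used only for illustration:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CreateDemo {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);        // FileSystem.get is assumed here
        Path out = new Path("/user/demo/out.txt");   // path is illustrative
        // Overwrite if present, 4 KB buffer, replication 3, 64 MB blocks.
        FSDataOutputStream stream = fs.create(out, true, 4096, (short) 3, 64L * 1024 * 1024);
        stream.write("hello\n".getBytes());
        stream.close();
      }
    }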
create(Path, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.FilterFileSystem
Opens an FSDataOutputStream at the indicated Path with write-progress reporting.
create(Path, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
create(Path, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
create(Class<?>, Object, RetryPolicy) - Static method in class org.apache.hadoop.io.retry.RetryProxy
Create a proxy for an interface of an implementation class using the same retry policy for each method in the interface.
create(Class<?>, Object, Map<String, RetryPolicy>) - Static method in class org.apache.hadoop.io.retry.RetryProxy
Create a proxy for an interface of an implementation class using a set of retry policies specified by method name.
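A sketch of wrapping a hypothetical service behind a retry proxy; the Echo interface, its implementation, and RetryPolicies.TRY_ONCE_THEN_FAIL are assumptions:

    import org.apache.hadoop.io.retry.RetryPolicies;
    import org.apache.hadoop.io.retry.RetryProxy;

    public class RetryDemo {
      // A hypothetical service interface and implementation, defined only for this sketch.
      public interface Echo { String echo(String s); }
      public static class EchoImpl implements Echo {
        public String echo(String s) { return s; }
      }

      public static void main(String[] args) {
        // Every call through the proxy is governed by the chosen policy;
        // RetryPolicies.TRY_ONCE_THEN_FAIL is an assumption, not part of the index above.
        Echo echo = (Echo) RetryProxy.create(Echo.class, new EchoImpl(),
            RetryPolicies.TRY_ONCE_THEN_FAIL);
        System.out.println(echo.echo("hello"));
      }
    }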
create(Path, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated.  
createAllSymlink(Configuration, File, File) - Static method in class org.apache.hadoop.filecache.DistributedCache
This method creates symlinks in another directory for all files in a given directory.
createDataJoinJob(String[]) - Static method in class org.apache.hadoop.contrib.utils.join.DataJoinJob
 
createHardLink(File, File) - Static method in class org.apache.hadoop.fs.FileUtil.HardLink
 
createInputStream(InputStream) - Method in interface org.apache.hadoop.io.compress.CompressionCodec
Create a stream decompressor that will read from the given input stream.
createInputStream(InputStream) - Method in class org.apache.hadoop.io.compress.DefaultCodec
Create a stream decompressor that will read from the given input stream.
createInputStream(InputStream) - Method in class org.apache.hadoop.io.compress.GzipCodec
Create a stream decompressor that will read from the given input stream.
createInputStream(InputStream) - Method in class org.apache.hadoop.io.compress.LzoCodec
 
createInstance(String) - Static method in class org.apache.hadoop.mapred.lib.aggregate.UserDefinedValueAggregatorDescriptor
Create an instance of the given class
createKey() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
 
createKey() - Method in class org.apache.hadoop.mapred.LineRecordReader
 
createKey() - Method in interface org.apache.hadoop.mapred.RecordReader
Create an object of the appropriate type to be used as a key.
createKey() - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
 
createKey() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
 
createKey() - Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
 
createMD5(URI, Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Returns the MD5 of the checksum file for a given DFS file.
createNewFile(Path) - Method in class org.apache.hadoop.fs.FileSystem
Creates the given Path as a brand-new zero-length file.
createOutputStream(OutputStream) - Method in interface org.apache.hadoop.io.compress.CompressionCodec
Create a stream compressor that will write to the given output stream.
createOutputStream(OutputStream) - Method in class org.apache.hadoop.io.compress.DefaultCodec
Create a stream compressor that will write to the given output stream.
createOutputStream(OutputStream) - Method in class org.apache.hadoop.io.compress.GzipCodec
Create a stream compressor that will write to the given output stream.
createOutputStream(OutputStream) - Method in class org.apache.hadoop.io.compress.LzoCodec
 
createRecord(String) - Method in interface org.apache.hadoop.metrics.MetricsContext
Creates a new MetricsRecord instance with the given recordName.
createRecord(MetricsContext, String) - Static method in class org.apache.hadoop.metrics.MetricsUtil
Utility method to create and return a new metrics record instance within the given context.
createRecord(String) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Creates a new AbstractMetricsRecord instance with the given recordName.
createResetableIterator() - Method in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
The subclass can provide a different implementation of ResetableIterator.
createSocketAddr(String) - Static method in class org.apache.hadoop.dfs.DataNode
Utility method to build a socket address from either host:port or fs://host:port/path.
createSymlink(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
This method allows you to create symlinks in the current working directory of the task to all the cache files/archives
createValue() - Method in class org.apache.hadoop.mapred.LineRecordReader
 
createValue() - Method in interface org.apache.hadoop.mapred.RecordReader
Create an object of the appropriate type to be used as the value.
createValue() - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
 
createValue() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
 
createValue() - Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
 
createValueAggregatorJob(String[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
Create an Abacus based map/reduce job.
createValueAggregatorJobs(String[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
 
createValueBytes() - Method in class org.apache.hadoop.io.SequenceFile.Reader
 
createWriter(FileSystem, Configuration, Path, Class, Class) - Static method in class org.apache.hadoop.io.SequenceFile
Construct the preferred type of SequenceFile Writer.
createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType) - Static method in class org.apache.hadoop.io.SequenceFile
Construct the preferred type of SequenceFile Writer.
createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType, Progressable) - Static method in class org.apache.hadoop.io.SequenceFile
Construct the preferred type of SequenceFile Writer.
createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType, CompressionCodec) - Static method in class org.apache.hadoop.io.SequenceFile
Construct the preferred type of SequenceFile Writer.
createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType, CompressionCodec, Progressable, SequenceFile.Metadata) - Static method in class org.apache.hadoop.io.SequenceFile
Construct the preferred type of SequenceFile Writer.
createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType, CompressionCodec, Progressable) - Static method in class org.apache.hadoop.io.SequenceFile
Construct the preferred type of SequenceFile Writer.
createWriter(Configuration, FSDataOutputStream, Class, Class, SequenceFile.CompressionType, CompressionCodec, SequenceFile.Metadata) - Static method in class org.apache.hadoop.io.SequenceFile
Construct the preferred type of 'raw' SequenceFile Writer.
createWriter(Configuration, FSDataOutputStream, Class, Class, SequenceFile.CompressionType, CompressionCodec) - Static method in class org.apache.hadoop.io.SequenceFile
Construct the preferred type of 'raw' SequenceFile Writer.
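A sketch of writing a small SequenceFile with the simplest createWriter overload; FileSystem.get and the path are assumptions used only for illustration:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class SeqWriteDemo {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);              // FileSystem.get is assumed here
        Path file = new Path("/user/demo/data.seq");       // path is illustrative

        SequenceFile.Writer writer =
            SequenceFile.createWriter(fs, conf, file, IntWritable.class, Text.class);
        writer.append(new IntWritable(1), new Text("one"));
        writer.append(new IntWritable(2), new Text("two"));
        writer.close();
      }
    }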
CSTRING_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
CsvRecordInput - Class in org.apache.hadoop.record
 
CsvRecordInput(InputStream) - Constructor for class org.apache.hadoop.record.CsvRecordInput
Creates a new instance of CsvRecordInput
CsvRecordOutput - Class in org.apache.hadoop.record
 
CsvRecordOutput(OutputStream) - Constructor for class org.apache.hadoop.record.CsvRecordOutput
Creates a new instance of CsvRecordOutput
CUR_DIR - Static variable in class org.apache.hadoop.fs.Path
 
curChar - Variable in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
currentToken - Variable in exception org.apache.hadoop.record.compiler.generated.ParseException
This is the last token that has been consumed successfully.

D

Daemon - Class in org.apache.hadoop.util
A thread that has called Thread.setDaemon(boolean) with true.
Daemon() - Constructor for class org.apache.hadoop.util.Daemon
Construct a daemon thread.
Daemon(Runnable) - Constructor for class org.apache.hadoop.util.Daemon
Construct a daemon thread.
DATA_FILE_NAME - Static variable in class org.apache.hadoop.io.MapFile
The name of the data file.
DataInputBuffer - Class in org.apache.hadoop.io
A reusable DataInput implementation that reads from an in-memory buffer.
DataInputBuffer() - Constructor for class org.apache.hadoop.io.DataInputBuffer
Constructs a new empty buffer.
DataJoinJob - Class in org.apache.hadoop.contrib.utils.join
This class implements the main function for creating a map/reduce job to join data of different sources.
DataJoinJob() - Constructor for class org.apache.hadoop.contrib.utils.join.DataJoinJob
 
DataJoinMapperBase - Class in org.apache.hadoop.contrib.utils.join
This abstract class serves as the base class for the mapper class of a data join job.
DataJoinMapperBase() - Constructor for class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
 
DataJoinReducerBase - Class in org.apache.hadoop.contrib.utils.join
This abstract class serves as the base class for the reducer class of a data join job.
DataJoinReducerBase() - Constructor for class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
DataNode - Class in org.apache.hadoop.dfs
DataNode is a class (and program) that stores a set of blocks for a DFS deployment.
DatanodeDescriptor - Class in org.apache.hadoop.dfs
DatanodeDescriptor tracks stats on a given DataNode, such as available storage capacity, last update time, etc., and maintains a set of blocks stored on the datanode.
DatanodeDescriptor() - Constructor for class org.apache.hadoop.dfs.DatanodeDescriptor
Default constructor
DatanodeDescriptor(DatanodeID) - Constructor for class org.apache.hadoop.dfs.DatanodeDescriptor
DatanodeDescriptor constructor
DatanodeDescriptor(DatanodeID, String) - Constructor for class org.apache.hadoop.dfs.DatanodeDescriptor
DatanodeDescriptor constructor
DatanodeDescriptor(DatanodeID, String, String) - Constructor for class org.apache.hadoop.dfs.DatanodeDescriptor
DatanodeDescriptor constructor
DatanodeDescriptor(DatanodeID, long, long, int) - Constructor for class org.apache.hadoop.dfs.DatanodeDescriptor
DatanodeDescriptor constructor
DatanodeDescriptor(DatanodeID, String, String, long, long, int) - Constructor for class org.apache.hadoop.dfs.DatanodeDescriptor
DatanodeDescriptor constructor
DatanodeID - Class in org.apache.hadoop.dfs
DatanodeID is composed of the data node name (hostname:portNumber) and the data storage ID, which it currently represents.
DatanodeID() - Constructor for class org.apache.hadoop.dfs.DatanodeID
DatanodeID default constructor
DatanodeID(DatanodeID) - Constructor for class org.apache.hadoop.dfs.DatanodeID
DatanodeID copy constructor
DatanodeID(String, String, int) - Constructor for class org.apache.hadoop.dfs.DatanodeID
Create DatanodeID
DatanodeInfo - Class in org.apache.hadoop.dfs
DatanodeInfo represents the status of a DataNode.
DatanodeInfo.AdminStates - Enum in org.apache.hadoop.dfs
 
DataOutputBuffer - Class in org.apache.hadoop.io
A reusable DataOutput implementation that writes to an in-memory buffer.
DataOutputBuffer() - Constructor for class org.apache.hadoop.io.DataOutputBuffer
Constructs a new empty buffer.
debug_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
debugStream - Variable in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
decode(byte[]) - Static method in class org.apache.hadoop.io.Text
Converts the provided byte array to a String using the UTF-8 encoding.
decode(byte[], int, int) - Static method in class org.apache.hadoop.io.Text
 
decode(byte[], int, int, boolean) - Static method in class org.apache.hadoop.io.Text
Converts the provided byte array to a String using the UTF-8 encoding.
decompress(byte[], int, int) - Method in interface org.apache.hadoop.io.compress.Decompressor
Fills specified buffer with uncompressed data.
decompress(byte[], int, int) - Method in class org.apache.hadoop.io.compress.lzo.LzoDecompressor
 
decompress(byte[], int, int) - Method in class org.apache.hadoop.io.compress.zlib.BuiltInZlibInflater
 
decompress(byte[], int, int) - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
Decompressor - Interface in org.apache.hadoop.io.compress
Specification of a stream-based 'de-compressor' which can be plugged into a CompressionInputStream to uncompress data.
DEFAULT - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
DEFAULT_PERIOD - Static variable in interface org.apache.hadoop.metrics.MetricsContext
Default period in seconds at which data is sent to the metrics system.
DEFAULT_RACK - Static variable in class org.apache.hadoop.net.NetworkTopology
 
DefaultCodec - Class in org.apache.hadoop.io.compress
 
DefaultCodec() - Constructor for class org.apache.hadoop.io.compress.DefaultCodec
 
DefaultJobHistoryParser - Class in org.apache.hadoop.mapred
Default parser for job history files.
DefaultJobHistoryParser() - Constructor for class org.apache.hadoop.mapred.DefaultJobHistoryParser
 
define(Class, WritableComparator) - Static method in class org.apache.hadoop.io.WritableComparator
Register an optimized comparator for a WritableComparable implementation.
define(Class, RecordComparator) - Static method in class org.apache.hadoop.record.RecordComparator
Register an optimized comparator for a Record implementation.
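A sketch of registering a custom comparator via define; the protected WritableComparator(Class) constructor used below is an assumption not listed in this section:

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.WritableComparable;
    import org.apache.hadoop.io.WritableComparator;

    // Registers a comparator that sorts IntWritable keys in descending order.
    public class DescendingIntComparator extends WritableComparator {
      public DescendingIntComparator() {
        super(IntWritable.class);   // assumed protected constructor taking the key class
      }

      public int compare(WritableComparable a, WritableComparable b) {
        return super.compare(b, a);   // reverse the arguments to invert the natural ordering
      }

      static {
        // Subsequent lookups for IntWritable are expected to resolve to this comparator.
        WritableComparator.define(IntWritable.class, new DescendingIntComparator());
      }
    }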
delete(String) - Method in class org.apache.hadoop.dfs.NameNode
 
delete(Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Get rid of Path f, whether a true file or dir.
delete(Path) - Method in class org.apache.hadoop.fs.FileSystem
Delete a file
delete(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Delete a file
delete(String, boolean) - Method in class org.apache.hadoop.fs.FsShell
Delete all files that match the file pattern srcf.
delete(Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
delete(Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
delete(FileSystem, String) - Static method in class org.apache.hadoop.io.MapFile
Deletes the named map file.
delete(Path) - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated.  
deleteBlock(Block) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
deleteINode(Path) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
deleteLocalFiles() - Method in class org.apache.hadoop.mapred.JobConf
 
deleteLocalFiles(String) - Method in class org.apache.hadoop.mapred.JobConf
 
DEPENDENT_FAILED - Static variable in class org.apache.hadoop.mapred.jobcontrol.Job
 
depth() - Method in class org.apache.hadoop.fs.Path
Return the number of elements in this path.
deserialize(InputStream) - Static method in class org.apache.hadoop.fs.s3.INode
 
deserialize(RecordInput, String) - Method in class org.apache.hadoop.record.Record
Deserialize a record with a tag (usually field name)
deserialize(RecordInput) - Method in class org.apache.hadoop.record.Record
Deserialize a record without a tag
detailedUsage_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
DF - Class in org.apache.hadoop.fs
Filesystem disk space usage statistics.
DF(File, Configuration) - Constructor for class org.apache.hadoop.fs.DF
 
DF(File, long) - Constructor for class org.apache.hadoop.fs.DF
 
DF_INTERVAL_DEFAULT - Static variable in class org.apache.hadoop.fs.DF
 
dfmt(double) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
DFSAdmin - Class in org.apache.hadoop.dfs
This class provides some DFS administrative access.
DFSAdmin() - Constructor for class org.apache.hadoop.dfs.DFSAdmin
Construct a DFSAdmin object.
DFSck - Class in org.apache.hadoop.dfs
This class provides rudimentary checking of DFS volumes for errors and sub-optimal conditions.
DFSck(Configuration) - Constructor for class org.apache.hadoop.dfs.DFSck
Filesystem checker.
DFSNodesStatus(ArrayList<DatanodeDescriptor>, ArrayList<DatanodeDescriptor>) - Method in class org.apache.hadoop.dfs.JspHelper
 
digest(byte[]) - Static method in class org.apache.hadoop.io.MD5Hash
Construct a hash value for a byte array.
digest(byte[], int, int) - Static method in class org.apache.hadoop.io.MD5Hash
Construct a hash value for a byte array.
digest(String) - Static method in class org.apache.hadoop.io.MD5Hash
Construct a hash value for a String.
digest(UTF8) - Static method in class org.apache.hadoop.io.MD5Hash
Construct a hash value for a String.
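
A small, self-contained illustration of building digests with MD5Hash; the inputs are arbitrary:

    import org.apache.hadoop.io.MD5Hash;

    public class Md5Example {
      public static void main(String[] args) {
        MD5Hash keyHash  = MD5Hash.digest("some record key");    // hash of a String
        MD5Hash dataHash = MD5Hash.digest(new byte[] {1, 2, 3}); // hash of raw bytes
        System.out.println(keyHash);                  // digest printed in hex form
        System.out.println(keyHash.equals(dataHash)); // false: different inputs
      }
    }
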
DIRECTORY_INODE - Static variable in class org.apache.hadoop.fs.s3.INode
 
disable_tracing() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
DiskChecker - Class in org.apache.hadoop.util
Class that provides utility functions for checking for disk problems.
DiskChecker() - Constructor for class org.apache.hadoop.util.DiskChecker
 
DiskChecker.DiskErrorException - Exception in org.apache.hadoop.util
 
DiskChecker.DiskErrorException(String) - Constructor for exception org.apache.hadoop.util.DiskChecker.DiskErrorException
 
DiskChecker.DiskOutOfSpaceException - Exception in org.apache.hadoop.util
 
DiskChecker.DiskOutOfSpaceException(String) - Constructor for exception org.apache.hadoop.util.DiskChecker.DiskOutOfSpaceException
 
displayByteArray(byte[]) - Static method in class org.apache.hadoop.io.WritableUtils
 
DistributedCache - Class in org.apache.hadoop.filecache
The DistributedCache maintains all the caching information for cached archives and files, unarchives them, and returns the local path.
DistributedCache() - Constructor for class org.apache.hadoop.filecache.DistributedCache
 
DistributedFileSystem - Class in org.apache.hadoop.dfs
Implementation of the abstract FileSystem for the DFS system.
DistributedFileSystem() - Constructor for class org.apache.hadoop.dfs.DistributedFileSystem
 
DistributedFileSystem(InetSocketAddress, Configuration) - Constructor for class org.apache.hadoop.dfs.DistributedFileSystem
Deprecated.  
DNS - Class in org.apache.hadoop.net
A class that provides direct and reverse lookup functionalities, allowing the querying of specific network interfaces or nameservers.
DNS() - Constructor for class org.apache.hadoop.net.DNS
 
doAnalyze(String, String, String, String, String) - Method in class org.apache.hadoop.tools.Logalyzer
doAnalyze:
doArchive(String, String) - Method in class org.apache.hadoop.tools.Logalyzer
doArchive: Workhorse function to archive log-files.
doGet(HttpServletRequest, HttpServletResponse) - Method in class org.apache.hadoop.dfs.FsckServlet
 
doGet(HttpServletRequest, HttpServletResponse) - Method in class org.apache.hadoop.dfs.GetImageServlet
 
doGet(HttpServletRequest, HttpServletResponse) - Method in class org.apache.hadoop.dfs.SecondaryNameNode.GetImageServlet
 
doGet(HttpServletRequest, HttpServletResponse) - Method in class org.apache.hadoop.dfs.StreamFile
 
doGet(HttpServletRequest, HttpServletResponse) - Method in class org.apache.hadoop.mapred.StatusHttpServer.StackServlet
 
doGet(HttpServletRequest, HttpServletResponse) - Method in class org.apache.hadoop.mapred.TaskTracker.MapOutputServlet
 
doMain(Configuration, String[]) - Method in class org.apache.hadoop.util.ToolBase
Work as a main program: execute a command and handle exceptions, if any.
done(String) - Method in class org.apache.hadoop.mapred.TaskTracker
The task is done.
Done() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
done() - Method in interface org.apache.hadoop.record.Index
 
doSync() - Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
Do the sync checks
DOT_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
DOUBLE_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
DOUBLE_VALUE_SUM - Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
DoubleValueSum - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements a value aggregator that sums up a sequence of double values.
DoubleValueSum() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
The default constructor
doUpdates(MetricsContext) - Method in interface org.apache.hadoop.metrics.Updater
Timer-based callback from the metrics library.
driver(String[]) - Static method in class org.apache.hadoop.record.compiler.generated.Rcc
 
driver(String[]) - Method in class org.apache.hadoop.util.ProgramDriver
This is a driver for the example programs.
du(String) - Method in class org.apache.hadoop.fs.FsShell
Show the size of all files that match the file pattern src
dump() - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
Diagnostic method to dump all INodes to the console.
dus(String) - Method in class org.apache.hadoop.fs.FsShell
Show the summary disk usage of each dir/file that matches the file pattern src

E

emitRecord(String, String, OutputRecord) - Method in class org.apache.hadoop.metrics.file.FileContext
Emits a metrics record to a file.
emitRecord(String, String, OutputRecord) - Method in class org.apache.hadoop.metrics.ganglia.GangliaContext
 
emitRecord(String, String, OutputRecord) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Sends a record to the metrics system.
emitRecord(String, String, OutputRecord) - Method in class org.apache.hadoop.metrics.spi.NullContext
Do-nothing version of emitRecord
EMPTY_ARRAY - Static variable in class org.apache.hadoop.mapred.TaskCompletionEvent
 
enable_tracing() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
encode(String) - Static method in class org.apache.hadoop.io.Text
Converts the provided String to bytes using the UTF-8 encoding.
encode(String, boolean) - Static method in class org.apache.hadoop.io.Text
Converts the provided String to bytes using the UTF-8 encoding.
end() - Method in interface org.apache.hadoop.io.compress.Compressor
Closes the compressor and discards any unprocessed input.
end() - Method in interface org.apache.hadoop.io.compress.Decompressor
Closes the decompressor and discards any unprocessed input.
end() - Method in class org.apache.hadoop.io.compress.lzo.LzoCompressor
 
end() - Method in class org.apache.hadoop.io.compress.lzo.LzoDecompressor
 
end() - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
 
end() - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
endColumn - Variable in class org.apache.hadoop.record.compiler.generated.Token
beginLine and beginColumn describe the position of the first character of this token; endLine and endColumn describe the position of the last character of this token.
endLine - Variable in class org.apache.hadoop.record.compiler.generated.Token
beginLine and beginColumn describe the position of the first character of this token; endLine and endColumn describe the position of the last character of this token.
endMap(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
endMap(TreeMap, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
endMap(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
endMap(TreeMap, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
endMap(String) - Method in interface org.apache.hadoop.record.RecordInput
Check the mark for end of the serialized map.
endMap(TreeMap, String) - Method in interface org.apache.hadoop.record.RecordOutput
Mark the end of a serialized map.
endMap(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
endMap(TreeMap, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
endRecord(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
endRecord(Record, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
endRecord(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
endRecord(Record, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
endRecord(String) - Method in interface org.apache.hadoop.record.RecordInput
Check the mark for end of the serialized record.
endRecord(Record, String) - Method in interface org.apache.hadoop.record.RecordOutput
Mark the end of a serialized record.
endRecord(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
endRecord(Record, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
endVector(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
endVector(ArrayList, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
endVector(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
endVector(ArrayList, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
endVector(String) - Method in interface org.apache.hadoop.record.RecordInput
Check the mark for end of the serialized vector.
endVector(ArrayList, String) - Method in interface org.apache.hadoop.record.RecordOutput
Mark the end of a serialized vector.
endVector(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
endVector(ArrayList, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
ensureInflated() - Method in class org.apache.hadoop.io.CompressedWritable
Must be called by all methods which access fields to ensure that the data has been uncompressed.
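
A hedged sketch of the contract described above: a CompressedWritable subclass whose accessor calls ensureInflated() before touching lazily-decompressed fields. The class and field names are made up, and the readFieldsCompressed/writeCompressed overrides assume those are the class's abstract hooks:

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import org.apache.hadoop.io.CompressedWritable;

    public class CompressedRecord extends CompressedWritable {
      private String payload = "";

      protected void readFieldsCompressed(DataInput in) throws IOException {
        payload = in.readUTF();          // runs only when the data is inflated
      }

      protected void writeCompressed(DataOutput out) throws IOException {
        out.writeUTF(payload);
      }

      public String getPayload() {
        ensureInflated();                // decompress on first field access
        return payload;
      }
    }
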
entries() - Method in class org.apache.hadoop.conf.Configuration
 
env_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
Environment - Class in org.apache.hadoop.streaming
This is a class used to get the current environment on the host machines running the map/reduce.
Environment() - Constructor for class org.apache.hadoop.streaming.Environment
 
EOF - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
eol - Variable in exception org.apache.hadoop.record.compiler.generated.ParseException
The end of line string for this machine.
equals(Object) - Method in class org.apache.hadoop.dfs.DatanodeID
 
equals(Object) - Method in class org.apache.hadoop.fs.Path
 
equals(Object) - Method in class org.apache.hadoop.io.BooleanWritable
 
equals(Object) - Method in class org.apache.hadoop.io.BytesWritable
Are the two byte sequences equal?
equals(Object) - Method in class org.apache.hadoop.io.FloatWritable
Returns true iff o is a FloatWritable with the same value.
equals(Object) - Method in class org.apache.hadoop.io.IntWritable
Returns true iff o is an IntWritable with the same value.
equals(Object) - Method in class org.apache.hadoop.io.LongWritable
Returns true iff o is a LongWritable with the same value.
equals(Object) - Method in class org.apache.hadoop.io.MD5Hash
Returns true iff o is an MD5Hash whose digest contains the same values.
equals(SequenceFile.Metadata) - Method in class org.apache.hadoop.io.SequenceFile.Metadata
 
equals(Object) - Method in class org.apache.hadoop.io.Text
Returns true iff o is a Text with the same contents.
equals(Object) - Method in class org.apache.hadoop.io.UTF8
Deprecated. Returns true iff o is a UTF8 with the same contents.
equals(Object) - Method in class org.apache.hadoop.io.VIntWritable
Returns true iff o is a VIntWritable with the same value.
equals(Object) - Method in class org.apache.hadoop.io.VLongWritable
Returns true iff o is a VLongWritable with the same value.
equals(Object) - Method in class org.apache.hadoop.record.Buffer
 
errorReport(DatanodeRegistration, int, String) - Method in class org.apache.hadoop.dfs.NameNode
 
ExampleDriver - Class in org.apache.hadoop.examples
A description of an example program based on its class and a human-readable description.
ExampleDriver() - Constructor for class org.apache.hadoop.examples.ExampleDriver
 
execute() - Method in class org.apache.hadoop.record.compiler.ant.RccTask
Invoke the Hadoop record compiler on each record definition file
exists(String) - Method in class org.apache.hadoop.dfs.NameNode
 
exists(Path) - Method in class org.apache.hadoop.fs.FileSystem
Check if exists.
exists(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Check if exists.
exists(Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
exists(Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
exitUsage(boolean) - Method in class org.apache.hadoop.streaming.StreamJob
 
ExpandBuff(boolean) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
expectedTokenSequences - Variable in exception org.apache.hadoop.record.compiler.generated.ParseException
Each entry in this array is an array of integers.
exponentialBackoffRetry(int, long, TimeUnit) - Static method in class org.apache.hadoop.io.retry.RetryPolicies
Keep trying a limited number of times, waiting a growing amount of time between attempts, and then fail by re-throwing the exception.
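
A hedged sketch of pairing this policy with a retry proxy (here assumed to be org.apache.hadoop.io.retry.RetryProxy); the FlakyService interface and its use are hypothetical:

    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.io.retry.RetryPolicies;
    import org.apache.hadoop.io.retry.RetryPolicy;
    import org.apache.hadoop.io.retry.RetryProxy;

    public class RetryExample {
      /** Hypothetical operation that may fail transiently. */
      public interface FlakyService {
        String fetch(String key) throws Exception;
      }

      public static FlakyService withRetries(FlakyService raw) {
        // up to 5 attempts, with an exponentially growing wait (base 1 second)
        RetryPolicy policy =
            RetryPolicies.exponentialBackoffRetry(5, 1, TimeUnit.SECONDS);
        return (FlakyService) RetryProxy.create(FlakyService.class, raw, policy);
      }
    }
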
expunge() - Method in class org.apache.hadoop.fs.Trash
Delete old checkpoints.

F

fail(String) - Method in class org.apache.hadoop.streaming.StreamJob
 
FAILED - Static variable in class org.apache.hadoop.mapred.jobcontrol.Job
 
FAILED - Static variable in class org.apache.hadoop.mapred.JobStatus
 
failedJobs() - Method in class org.apache.hadoop.mapred.JobTracker
 
Field() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
FieldSelectionMapReduce - Class in org.apache.hadoop.mapred.lib
This class implements a mapper/reducer class that can be used to perform field selections in a manner similar to unix cut.
FieldSelectionMapReduce() - Constructor for class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
 
FILE_NAME_PROPERTY - Static variable in class org.apache.hadoop.metrics.file.FileContext
 
FILE_TYPES - Static variable in class org.apache.hadoop.fs.s3.INode
 
FileAlreadyExistsException - Exception in org.apache.hadoop.mapred
Used when target file already exists for any operation and is not configured to be overwritten.
FileAlreadyExistsException() - Constructor for exception org.apache.hadoop.mapred.FileAlreadyExistsException
 
FileAlreadyExistsException(String) - Constructor for exception org.apache.hadoop.mapred.FileAlreadyExistsException
 
FileContext - Class in org.apache.hadoop.metrics.file
Metrics context for writing metrics to a file.

This class is configured by setting ContextFactory attributes which in turn are usually configured through a properties file.

FileContext() - Constructor for class org.apache.hadoop.metrics.file.FileContext
Creates a new instance of FileContext
fileExtension(String) - Method in class org.apache.hadoop.streaming.JarBuilder
 
FileInputFormat - Class in org.apache.hadoop.mapred
A base class for InputFormat.
FileInputFormat() - Constructor for class org.apache.hadoop.mapred.FileInputFormat
 
FileSplit - Class in org.apache.hadoop.mapred
A section of an input file.
FileSplit(Path, long, long, JobConf) - Constructor for class org.apache.hadoop.mapred.FileSplit
Constructs a split.
FileSystem - Class in org.apache.hadoop.fs
An abstract base class for a fairly generic filesystem.
FileSystem() - Constructor for class org.apache.hadoop.fs.FileSystem
 
FileSystemStore - Interface in org.apache.hadoop.fs.s3
A facility for storing and retrieving INodes and Blocks.
fileURIs - Variable in class org.apache.hadoop.streaming.StreamJob
 
FileUtil - Class in org.apache.hadoop.fs
A collection of file-processing utility methods.
FileUtil() - Constructor for class org.apache.hadoop.fs.FileUtil
 
FileUtil.HardLink - Class in org.apache.hadoop.fs
Class for creating hardlinks.
FileUtil.HardLink() - Constructor for class org.apache.hadoop.fs.FileUtil.HardLink
 
FillBuff() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
FilterFileSystem - Class in org.apache.hadoop.fs
A FilterFileSystem contains some other file system, which it uses as its basic file system, possibly transforming the data along the way or providing additional functionality.
FilterFileSystem(FileSystem) - Constructor for class org.apache.hadoop.fs.FilterFileSystem
 
finalize() - Method in class org.apache.hadoop.io.compress.lzo.LzoDecompressor
 
finalize() - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
finalizeUpgrade() - Method in class org.apache.hadoop.dfs.DFSAdmin
Command to ask the namenode to finalize previously performed upgrade.
finalizeUpgrade() - Method in class org.apache.hadoop.dfs.DistributedFileSystem
Finalize previously upgraded files system state.
finalizeUpgrade() - Method in class org.apache.hadoop.dfs.NameNode
 
finalKey(WritableComparable) - Method in class org.apache.hadoop.io.MapFile.Reader
Reads the final key from the file.
find(String) - Method in class org.apache.hadoop.io.Text
 
find(String, int) - Method in class org.apache.hadoop.io.Text
Finds any occurrence of what in the backing buffer, starting at position start.
findByte(byte[], int, int, byte) - Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
Find the first occurrence of the given byte b in a UTF-8 encoded string
findInClasspath(String) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
findInClasspath(String, ClassLoader) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
findNthByte(byte[], byte, int) - Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
Find the nth occurrence of the given byte b in a UTF-8 encoded string
findSeparator(byte[], int, int, byte) - Static method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
 
findTab(byte[], int, int) - Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
Find the first occurrence of a tab in a UTF-8 encoded string
findTab(byte[]) - Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
Find the first occurrence of a tab in a UTF-8 encoded string
finish() - Method in class org.apache.hadoop.io.compress.CompressionOutputStream
Finishes writing compressed data to the output stream without closing the underlying stream.
finish() - Method in interface org.apache.hadoop.io.compress.Compressor
When called, indicates that compression should end with the current contents of the input buffer.
finish() - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
 
finish() - Method in class org.apache.hadoop.io.compress.lzo.LzoCompressor
 
finish() - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
 
finished() - Method in interface org.apache.hadoop.io.compress.Compressor
Returns true if the end of the compressed data output stream has been reached.
finished() - Method in interface org.apache.hadoop.io.compress.Decompressor
Returns true if the end of the compressed data output stream has been reached.
finished() - Method in class org.apache.hadoop.io.compress.lzo.LzoCompressor
 
finished() - Method in class org.apache.hadoop.io.compress.lzo.LzoDecompressor
 
finished() - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
 
finished() - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
fix(FileSystem, Path, Class, Class, boolean, Configuration) - Static method in class org.apache.hadoop.io.MapFile
This method attempts to fix a corrupt MapFile by re-creating its index.
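
An illustrative invocation of MapFile.fix over a damaged MapFile directory; the path and key/value classes are assumptions for the example:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.MapFile;
    import org.apache.hadoop.io.Text;

    public class FixMapFile {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path dir = new Path("/data/example.map");   // hypothetical MapFile directory
        // dryrun=false rewrites the index; the return value counts valid entries
        long entries = MapFile.fix(fs, dir, Text.class, LongWritable.class, false, conf);
        System.out.println("index rebuilt over " + entries + " entries");
      }
    }
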
FIXING_DELETE - Static variable in class org.apache.hadoop.dfs.NamenodeFsck
Delete corrupted files.
FIXING_MOVE - Static variable in class org.apache.hadoop.dfs.NamenodeFsck
Move corrupted files to /lost+found .
FIXING_NONE - Static variable in class org.apache.hadoop.dfs.NamenodeFsck
Don't attempt any fixing .
FLOAT_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
FloatWritable - Class in org.apache.hadoop.io
A WritableComparable for floats.
FloatWritable() - Constructor for class org.apache.hadoop.io.FloatWritable
 
FloatWritable(float) - Constructor for class org.apache.hadoop.io.FloatWritable
 
FloatWritable.Comparator - Class in org.apache.hadoop.io
A Comparator optimized for FloatWritable.
FloatWritable.Comparator() - Constructor for class org.apache.hadoop.io.FloatWritable.Comparator
 
flush() - Method in class org.apache.hadoop.io.compress.CompressionOutputStream
 
flush() - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
 
flush() - Method in class org.apache.hadoop.metrics.file.FileContext
Flushes the output writer, forcing updates to disk.
flush() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Called each period after all records have been emitted, this method does nothing.
format(Configuration) - Static method in class org.apache.hadoop.dfs.NameNode
Format a new filesystem.
formatBytes(long) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
formatBytes2(long) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
formatPercent(double, int) - Static method in class org.apache.hadoop.util.StringUtils
Format a percentage for presentation to the user.
formatTimeDiff(long, long) - Static method in class org.apache.hadoop.util.StringUtils
Given a finish and start time in long milliseconds, returns a String in the format Xhrs, Ymins, Z sec, for the time difference between two times.
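
A short illustration of these StringUtils formatting helpers; the timestamps and values are arbitrary:

    import org.apache.hadoop.util.StringUtils;

    public class FormatExample {
      public static void main(String[] args) {
        long start  = System.currentTimeMillis();
        long finish = start + (2 * 3600 + 15 * 60 + 42) * 1000L; // 2h 15m 42s later
        System.out.println(StringUtils.formatTimeDiff(finish, start));
        System.out.println(StringUtils.formatPercent(0.4237, 1)); // roughly "42.4%"
      }
    }
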
fs - Variable in class org.apache.hadoop.fs.FilterFileSystem
 
fs - Variable in class org.apache.hadoop.fs.FsShell
 
fsck() - Method in class org.apache.hadoop.dfs.NamenodeFsck
Check files on DFS, starting from the indicated path.
FsckServlet - Class in org.apache.hadoop.dfs
This class is used in Namesystem's jetty to do fsck on namenode.
FsckServlet() - Constructor for class org.apache.hadoop.dfs.FsckServlet
 
FSConstants - Interface in org.apache.hadoop.dfs
Some handy constants
FSConstants.NodeType - Enum in org.apache.hadoop.dfs
Type of the node
FSConstants.SafeModeAction - Enum in org.apache.hadoop.dfs
 
FSConstants.StartupOption - Enum in org.apache.hadoop.dfs
 
FSDataInputStream - Class in org.apache.hadoop.fs
Utility that wraps a FSInputStream in a DataInputStream and buffers input through a BufferedInputStream.
FSDataInputStream(FSInputStream, Configuration) - Constructor for class org.apache.hadoop.fs.FSDataInputStream
 
FSDataInputStream(FSInputStream, int) - Constructor for class org.apache.hadoop.fs.FSDataInputStream
 
FSDataOutputStream - Class in org.apache.hadoop.fs
Utility that wraps a OutputStream in a DataOutputStream, buffers output through a BufferedOutputStream and creates a checksum file.
FSDataOutputStream(OutputStream, int) - Constructor for class org.apache.hadoop.fs.FSDataOutputStream
 
FSDataOutputStream(OutputStream, Configuration) - Constructor for class org.apache.hadoop.fs.FSDataOutputStream
 
FSError - Error in org.apache.hadoop.fs
Thrown for unexpected filesystem errors, presumed to reflect disk errors in the native filesystem.
fsError(String, String) - Method in class org.apache.hadoop.mapred.TaskTracker
A child task had a local filesystem error.
FSInputStream - Class in org.apache.hadoop.fs
FSInputStream is a generic old InputStream with a little bit of RAF-style seek ability.
FSInputStream() - Constructor for class org.apache.hadoop.fs.FSInputStream
 
FsShell - Class in org.apache.hadoop.fs
Provide command line access to a FileSystem.
FsShell() - Constructor for class org.apache.hadoop.fs.FsShell
 
fullyDelete(File) - Static method in class org.apache.hadoop.fs.FileUtil
Delete a directory and all its contents.
fullyDelete(FileSystem, Path) - Static method in class org.apache.hadoop.fs.FileUtil
Recursively delete a directory.

G

GangliaContext - Class in org.apache.hadoop.metrics.ganglia
Context for sending metrics to Ganglia.
GangliaContext() - Constructor for class org.apache.hadoop.metrics.ganglia.GangliaContext
Creates a new instance of GangliaContext
genCode(String, String, ArrayList<String>) - Method in class org.apache.hadoop.record.compiler.JFile
Generate record code in given language.
generateEntry(String, String, Object) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
generateGroupKey(TaggedMapOutput) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
Generate a map output key.
generateInputTag(String) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
Determine the source tag based on the input file name.
generateKeyValPairs(Object, Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.UserDefinedValueAggregatorDescriptor
Generate a list of aggregation-id/value pairs for the given key/value pairs by delegating the invocation to the real object.
generateKeyValPairs(Object, Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
Generate 1 or 2 aggregation-id/value pairs for the given key/value pair.
generateKeyValPairs(Object, Object) - Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorDescriptor
Generate a list of aggregation-id/value pairs for the given key/value pair.
generateParseException() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
generateTaggedMapOutput(Writable) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
Generate a tagged map output value.
generateValueAggregator(String) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
GenericWritable - Class in org.apache.hadoop.io
A wrapper for Writable instances.
GenericWritable() - Constructor for class org.apache.hadoop.io.GenericWritable
 
get(String, Object) - Method in class org.apache.hadoop.conf.Configuration
Returns the value of the name property.
get(String) - Method in class org.apache.hadoop.conf.Configuration
Returns the value of the name property, or null if no such property exists.
get(String, String) - Method in class org.apache.hadoop.conf.Configuration
Returns the value of the name property.
get(Configuration) - Static method in class org.apache.hadoop.fs.FileSystem
Returns the configured filesystem implementation.
get(URI, Configuration) - Static method in class org.apache.hadoop.fs.FileSystem
Returns the FileSystem for this URI's scheme and authority.
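
A minimal sketch of obtaining the configured FileSystem and probing a path; the path is hypothetical and the filesystem chosen depends on the loaded configuration:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class FsLookupExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();   // picks up the site configuration
        FileSystem fs = FileSystem.get(conf);       // filesystem named by the config
        Path p = new Path("/user/example/input");   // hypothetical path
        System.out.println(p + " exists: " + fs.exists(p));
      }
    }
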
get(long, Writable) - Method in class org.apache.hadoop.io.ArrayFile.Reader
Return the nth value in the file.
get() - Method in class org.apache.hadoop.io.ArrayWritable
 
get() - Method in class org.apache.hadoop.io.BooleanWritable
Returns the value of the BooleanWritable
get() - Method in class org.apache.hadoop.io.BytesWritable
Get the data from the BytesWritable.
get() - Method in class org.apache.hadoop.io.FloatWritable
Return the value of this FloatWritable.
get() - Method in class org.apache.hadoop.io.GenericWritable
Return the wrapped instance.
get() - Method in class org.apache.hadoop.io.IntWritable
Return the value of this IntWritable.
get() - Method in class org.apache.hadoop.io.LongWritable
Return the value of this LongWritable.
get(WritableComparable, Writable) - Method in class org.apache.hadoop.io.MapFile.Reader
Return the value for the named key, or null if none exists.
get() - Static method in class org.apache.hadoop.io.NullWritable
Returns the single instance of this class.
get() - Method in class org.apache.hadoop.io.ObjectWritable
Return the instance, or null if none.
get(Text) - Method in class org.apache.hadoop.io.SequenceFile.Metadata
 
get(WritableComparable) - Method in class org.apache.hadoop.io.SetFile.Reader
Read the matching key from a set into key.
get() - Method in class org.apache.hadoop.io.TwoDArrayWritable
 
get() - Method in class org.apache.hadoop.io.VIntWritable
Return the value of this VIntWritable.
get() - Method in class org.apache.hadoop.io.VLongWritable
Return the value of this LongWritable.
get(Class) - Static method in class org.apache.hadoop.io.WritableComparator
Get a comparator for a WritableComparable implementation.
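
A minimal use of the registered comparator returned by WritableComparator.get:

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.WritableComparator;

    public class ComparatorLookup {
      public static void main(String[] args) {
        WritableComparator cmp = WritableComparator.get(IntWritable.class);
        int result = cmp.compare(new IntWritable(3), new IntWritable(7));
        System.out.println(result < 0);   // true: 3 sorts before 7
      }
    }
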
get() - Static method in class org.apache.hadoop.ipc.Server
Returns the server instance the current call is being made under, or null.
get(DataInput) - Static method in class org.apache.hadoop.record.BinaryRecordInput
Get a thread-local record input for the supplied DataInput.
get(DataOutput) - Static method in class org.apache.hadoop.record.BinaryRecordOutput
Get a thread-local record output for the supplied DataOutput.
get() - Method in class org.apache.hadoop.record.Buffer
Get the data from the Buffer.
get() - Method in class org.apache.hadoop.util.Progress
Returns the overall progress of the root.
getAbsolutePath(String) - Method in class org.apache.hadoop.streaming.PathFinder
Returns the full path name of this file if it is listed in the path
getAddress(Configuration) - Static method in class org.apache.hadoop.mapred.JobTracker
 
getAllTasks() - Method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Returns all map and reduce tasks.
getApproxChkSumLength(long) - Static method in class org.apache.hadoop.fs.ChecksumFileSystem
 
getArchiveClassPaths(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Get the archive entries in classpath as an array of Path
getArchiveMd5(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Get the md5 checksums of the archives
getAssignedTracker(String) - Method in class org.apache.hadoop.mapred.JobTracker
Get tracker name for a given task id.
getAttribute(String) - Method in class org.apache.hadoop.mapred.StatusHttpServer
Get the value in the webapp context.
getAttribute(String) - Method in class org.apache.hadoop.metrics.ContextFactory
Returns the value of the named attribute, or null if there is no attribute of that name.
getAttribute(String) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Convenience method for subclasses to access factory attributes.
getAttributeNames() - Method in class org.apache.hadoop.metrics.ContextFactory
Returns the names of all the factory's attributes.
getAttributeTable(String) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Returns an attribute-value map derived from the factory attributes by finding all factory attributes that begin with contextName.tableName.
getAvailable() - Method in class org.apache.hadoop.fs.DF
 
getAvailableSkipRefresh() - Method in class org.apache.hadoop.fs.DF
 
getBasePathInJarOut(String) - Method in class org.apache.hadoop.streaming.JarBuilder
 
getBeginColumn() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
getBeginLine() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
getBlocks() - Method in class org.apache.hadoop.fs.s3.INode
 
getBlockSize(String) - Method in class org.apache.hadoop.dfs.NameNode
 
getBlockSize(Path) - Method in class org.apache.hadoop.fs.FileSystem
Get the block size for a particular file.
getBlockSize(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Get the block size for a particular file.
getBlockSize(Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
getBlockSize(Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
getBoolean(String, boolean) - Method in class org.apache.hadoop.conf.Configuration
Returns the value of the name property as a boolean.
getBoundAntProperty(String, String) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
getBytes() - Method in class org.apache.hadoop.io.Text
Returns the raw bytes.
getBytes() - Method in class org.apache.hadoop.io.UTF8
Deprecated. The raw bytes.
getBytes(String) - Static method in class org.apache.hadoop.io.UTF8
Deprecated. Convert a string to a UTF-8 encoded byte array.
getBytesPerSum() - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Return the bytes per checksum.
getBytesRead() - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
Returns the total number of uncompressed bytes input so far.
getBytesRead() - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
Returns the total number of compressed bytes input so far.
getBytesWritten() - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
Returns the total number of compressed bytes output so far.
getBytesWritten() - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
Returns the total number of uncompressed bytes output so far.
getCacheArchives(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Get cache archives set in the Configuration
getCacheFiles(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Get cache files set in the Configuration
getCapacity() - Method in class org.apache.hadoop.dfs.DatanodeInfo
The raw capacity.
getCapacity() - Method in class org.apache.hadoop.fs.DF
 
getCapacity() - Method in class org.apache.hadoop.io.BytesWritable
Get the capacity, which is the maximum size that could be handled without resizing the backing storage.
getCapacity() - Method in class org.apache.hadoop.record.Buffer
Get the capacity, which is the maximum count that could be handled without resizing the backing storage.
getCapacitySkipRefresh() - Method in class org.apache.hadoop.fs.DF
 
getChecksumFile(Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Return the name of the checksum file associated with a file.
getChecksumFileLength(Path, long) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Return the length of the checksum file given the size of the actual file.
getClass(String, Class<?>) - Method in class org.apache.hadoop.conf.Configuration
Returns the value of the name property as a Class.
getClass(String, Class<? extends U>, Class<U>) - Method in class org.apache.hadoop.conf.Configuration
Returns the value of the name property as a Class.
getClass(String, Configuration) - Static method in class org.apache.hadoop.io.WritableName
Return the class for a name.
getClassByName(String) - Method in class org.apache.hadoop.conf.Configuration
Load a class by name.
getClassByName(String) - Static method in class org.apache.hadoop.contrib.utils.join.DataJoinJob
 
getClassLoader() - Method in class org.apache.hadoop.conf.Configuration
Get the class loader for this job.
getClassName() - Method in exception org.apache.hadoop.ipc.RemoteException
 
getClientVersion() - Method in exception org.apache.hadoop.ipc.RPC.VersionMismatch
Get the client's preferred version
getClosest(WritableComparable, Writable) - Method in class org.apache.hadoop.io.MapFile.Reader
Finds the record that is the closest match to the specified key.
getClusterNick() - Method in class org.apache.hadoop.streaming.StreamJob
 
getClusterStatus() - Method in class org.apache.hadoop.mapred.JobClient
 
getClusterStatus() - Method in interface org.apache.hadoop.mapred.JobSubmissionProtocol
Get the current status of the cluster
getClusterStatus() - Method in class org.apache.hadoop.mapred.JobTracker
 
getCodec(Path) - Method in class org.apache.hadoop.io.compress.CompressionCodecFactory
Find the relevant compression codec for the given file based on its filename suffix.
getCodecClasses(Configuration) - Static method in class org.apache.hadoop.io.compress.CompressionCodecFactory
Get the list of codecs listed in the configuration
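
A hedged sketch of choosing a codec from a file's extension and reading it decompressed; the input path is hypothetical:

    import java.io.BufferedReader;
    import java.io.InputStream;
    import java.io.InputStreamReader;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionCodecFactory;

    public class CodecLookupExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/logs/events.gz");                 // hypothetical input
        CompressionCodec codec = new CompressionCodecFactory(conf).getCodec(file);
        InputStream in = (codec == null)
            ? fs.open(file)                                      // no known extension
            : codec.createInputStream(fs.open(file));            // wrap with decompressor
        BufferedReader reader = new BufferedReader(new InputStreamReader(in));
        System.out.println(reader.readLine());
        reader.close();
      }
    }
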
getColumn() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
Deprecated.  
getCombinerClass() - Method in class org.apache.hadoop.mapred.JobConf
 
getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
 
getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
 
getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
 
getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
 
getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
 
getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
 
getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
 
getCombinerOutput() - Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregator
 
getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
 
getCompressionCodec() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns the compression codec of data in this file.
getCompressionCodec() - Method in class org.apache.hadoop.io.SequenceFile.Writer
Returns the compression codec of data in this file.
getCompressionType(Configuration) - Static method in class org.apache.hadoop.io.SequenceFile
Get the compression type for the reduce outputs
getCompressMapOutput() - Method in class org.apache.hadoop.mapred.JobConf
Should the outputs of the maps be compressed?
getCompressOutput(JobConf) - Static method in class org.apache.hadoop.mapred.OutputFormatBase
Is the reduce output compressed?
getConf() - Method in interface org.apache.hadoop.conf.Configurable
Return the configuration used by this object.
getConf() - Method in class org.apache.hadoop.conf.Configured
 
getConf() - Method in class org.apache.hadoop.fs.FilterFileSystem
 
getConf() - Method in class org.apache.hadoop.io.compress.DefaultCodec
 
getConf() - Method in class org.apache.hadoop.io.compress.LzoCodec
 
getConf() - Method in class org.apache.hadoop.io.ObjectWritable
 
getConf() - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.FilterBase
 
getConf() - Method in class org.apache.hadoop.tools.Logalyzer.LogComparator
 
getConf() - Method in class org.apache.hadoop.util.ToolBase
 
getConfResourceAsInputStream(String) - Method in class org.apache.hadoop.conf.Configuration
Returns an input stream attached to the configuration resource with the given name.
getConfResourceAsReader(String) - Method in class org.apache.hadoop.conf.Configuration
Returns a reader attached to the configuration resource with the given name.
getContentLength(Path) - Method in class org.apache.hadoop.dfs.DistributedFileSystem
 
getContentLength(Path) - Method in class org.apache.hadoop.fs.FileSystem
Return the number of bytes of the given path. If f is a file, return the size of the file; if f is a directory, return the size of the directory tree.
getContext(String) - Method in class org.apache.hadoop.metrics.ContextFactory
Returns the named MetricsContext instance, constructing it if necessary using the factory's current configuration attributes.
getContext(String) - Static method in class org.apache.hadoop.metrics.MetricsUtil
Utility method to return the named context.
getContext() - Method in class org.apache.hadoop.streaming.PipeMapRed
 
getContextFactory() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Returns the factory by which this context was created.
getContextName() - Method in interface org.apache.hadoop.metrics.MetricsContext
Returns the context name.
getContextName() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Returns the context name.
getCorruptFiles() - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return the number of corrupted files.
getCount() - Method in class org.apache.hadoop.record.Buffer
Get the current count of the buffer.
getCounter(Enum) - Method in class org.apache.hadoop.mapred.Counters
Returns current value of the specified counter, or 0 if the counter does not exist.
getCounter(String) - Method in class org.apache.hadoop.mapred.Counters.Group
Returns the value of the specified counter, or 0 if the counter does not exist.
getCounterNames() - Method in class org.apache.hadoop.mapred.Counters.Group
Returns the counters for this group, with their names localized.
getCounters() - Method in interface org.apache.hadoop.mapred.RunningJob
Gets the counters for this job.
getCounters() - Method in class org.apache.hadoop.mapred.TaskReport
A table of counters.
getCurrentSplit(JobConf) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
getCurrentValue(Writable) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Get the 'value' corresponding to the last read 'key'.
getCurrentValue(Writable) - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
 
getData() - Method in class org.apache.hadoop.contrib.utils.join.TaggedMapOutput
 
getData() - Method in class org.apache.hadoop.io.DataOutputBuffer
Returns the current contents of the buffer.
getDataNode() - Static method in class org.apache.hadoop.dfs.DataNode
Return the DataNode object
getDatanodeReport() - Method in class org.apache.hadoop.dfs.DatanodeInfo
A formatted string for reporting the status of the DataNode.
getDatanodeReport() - Method in class org.apache.hadoop.dfs.NameNode
 
getDataNodeStats() - Method in class org.apache.hadoop.dfs.DistributedFileSystem
Return statistics for each datanode.
getDate() - Static method in class org.apache.hadoop.util.VersionInfo
The date that Hadoop was compiled.
getDeclaredClass() - Method in class org.apache.hadoop.io.ObjectWritable
Return the class this is meant to be.
getDefaultBlockSize() - Method in class org.apache.hadoop.fs.FileSystem
Return the number of bytes that large input files should optimally be split into to minimize i/o time.
getDefaultBlockSize() - Method in class org.apache.hadoop.fs.FilterFileSystem
Return the number of bytes that large input files should optimally be split into to minimize i/o time.
getDefaultBlockSize() - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
getDefaultExtension() - Method in interface org.apache.hadoop.io.compress.CompressionCodec
Get the default filename extension for this kind of compression.
getDefaultExtension() - Method in class org.apache.hadoop.io.compress.DefaultCodec
Get the default filename extension for this kind of compression.
getDefaultExtension() - Method in class org.apache.hadoop.io.compress.GzipCodec
Get the default filename extension for this kind of compression.
getDefaultExtension() - Method in class org.apache.hadoop.io.compress.LzoCodec
Get the default filename extension for this kind of compression.
getDefaultHost(String, String) - Static method in class org.apache.hadoop.net.DNS
Returns the default (first) host name associated by the provided nameserver with the address bound to the specified network interface
getDefaultHost(String) - Static method in class org.apache.hadoop.net.DNS
Returns the default (first) host name associated by the default nameserver with the address bound to the specified network interface
getDefaultIP(String) - Static method in class org.apache.hadoop.net.DNS
Returns the first available IP address associated with the provided network interface
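
A brief illustration of these DNS helpers; passing "default" is assumed here to mean the system default interface and nameserver:

    import org.apache.hadoop.net.DNS;

    public class DnsExample {
      public static void main(String[] args) throws Exception {
        String host = DNS.getDefaultHost("default", "default");
        String ip   = DNS.getDefaultIP("default");
        System.out.println(host + " / " + ip);
      }
    }
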
getDefaultReplication() - Method in class org.apache.hadoop.fs.FileSystem
Get the default replication.
getDefaultReplication() - Method in class org.apache.hadoop.fs.FilterFileSystem
Get the default replication.
getDefaultReplication() - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
getDefaultReplication() - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
getDependingJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
getDiagnostics() - Method in class org.apache.hadoop.mapred.TaskReport
A list of error messages.
getDigest() - Method in class org.apache.hadoop.io.MD5Hash
Returns the digest bytes.
getDirPath() - Method in class org.apache.hadoop.fs.DF
 
getDisplayName() - Method in class org.apache.hadoop.mapred.Counters.Group
Returns localized name of the group.
getDisplayName(String) - Method in class org.apache.hadoop.mapred.Counters.Group
Returns localized name of the specified counter.
getDistance(DatanodeDescriptor, DatanodeDescriptor) - Method in class org.apache.hadoop.net.NetworkTopology
Return the distance between two data nodes. It is assumed that the distance from one node to its parent is 1. The distance between two nodes is calculated by summing up their distances to their closest common ancestor.
getDoubleValue(Object) - Method in class org.apache.hadoop.contrib.utils.join.JobBase
 
getDU(File) - Static method in class org.apache.hadoop.fs.FileUtil
Takes an input dir and returns the du on that local directory.
getEditLogSize() - Method in class org.apache.hadoop.dfs.NameNode
Returns the size of the current edit log.
getEmptier() - Method in class org.apache.hadoop.fs.Trash
Return a Runnable that periodically empties the trash.
getEndColumn() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
getEndLine() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
getEntry(MapFile.Reader[], Partitioner, WritableComparable, Writable) - Static method in class org.apache.hadoop.mapred.MapFileOutputFormat
Get an entry from output generated by this class.
getEventId() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
Returns event Id.
getExcludedHosts() - Method in class org.apache.hadoop.util.HostsFileReader
 
getFactor() - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Get the number of streams to merge at once.
getFactory(Class) - Static method in class org.apache.hadoop.io.WritableFactories
Define a factory for a class.
getFactory() - Static method in class org.apache.hadoop.metrics.ContextFactory
Returns the singleton ContextFactory instance, constructing it if necessary.
getFailedJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
 
getFile(String, String) - Method in class org.apache.hadoop.conf.Configuration
Returns a local file name under a directory named in dirsProp with the given path.
getFile() - Method in class org.apache.hadoop.mapred.FileSplit
Deprecated. Call FileSplit.getPath() instead.
getFileCacheHints(Path, long, long) - Method in class org.apache.hadoop.fs.FileSystem
Return a 2D array of size 1x1 or greater, containing hostnames where portions of the given file can be found.
getFileCacheHints(Path, long, long) - Method in class org.apache.hadoop.fs.FilterFileSystem
Return a 2D array of size 1x1 or greater, containing hostnames where portions of the given file can be found.
getFileCacheHints(Path, long, long) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
Return 1x1 'localhost' cell if the file exists.
getFileCacheHints(Path, long, long) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
Return 1x1 'localhost' cell if the file exists.
getFileCacheHints(Path, long, long) - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated.  
getFileClassPaths(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Get the file entries in classpath as an array of Path
getFileMd5(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Get the md5 checksums of the files
getFileName() - Method in class org.apache.hadoop.metrics.file.FileContext
Returns the configured file name, or null.
getFiles(PathFilter) - Method in class org.apache.hadoop.fs.InMemoryFileSystem
 
getFilesystem() - Method in class org.apache.hadoop.fs.DF
 
getFileSystem(Configuration) - Method in class org.apache.hadoop.fs.Path
Return the FileSystem that owns this Path.
getFileSystem() - Method in class org.apache.hadoop.mapred.TaskTracker
Return the DFS filesystem
getFilesystemName() - Method in interface org.apache.hadoop.mapred.JobSubmissionProtocol
A MapReduce system always operates on a single filesystem.
getFilesystemName() - Method in class org.apache.hadoop.mapred.JobTracker
Grab the local fs name
getFileType() - Method in class org.apache.hadoop.fs.s3.INode
 
getFinishTime() - Method in class org.apache.hadoop.mapred.TaskReport
Get finish time of task.
getFloat(String, float) - Method in class org.apache.hadoop.conf.Configuration
Returns the value of the name property as a float.
getFormattedTimeWithDiff(DateFormat, long, long) - Static method in class org.apache.hadoop.util.StringUtils
Formats time in ms and appends difference (finishTime - startTime) as returned by formatTimeDiff().
getFs() - Method in class org.apache.hadoop.mapred.JobClient
Get a filesystem handle.
getFsEditName() - Method in class org.apache.hadoop.dfs.NameNode
Returns the name of the edits file
getFsImageName() - Method in class org.apache.hadoop.dfs.NameNode
Returns the name of the fsImage file
getFsImageNameCheckpoint() - Method in class org.apache.hadoop.dfs.NameNode
Returns the name of the fsImage file uploaded by periodic checkpointing
getFSSize() - Method in class org.apache.hadoop.fs.InMemoryFileSystem
 
getGroup(String) - Method in class org.apache.hadoop.mapred.Counters
Returns the named counter group, or an empty group if there is none with the specified name.
getGroupNames() - Method in class org.apache.hadoop.mapred.Counters
Returns the names of all counter classes.
getHadoopClientHome() - Method in class org.apache.hadoop.streaming.StreamJob
 
getHints(String, long, long) - Method in class org.apache.hadoop.dfs.NameNode
 
getHost() - Method in class org.apache.hadoop.dfs.DatanodeID
 
getHost() - Method in class org.apache.hadoop.streaming.Environment
 
getHostName() - Method in class org.apache.hadoop.dfs.DatanodeInfo
 
getHosts(String, String) - Static method in class org.apache.hadoop.net.DNS
Returns all the host names associated by the provided nameserver with the address bound to the specified network interface
getHosts(String) - Static method in class org.apache.hadoop.net.DNS
Returns all the host names associated by the default nameserver with the address bound to the specified network interface
getHosts() - Method in class org.apache.hadoop.util.HostsFileReader
 
getId() - Method in class org.apache.hadoop.fs.s3.Block
 
GetImage() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
GetImageServlet - Class in org.apache.hadoop.dfs
This class is used in Namesystem's jetty to retrieve a file.
GetImageServlet() - Constructor for class org.apache.hadoop.dfs.GetImageServlet
 
getIndexInterval() - Method in class org.apache.hadoop.io.MapFile.Writer
The number of entries that are added before an index entry is added.
getInfoPort() - Method in class org.apache.hadoop.dfs.DatanodeID
 
getInfoPort() - Method in class org.apache.hadoop.mapred.JobTracker
 
getInputFormat() - Method in class org.apache.hadoop.mapred.JobConf
 
getInputKeyClass() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Call RecordReader.createKey().
getInputPaths() - Method in class org.apache.hadoop.mapred.JobConf
 
getInputSplit() - Method in interface org.apache.hadoop.mapred.Reporter
Get the InputSplit object for a map.
getInputValueClass() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Call RecordReader.createValue().
getInt(String, int) - Method in class org.apache.hadoop.conf.Configuration
Returns the value of the name property as an integer.
getInterfaceName() - Method in exception org.apache.hadoop.ipc.RPC.VersionMismatch
Get the interface name
getIPs(String) - Static method in class org.apache.hadoop.net.DNS
Returns all the IPs associated with the provided interface, if any, in textual form.
getJar() - Method in class org.apache.hadoop.mapred.JobConf
 
getJob(String) - Method in class org.apache.hadoop.mapred.JobClient
Get a RunningJob object to track an ongoing job.
getJob(String) - Method in class org.apache.hadoop.mapred.JobTracker
 
getJobClient() - Method in class org.apache.hadoop.mapred.TaskTracker
The connection to the JobTracker, used by the TaskRunner for locating remote files.
getJobConf() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
getJobCounters(String) - Method in interface org.apache.hadoop.mapred.JobSubmissionProtocol
Grab the current job counters
getJobCounters(String) - Method in class org.apache.hadoop.mapred.JobTracker
 
getJobFile() - Method in class org.apache.hadoop.mapred.JobProfile
Get the configuration file for the job.
getJobFile() - Method in interface org.apache.hadoop.mapred.RunningJob
Returns the path of the submitted job.
getJobID() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
getJobId() - Method in class org.apache.hadoop.mapred.JobProfile
Get the job id.
getJobId() - Method in class org.apache.hadoop.mapred.JobStatus
 
getJobID() - Method in interface org.apache.hadoop.mapred.RunningJob
Returns an identifier for the job
getJobName() - Method in class org.apache.hadoop.mapred.JobConf
Get the user-specified job name.
getJobName() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
getJobName() - Method in class org.apache.hadoop.mapred.JobProfile
Get the user-specified job name.
getJobProfile(String) - Method in interface org.apache.hadoop.mapred.JobSubmissionProtocol
Grab a handle to a job that is already known to the JobTracker
getJobProfile(String) - Method in class org.apache.hadoop.mapred.JobTracker
 
getJobStatus(String) - Method in interface org.apache.hadoop.mapred.JobSubmissionProtocol
Grab a handle to a job that is already known to the JobTracker
getJobStatus(String) - Method in class org.apache.hadoop.mapred.JobTracker
 
getJobTrackerHostPort() - Method in class org.apache.hadoop.streaming.StreamJob
 
getJobTrackerMachine() - Method in class org.apache.hadoop.mapred.JobTracker
 
getKeepFailedTaskFiles() - Method in class org.apache.hadoop.mapred.JobConf
Should the temporary files for failed tasks be kept?
getKeepTaskFilesPattern() - Method in class org.apache.hadoop.mapred.JobConf
Get the regular expression that is matched against the task names to see if we need to keep the files.
getKey() - Method in interface org.apache.hadoop.io.SequenceFile.Sorter.RawKeyValueIterator
Gets the current raw key
getKey() - Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
Returns the stored rawKey
getKeyClass() - Method in class org.apache.hadoop.io.MapFile.Reader
Returns the class of keys in this file.
getKeyClass() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns the class of keys in this file.
getKeyClass() - Method in class org.apache.hadoop.io.SequenceFile.Writer
Returns the class of keys in this file.
getKeyClass() - Method in class org.apache.hadoop.io.WritableComparator
Returns the WritableComparable implementation class.
getKeyClass() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
 
getKeyClass() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
The class of key that must be passed to SequenceFileRecordReader.next(Writable,Writable).
getLastUpdate() - Method in class org.apache.hadoop.dfs.DatanodeInfo
The time when this information was accurate.
getLength(Path) - Method in class org.apache.hadoop.fs.FileSystem
The number of bytes in a file.
getLength(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
The number of bytes in a file.
getLength(Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
getLength() - Method in class org.apache.hadoop.fs.s3.Block
 
getLength(Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
getLength() - Method in class org.apache.hadoop.io.DataInputBuffer
Returns the length of the input.
getLength() - Method in class org.apache.hadoop.io.DataOutputBuffer
Returns the length of the valid data currently in the buffer.
getLength() - Method in class org.apache.hadoop.io.SequenceFile.Writer
Returns the current length of the output file.
getLength() - Method in class org.apache.hadoop.io.Text
Returns the number of bytes in the byte array
getLength() - Method in class org.apache.hadoop.io.UTF8
Deprecated. The number of bytes in the encoded string.
getLength() - Method in class org.apache.hadoop.mapred.FileSplit
The number of bytes in the file to process.
getLength() - Method in interface org.apache.hadoop.mapred.InputSplit
Get the number of input bytes in the split.
getLevel() - Method in class org.apache.hadoop.dfs.DatanodeInfo
Return this node's level in the tree.
getLevel() - Method in interface org.apache.hadoop.net.Node
Return this node's level in the tree.
getLevel() - Method in class org.apache.hadoop.net.NodeBase
Return this node's level in the tree.
getLine() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
Deprecated.  
getListenerAddress() - Method in class org.apache.hadoop.ipc.Server
Return the socket (ip+port) on which the RPC server is listening.
getListing(String) - Method in class org.apache.hadoop.dfs.NameNode
 
getLocal(Configuration) - Static method in class org.apache.hadoop.fs.FileSystem
Get the local file system.
getLocalCache(URI, Configuration, Path, boolean, String, Path) - Static method in class org.apache.hadoop.filecache.DistributedCache
 
getLocalCacheArchives(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Return the path array of the localized caches
getLocalCacheFiles(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Return the path array of the localized files
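A minimal sketch of reading back cache files localized through the DistributedCache. Only getLocalCacheFiles(Configuration) is taken from the entries above; the surrounding class name and logging are illustrative.

    import java.io.IOException;
    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.JobConf;

    public class CacheAwareTask {
      // Typically called from a task's configure(JobConf) hook.
      public void configure(JobConf job) {
        try {
          // Paths on the local disk where the framework copied the cached files.
          Path[] localFiles = DistributedCache.getLocalCacheFiles(job);
          if (localFiles != null) {
            for (Path p : localFiles) {
              System.out.println("localized cache file: " + p);
            }
          }
        } catch (IOException e) {
          throw new RuntimeException(e);
        }
      }
    }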
getLocalDirs() - Method in class org.apache.hadoop.mapred.JobConf
 
getLocalPath(String, String) - Method in class org.apache.hadoop.conf.Configuration
Returns a local file under a directory named in dirsProp with the given path.
getLocalPath(String) - Method in class org.apache.hadoop.mapred.JobConf
Constructs a local file name.
getLocalPathForWrite(String, Configuration) - Method in class org.apache.hadoop.fs.LocalDirAllocator
Get a path from the local FS.
getLocalPathForWrite(String, long, Configuration) - Method in class org.apache.hadoop.fs.LocalDirAllocator
Get a path from the local FS.
getLocalPathToRead(String, Configuration) - Method in class org.apache.hadoop.fs.LocalDirAllocator
Get a path from the local FS for reading.
getLocations() - Method in class org.apache.hadoop.mapred.FileSplit
 
getLocations() - Method in interface org.apache.hadoop.mapred.InputSplit
Get the list of hostnames where the input split is located.
getLogsRetainHours() - Method in class org.apache.hadoop.mapred.TaskLogAppender
 
getLong(String, long) - Method in class org.apache.hadoop.conf.Configuration
Returns the value of the name property as a long.
getLongValue(Object) - Method in class org.apache.hadoop.contrib.utils.join.JobBase
 
getMapCompletionEvents(String, int, int) - Method in class org.apache.hadoop.mapred.TaskTracker
 
getMapCount(int, long, JobClient) - Method in class org.apache.hadoop.util.CopyFiles.CopyFilesMapper
Calculate how many maps to run.
getMapOutputCompressionType() - Method in class org.apache.hadoop.mapred.JobConf
Get the compression type for the map outputs.
getMapOutputCompressorClass(Class<? extends CompressionCodec>) - Method in class org.apache.hadoop.mapred.JobConf
Get the codec for compressing the map outputs
getMapOutputKeyClass() - Method in class org.apache.hadoop.mapred.JobConf
Get the key class for the map output data.
getMapOutputValueClass() - Method in class org.apache.hadoop.mapred.JobConf
Get the value class for the map output data.
getMapperClass() - Method in class org.apache.hadoop.mapred.JobConf
 
getMapredJobID() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
getMapRunnerClass() - Method in class org.apache.hadoop.mapred.JobConf
 
getMapTaskReports(String) - Method in class org.apache.hadoop.mapred.JobClient
Get the information of the current state of the map tasks of a job.
getMapTaskReports(String) - Method in interface org.apache.hadoop.mapred.JobSubmissionProtocol
Grab a bunch of info on the map tasks that make up the job
getMapTaskReports(String) - Method in class org.apache.hadoop.mapred.JobTracker
 
getMapTasks() - Method in class org.apache.hadoop.mapred.ClusterStatus
The number of currently running map tasks.
getMaxMapAttempts() - Method in class org.apache.hadoop.mapred.JobConf
Get the configured number of maximum attempts that will be made to run a map task, as specified by the mapred.map.max.attempts property.
getMaxMapTaskFailuresPercent() - Method in class org.apache.hadoop.mapred.JobConf
Get the maximum percentage of map tasks that can fail without the job being aborted.
getMaxReduceAttempts() - Method in class org.apache.hadoop.mapred.JobConf
Get the configured number of maximum attempts that will be made to run a reduce task, as specified by the mapred.reduce.max.attempts property.
getMaxReduceTaskFailuresPercent() - Method in class org.apache.hadoop.mapred.JobConf
Get the maximum percentage of reduce tasks that can fail without the job being aborted.
getMaxTaskFailuresPerTracker() - Method in class org.apache.hadoop.mapred.JobConf
Get the maximum number of task failures allowed per tasktracker.
getMaxTasks() - Method in class org.apache.hadoop.mapred.ClusterStatus
The maximum capacity for running tasks in the cluster.
getMemory() - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Get the total amount of buffer memory, in bytes.
getMessage() - Method in exception org.apache.hadoop.mapred.InvalidInputException
Get a summary message of the problems found.
getMessage() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
getMessage() - Method in exception org.apache.hadoop.record.compiler.generated.ParseException
This method has the standard behavior when this object has been created using the standard constructors.
getMessage() - Method in error org.apache.hadoop.record.compiler.generated.TokenMgrError
You can also modify the body of this method to customize your error messages.
getMetadata() - Method in class org.apache.hadoop.io.SequenceFile.Metadata
 
getMetadata() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns the metadata object of the file
getMetric(String) - Method in class org.apache.hadoop.metrics.spi.OutputRecord
Returns the metric object which can be a Float, Integer, Short or Byte.
getMetricNames() - Method in class org.apache.hadoop.metrics.spi.OutputRecord
Returns the set of metric names.
getMissingIds() - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return a list of missing block names (as list of Strings).
getMissingSize() - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return total size of missing data, in bytes.
getMount() - Method in class org.apache.hadoop.fs.DF
 
getName() - Method in class org.apache.hadoop.dfs.DatanodeID
 
getName() - Method in class org.apache.hadoop.fs.FileSystem
Deprecated. Call getUri() instead.
getName() - Method in class org.apache.hadoop.fs.FilterFileSystem
Deprecated. Call getUri() instead.
getName() - Method in class org.apache.hadoop.fs.Path
Returns the final component of this path.
getName() - Method in class org.apache.hadoop.fs.RawLocalFileSystem
Deprecated.  
getName() - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
getName(Class) - Static method in class org.apache.hadoop.io.WritableName
Return the name for a class.
getName() - Method in class org.apache.hadoop.mapred.Counters.Group
Returns raw name of the group.
getName() - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated.  
getName() - Method in interface org.apache.hadoop.net.Node
Return this node's name
getName() - Method in class org.apache.hadoop.net.NodeBase
Return this node's name
getNamed(String, Configuration) - Static method in class org.apache.hadoop.fs.FileSystem
Deprecated. Call get(URI, Configuration) instead.
getNamenode() - Method in class org.apache.hadoop.dfs.DataNode
Return the namenode's identifier
getNameNodeAddr() - Method in class org.apache.hadoop.dfs.DataNode
 
getNameNodeAddress() - Method in class org.apache.hadoop.dfs.NameNode
Returns the address on which the NameNode is listening.
getNetworkLocation() - Method in class org.apache.hadoop.dfs.DatanodeInfo
rack name
getNetworkLocation() - Method in interface org.apache.hadoop.net.Node
Return the string representation of this node's network location
getNetworkLocation() - Method in class org.apache.hadoop.net.NodeBase
Return this node's network location
getNextToken() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
getNextToken() - Method in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
getNode(String) - Method in class org.apache.hadoop.net.NetworkTopology
Given a string representation of a node, return its reference
getNoKeepSplits() - Method in class org.apache.hadoop.mapred.TaskLogAppender
 
getNullContext(String) - Static method in class org.apache.hadoop.metrics.ContextFactory
Returns a "null" context - one which does nothing.
getNumber() - Method in class org.apache.hadoop.metrics.spi.MetricValue
 
getNumFiles(PathFilter) - Method in class org.apache.hadoop.fs.InMemoryFileSystem
 
getNumMapTasks() - Method in class org.apache.hadoop.mapred.JobConf
 
getNumOfLeaves() - Method in class org.apache.hadoop.net.NetworkTopology
Return the total number of data nodes
getNumOfRacks() - Method in class org.apache.hadoop.net.NetworkTopology
Return the total number of racks
getNumReduceTasks() - Method in class org.apache.hadoop.mapred.JobConf
 
getObject(String) - Method in class org.apache.hadoop.conf.Configuration
Returns the value of the name property, or null if no such property exists.
getOutputCompressorClass(JobConf, Class) - Static method in class org.apache.hadoop.mapred.OutputFormatBase
Get the codec for compressing the reduce outputs
getOutputFormat() - Method in class org.apache.hadoop.mapred.JobConf
 
getOutputKeyClass() - Method in class org.apache.hadoop.mapred.JobConf
 
getOutputKeyComparator() - Method in class org.apache.hadoop.mapred.JobConf
 
getOutputPath() - Method in class org.apache.hadoop.mapred.JobConf
 
getOutputValueClass() - Method in class org.apache.hadoop.mapred.JobConf
 
getOutputValueGroupingComparator() - Method in class org.apache.hadoop.mapred.JobConf
Get the user defined comparator for grouping values.
getOverReplicatedBlocks() - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return the number of over-replicated blocks.
getParent() - Method in class org.apache.hadoop.dfs.DatanodeInfo
Return this node's parent
getParent() - Method in class org.apache.hadoop.fs.Path
Returns the parent of a path or null if at root.
getParent() - Method in interface org.apache.hadoop.net.Node
Return this node's parent
getParent() - Method in class org.apache.hadoop.net.NodeBase
Return this node's parent
getPartition(WritableComparable, Writable, int) - Method in class org.apache.hadoop.mapred.lib.HashPartitioner
Use Object.hashCode() to partition.
getPartition(WritableComparable, Writable, int) - Method in class org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner
Use Object.hashCode() to partition.
getPartition(WritableComparable, Writable, int) - Method in interface org.apache.hadoop.mapred.Partitioner
Returns the partition number for a given entry given the total number of partitions.
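A sketch of the contract described above: a Partitioner maps each (key, value) pair to an integer in [0, numPartitions). The FirstCharPartitioner below is a hypothetical example, not part of the Hadoop API; the configure(JobConf) method is included on the assumption that Partitioner is job-configurable in this API generation.

    import org.apache.hadoop.io.Writable;
    import org.apache.hadoop.io.WritableComparable;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.Partitioner;

    public class FirstCharPartitioner implements Partitioner {
      public void configure(JobConf job) { }   // no per-job setup needed

      public int getPartition(WritableComparable key, Writable value, int numPartitions) {
        String s = key.toString();
        int hash = (s.length() == 0) ? 0 : s.charAt(0);
        // Mask the sign bit so the result is always in [0, numPartitions).
        return (hash & Integer.MAX_VALUE) % numPartitions;
      }
    }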
getPartitionerClass() - Method in class org.apache.hadoop.mapred.JobConf
 
getPath() - Method in class org.apache.hadoop.dfs.DatanodeInfo
 
getPath() - Method in class org.apache.hadoop.mapred.FileSplit
The file containing this split's data.
getPath() - Method in class org.apache.hadoop.net.NodeBase
Return this node's path
getPercentUsed() - Method in class org.apache.hadoop.fs.DF
 
getPercentUsed() - Method in class org.apache.hadoop.fs.InMemoryFileSystem
 
getPercentUsedSkipRefresh() - Method in class org.apache.hadoop.fs.DF
 
getPeriod() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Returns the timer period.
getPlatformName() - Static method in class org.apache.hadoop.util.PlatformName
Get the complete platform as per the java-vm.
getPort() - Method in class org.apache.hadoop.dfs.DatanodeID
 
getPort() - Method in class org.apache.hadoop.mapred.StatusHttpServer
Get the port that the server is on
getPos() - Method in exception org.apache.hadoop.fs.ChecksumException
 
getPos() - Method in class org.apache.hadoop.fs.FSDataInputStream
 
getPos() - Method in class org.apache.hadoop.fs.FSDataOutputStream
 
getPos() - Method in class org.apache.hadoop.fs.FSInputStream
Return the current offset from the start of the file
getPos() - Method in class org.apache.hadoop.mapred.LineRecordReader
 
getPos() - Method in interface org.apache.hadoop.mapred.RecordReader
Returns the current position in the input.
getPos() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
 
getPos() - Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
Returns the current position in the input.
getPosition() - Method in class org.apache.hadoop.io.DataInputBuffer
Returns the current position in the input.
getPosition() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Return the current byte position in the input file.
getProblems() - Method in exception org.apache.hadoop.mapred.InvalidInputException
Get the complete list of the problems reported.
getProgress() - Method in interface org.apache.hadoop.io.SequenceFile.Sorter.RawKeyValueIterator
Gets the Progress object; this has a float (0.0 - 1.0) indicating the bytes processed by the iterator so far
getProgress() - Method in class org.apache.hadoop.mapred.LineRecordReader
Get the progress within the split
getProgress() - Method in interface org.apache.hadoop.mapred.RecordReader
How far has the reader gone through the input.
getProgress() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
Return the progress within the input split
getProgress() - Method in class org.apache.hadoop.mapred.TaskReport
The amount completed, between zero and one.
getProgress() - Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
 
getProtocolVersion(String, long) - Method in class org.apache.hadoop.dfs.NameNode
 
getProtocolVersion(String, long) - Method in interface org.apache.hadoop.ipc.VersionedProtocol
Return protocol version corresponding to protocol interface.
getProtocolVersion(String, long) - Method in class org.apache.hadoop.mapred.JobTracker
 
getProtocolVersion(String, long) - Method in class org.apache.hadoop.mapred.TaskTracker
 
getProxy(Class, long, InetSocketAddress, Configuration) - Static method in class org.apache.hadoop.ipc.RPC
Construct a client-side proxy object that implements the named protocol, talking to a server at the named address.
getRawCapacity() - Method in class org.apache.hadoop.dfs.DistributedFileSystem
Return the total raw capacity of the filesystem, disregarding replication.
getRawFileSystem() - Method in class org.apache.hadoop.fs.ChecksumFileSystem
get the raw file system
getRawUsed() - Method in class org.apache.hadoop.dfs.DistributedFileSystem
Return the total raw used space in the filesystem, disregarding replication.
getReaders(FileSystem, Path, Configuration) - Static method in class org.apache.hadoop.mapred.MapFileOutputFormat
Open the output generated by this format.
getReaders(Configuration, Path) - Static method in class org.apache.hadoop.mapred.SequenceFileOutputFormat
Open the output generated by this format.
getReadyJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
 
getRecordName() - Method in interface org.apache.hadoop.metrics.MetricsRecord
Returns the record name.
getRecordName() - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Returns the record name.
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.FileInputFormat
 
getRecordReader(InputSplit, JobConf, Reporter) - Method in interface org.apache.hadoop.mapred.InputFormat
Construct a RecordReader for a FileSplit.
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.KeyValueTextInputFormat
 
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.SequenceFileAsTextInputFormat
 
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter
Create a record reader for the given split
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.SequenceFileInputFormat
 
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.TextInputFormat
 
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.streaming.StreamInputFormat
 
getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.NullOutputFormat
 
getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.MapFileOutputFormat
 
getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in interface org.apache.hadoop.mapred.OutputFormat
Construct a RecordWriter with Progressable.
getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.OutputFormatBase
 
getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.SequenceFileOutputFormat
 
getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.TextOutputFormat
 
getReducerClass() - Method in class org.apache.hadoop.mapred.JobConf
 
getReduceTaskReports(String) - Method in class org.apache.hadoop.mapred.JobClient
Get the information of the current state of the reduce tasks of a job.
getReduceTaskReports(String) - Method in interface org.apache.hadoop.mapred.JobSubmissionProtocol
Grab a bunch of info on the reduce tasks that make up the job
getReduceTaskReports(String) - Method in class org.apache.hadoop.mapred.JobTracker
 
getReduceTasks() - Method in class org.apache.hadoop.mapred.ClusterStatus
The number of currently running reduce tasks.
getRemaining() - Method in class org.apache.hadoop.dfs.DatanodeInfo
The raw free space.
getRemoteAddress() - Static method in class org.apache.hadoop.ipc.Server
Returns remote address as a string when invoked inside an RPC.
getRemoteIp() - Static method in class org.apache.hadoop.ipc.Server
Returns the remote side's IP address when invoked inside an RPC; returns null in case of an error.
getReplication() - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return the intended replication factor, against which the over/under-replicated blocks are counted.
getReplication(Path) - Method in class org.apache.hadoop.fs.FileSystem
Get replication.
getReplication(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Get replication.
getReplication(Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
Replication is not supported for the local file system.
getReplication(Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
Replication is not supported for S3 file systems since S3 handles it for us.
getReplicationFactor() - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return the actual replication factor.
getReport() - Method in class org.apache.hadoop.contrib.utils.join.JobBase
log the counters
getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
 
getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
 
getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
 
getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
 
getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
 
getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
 
getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
 
getReport() - Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregator
 
getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
 
getReportDetails() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
 
getReportItems() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
 
getResource(String) - Method in class org.apache.hadoop.conf.Configuration
Returns the URL for the named resource.
getRevision() - Static method in class org.apache.hadoop.util.VersionInfo
Get the subversion revision number for the root directory
getRunnable() - Method in class org.apache.hadoop.util.Daemon
 
getRunningJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
 
getRunningJobs() - Method in class org.apache.hadoop.mapred.JobTracker
Version that is called from a timer thread, and therefore needs to be careful to synchronize.
getRunState() - Method in class org.apache.hadoop.mapred.JobStatus
 
getSafeModeText() - Method in class org.apache.hadoop.dfs.JspHelper
 
getSerializedLength() - Method in class org.apache.hadoop.fs.s3.INode
 
getServer(Object, String, int, Configuration) - Static method in class org.apache.hadoop.ipc.RPC
Construct a server for a protocol implementation instance listening on a port and address.
getServer(Object, String, int, int, boolean, Configuration) - Static method in class org.apache.hadoop.ipc.RPC
Construct a server for a protocol implementation instance listening on a port and address.
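A rough sketch of the pattern implied by getServer/getProxy: define an interface extending VersionedProtocol, serve an implementation of it, then talk to it through a client-side proxy. PingProtocol, its version number, and the port are hypothetical; only getServer, getProxy, getProtocolVersion and Server start/stop/join appear in this index, so treat the rest as assumptions.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.ipc.RPC;
    import org.apache.hadoop.ipc.Server;
    import org.apache.hadoop.ipc.VersionedProtocol;

    // Hypothetical protocol; any interface extending VersionedProtocol will do.
    interface PingProtocol extends VersionedProtocol {
      long VERSION = 1L;
      String ping(String msg) throws IOException;
    }

    public class RpcSketch implements PingProtocol {
      public String ping(String msg) { return "pong: " + msg; }
      public long getProtocolVersion(String protocol, long clientVersion) { return VERSION; }

      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // Server side: expose an implementation instance on an address and port.
        Server server = RPC.getServer(new RpcSketch(), "0.0.0.0", 9000, conf);
        server.start();
        // Client side: obtain a proxy that implements the same protocol interface.
        PingProtocol proxy = (PingProtocol) RPC.getProxy(
            PingProtocol.class, PingProtocol.VERSION,
            new InetSocketAddress("localhost", 9000), conf);
        System.out.println(proxy.ping("hello"));
        server.stop();
      }
    }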
getServerVersion() - Method in exception org.apache.hadoop.ipc.RPC.VersionMismatch
Get the server's agreed to version.
getSize() - Method in class org.apache.hadoop.io.BytesWritable
Get the current size of the buffer.
getSpace(int) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
getSpeculativeExecution() - Method in class org.apache.hadoop.mapred.JobConf
Should speculative execution be used for this job?
getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.FileInputFormat
Splits files returned by FileInputFormat.listPaths(JobConf) when they're too big.
getSplits(JobConf, int) - Method in interface org.apache.hadoop.mapred.InputFormat
Splits a set of input files.
getStart() - Method in class org.apache.hadoop.mapred.FileSplit
The position of the first byte in the file to process.
getStartTime() - Method in class org.apache.hadoop.mapred.JobStatus
 
getStartTime() - Method in class org.apache.hadoop.mapred.JobTracker
 
getStartTime() - Method in class org.apache.hadoop.mapred.TaskReport
Get start time of task.
getState() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
getState() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
 
getState() - Method in class org.apache.hadoop.mapred.TaskReport
The most recent state, reported by a Reporter.
getStats() - Method in class org.apache.hadoop.dfs.NameNode
 
getStorageID() - Method in class org.apache.hadoop.dfs.DatanodeID
 
getStrings(String) - Method in class org.apache.hadoop.conf.Configuration
Returns the value of the name property as an array of strings.
getStrings(String) - Static method in class org.apache.hadoop.util.StringUtils
returns an arraylist of strings
getSuccessfulJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
 
GetSuffix(int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
getSum() - Method in class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
 
getSum() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
 
getSymlink(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
This method checks to see if symlinks are to be created for the localized cache files in the current working directory.
getSystemDir() - Method in class org.apache.hadoop.mapred.JobConf
 
getTabSize(int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
getTag() - Method in class org.apache.hadoop.contrib.utils.join.TaggedMapOutput
 
getTag(String) - Method in class org.apache.hadoop.metrics.spi.OutputRecord
Returns a tag object which can be a String, Integer, Short or Byte.
getTagNames() - Method in class org.apache.hadoop.metrics.spi.OutputRecord
Returns the set of tag names
getTask(String) - Method in class org.apache.hadoop.mapred.TaskTracker
Called upon startup by the child process, to fetch Task data.
getTaskAttempts() - Method in class org.apache.hadoop.mapred.JobHistory.Task
Returns all task attempts for this task.
getTaskCompletionEvents(String, int, int) - Method in interface org.apache.hadoop.mapred.JobSubmissionProtocol
Get task completion events for the jobid, starting from fromEventId.
getTaskCompletionEvents(String, int, int) - Method in class org.apache.hadoop.mapred.JobTracker
 
getTaskCompletionEvents(int) - Method in interface org.apache.hadoop.mapred.RunningJob
 
getTaskDiagnostics(String, String, String) - Method in class org.apache.hadoop.mapred.JobTracker
Get the diagnostics for a given task
getTaskId() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
Returns task id.
getTaskId() - Method in class org.apache.hadoop.mapred.TaskLogAppender
Getter/Setter methods for log4j.
getTaskId() - Method in class org.apache.hadoop.mapred.TaskReport
The id of the task.
getTaskInfo(JobConf) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
getTaskOutputFilter(JobConf) - Static method in class org.apache.hadoop.mapred.JobClient
Get the task output filter out of the JobConf
getTaskOutputFilter() - Method in class org.apache.hadoop.mapred.JobClient
Deprecated. 
getTaskStatus() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
Returns enum Status.SUCCESS or Status.FAILURE.
getTaskTracker(String) - Method in class org.apache.hadoop.mapred.JobTracker
 
getTaskTrackerHttp() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
http location of the tasktracker where this task ran.
getTaskTrackers() - Method in class org.apache.hadoop.mapred.ClusterStatus
The number of task trackers in the cluster.
getToken(int) - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
getTotalBlocks() - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return the total number of blocks in the scanned area.
getTotalDirs() - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return total number of directories encountered during this scan.
getTotalFiles() - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return total number of files encountered during this scan.
getTotalLogFileSize() - Method in class org.apache.hadoop.mapred.TaskLogAppender
 
getTotalSize() - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return total size of scanned data, in bytes.
getTotalSubmissions() - Method in class org.apache.hadoop.mapred.JobTracker
 
getTracker() - Static method in class org.apache.hadoop.mapred.JobTracker
 
getTrackerPort() - Method in class org.apache.hadoop.mapred.JobTracker
 
getTrackingURL() - Method in interface org.apache.hadoop.mapred.RunningJob
Returns a URL where some job progress information will be displayed.
getTypes() - Method in class org.apache.hadoop.io.GenericWritable
Return all classes that may be wrapped.
getUnderReplicatedBlocks() - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return the number of under-replicated blocks.
getUniqueItems() - Method in class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
 
getUri() - Method in class org.apache.hadoop.fs.FileSystem
Returns a URI whose scheme and authority identify this FileSystem.
getUri() - Method in class org.apache.hadoop.fs.FilterFileSystem
Returns a URI whose scheme and authority identify this FileSystem.
getUri() - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
getUri() - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
getURIs(String, String) - Method in class org.apache.hadoop.streaming.StreamJob
get the uris of all the files/caches
getURL() - Method in class org.apache.hadoop.mapred.JobProfile
Get the link to the web-ui for details of the job.
getUrl() - Static method in class org.apache.hadoop.util.VersionInfo
Get the subversion URL for the root Hadoop directory.
getUsed() - Method in class org.apache.hadoop.fs.DF
 
getUsed() - Method in class org.apache.hadoop.fs.FileSystem
Return the total size of all files in the filesystem.
getUsedSkipRefresh() - Method in class org.apache.hadoop.fs.DF
 
getUser() - Method in class org.apache.hadoop.mapred.JobConf
Get the reported username for this job.
getUser() - Method in class org.apache.hadoop.mapred.JobProfile
Get the user id.
getUser() - Static method in class org.apache.hadoop.util.VersionInfo
The user that compiled Hadoop.
getUsername() - Method in class org.apache.hadoop.mapred.JobStatus
 
getVal() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
 
getVal() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
 
getVal() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
 
getVal() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
 
getValue() - Method in interface org.apache.hadoop.io.SequenceFile.Sorter.RawKeyValueIterator
Gets the current raw value
getValueClass() - Method in class org.apache.hadoop.io.ArrayWritable
 
getValueClass() - Method in class org.apache.hadoop.io.MapFile.Reader
Returns the class of values in this file.
getValueClass() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns the class of values in this file.
getValueClass() - Method in class org.apache.hadoop.io.SequenceFile.Writer
Returns the class of values in this file.
getValueClass() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
The class of value that must be passed to SequenceFileRecordReader.next(Writable, Writable).
getVersion() - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
getVersion() - Method in class org.apache.hadoop.io.VersionedWritable
Return the version number of the current implementation.
getVersion() - Static method in class org.apache.hadoop.util.VersionInfo
Get the Hadoop version.
getVIntSize(long) - Static method in class org.apache.hadoop.io.WritableUtils
Get the encoded length of an integer stored in a variable-length format.
getVIntSize(long) - Static method in class org.apache.hadoop.record.Utils
Get the encoded length of an integer stored in a variable-length format.
getWaitingJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
 
getWorkingDirectory() - Method in class org.apache.hadoop.fs.FileSystem
Get the current working directory for the given file system
getWorkingDirectory() - Method in class org.apache.hadoop.fs.FilterFileSystem
Get the current working directory for the given file system
getWorkingDirectory() - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
getWorkingDirectory() - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
getWorkingDirectory() - Method in class org.apache.hadoop.mapred.JobConf
Get the current working directory for the default file system.
getXceiverCount() - Method in class org.apache.hadoop.dfs.DatanodeInfo
number of active connections
getZlibCompressor() - Static method in class org.apache.hadoop.io.compress.zlib.ZlibFactory
Return the appropriate implementation of the zlib compressor.
getZlibDecompressor() - Static method in class org.apache.hadoop.io.compress.zlib.ZlibFactory
Return the appropriate implementation of the zlib decompressor.
globPaths(Path) - Method in class org.apache.hadoop.fs.FileSystem
Return all the files that match filePattern and are not checksum files.
globPaths(Path, PathFilter) - Method in class org.apache.hadoop.fs.FileSystem
Glob all the file names that match filePattern and are accepted by the filter.
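A short sketch of expanding a shell-style pattern into concrete paths with globPaths; the pattern and directory layout below are made up for illustration.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class GlobExample {
      public static void main(String[] args) throws IOException {
        FileSystem fs = FileSystem.get(new Configuration());
        // Expands the pattern into matching paths; checksum files are excluded.
        Path[] inputs = fs.globPaths(new Path("/logs/2007-*/part-*"));
        for (Path p : inputs) {
          System.out.println("matched: " + p);
        }
      }
    }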
go() - Method in class org.apache.hadoop.streaming.StreamJob
This is the method that actually initializes the job conf and submits the job to the jobtracker.
goodClassOrNull(String, String) - Static method in class org.apache.hadoop.streaming.StreamUtil
It may seem strange to silently switch behaviour when a String is not a classname; the reason is simplified Usage:
Grep - Class in org.apache.hadoop.examples
 
GT_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
GzipCodec - Class in org.apache.hadoop.io.compress
This class creates gzip compressors/decompressors.
GzipCodec() - Constructor for class org.apache.hadoop.io.compress.GzipCodec
 
GzipCodec.GzipInputStream - Class in org.apache.hadoop.io.compress
 
GzipCodec.GzipInputStream(InputStream) - Constructor for class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
 
GzipCodec.GzipInputStream(DecompressorStream) - Constructor for class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
Allow subclasses to directly set the inflater stream.
GzipCodec.GzipOutputStream - Class in org.apache.hadoop.io.compress
A bridge that wraps around a DeflaterOutputStream to make it a CompressionOutputStream.
GzipCodec.GzipOutputStream(OutputStream) - Constructor for class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
 
GzipCodec.GzipOutputStream(CompressorStream) - Constructor for class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
Allow children types to put a different type in here.
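A sketch of round-tripping bytes through GzipCodec. The createOutputStream/createInputStream calls come from the CompressionCodec interface (not shown in this part of the index), the file name is made up, and the Configurable check is a defensive assumption since some codec implementations read buffer sizes from a Configuration.

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import org.apache.hadoop.conf.Configurable;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.GzipCodec;

    public class GzipExample {
      public static void main(String[] args) throws IOException {
        GzipCodec codec = new GzipCodec();
        // If this codec version is Configurable, give it a Configuration to read defaults from.
        if (codec instanceof Configurable) {
          ((Configurable) codec).setConf(new Configuration());
        }

        // Write compressed data.
        OutputStream out = codec.createOutputStream(new FileOutputStream("data.gz"));
        out.write("hello, gzip".getBytes());
        out.close();

        // Read it back, decompressing on the fly.
        InputStream in = codec.createInputStream(new FileInputStream("data.gz"));
        int b;
        while ((b = in.read()) != -1) {
          System.out.print((char) b);
        }
        in.close();
      }
    }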

H

hadoopAliasConf_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
HadoopStreaming - Class in org.apache.hadoop.streaming
The main entrypoint.
HadoopStreaming() - Constructor for class org.apache.hadoop.streaming.HadoopStreaming
 
HadoopVersionAnnotation - Annotation Type in org.apache.hadoop
A package attribute that captures the version of Hadoop that was compiled.
halfDigest() - Method in class org.apache.hadoop.io.MD5Hash
Construct a half-sized version of this MD5.
handle(JobHistory.RecordTypes, Map<JobHistory.Keys, String>) - Method in interface org.apache.hadoop.mapred.JobHistory.Listener
Callback method for history parser.
hashBytes(byte[], int) - Static method in class org.apache.hadoop.io.WritableComparator
Compute hash for binary data.
hashCode() - Method in class org.apache.hadoop.dfs.DatanodeID
 
hashCode() - Method in class org.apache.hadoop.fs.Path
 
hashCode() - Method in class org.apache.hadoop.io.BooleanWritable
 
hashCode() - Method in class org.apache.hadoop.io.BytesWritable
 
hashCode() - Method in class org.apache.hadoop.io.FloatWritable
 
hashCode() - Method in class org.apache.hadoop.io.IntWritable
 
hashCode() - Method in class org.apache.hadoop.io.LongWritable
 
hashCode() - Method in class org.apache.hadoop.io.MD5Hash
Returns a hash code value for this object.
hashCode() - Method in class org.apache.hadoop.io.Text
hash function
hashCode() - Method in class org.apache.hadoop.io.UTF8
Deprecated.  
hashCode() - Method in class org.apache.hadoop.io.VIntWritable
 
hashCode() - Method in class org.apache.hadoop.io.VLongWritable
 
hashCode() - Method in class org.apache.hadoop.record.Buffer
 
HashPartitioner - Class in org.apache.hadoop.mapred.lib
Partition keys by their Object.hashCode().
HashPartitioner() - Constructor for class org.apache.hadoop.mapred.lib.HashPartitioner
 
hasNext() - Method in class org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator
 
hasSimpleInputSpecs_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
HEADER - Static variable in class org.apache.hadoop.ipc.Server
The first four bytes of Hadoop RPC connections
heartbeat(TaskTrackerStatus, boolean, boolean, short) - Method in class org.apache.hadoop.mapred.JobTracker
The periodic heartbeat mechanism between the TaskTracker and the JobTracker.
HEARTBEAT_INTERVAL - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
hexchars - Static variable in class org.apache.hadoop.record.Utils
 
hexStringToByte(String) - Static method in class org.apache.hadoop.util.StringUtils
Given a hexstring this will return the byte array corresponding to the string
HostsFileReader - Class in org.apache.hadoop.util
 
HostsFileReader(String, String) - Constructor for class org.apache.hadoop.util.HostsFileReader
 
humanReadableInt(long) - Static method in class org.apache.hadoop.util.StringUtils
Given an integer, return a string that is in an approximate, but human readable format.

I

IDENT_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
IdentityMapper - Class in org.apache.hadoop.mapred.lib
Implements the identity function, mapping inputs directly to outputs.
IdentityMapper() - Constructor for class org.apache.hadoop.mapred.lib.IdentityMapper
 
IdentityReducer - Class in org.apache.hadoop.mapred.lib
Performs no reduction, writing all input values directly to the output.
IdentityReducer() - Constructor for class org.apache.hadoop.mapred.lib.IdentityReducer
 
idWithinJob() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
 
ifmt(double) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
image - Variable in class org.apache.hadoop.record.compiler.generated.Token
The string image of the token.
in - Variable in class org.apache.hadoop.io.compress.CompressionInputStream
The input stream to be decompressed.
inBuf - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
Include() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
INCLUDE_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
incr() - Method in interface org.apache.hadoop.record.Index
 
incrAllCounters(Counters) - Method in class org.apache.hadoop.mapred.Counters
Increments multiple counters by their amounts in another Counters instance.
incrCounter(Enum, long) - Method in class org.apache.hadoop.mapred.Counters
Increments the specified counter by the specified amount, creating it if it didn't already exist.
incrCounter(Enum, long) - Method in interface org.apache.hadoop.mapred.Reporter
Increments the counter identified by the key, which can be of any enum type, by the specified amount.
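A sketch of bumping a custom counter from inside a map(). The enum and the surrounding mapper are hypothetical, and MapReduceBase/Mapper/OutputCollector come from elsewhere in the mapred API; only Reporter.incrCounter(Enum, long) is taken from the entry above.

    import java.io.IOException;
    import org.apache.hadoop.io.Writable;
    import org.apache.hadoop.io.WritableComparable;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    public class CountingMapper extends MapReduceBase implements Mapper {
      enum MyCounters { EMPTY_LINES }   // hypothetical counter group

      public void map(WritableComparable key, Writable value,
                      OutputCollector output, Reporter reporter) throws IOException {
        if (value.toString().length() == 0) {
          // Counted per task attempt; the framework sums the values across the job.
          reporter.incrCounter(MyCounters.EMPTY_LINES, 1);
        }
        output.collect(key, value);
      }
    }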
INCREMENT - Static variable in class org.apache.hadoop.metrics.spi.MetricValue
 
incrMetric(String, int) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Increments the named metric by the specified value.
incrMetric(String, short) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Increments the named metric by the specified value.
incrMetric(String, byte) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Increments the named metric by the specified value.
incrMetric(String, float) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Increments the named metric by the specified value.
incrMetric(String, int) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Increments the named metric by the specified value.
incrMetric(String, short) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Increments the named metric by the specified value.
incrMetric(String, byte) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Increments the named metric by the specified value.
incrMetric(String, float) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Increments the named metric by the specified value.
Index - Interface in org.apache.hadoop.record
Interface that acts as an iterator for deserializing maps.
INDEX_FILE_NAME - Static variable in class org.apache.hadoop.io.MapFile
The name of the index file.
infoPort - Variable in class org.apache.hadoop.dfs.DatanodeID
 
init() - Method in class org.apache.hadoop.fs.FsShell
 
init() - Method in class org.apache.hadoop.mapred.JobClient
 
init(String, ContextFactory) - Method in class org.apache.hadoop.metrics.file.FileContext
 
init(String, ContextFactory) - Method in class org.apache.hadoop.metrics.ganglia.GangliaContext
 
init(String, ContextFactory) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Initializes the context.
init() - Method in class org.apache.hadoop.streaming.StreamJob
 
init() - Method in class org.apache.hadoop.streaming.StreamXmlRecordReader
 
initialize(URI, Configuration) - Method in class org.apache.hadoop.fs.FileSystem
Called after a new FileSystem instance is constructed.
initialize(URI, Configuration) - Method in class org.apache.hadoop.fs.FilterFileSystem
Called after a new FileSystem instance is constructed.
initialize(URI, Configuration) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
initialize(URI, Configuration) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
initialize(URI, Configuration) - Method in class org.apache.hadoop.fs.s3.MigrationTool
 
initialize(URI, Configuration) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
initialize(int) - Method in class org.apache.hadoop.util.PriorityQueue
Subclass constructors must call this.
InMemoryFileSystem - Class in org.apache.hadoop.fs
An implementation of the in-memory filesystem.
InMemoryFileSystem() - Constructor for class org.apache.hadoop.fs.InMemoryFileSystem
 
InMemoryFileSystem(URI, Configuration) - Constructor for class org.apache.hadoop.fs.InMemoryFileSystem
 
INode - Class in org.apache.hadoop.fs.s3
Holds file metadata including type (regular file, or directory), and the list of blocks that are pointers to the data.
INode(INode.FileType, Block[]) - Constructor for class org.apache.hadoop.fs.s3.INode
 
inodeExists(Path) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
Input() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
input_stream - Variable in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
inputFile - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
 
inputFile - Variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
InputFormat - Interface in org.apache.hadoop.mapred
An input data format.
InputFormatBase - Class in org.apache.hadoop.mapred
Deprecated. replaced by FileInputFormat
InputFormatBase() - Constructor for class org.apache.hadoop.mapred.InputFormatBase
Deprecated.  
inputFormatSpec_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
inputSpecs_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
InputSplit - Interface in org.apache.hadoop.mapred
The description of the data for a single map task.
inputStream - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
inputTag - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
 
inReaderSpec_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
insert(Object) - Method in class org.apache.hadoop.util.PriorityQueue
Adds element to the PriorityQueue in log(size) time if either the PriorityQueue is not full, or not lessThan(element, top()).
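The insert() contract above enables the bounded top-N pattern sketched below: subclass PriorityQueue, size the heap with initialize(k), and let insert() silently drop elements that are smaller than the current top. That lessThan(Object, Object) is the ordering hook to override is an assumption drawn from the description above, as is pop() returning elements smallest-first.

    import org.apache.hadoop.util.PriorityQueue;

    public class TopNQueue extends PriorityQueue {
      public TopNQueue(int n) {
        initialize(n);                      // fixed capacity: keep only the n largest values
      }
      protected boolean lessThan(Object a, Object b) {
        return ((Integer) a).intValue() < ((Integer) b).intValue();
      }

      public static void main(String[] args) {
        TopNQueue top3 = new TopNQueue(3);
        int[] data = {5, 1, 9, 7, 3, 8};
        for (int x : data) {
          top3.insert(new Integer(x));      // only 7, 8 and 9 should survive
        }
        while (top3.size() > 0) {
          System.out.println(top3.pop());   // emitted smallest-first
        }
      }
    }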
inStream - Variable in class org.apache.hadoop.fs.FSDataInputStream
 
INT_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
IntWritable - Class in org.apache.hadoop.io
A WritableComparable for ints.
IntWritable() - Constructor for class org.apache.hadoop.io.IntWritable
 
IntWritable(int) - Constructor for class org.apache.hadoop.io.IntWritable
 
IntWritable.Comparator - Class in org.apache.hadoop.io
A Comparator optimized for IntWritable.
IntWritable.Comparator() - Constructor for class org.apache.hadoop.io.IntWritable.Comparator
 
InvalidFileTypeException - Exception in org.apache.hadoop.mapred
Used when file type differs from the desired file type.
InvalidFileTypeException() - Constructor for exception org.apache.hadoop.mapred.InvalidFileTypeException
 
InvalidFileTypeException(String) - Constructor for exception org.apache.hadoop.mapred.InvalidFileTypeException
 
InvalidInputException - Exception in org.apache.hadoop.mapred
This class wraps a list of problems with the input, so that the user can get a list of problems together instead of finding and fixing them one by one.
InvalidInputException(List<IOException>) - Constructor for exception org.apache.hadoop.mapred.InvalidInputException
Create the exception with the given list.
InvalidJobConfException - Exception in org.apache.hadoop.mapred
This exception is thrown when the jobconf is missing some mandatory attributes, or the value of some attributes is invalid.
InvalidJobConfException() - Constructor for exception org.apache.hadoop.mapred.InvalidJobConfException
 
InvalidJobConfException(String) - Constructor for exception org.apache.hadoop.mapred.InvalidJobConfException
 
InverseMapper - Class in org.apache.hadoop.mapred.lib
A Mapper that swaps keys and values.
InverseMapper() - Constructor for class org.apache.hadoop.mapred.lib.InverseMapper
 
isAbsolute() - Method in class org.apache.hadoop.fs.Path
True if the directory of this path is absolute.
isAbsolute() - Method in class org.apache.hadoop.metrics.spi.MetricValue
 
isAlive - Variable in class org.apache.hadoop.dfs.DatanodeDescriptor
 
isBlockCompressed() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns true if records are block-compressed.
isChecksumFile(Path) - Static method in class org.apache.hadoop.fs.ChecksumFileSystem
Return true iff file is a checksum file name.
isComplete() - Method in interface org.apache.hadoop.mapred.RunningJob
Non-blocking function to check whether the job is finished or not.
isCompleted() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
isCompressed() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns true if values are compressed.
isContextValid(String) - Static method in class org.apache.hadoop.fs.LocalDirAllocator
Method to check whether a context is valid
isCygwin() - Static method in class org.apache.hadoop.streaming.StreamUtil
 
isDir(String) - Method in class org.apache.hadoop.dfs.NameNode
 
isDirectory(Path) - Method in class org.apache.hadoop.fs.FileSystem
True iff the named path is a directory.
isDirectory(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
True iff the named path is a directory.
isDirectory(Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
isDirectory() - Method in class org.apache.hadoop.fs.s3.INode
 
isDirectory(Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
isDisableHistory() - Static method in class org.apache.hadoop.mapred.JobHistory
Returns history disable status.
isFile(Path) - Method in class org.apache.hadoop.fs.FileSystem
True iff the named path is a regular file.
isFile() - Method in class org.apache.hadoop.fs.s3.INode
 
isFile(Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
isHealthy() - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
DFS is considered healthy if there are no missing blocks.
isIdle() - Method in class org.apache.hadoop.mapred.TaskTracker
Is this task tracker idle?
isIncrement() - Method in class org.apache.hadoop.metrics.spi.MetricValue
 
isLocalHadoop() - Method in class org.apache.hadoop.streaming.StreamJob
 
isLocalJobTracker(JobConf) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
isMapTask() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
 
isMonitoring() - Method in interface org.apache.hadoop.metrics.MetricsContext
Returns true if monitoring is currently in progress.
isMonitoring() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Returns true if monitoring is currently in progress.
isNativeCodeLoaded() - Static method in class org.apache.hadoop.util.NativeCodeLoader
Check if native-hadoop code is loaded for this platform.
isNativeLzoLoaded() - Static method in class org.apache.hadoop.io.compress.lzo.LzoCompressor
Check if lzo compressors are loaded and initialized.
isNativeLzoLoaded() - Static method in class org.apache.hadoop.io.compress.lzo.LzoDecompressor
Check if lzo decompressors are loaded and initialized.
isNativeLzoLoaded() - Static method in class org.apache.hadoop.io.compress.LzoCodec
Check if native-lzo library is loaded & initialized.
isNativeZlibLoaded() - Static method in class org.apache.hadoop.io.compress.zlib.ZlibFactory
Check if native-zlib code is loaded and initialized correctly.
IsolationRunner - Class in org.apache.hadoop.mapred
 
IsolationRunner() - Constructor for class org.apache.hadoop.mapred.IsolationRunner
 
isOnSameRack(DatanodeDescriptor, DatanodeDescriptor) - Method in class org.apache.hadoop.net.NetworkTopology
Check if two data nodes are on the same rack
isPurgeLogSplits() - Method in class org.apache.hadoop.mapred.TaskLogAppender
 
isReady() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
isSplitable(FileSystem, Path) - Method in class org.apache.hadoop.mapred.FileInputFormat
Is the given filename splitable? Usually, true, but if the file is stream compressed, it will not be.
isSplitable(FileSystem, Path) - Method in class org.apache.hadoop.mapred.TextInputFormat
 
isSuccessful() - Method in interface org.apache.hadoop.mapred.RunningJob
True iff job completed successfully.

J

jar_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
JarBuilder - Class in org.apache.hadoop.streaming
This class is the main class for generating job.jar for Hadoop Streaming jobs.
JarBuilder() - Constructor for class org.apache.hadoop.streaming.JarBuilder
 
JBoolean - Class in org.apache.hadoop.record.compiler
 
JBoolean() - Constructor for class org.apache.hadoop.record.compiler.JBoolean
Creates a new instance of JBoolean
JBuffer - Class in org.apache.hadoop.record.compiler
Code generator for "buffer" type.
JBuffer() - Constructor for class org.apache.hadoop.record.compiler.JBuffer
Creates a new instance of JBuffer
JByte - Class in org.apache.hadoop.record.compiler
Code generator for "byte" type.
JByte() - Constructor for class org.apache.hadoop.record.compiler.JByte
 
jc_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
JDouble - Class in org.apache.hadoop.record.compiler
 
JDouble() - Constructor for class org.apache.hadoop.record.compiler.JDouble
Creates a new instance of JDouble
JField<T> - Class in org.apache.hadoop.record.compiler
A thin wrapper around a record field.
JField(String, T) - Constructor for class org.apache.hadoop.record.compiler.JField
Creates a new instance of JField
JFile - Class in org.apache.hadoop.record.compiler
Container for the Hadoop Record DDL.
JFile(String, ArrayList<JFile>, ArrayList<JRecord>) - Constructor for class org.apache.hadoop.record.compiler.JFile
Creates a new instance of JFile
JFloat - Class in org.apache.hadoop.record.compiler
 
JFloat() - Constructor for class org.apache.hadoop.record.compiler.JFloat
Creates a new instance of JFloat
JInt - Class in org.apache.hadoop.record.compiler
Code generator for "int" type
JInt() - Constructor for class org.apache.hadoop.record.compiler.JInt
Creates a new instance of JInt
jj_nt - Variable in class org.apache.hadoop.record.compiler.generated.Rcc
 
jjFillToken() - Method in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
jjnewLexState - Static variable in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
jjstrLiteralImages - Static variable in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
JLong - Class in org.apache.hadoop.record.compiler
Code generator for "long" type
JLong() - Constructor for class org.apache.hadoop.record.compiler.JLong
Creates a new instance of JLong
JMap - Class in org.apache.hadoop.record.compiler
 
JMap(JType, JType) - Constructor for class org.apache.hadoop.record.compiler.JMap
Creates a new instance of JMap
job - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
 
job - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
Job - Class in org.apache.hadoop.mapred.jobcontrol
This class encapsulates a MapReduce job and its dependencies.
Job(JobConf, ArrayList) - Constructor for class org.apache.hadoop.mapred.jobcontrol.Job
Construct a job.
JobBase - Class in org.apache.hadoop.contrib.utils.join
A common base implementing some statics collecting mechanisms that are commonly used in a typical map/reduce job.
JobBase() - Constructor for class org.apache.hadoop.contrib.utils.join.JobBase
 
JobClient - Class in org.apache.hadoop.mapred
JobClient interacts with the JobTracker network interface.
JobClient() - Constructor for class org.apache.hadoop.mapred.JobClient
Build a job client, connect to the default job tracker
JobClient(Configuration) - Constructor for class org.apache.hadoop.mapred.JobClient
 
JobClient(InetSocketAddress, Configuration) - Constructor for class org.apache.hadoop.mapred.JobClient
Build a job client, connect to the indicated job tracker.
JobClient.TaskStatusFilter - Enum in org.apache.hadoop.mapred
 
JobConf - Class in org.apache.hadoop.mapred
A map/reduce job configuration.
JobConf() - Constructor for class org.apache.hadoop.mapred.JobConf
Construct a map/reduce job configuration.
JobConf(Class) - Constructor for class org.apache.hadoop.mapred.JobConf
Construct a map/reduce job configuration.
JobConf(Configuration) - Constructor for class org.apache.hadoop.mapred.JobConf
Construct a map/reduce job configuration.
JobConf(Configuration, Class) - Constructor for class org.apache.hadoop.mapred.JobConf
Construct a map/reduce job configuration.
JobConf(String) - Constructor for class org.apache.hadoop.mapred.JobConf
Construct a map/reduce configuration.
JobConf(Path) - Constructor for class org.apache.hadoop.mapred.JobConf
Construct a map/reduce configuration.
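A sketch of building and submitting a job with these JobConf constructors. The mapper, reducer and paths are placeholders, and the setter methods plus JobClient.runJob are assumed companions of the getters listed elsewhere in this index for this API generation.

    import java.io.IOException;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.TextInputFormat;
    import org.apache.hadoop.mapred.TextOutputFormat;
    import org.apache.hadoop.mapred.lib.IdentityMapper;
    import org.apache.hadoop.mapred.lib.IdentityReducer;

    public class JobConfExample {
      public static void main(String[] args) throws IOException {
        JobConf conf = new JobConf(JobConfExample.class);
        conf.setJobName("identity-copy");
        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);
        conf.setMapperClass(IdentityMapper.class);     // pass lines through unchanged
        conf.setReducerClass(IdentityReducer.class);
        conf.setOutputKeyClass(LongWritable.class);    // TextInputFormat keys are byte offsets
        conf.setOutputValueClass(Text.class);
        conf.setInputPath(new Path("/input"));         // placeholder paths
        conf.setOutputPath(new Path("/output"));
        JobClient.runJob(conf);                        // blocks until the job completes
      }
    }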
jobConf_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
JobConfigurable - Interface in org.apache.hadoop.mapred
Something that may be configured with a JobConf.
JobControl - Class in org.apache.hadoop.mapred.jobcontrol
This class encapsulates a set of MapReduce jobs and their dependencies.
JobControl(String) - Constructor for class org.apache.hadoop.mapred.jobcontrol.JobControl
Construct a job control for a group of jobs.
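A sketch of chaining two dependent jobs with jobcontrol. The addDependingJob, addJob, allFinished and stop calls, and JobControl implementing Runnable, are assumptions based on the jobcontrol classes listed here; the two JobConf objects are placeholders that would be configured as in any normal job.

    import java.util.ArrayList;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.jobcontrol.Job;
    import org.apache.hadoop.mapred.jobcontrol.JobControl;

    public class JobChain {
      public static void main(String[] args) throws Exception {
        JobConf firstConf = new JobConf();    // configure mapper/reducer/paths as usual
        JobConf secondConf = new JobConf();

        Job first = new Job(firstConf, new ArrayList());
        Job second = new Job(secondConf, new ArrayList());
        second.addDependingJob(first);        // second runs only after first succeeds

        JobControl control = new JobControl("example-group");
        control.addJob(first);
        control.addJob(second);

        Thread runner = new Thread(control);  // assumes JobControl implements Runnable
        runner.start();
        while (!control.allFinished()) {
          Thread.sleep(1000);
        }
        control.stop();
      }
    }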
JobEndNotifier - Class in org.apache.hadoop.mapred
 
JobEndNotifier() - Constructor for class org.apache.hadoop.mapred.JobEndNotifier
 
JobHistory - Class in org.apache.hadoop.mapred
Provides methods for writing to and reading from job history.
JobHistory() - Constructor for class org.apache.hadoop.mapred.JobHistory
 
JobHistory.HistoryCleaner - Class in org.apache.hadoop.mapred
Delete history files older than one month.
JobHistory.HistoryCleaner() - Constructor for class org.apache.hadoop.mapred.JobHistory.HistoryCleaner
 
JobHistory.JobInfo - Class in org.apache.hadoop.mapred
Helper class for logging or reading back events related to job start, finish or failure.
JobHistory.JobInfo(String) - Constructor for class org.apache.hadoop.mapred.JobHistory.JobInfo
Create new JobInfo
JobHistory.Keys - Enum in org.apache.hadoop.mapred
Job history files contain key="value" pairs, where keys belong to this enum.
JobHistory.Listener - Interface in org.apache.hadoop.mapred
Callback interface for reading back log events from JobHistory.
JobHistory.MapAttempt - Class in org.apache.hadoop.mapred
Helper class for logging or reading back events related to start, finish or failure of a Map Attempt on a node.
JobHistory.MapAttempt() - Constructor for class org.apache.hadoop.mapred.JobHistory.MapAttempt
 
JobHistory.RecordTypes - Enum in org.apache.hadoop.mapred
Record types are identifiers for each line of log in history files.
JobHistory.ReduceAttempt - Class in org.apache.hadoop.mapred
Helper class for logging or reading back events related to start, finish or failure of a Reduce Attempt on a node.
JobHistory.ReduceAttempt() - Constructor for class org.apache.hadoop.mapred.JobHistory.ReduceAttempt
 
JobHistory.Task - Class in org.apache.hadoop.mapred
Helper class for logging or reading back events related to Task's start, finish or failure.
JobHistory.Task() - Constructor for class org.apache.hadoop.mapred.JobHistory.Task
 
JobHistory.TaskAttempt - Class in org.apache.hadoop.mapred
Base class for Map and Reduce TaskAttempts.
JobHistory.TaskAttempt() - Constructor for class org.apache.hadoop.mapred.JobHistory.TaskAttempt
 
JobHistory.Values - Enum in org.apache.hadoop.mapred
This enum contains some of the values commonly used by history log events.
jobId_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
jobInfo() - Method in class org.apache.hadoop.streaming.StreamJob
 
JobProfile - Class in org.apache.hadoop.mapred
A JobProfile is a MapReduce primitive.
JobProfile() - Constructor for class org.apache.hadoop.mapred.JobProfile
Construct an empty JobProfile.
JobProfile(String, String, String, String, String) - Constructor for class org.apache.hadoop.mapred.JobProfile
Construct a JobProfile from the userid, jobid, job config-file, job-details url and job name.
JobStatus - Class in org.apache.hadoop.mapred
Describes the current status of a job.
JobStatus() - Constructor for class org.apache.hadoop.mapred.JobStatus
 
JobStatus(String, float, float, int) - Constructor for class org.apache.hadoop.mapred.JobStatus
Create a job status object for a given jobid.
jobsToComplete() - Method in class org.apache.hadoop.mapred.JobClient
 
jobsToComplete() - Method in interface org.apache.hadoop.mapred.JobSubmissionProtocol
Get the jobs that have neither completed nor failed.
jobsToComplete() - Method in class org.apache.hadoop.mapred.JobTracker
 
JobSubmissionProtocol - Interface in org.apache.hadoop.mapred
Protocol that a JobClient and the central JobTracker use to communicate.
JobTracker - Class in org.apache.hadoop.mapred
JobTracker is the central location for submitting and tracking MR jobs in a network environment.
JOBTRACKER_START_TIME - Static variable in class org.apache.hadoop.mapred.JobHistory
 
join() - Method in class org.apache.hadoop.dfs.NameNode
Wait for service to finish.
join() - Method in class org.apache.hadoop.ipc.Server
Wait for the server to be stopped.
JRecord - Class in org.apache.hadoop.record.compiler
 
JRecord(String, ArrayList<JField<JType>>) - Constructor for class org.apache.hadoop.record.compiler.JRecord
Creates a new instance of JRecord
JspHelper - Class in org.apache.hadoop.dfs
 
JspHelper() - Constructor for class org.apache.hadoop.dfs.JspHelper
 
JString - Class in org.apache.hadoop.record.compiler
 
JString() - Constructor for class org.apache.hadoop.record.compiler.JString
Creates a new instance of JString
JType - Class in org.apache.hadoop.record.compiler
Abstract Base class for all types supported by Hadoop Record I/O.
JType() - Constructor for class org.apache.hadoop.record.compiler.JType
 
JVector - Class in org.apache.hadoop.record.compiler
 
JVector(JType) - Constructor for class org.apache.hadoop.record.compiler.JVector
Creates a new instance of JVector

K

key() - Method in class org.apache.hadoop.io.ArrayFile.Reader
Returns the key associated with the most recent call to ArrayFile.Reader.seek(long), ArrayFile.Reader.next(Writable), or ArrayFile.Reader.get(long,Writable).
KeyFieldBasedPartitioner - Class in org.apache.hadoop.mapred.lib
 
KeyFieldBasedPartitioner() - Constructor for class org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner
 
KeyValueLineRecordReader - Class in org.apache.hadoop.mapred
This class treats a line in the input as a key/value pair separated by a separator character.
KeyValueLineRecordReader(Configuration, FileSplit) - Constructor for class org.apache.hadoop.mapred.KeyValueLineRecordReader
 
KeyValueTextInputFormat - Class in org.apache.hadoop.mapred
An InputFormat for plain text files.
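As a rough illustration, a job might select this format like so; the separator property name used here is an assumption, not confirmed by this index:
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.KeyValueTextInputFormat;

    public class KeyValueInputExample {
      public static JobConf configure(JobConf conf) {
        conf.setInputFormat(KeyValueTextInputFormat.class);   // each input line becomes one key/value record
        conf.setOutputKeyClass(Text.class);                   // keys and values are Text
        conf.setOutputValueClass(Text.class);
        conf.set("key.value.separator.in.input.line", ",");   // assumed property name for the separator byte
        return conf;
      }
    }
Lines without the separator would typically yield the whole line as the key and an empty value.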
KeyValueTextInputFormat() - Constructor for class org.apache.hadoop.mapred.KeyValueTextInputFormat
 
killJob(String) - Method in interface org.apache.hadoop.mapred.JobSubmissionProtocol
Kill the indicated job
killJob(String) - Method in class org.apache.hadoop.mapred.JobTracker
 
killJob() - Method in interface org.apache.hadoop.mapred.RunningJob
Kill the running job.
kind - Variable in class org.apache.hadoop.record.compiler.generated.Token
An integer that describes the kind of this token.

L

largestNumOfValues - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
lastUpdate - Variable in class org.apache.hadoop.dfs.DatanodeInfo
 
LAYOUT_VERSION - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
LBRACE_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
LEASE_HARDLIMIT_PERIOD - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
LEASE_SOFTLIMIT_PERIOD - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
LeaseExpiredException - Exception in org.apache.hadoop.dfs
The lease that was being used to create this file has expired.
LeaseExpiredException(String) - Constructor for exception org.apache.hadoop.dfs.LeaseExpiredException
 
lessThan(Object, Object) - Method in class org.apache.hadoop.util.PriorityQueue
Determines the ordering of objects in this priority queue.
level - Variable in class org.apache.hadoop.net.NodeBase
 
LexicalError(boolean, int, int, int, String, char) - Static method in error org.apache.hadoop.record.compiler.generated.TokenMgrError
Returns a detailed message for the Error when it is thrown by the token manager to indicate a lexical error.
lexStateNames - Static variable in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
limitDecimal(double, int) - Static method in class org.apache.hadoop.fs.FsShell
 
line - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
LineRecordReader - Class in org.apache.hadoop.mapred
Treats keys as offsets in the file and values as lines.
LineRecordReader(Configuration, FileSplit) - Constructor for class org.apache.hadoop.mapred.LineRecordReader
 
LineRecordReader(InputStream, long, long) - Constructor for class org.apache.hadoop.mapred.LineRecordReader
 
LINK_URI - Static variable in class org.apache.hadoop.streaming.StreamJob
 
listDeepSubPaths(Path) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
listJobConfProperties() - Method in class org.apache.hadoop.streaming.StreamJob
Prints out the jobconf properties on stdout when verbose is specified.
listPaths(Path[]) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Filter raw files in the given paths using the default checksum filter.
listPaths(Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Filter raw files in the given path using the default checksum filter.
listPaths(Path) - Method in class org.apache.hadoop.fs.FileSystem
List files in a directory.
listPaths(Path[]) - Method in class org.apache.hadoop.fs.FileSystem
Filter files in the given paths using the default checksum filter.
listPaths(Path, PathFilter) - Method in class org.apache.hadoop.fs.FileSystem
Filter files in a directory.
listPaths(Path[], PathFilter) - Method in class org.apache.hadoop.fs.FileSystem
Filter files in a list of directories using a user-supplied path filter.
listPaths(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
List files in a directory.
listPaths(Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
listPaths(Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
listPaths(JobConf) - Method in class org.apache.hadoop.mapred.FileInputFormat
List input directories.
listPaths(JobConf) - Method in class org.apache.hadoop.mapred.SequenceFileInputFormat
 
listSubPaths(Path) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
ljustify(String, int) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
LocalDirAllocator - Class in org.apache.hadoop.fs
An implementation of a round-robin scheme for disk allocation for creating files.
LocalDirAllocator(String) - Constructor for class org.apache.hadoop.fs.LocalDirAllocator
Create an allocator object
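A minimal sketch of how such an allocator might be used; the configuration key and the getLocalPathForWrite method are assumptions for illustration:
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.LocalDirAllocator;
    import org.apache.hadoop.fs.Path;

    public class LocalDirExample {
      public static Path allocate(Configuration conf) throws java.io.IOException {
        // Round-robins across the directories named by the given configuration
        // key ("mapred.local.dir" is an assumed example).
        LocalDirAllocator alloc = new LocalDirAllocator("mapred.local.dir");
        // getLocalPathForWrite is assumed here; it picks a directory with free
        // space and returns a path rooted under it.
        return alloc.getLocalPathForWrite("scratch/part-0", conf);
      }
    }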
LocalFileSystem - Class in org.apache.hadoop.fs
Implement the FileSystem API for the checksummed local filesystem.
LocalFileSystem() - Constructor for class org.apache.hadoop.fs.LocalFileSystem
 
LocalFileSystem(FileSystem) - Constructor for class org.apache.hadoop.fs.LocalFileSystem
 
localHadoop_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
localizeBin(String) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
localRunnerNotification(JobConf, JobStatus) - Static method in class org.apache.hadoop.mapred.JobEndNotifier
 
location - Variable in class org.apache.hadoop.net.NodeBase
 
lock(Path, boolean) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
 
lock(Path, boolean) - Method in class org.apache.hadoop.fs.FileSystem
Deprecated. FS does not support file locks anymore.
lock(Path, boolean) - Method in class org.apache.hadoop.fs.FilterFileSystem
Deprecated. FS does not support file locks anymore.
lock(Path, boolean) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
Deprecated.  
lock(Path, boolean) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
Deprecated.  
lock(Path, boolean) - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated.  
LOG - Static variable in class org.apache.hadoop.contrib.utils.join.JobBase
 
LOG - Static variable in class org.apache.hadoop.dfs.DataNode
 
LOG - Static variable in class org.apache.hadoop.dfs.NameNode
 
LOG - Static variable in class org.apache.hadoop.dfs.NamenodeFsck
 
LOG - Static variable in class org.apache.hadoop.dfs.SecondaryNameNode
 
LOG - Static variable in class org.apache.hadoop.fs.FileSystem
 
LOG - Static variable in class org.apache.hadoop.io.compress.CompressionCodecFactory
 
LOG - Static variable in class org.apache.hadoop.io.SequenceFile
 
LOG - Static variable in class org.apache.hadoop.ipc.Client
 
LOG - Static variable in class org.apache.hadoop.ipc.Server
 
log(Log) - Method in class org.apache.hadoop.mapred.Counters
Logs the current counter values.
LOG - Static variable in class org.apache.hadoop.mapred.FileInputFormat
 
LOG - Static variable in class org.apache.hadoop.mapred.JobHistory
 
LOG - Static variable in class org.apache.hadoop.mapred.JobTracker
 
LOG - Static variable in class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
 
LOG - Static variable in class org.apache.hadoop.mapred.TaskTracker
 
LOG - Static variable in class org.apache.hadoop.net.NetworkTopology
 
LOG - Static variable in class org.apache.hadoop.streaming.PipeMapRed
 
LOG - Static variable in class org.apache.hadoop.streaming.StreamBaseRecordReader
 
LOG - Static variable in class org.apache.hadoop.streaming.StreamJob
 
Logalyzer - Class in org.apache.hadoop.tools
Logalyzer: A utility tool for archiving and analyzing hadoop logs.
Logalyzer() - Constructor for class org.apache.hadoop.tools.Logalyzer
 
Logalyzer.LogComparator - Class in org.apache.hadoop.tools
A WritableComparator optimized for UTF8 keys of the logs.
Logalyzer.LogComparator() - Constructor for class org.apache.hadoop.tools.Logalyzer.LogComparator
 
Logalyzer.LogRegexMapper - Class in org.apache.hadoop.tools
A Mapper that extracts text matching a regular expression.
Logalyzer.LogRegexMapper() - Constructor for class org.apache.hadoop.tools.Logalyzer.LogRegexMapper
 
logFailed(String, long, int, int) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Logs job failed event.
logFailed(String, String, String, long, String, String) - Static method in class org.apache.hadoop.mapred.JobHistory.MapAttempt
Log task attempt failed event.
logFailed(String, String, String, long, String, String) - Static method in class org.apache.hadoop.mapred.JobHistory.ReduceAttempt
Log failed reduce task attempt.
logFailed(String, String, String, long, String) - Static method in class org.apache.hadoop.mapred.JobHistory.Task
Log job failed event.
logFinished(String, long, int, int, int, int) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Log job finished.
logFinished(String, String, String, long, String) - Static method in class org.apache.hadoop.mapred.JobHistory.MapAttempt
Log finish time of map task attempt.
logFinished(String, String, String, long, long, long, String) - Static method in class org.apache.hadoop.mapred.JobHistory.ReduceAttempt
Log finished event of this task.
logFinished(String, String, String, long) - Static method in class org.apache.hadoop.mapred.JobHistory.Task
Log finish time of task.
logSpec() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJobBase
 
logStarted(String, long, int, int) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Logs launch time of job.
logStarted(String, String, String, long, String) - Static method in class org.apache.hadoop.mapred.JobHistory.MapAttempt
Log start time of this map task attempt.
logStarted(String, String, String, long, String) - Static method in class org.apache.hadoop.mapred.JobHistory.ReduceAttempt
Log start time of Reduce task attempt.
logStarted(String, String, String, long) - Static method in class org.apache.hadoop.mapred.JobHistory.Task
Log start time of task (TIP).
logSubmitted(String, String, String, long, String) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Log job submitted event to history.
logThreadInfo(Log, String, long) - Static method in class org.apache.hadoop.util.ReflectionUtils
Log the current thread stacks at INFO level.
LONG_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
LONG_VALUE_MAX - Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
LONG_VALUE_MIN - Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
LONG_VALUE_SUM - Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
LongSumReducer - Class in org.apache.hadoop.mapred.lib
A Reducer that sums long values.
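A hedged sketch of wiring this reducer into a job, assuming Text keys and LongWritable counts:
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.lib.LongSumReducer;

    public class SumJobExample {
      public static void configure(JobConf conf) {
        // Values must be LongWritable for the per-key sums to make sense.
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(LongWritable.class);
        conf.setCombinerClass(LongSumReducer.class);  // partial sums on the map side
        conf.setReducerClass(LongSumReducer.class);   // final per-key sums
      }
    }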
LongSumReducer() - Constructor for class org.apache.hadoop.mapred.lib.LongSumReducer
 
LongValueMax - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements a value aggregator that maintains the maximum of a sequence of long values.
LongValueMax() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
the default constructor
LongValueMin - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements a value aggregator that maintains the minimum of a sequence of long values.
LongValueMin() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
the default constructor
LongValueSum - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements a value aggregator that sums up a sequence of long values.
LongValueSum() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
the default constructor
LongWritable - Class in org.apache.hadoop.io
A WritableComparable for longs.
LongWritable() - Constructor for class org.apache.hadoop.io.LongWritable
 
LongWritable(long) - Constructor for class org.apache.hadoop.io.LongWritable
 
LongWritable.Comparator - Class in org.apache.hadoop.io
A Comparator optimized for LongWritable.
LongWritable.Comparator() - Constructor for class org.apache.hadoop.io.LongWritable.Comparator
 
LongWritable.DecreasingComparator - Class in org.apache.hadoop.io
A decreasing Comparator optimized for LongWritable.
LongWritable.DecreasingComparator() - Constructor for class org.apache.hadoop.io.LongWritable.DecreasingComparator
 
ls(String, boolean) - Method in class org.apache.hadoop.fs.FsShell
Get a listing of all files that match the file pattern srcf.
LT_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
LzoCodec - Class in org.apache.hadoop.io.compress
A CompressionCodec for a streaming lzo compression/decompression pair.
LzoCodec() - Constructor for class org.apache.hadoop.io.compress.LzoCodec
 
LzoCompressor - Class in org.apache.hadoop.io.compress.lzo
A Compressor based on the lzo algorithm.
LzoCompressor(LzoCompressor.CompressionStrategy, int) - Constructor for class org.apache.hadoop.io.compress.lzo.LzoCompressor
Creates a new compressor using the specified LzoCompressor.CompressionStrategy.
LzoCompressor() - Constructor for class org.apache.hadoop.io.compress.lzo.LzoCompressor
Creates a new compressor with the default lzo1x_1 compression.
LzoCompressor.CompressionStrategy - Enum in org.apache.hadoop.io.compress.lzo
The compression algorithm for lzo library.
LzoDecompressor - Class in org.apache.hadoop.io.compress.lzo
A Decompressor based on the lzo algorithm.
LzoDecompressor(LzoDecompressor.CompressionStrategy, int) - Constructor for class org.apache.hadoop.io.compress.lzo.LzoDecompressor
Creates a new lzo decompressor.
LzoDecompressor() - Constructor for class org.apache.hadoop.io.compress.lzo.LzoDecompressor
Creates a new lzo decompressor.
LzoDecompressor.CompressionStrategy - Enum in org.apache.hadoop.io.compress.lzo
 

M

main(String[]) - Static method in class org.apache.hadoop.conf.Configuration
For debugging.
main(String[]) - Static method in class org.apache.hadoop.contrib.utils.join.DataJoinJob
 
main(String[]) - Static method in class org.apache.hadoop.dfs.DataNode
 
main(String[]) - Static method in class org.apache.hadoop.dfs.DFSAdmin
main() has some simple utility methods.
main(String[]) - Static method in class org.apache.hadoop.dfs.DFSck
 
main(String[]) - Static method in class org.apache.hadoop.dfs.NameNode
 
main(String[]) - Static method in class org.apache.hadoop.dfs.SecondaryNameNode
main() has some simple utility methods.
main(String[]) - Static method in class org.apache.hadoop.examples.ExampleDriver
 
main(String[]) - Static method in class org.apache.hadoop.examples.Grep
 
main(String[]) - Static method in class org.apache.hadoop.examples.PiEstimator
Launches all the tasks in order.
main(String[]) - Static method in class org.apache.hadoop.examples.RandomWriter
This is the main routine for launching a distributed random write job.
main(String[]) - Static method in class org.apache.hadoop.examples.Sort
The main driver for sort program.
main(String[]) - Static method in class org.apache.hadoop.examples.WordCount
The main driver for word count map/reduce program.
main(String[]) - Static method in class org.apache.hadoop.fs.DF
 
main(String[]) - Static method in class org.apache.hadoop.fs.FsShell
main() has some simple utility methods
main(String[]) - Static method in class org.apache.hadoop.fs.s3.MigrationTool
 
main(String[]) - Static method in class org.apache.hadoop.fs.Trash
Run an emptier.
main(String[]) - Static method in class org.apache.hadoop.io.compress.CompressionCodecFactory
A little test program.
main(String[]) - Static method in class org.apache.hadoop.io.MapFile
 
main(String[]) - Static method in class org.apache.hadoop.mapred.IsolationRunner
Run a single task
main(String[]) - Static method in class org.apache.hadoop.mapred.JobClient
 
main(String[]) - Static method in class org.apache.hadoop.mapred.jobcontrol.Job
 
main(String[]) - Static method in class org.apache.hadoop.mapred.JobTracker
Start the JobTracker process.
main(String[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
Create and run an Abacus-based map/reduce job.
main(String[]) - Static method in class org.apache.hadoop.mapred.TaskTracker.Child
 
main(String[]) - Static method in class org.apache.hadoop.mapred.TaskTracker
Start the TaskTracker, pointing it at the indicated JobTracker.
main(String[]) - Static method in class org.apache.hadoop.record.compiler.generated.Rcc
 
main(String[]) - Static method in class org.apache.hadoop.streaming.HadoopStreaming
 
main(String[]) - Static method in class org.apache.hadoop.streaming.JarBuilder
Test program
main(String[]) - Static method in class org.apache.hadoop.streaming.PathFinder
 
main(String[]) - Static method in class org.apache.hadoop.tools.Logalyzer
 
main(String[]) - Static method in class org.apache.hadoop.util.CopyFiles
 
main(String[]) - Static method in class org.apache.hadoop.util.PlatformName
 
main(String[]) - Static method in class org.apache.hadoop.util.PrintJarMainClass
 
main(String[]) - Static method in class org.apache.hadoop.util.RunJar
Run a Hadoop job jar.
main(String[]) - Static method in class org.apache.hadoop.util.VersionInfo
 
makeJavaCommand(Class, String[]) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
makeQualified(Path) - Method in class org.apache.hadoop.fs.FileSystem
Make sure that a path specifies a FileSystem.
makeQualified(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Make sure that a path specifies a FileSystem.
makeRelative(Path, Path) - Static method in class org.apache.hadoop.util.CopyFiles.CopyFilesMapper
Make a path relative with respect to a root path.
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
 
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.examples.PiEstimator.PiMapper
Map method.
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.examples.WordCount.MapClass
 
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorCombiner
Do nothing.
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorMapper
the map function.
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorReducer
Do nothing.
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
The identity function.
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.IdentityMapper
The identity function.
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.InverseMapper
The inverse function.
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.RegexMapper
 
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.TokenCountMapper
 
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in interface org.apache.hadoop.mapred.Mapper
Maps a single input key/value pair into intermediate key/value pairs.
Map() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.streaming.PipeMapper
 
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.tools.Logalyzer.LogRegexMapper
 
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.util.CopyFiles.FSCopyFilesMapper
Map method.
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.util.CopyFiles.HTTPCopyFilesMapper
 
MAP_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
mapCmd_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
MapFile - Class in org.apache.hadoop.io
A file-based map from keys to values.
MapFile() - Constructor for class org.apache.hadoop.io.MapFile
 
MapFile.Reader - Class in org.apache.hadoop.io
Provide access to an existing map.
MapFile.Reader(FileSystem, String, Configuration) - Constructor for class org.apache.hadoop.io.MapFile.Reader
Construct a map reader for the named map.
MapFile.Reader(FileSystem, String, WritableComparator, Configuration) - Constructor for class org.apache.hadoop.io.MapFile.Reader
Construct a map reader for the named map using the named comparator.
MapFile.Writer - Class in org.apache.hadoop.io
Writes a new map.
MapFile.Writer(Configuration, FileSystem, String, Class, Class) - Constructor for class org.apache.hadoop.io.MapFile.Writer
Create the named map for keys of the named class.
MapFile.Writer(Configuration, FileSystem, String, Class, Class, SequenceFile.CompressionType, Progressable) - Constructor for class org.apache.hadoop.io.MapFile.Writer
Create the named map for keys of the named class.
MapFile.Writer(Configuration, FileSystem, String, Class, Class, SequenceFile.CompressionType) - Constructor for class org.apache.hadoop.io.MapFile.Writer
Create the named map for keys of the named class.
MapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class) - Constructor for class org.apache.hadoop.io.MapFile.Writer
Create the named map using the named key comparator.
MapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class, SequenceFile.CompressionType) - Constructor for class org.apache.hadoop.io.MapFile.Writer
Create the named map using the named key comparator.
MapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class, SequenceFile.CompressionType, Progressable) - Constructor for class org.apache.hadoop.io.MapFile.Writer
Create the named map using the named key comparator.
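A rough write-then-lookup sketch using the constructors listed above; the append, get and close calls are the usual MapFile API and the path is hypothetical:
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.io.MapFile;
    import org.apache.hadoop.io.Text;

    public class MapFileExample {
      public static void run(Configuration conf) throws java.io.IOException {
        FileSystem fs = FileSystem.get(conf);
        String dir = "/tmp/demo.map";                         // hypothetical location

        // Keys must be appended in sorted order.
        MapFile.Writer writer = new MapFile.Writer(conf, fs, dir, Text.class, Text.class);
        writer.append(new Text("alpha"), new Text("1"));
        writer.append(new Text("beta"), new Text("2"));
        writer.close();

        MapFile.Reader reader = new MapFile.Reader(fs, dir, conf);
        Text value = new Text();
        reader.get(new Text("beta"), value);                  // looks up the entry for "beta"
        reader.close();
      }
    }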
MapFileOutputFormat - Class in org.apache.hadoop.mapred
An OutputFormat that writes MapFiles.
MapFileOutputFormat() - Constructor for class org.apache.hadoop.mapred.MapFileOutputFormat
 
mapOutputFieldSeparator - Variable in class org.apache.hadoop.streaming.PipeMapRed
 
mapOutputLost(String, String) - Method in class org.apache.hadoop.mapred.TaskTracker
A completed map task's output has been lost.
Mapper - Interface in org.apache.hadoop.mapred
Maps input key/value pairs to a set of intermediate key/value pairs.
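A minimal sketch of a Mapper under the (non-generic) signature listed here, extending MapReduceBase so configure and close need not be implemented; the tokenizing logic is only an example:
    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.Writable;
    import org.apache.hadoop.io.WritableComparable;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    // Emits each whitespace-delimited token of the input line with a count of one.
    public class TokenMapper extends MapReduceBase implements Mapper {
      private static final LongWritable ONE = new LongWritable(1);

      public void map(WritableComparable key, Writable value,
                      OutputCollector output, Reporter reporter) throws IOException {
        for (String token : value.toString().split("\\s+")) {
          if (token.length() > 0) {
            output.collect(new Text(token), ONE);
          }
        }
      }
    }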
mapProgress() - Method in class org.apache.hadoop.mapred.JobStatus
 
mapProgress() - Method in interface org.apache.hadoop.mapred.RunningJob
Returns a float between 0.0 and 1.0, indicating progress on the map portion of the job.
mapRedFinished() - Method in class org.apache.hadoop.streaming.PipeMapRed
 
MapReduceBase - Class in org.apache.hadoop.mapred
Base class for Mapper and Reducer implementations.
MapReduceBase() - Constructor for class org.apache.hadoop.mapred.MapReduceBase
 
MapRunnable - Interface in org.apache.hadoop.mapred
Expert: Permits greater control of map processing.
MapRunner - Class in org.apache.hadoop.mapred
Default MapRunnable implementation.
MapRunner() - Constructor for class org.apache.hadoop.mapred.MapRunner
 
MASTER_INDEX_LOG_FILE - Static variable in class org.apache.hadoop.mapred.JobHistory
 
MAX_PATH_DEPTH - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
MAX_PATH_LENGTH - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
maxNextCharInd - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
mayExit_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
MD5_LEN - Static variable in class org.apache.hadoop.io.MD5Hash
 
MD5_LEN - Static variable in class org.apache.hadoop.mapred.SequenceFileInputFilter.MD5Filter
 
MD5Hash - Class in org.apache.hadoop.io
A Writable for MD5 hash values.
MD5Hash() - Constructor for class org.apache.hadoop.io.MD5Hash
Constructs an MD5Hash.
MD5Hash(String) - Constructor for class org.apache.hadoop.io.MD5Hash
Constructs an MD5Hash from a hex string.
MD5Hash(byte[]) - Constructor for class org.apache.hadoop.io.MD5Hash
Constructs an MD5Hash with a specified value.
MD5Hash.Comparator - Class in org.apache.hadoop.io
A WritableComparator optimized for MD5Hash keys.
MD5Hash.Comparator() - Constructor for class org.apache.hadoop.io.MD5Hash.Comparator
 
merge(List<SequenceFile.Sorter.SegmentDescriptor>, Path) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Merges the list of segments of type SegmentDescriptor
merge(Path[], boolean, Path) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Merges the contents of files passed in Path[] using a max factor value that is already set
merge(Path[], boolean, int, Path) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Merges the contents of files passed in Path[]
merge(Path[], Path, boolean) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Merges the contents of files passed in Path[]
merge(Path[], Path) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Merge the provided files.
merge(List, List, String) - Method in class org.apache.hadoop.streaming.JarBuilder
 
MergeSort - Class in org.apache.hadoop.util
An implementation of the core algorithm of MergeSort.
MergeSort(Comparator<IntWritable>) - Constructor for class org.apache.hadoop.util.MergeSort
 
mergeSort(int[], int[], int, int) - Method in class org.apache.hadoop.util.MergeSort
 
metaSave(String[], int) - Method in class org.apache.hadoop.dfs.DFSAdmin
Dumps DFS data structures into the specified file.
metaSave(String) - Method in class org.apache.hadoop.dfs.DistributedFileSystem
 
metaSave(String) - Method in class org.apache.hadoop.dfs.NameNode
Dumps namenode state into the specified file.
MetricsContext - Interface in org.apache.hadoop.metrics
The main interface to the metrics package.
MetricsException - Exception in org.apache.hadoop.metrics
General-purpose, unchecked metrics exception.
MetricsException() - Constructor for exception org.apache.hadoop.metrics.MetricsException
Creates a new instance of MetricsException
MetricsException(String) - Constructor for exception org.apache.hadoop.metrics.MetricsException
Creates a new instance of MetricsException
MetricsRecord - Interface in org.apache.hadoop.metrics
A named and optionally tagged set of records to be sent to the metrics system.
MetricsRecordImpl - Class in org.apache.hadoop.metrics.spi
An implementation of MetricsRecord.
MetricsRecordImpl(String, AbstractMetricsContext) - Constructor for class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Creates a new instance of FileRecord
MetricsUtil - Class in org.apache.hadoop.metrics
Utility class to simplify creation and reporting of hadoop metrics.
MetricValue - Class in org.apache.hadoop.metrics.spi
A Number that is either an absolute or an incremental amount.
MetricValue(Number, boolean) - Constructor for class org.apache.hadoop.metrics.spi.MetricValue
Creates a new instance of MetricValue
midKey() - Method in class org.apache.hadoop.io.MapFile.Reader
Get the key at approximately the middle of the file.
MigrationTool - Class in org.apache.hadoop.fs.s3
This class is a tool for migrating data from an older to a newer version of an S3 filesystem.
MigrationTool() - Constructor for class org.apache.hadoop.fs.s3.MigrationTool
 
MIN_BLOCKS_FOR_WRITE - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
minRecWrittenToEnableSkip_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
mkdir(String) - Method in class org.apache.hadoop.fs.FsShell
Create the given dir
mkdirs(String) - Method in class org.apache.hadoop.dfs.NameNode
 
mkdirs(Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
 
mkdirs(Path) - Method in class org.apache.hadoop.fs.FileSystem
Make the given file and all non-existent parents into directories.
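A short sketch combining a few of the FileSystem calls indexed here (mkdirs, listPaths, open); the path is hypothetical:
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class FsExample {
      public static void run(Configuration conf) throws java.io.IOException {
        FileSystem fs = FileSystem.get(conf);      // default filesystem from the configuration
        Path dir = new Path("/tmp/fs-demo");       // hypothetical path
        fs.mkdirs(dir);                            // creates missing parent directories as well
        Path[] children = fs.listPaths(dir);       // directory listing in this era's API
        if (children != null && children.length > 0) {
          FSDataInputStream in = fs.open(children[0]);
          in.close();
        }
      }
    }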
mkdirs(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Make the given file and all non-existent parents into directories.
mkdirs(Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
Creates the specified directory hierarchy.
mkdirs(Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
Module() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
MODULE_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
ModuleName() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
moveFromLocalFile(Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
The src file is on the local disk.
moveFromLocalFile(Path, Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
moveToLocalFile(Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
The src file is under FS, and the dst is on the local disk.
moveToTrash(Path) - Method in class org.apache.hadoop.fs.Trash
Move a file or directory to the current trash directory.
msg(String) - Method in class org.apache.hadoop.streaming.StreamJob
 
MultithreadedMapRunner - Class in org.apache.hadoop.mapred.lib
Multithreaded implementation of org.apache.hadoop.mapred.MapRunnable.
MultithreadedMapRunner() - Constructor for class org.apache.hadoop.mapred.lib.MultithreadedMapRunner
 

N

name - Variable in class org.apache.hadoop.dfs.DatanodeID
 
name - Variable in class org.apache.hadoop.net.NodeBase
 
NameNode - Class in org.apache.hadoop.dfs
NameNode serves as both directory namespace manager and "inode table" for the Hadoop DFS.
NameNode(Configuration) - Constructor for class org.apache.hadoop.dfs.NameNode
Start NameNode.
NameNode(String, int, Configuration) - Constructor for class org.apache.hadoop.dfs.NameNode
Create a NameNode at the specified location and start it.
NamenodeFsck - Class in org.apache.hadoop.dfs
This class provides rudimentary checking of DFS volumes for errors and sub-optimal conditions.
NamenodeFsck(Configuration, NameNode, Map<String, String[]>, HttpServletResponse) - Constructor for class org.apache.hadoop.dfs.NamenodeFsck
Filesystem checker.
NamenodeFsck.FsckResult - Class in org.apache.hadoop.dfs
FsckResult of checking, plus overall DFS statistics.
NamenodeFsck.FsckResult() - Constructor for class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
 
NativeCodeLoader - Class in org.apache.hadoop.util
A helper to load the native hadoop code, i.e. the libhadoop native library.
NativeCodeLoader() - Constructor for class org.apache.hadoop.util.NativeCodeLoader
 
needsDictionary() - Method in interface org.apache.hadoop.io.compress.Decompressor
Returns true if a preset dictionary is needed for decompression.
needsDictionary() - Method in class org.apache.hadoop.io.compress.lzo.LzoDecompressor
 
needsDictionary() - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
needsInput() - Method in interface org.apache.hadoop.io.compress.Compressor
Returns true if the input data buffer is empty and setInput() should be called to provide more input.
needsInput() - Method in interface org.apache.hadoop.io.compress.Decompressor
Returns true if the input data buffer is empty and setInput() should be called to provide more input.
needsInput() - Method in class org.apache.hadoop.io.compress.lzo.LzoCompressor
 
needsInput() - Method in class org.apache.hadoop.io.compress.lzo.LzoDecompressor
 
needsInput() - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
 
needsInput() - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
NetworkTopology - Class in org.apache.hadoop.net
The class represents a cluster of computers with a tree-structured hierarchical network topology.
NetworkTopology() - Constructor for class org.apache.hadoop.net.NetworkTopology
 
newInstance(Class, Configuration) - Static method in class org.apache.hadoop.io.WritableFactories
Create a new instance of a class with a defined factory.
newInstance(Class) - Static method in class org.apache.hadoop.io.WritableFactories
Create a new instance of a class with a defined factory.
newInstance() - Method in interface org.apache.hadoop.io.WritableFactory
Return a new instance.
newInstance(Class<?>, Configuration) - Static method in class org.apache.hadoop.util.ReflectionUtils
Create an object for the given class and initialize it from conf
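A small sketch of the usual pattern: instantiating a job's configured mapper class and letting ReflectionUtils pass the configuration to it; the cast reflects the non-generic return type of this era:
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.util.ReflectionUtils;

    public class NewInstanceExample {
      public static Mapper createMapper(JobConf job) {
        Class mapperClass = job.getMapperClass();
        // newInstance also configures the object when it implements JobConfigurable.
        return (Mapper) ReflectionUtils.newInstance(mapperClass, job);
      }
    }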
newKey() - Method in class org.apache.hadoop.io.WritableComparator
Construct a new WritableComparable instance.
newRecord(String) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Subclasses should override this if they subclass MetricsRecordImpl.
newToken(int) - Static method in class org.apache.hadoop.record.compiler.generated.Token
Returns a new Token object, by default.
next() - Method in class org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator
 
next(Writable) - Method in class org.apache.hadoop.io.ArrayFile.Reader
Read and return the next value in the file.
next(WritableComparable, Writable) - Method in class org.apache.hadoop.io.MapFile.Reader
Read the next key/value pair in the map into key and val.
next(Writable) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Read the next key in the file into key, skipping its value.
next(Writable, Writable) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Read the next key/value pair in the file into key and val.
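A hedged iteration sketch; the (FileSystem, Path, Configuration) Reader constructor and the Text/LongWritable key and value types are assumptions about the file being read:
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class SequenceFileScan {
      public static long countRecords(Configuration conf, Path file) throws java.io.IOException {
        FileSystem fs = FileSystem.get(conf);
        SequenceFile.Reader reader = new SequenceFile.Reader(fs, file, conf);
        Text key = new Text();                    // assumed key type of the file
        LongWritable value = new LongWritable();  // assumed value type of the file
        long n = 0;
        while (reader.next(key, value)) {         // returns false at end of file
          n++;
        }
        reader.close();
        return n;
      }
    }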
next(DataOutputBuffer) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Deprecated. Call SequenceFile.Reader.nextRaw(DataOutputBuffer,SequenceFile.ValueBytes).
next() - Method in interface org.apache.hadoop.io.SequenceFile.Sorter.RawKeyValueIterator
Sets up the current key and value (for getKey and getValue)
next(WritableComparable) - Method in class org.apache.hadoop.io.SetFile.Reader
Read the next key in a set into key.
next(Writable, Writable) - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
Read key/value pair in a line.
next(Writable, Writable) - Method in class org.apache.hadoop.mapred.LineRecordReader
Read a line.
next(Writable, Writable) - Method in interface org.apache.hadoop.mapred.RecordReader
Reads the next key/value pair.
next(Writable, Writable) - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
Read key/value pair in a line.
next(Writable, Writable) - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
 
next(Writable) - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
 
next - Variable in class org.apache.hadoop.record.compiler.generated.Token
A reference to the next regular (non-special) token from the input stream.
next(Writable, Writable) - Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
Read a record.
next(Writable, Writable) - Method in class org.apache.hadoop.streaming.StreamXmlRecordReader
 
nextRaw(DataOutputBuffer, SequenceFile.ValueBytes) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Read 'raw' records.
nextRawKey(DataOutputBuffer) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Read 'raw' keys.
nextRawKey() - Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
Fills up the rawKey object with the key returned by the Reader
nextRawValue(SequenceFile.ValueBytes) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Read 'raw' values.
nextRawValue(SequenceFile.ValueBytes) - Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
Fills up the passed rawValue with the value corresponding to the key read earlier
Node - Interface in org.apache.hadoop.net
The interface defines a node in a network topology.
NodeBase - Class in org.apache.hadoop.net
A base class that implements the Node interface.
NodeBase() - Constructor for class org.apache.hadoop.net.NodeBase
Default constructor
NodeBase(String) - Constructor for class org.apache.hadoop.net.NodeBase
Construct a node from its path
NodeBase(String, String) - Constructor for class org.apache.hadoop.net.NodeBase
Construct a node from its name and its location
NodeBase(String, String, Node, int) - Constructor for class org.apache.hadoop.net.NodeBase
Construct a node from its name and its location
normalize(String) - Static method in class org.apache.hadoop.net.NodeBase
Normalize a path
NotReplicatedYetException - Exception in org.apache.hadoop.dfs
The file has not finished being written to enough datanodes yet.
NotReplicatedYetException(String) - Constructor for exception org.apache.hadoop.dfs.NotReplicatedYetException
 
NULL - Static variable in interface org.apache.hadoop.mapred.Reporter
A constant of Reporter type that does nothing.
NullContext - Class in org.apache.hadoop.metrics.spi
Null metrics context: a metrics context which does nothing.
NullContext() - Constructor for class org.apache.hadoop.metrics.spi.NullContext
Creates a new instance of NullContext
NullOutputFormat - Class in org.apache.hadoop.mapred.lib
Consume all outputs and put them in /dev/null.
NullOutputFormat() - Constructor for class org.apache.hadoop.mapred.lib.NullOutputFormat
 
NullWritable - Class in org.apache.hadoop.io
Singleton Writable with no data.
NUM_OF_VALUES_FIELD - Static variable in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
numOfMapOutputKeyFields - Variable in class org.apache.hadoop.streaming.PipeMapRed
 
numOfMapOutputPartitionFields - Variable in class org.apache.hadoop.streaming.PipeMapRed
 
numOfReduceOutputKeyFields - Variable in class org.apache.hadoop.streaming.PipeMapRed
 
numOfValues - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
numReduceTasksSpec_ - Variable in class org.apache.hadoop.streaming.StreamJob
 

O

ObjectWritable - Class in org.apache.hadoop.io
A polymorphic Writable that writes an instance with its class name.
ObjectWritable() - Constructor for class org.apache.hadoop.io.ObjectWritable
 
ObjectWritable(Object) - Constructor for class org.apache.hadoop.io.ObjectWritable
 
ObjectWritable(Class, Object) - Constructor for class org.apache.hadoop.io.ObjectWritable
 
obtainLock(String, String, boolean) - Method in class org.apache.hadoop.dfs.NameNode
Deprecated.  
offerService() - Method in class org.apache.hadoop.dfs.DataNode
Main loop for the DataNode.
offerService() - Method in class org.apache.hadoop.mapred.JobTracker
Run forever
ONE - Static variable in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorDescriptor
 
OP_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_BLOCKRECEIVED - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_BLOCKREPORT - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_ABANDONBLOCK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_ABANDONBLOCK_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_ADDBLOCK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_ADDBLOCK_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_COMPLETEFILE - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_COMPLETEFILE_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_DATANODE_HINTS - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_DATANODE_HINTS_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_DATANODEREPORT - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_DATANODEREPORT_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_DELETE - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_DELETE_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_EXISTS - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_EXISTS_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_ISDIR - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_ISDIR_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_LISTING - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_LISTING_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_MKDIRS - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_MKDIRS_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_OBTAINLOCK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_OBTAINLOCK_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_OPEN - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_OPEN_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_RAWSTATS - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_RAWSTATS_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_RELEASELOCK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_RELEASELOCK_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_RENAMETO - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_RENAMETO_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_RENEW_LEASE - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_RENEW_LEASE_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_STARTFILE - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_STARTFILE_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_TRYAGAIN - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_ERROR - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_FAILURE - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_HEARTBEAT - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_INVALIDATE_BLOCKS - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_READ_BLOCK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_READ_RANGE_BLOCK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_READSKIP_BLOCK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_TRANSFERBLOCKS - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_TRANSFERDATA - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_WRITE_BLOCK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
open(String) - Method in class org.apache.hadoop.dfs.NameNode
 
open(Path, int) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Opens an FSDataInputStream at the indicated Path.
open(Path, int) - Method in class org.apache.hadoop.fs.FileSystem
Opens an FSDataInputStream at the indicated Path.
open(Path) - Method in class org.apache.hadoop.fs.FileSystem
Opens an FSDataInputStream at the indicated Path.
open(Path, int) - Method in class org.apache.hadoop.fs.FilterFileSystem
Opens an FSDataInputStream at the indicated Path.
open(Path, int) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
open(Path, int) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
OPERATION_FAILED - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
org.apache.hadoop - package org.apache.hadoop
 
org.apache.hadoop.conf - package org.apache.hadoop.conf
Configuration of system parameters.
org.apache.hadoop.contrib.utils.join - package org.apache.hadoop.contrib.utils.join
 
org.apache.hadoop.dfs - package org.apache.hadoop.dfs
A distributed implementation of FileSystem.
org.apache.hadoop.examples - package org.apache.hadoop.examples
Hadoop example code.
org.apache.hadoop.filecache - package org.apache.hadoop.filecache
 
org.apache.hadoop.fs - package org.apache.hadoop.fs
An abstract file system API.
org.apache.hadoop.fs.s3 - package org.apache.hadoop.fs.s3
A distributed implementation of FileSystem that uses Amazon S3.
org.apache.hadoop.io - package org.apache.hadoop.io
Generic i/o code for use when reading and writing data to the network, to databases, and to files.
org.apache.hadoop.io.compress - package org.apache.hadoop.io.compress
 
org.apache.hadoop.io.compress.lzo - package org.apache.hadoop.io.compress.lzo
 
org.apache.hadoop.io.compress.zlib - package org.apache.hadoop.io.compress.zlib
 
org.apache.hadoop.io.retry - package org.apache.hadoop.io.retry
A mechanism for selectively retrying methods that throw exceptions under certain circumstances.
org.apache.hadoop.ipc - package org.apache.hadoop.ipc
Tools to help define network clients and servers.
org.apache.hadoop.mapred - package org.apache.hadoop.mapred
A system for scalable, fault-tolerant, distributed computation over large data collections.
org.apache.hadoop.mapred.jobcontrol - package org.apache.hadoop.mapred.jobcontrol
Utilities for managing dependent jobs.
org.apache.hadoop.mapred.lib - package org.apache.hadoop.mapred.lib
Library of generally useful mappers, reducers, and partitioners.
org.apache.hadoop.mapred.lib.aggregate - package org.apache.hadoop.mapred.lib.aggregate
Classes for performing various counting and aggregations.
org.apache.hadoop.metrics - package org.apache.hadoop.metrics
This package defines an API for reporting performance metric information.
org.apache.hadoop.metrics.file - package org.apache.hadoop.metrics.file
Implementation of the metrics package that writes the metrics to a file.
org.apache.hadoop.metrics.ganglia - package org.apache.hadoop.metrics.ganglia
Implementation of the metrics package that sends metric data to Ganglia.
org.apache.hadoop.metrics.spi - package org.apache.hadoop.metrics.spi
The Service Provider Interface for the Metrics API.
org.apache.hadoop.net - package org.apache.hadoop.net
Network-related classes.
org.apache.hadoop.record - package org.apache.hadoop.record
Hadoop record I/O contains classes and a record description language translator for simplifying serialization and deserialization of records in a language-neutral manner.
org.apache.hadoop.record.compiler - package org.apache.hadoop.record.compiler
This package contains classes needed for code generation from the hadoop record compiler.
org.apache.hadoop.record.compiler.ant - package org.apache.hadoop.record.compiler.ant
 
org.apache.hadoop.record.compiler.generated - package org.apache.hadoop.record.compiler.generated
This package contains code generated by JavaCC from the Hadoop record syntax file rcc.jj.
org.apache.hadoop.streaming - package org.apache.hadoop.streaming
 
org.apache.hadoop.tools - package org.apache.hadoop.tools
 
org.apache.hadoop.util - package org.apache.hadoop.util
Common utilities.
out - Variable in class org.apache.hadoop.io.compress.CompressionOutputStream
The output stream to be compressed.
outerrThreadsThrowable - Variable in class org.apache.hadoop.streaming.PipeMapRed
 
output_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
OutputCollector - Interface in org.apache.hadoop.mapred
Passed to Mapper and Reducer implementations to collect output data.
OutputFormat - Interface in org.apache.hadoop.mapred
An output data format.
OutputFormatBase - Class in org.apache.hadoop.mapred
A base class for OutputFormat.
OutputFormatBase() - Constructor for class org.apache.hadoop.mapred.OutputFormatBase
 
outputFormatSpec_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
OutputRecord - Class in org.apache.hadoop.metrics.spi
Represents a record of metric data to be sent to a metrics system.
outputSingleNode_ - Variable in class org.apache.hadoop.streaming.StreamJob
 

P

packageFiles_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
packageJobJar() - Method in class org.apache.hadoop.streaming.StreamJob
 
parent - Variable in class org.apache.hadoop.net.NodeBase
 
parse(String, int) - Static method in class org.apache.hadoop.metrics.spi.Util
Parses a space and/or comma separated sequence of server specifications of the form hostname or hostname:port.
parseArgs(String[], int, Configuration) - Static method in class org.apache.hadoop.fs.FileSystem
Parse the cmd-line args, starting at i.
ParseException - Exception in org.apache.hadoop.record.compiler.generated
This exception is thrown when parse errors are encountered.
ParseException(Token, int[][], String[]) - Constructor for exception org.apache.hadoop.record.compiler.generated.ParseException
This constructor is used by the method "generateParseException" in the generated parser.
ParseException() - Constructor for exception org.apache.hadoop.record.compiler.generated.ParseException
The following constructors are for use by you for whatever purpose you can think of.
ParseException(String) - Constructor for exception org.apache.hadoop.record.compiler.generated.ParseException
 
parseHistory(File, JobHistory.Listener) - Static method in class org.apache.hadoop.mapred.JobHistory
Parses history file and invokes Listener.handle() for each line of history.
parseJobTasks(File, JobHistory.JobInfo) - Static method in class org.apache.hadoop.mapred.DefaultJobHistoryParser
Populates a JobInfo object from the job's history log file.
parseMasterIndex(File) - Static method in class org.apache.hadoop.mapred.DefaultJobHistoryParser
Parses a master index file and returns a Map of (jobTrakerId - Map (job Id - JobHistory.JobInfo)).
Partitioner - Interface in org.apache.hadoop.mapred
Partitions the key space.
partitionerSpec_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
Path - Class in org.apache.hadoop.fs
Names a file or directory in a FileSystem.
Path(String, String) - Constructor for class org.apache.hadoop.fs.Path
Resolve a child path against a parent path.
Path(Path, String) - Constructor for class org.apache.hadoop.fs.Path
Resolve a child path against a parent path.
Path(String, Path) - Constructor for class org.apache.hadoop.fs.Path
Resolve a child path against a parent path.
Path(Path, Path) - Constructor for class org.apache.hadoop.fs.Path
Resolve a child path against a parent path.
Path(String) - Constructor for class org.apache.hadoop.fs.Path
Construct a path from a String.
Path(String, String, String) - Constructor for class org.apache.hadoop.fs.Path
Construct a Path from components.
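A small sketch of child-against-parent resolution and construction from components; the paths and authority are hypothetical:
    import org.apache.hadoop.fs.Path;

    public class PathExample {
      public static void main(String[] args) {
        Path parent = new Path("/user/demo");             // absolute parent
        Path child = new Path(parent, "logs/part-0");     // resolved against the parent
        Path full = new Path("hdfs", "namenode:9000", "/user/demo");  // scheme, authority, path
        System.out.println(child);    // /user/demo/logs/part-0
        System.out.println(full);
      }
    }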
PATH_SEPARATOR - Static variable in class org.apache.hadoop.net.NodeBase
 
PATH_SEPARATOR_STR - Static variable in class org.apache.hadoop.net.NodeBase
 
PathFilter - Interface in org.apache.hadoop.fs
 
PathFinder - Class in org.apache.hadoop.streaming
Maps a relative pathname to an absolute pathname using the PATH environment variable.
PathFinder() - Constructor for class org.apache.hadoop.streaming.PathFinder
Construct a PathFinder object using the path from java.class.path
PathFinder(String) - Constructor for class org.apache.hadoop.streaming.PathFinder
Construct a PathFinder object using the path from the specified system environment variable.
pathToFile(Path) - Method in class org.apache.hadoop.fs.LocalFileSystem
Convert a path to a File.
pathToFile(Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
Convert a path to a File.
PERIOD_PROPERTY - Static variable in class org.apache.hadoop.metrics.file.FileContext
 
phase() - Method in class org.apache.hadoop.util.Progress
Returns the current sub-node executing.
PhasedFileSystem - Class in org.apache.hadoop.mapred
Deprecated. PhasedFileSystem is no longer used during speculative execution of tasks.
PhasedFileSystem(FileSystem, String, String, String) - Constructor for class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated. This constructor is used to wrap a FileSystem object into a PhasedFileSystem.
PhasedFileSystem(FileSystem, JobConf) - Constructor for class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated. This constructor is used to wrap a FileSystem object into a PhasedFileSystem.
PiEstimator - Class in org.apache.hadoop.examples
A Map-reduce program to estimate the value of Pi using a Monte Carlo method.
PiEstimator() - Constructor for class org.apache.hadoop.examples.PiEstimator
 
PiEstimator.PiMapper - Class in org.apache.hadoop.examples
Mapper class for Pi estimation.
PiEstimator.PiMapper() - Constructor for class org.apache.hadoop.examples.PiEstimator.PiMapper
 
PiEstimator.PiReducer - Class in org.apache.hadoop.examples
 
PiEstimator.PiReducer() - Constructor for class org.apache.hadoop.examples.PiEstimator.PiReducer
 
ping(String) - Method in class org.apache.hadoop.mapred.TaskTracker
Child checking to see if we're alive.
PipeMapper - Class in org.apache.hadoop.streaming
A generic Mapper bridge.
PipeMapper() - Constructor for class org.apache.hadoop.streaming.PipeMapper
 
PipeMapRed - Class in org.apache.hadoop.streaming
Shared functionality for PipeMapper, PipeReducer.
PipeMapRed() - Constructor for class org.apache.hadoop.streaming.PipeMapRed
 
PipeReducer - Class in org.apache.hadoop.streaming
A generic Reducer bridge.
PipeReducer() - Constructor for class org.apache.hadoop.streaming.PipeReducer
 
PlatformName - Class in org.apache.hadoop.util
A helper class for getting build-info of the java-vm.
PlatformName() - Constructor for class org.apache.hadoop.util.PlatformName
 
pop() - Method in class org.apache.hadoop.util.PriorityQueue
Removes and returns the least element of the PriorityQueue in log(size) time.
PositionedReadable - Interface in org.apache.hadoop.fs
Stream that permits positional reading.
PREP - Static variable in class org.apache.hadoop.mapred.JobStatus
 
prependPathComponent(String) - Method in class org.apache.hadoop.streaming.PathFinder
Prepends the specified component to the path list.
preserveInput(boolean) - Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
Whether to delete the files when no longer needed
prevCharIsCR - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
prevCharIsLF - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
PrintJarMainClass - Class in org.apache.hadoop.util
A micro-application that prints the main class name out of a jar file.
PrintJarMainClass() - Constructor for class org.apache.hadoop.util.PrintJarMainClass
 
printThreadInfo(PrintWriter, String) - Static method in class org.apache.hadoop.util.ReflectionUtils
Print all of the thread's information and stack traces.
printUsage(String) - Method in class org.apache.hadoop.dfs.DFSAdmin
Displays format of commands.
printUsage(String) - Method in class org.apache.hadoop.fs.FsShell
Displays format of commands.
PriorityQueue - Class in org.apache.hadoop.util
A PriorityQueue maintains a partial ordering of its elements such that the least element can always be found in constant time.
PriorityQueue() - Constructor for class org.apache.hadoop.util.PriorityQueue
 
ProgramDriver - Class in org.apache.hadoop.util
A driver that is used to run programs added to it
ProgramDriver() - Constructor for class org.apache.hadoop.util.ProgramDriver
 
progress(String, float, String, TaskStatus.Phase, Counters) - Method in class org.apache.hadoop.mapred.TaskTracker
Called periodically to report Task progress, from 0.0 to 1.0.
Progress - Class in org.apache.hadoop.util
Utility to assist with generation of progress reports.
Progress() - Constructor for class org.apache.hadoop.util.Progress
Creates a new root node.
progress() - Method in interface org.apache.hadoop.util.Progressable
callback for reporting progress.
Progressable - Interface in org.apache.hadoop.util
An interface for callbacks when a method makes some progress.
purge() - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
Delete everything.
put(Object) - Method in class org.apache.hadoop.util.PriorityQueue
Adds an Object to a PriorityQueue in log(size) time.
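To illustrate how put() and pop() fit together, a minimal usage sketch follows; it assumes the protected lessThan(Object, Object) and initialize(int) members that PriorityQueue subclasses supply and call, which are not listed in this excerpt.

    // Illustrative sketch only; lessThan(Object, Object) and initialize(int)
    // are assumed protected members of org.apache.hadoop.util.PriorityQueue.
    class LongMinQueue extends org.apache.hadoop.util.PriorityQueue {
      LongMinQueue(int maxSize) { initialize(maxSize); }
      protected boolean lessThan(Object a, Object b) {
        return ((Long) a).longValue() < ((Long) b).longValue();
      }
    }

    LongMinQueue q = new LongMinQueue(16);
    q.put(new Long(42));           // put(Object): add in log(size) time
    q.put(new Long(7));
    Long least = (Long) q.pop();   // pop(): removes and returns 7, the least element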

Q

quarterDigest() - Method in class org.apache.hadoop.io.MD5Hash
Return a 32-bit digest of the MD5.

R

RandomWriter - Class in org.apache.hadoop.examples
This program uses map/reduce to run a distributed job where there is no interaction between the tasks, and each task writes a large, unsorted, random binary sequence file of BytesWritable.
RandomWriter() - Constructor for class org.apache.hadoop.examples.RandomWriter
 
RawLocalFileSystem - Class in org.apache.hadoop.fs
Implement the FileSystem API for the raw local filesystem.
RawLocalFileSystem() - Constructor for class org.apache.hadoop.fs.RawLocalFileSystem
 
RBRACE_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
Rcc - Class in org.apache.hadoop.record.compiler.generated
 
Rcc(InputStream) - Constructor for class org.apache.hadoop.record.compiler.generated.Rcc
 
Rcc(InputStream, String) - Constructor for class org.apache.hadoop.record.compiler.generated.Rcc
 
Rcc(Reader) - Constructor for class org.apache.hadoop.record.compiler.generated.Rcc
 
Rcc(RccTokenManager) - Constructor for class org.apache.hadoop.record.compiler.generated.Rcc
 
RccConstants - Interface in org.apache.hadoop.record.compiler.generated
 
RccTask - Class in org.apache.hadoop.record.compiler.ant
Hadoop record compiler Ant task.
RccTask() - Constructor for class org.apache.hadoop.record.compiler.ant.RccTask
Creates a new instance of RccTask
RccTokenManager - Class in org.apache.hadoop.record.compiler.generated
 
RccTokenManager(SimpleCharStream) - Constructor for class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
RccTokenManager(SimpleCharStream, int) - Constructor for class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
read(long, byte[], int, int) - Method in class org.apache.hadoop.fs.FSDataInputStream
 
read(long, byte[], int, int) - Method in class org.apache.hadoop.fs.FSInputStream
 
read(long, byte[], int, int) - Method in interface org.apache.hadoop.fs.PositionedReadable
Read up to the specified number of bytes from a given position within a file, and return the number of bytes read.
read(byte[], int, int) - Method in class org.apache.hadoop.io.compress.CompressionInputStream
Read bytes from the stream.
read() - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
 
read(byte[], int, int) - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
 
read(DataInput) - Static method in class org.apache.hadoop.io.MD5Hash
Constructs, reads and returns an instance.
READ_TIMEOUT - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
readBool(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
readBool(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
readBool(String) - Method in interface org.apache.hadoop.record.RecordInput
Read a boolean from serialized record.
readBool(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
readBuffer(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
readBuffer(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
readBuffer(String) - Method in interface org.apache.hadoop.record.RecordInput
Read byte array from serialized record.
readBuffer(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
readByte(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
readByte(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
readByte(String) - Method in interface org.apache.hadoop.record.RecordInput
Read a byte from serialized record.
readByte(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
readChar() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
readCompressedByteArray(DataInput) - Static method in class org.apache.hadoop.io.WritableUtils
 
readCompressedString(DataInput) - Static method in class org.apache.hadoop.io.WritableUtils
 
readCompressedStringArray(DataInput) - Static method in class org.apache.hadoop.io.WritableUtils
 
readDouble(byte[], int) - Static method in class org.apache.hadoop.io.WritableComparator
Parse a double from a byte array.
readDouble(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
readDouble(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
readDouble(String) - Method in interface org.apache.hadoop.record.RecordInput
Read a double-precision number from serialized record.
readDouble(byte[], int) - Static method in class org.apache.hadoop.record.Utils
Parse a double from a byte array.
readDouble(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
readEnum(DataInput, Class<T>) - Static method in class org.apache.hadoop.io.WritableUtils
Read an Enum value from DataInput; Enums are read and written using String values.
readFields(DataInput) - Method in class org.apache.hadoop.dfs.DatanodeID
 
readFields(DataInput) - Method in class org.apache.hadoop.dfs.DatanodeInfo
 
readFields(DataInput) - Method in class org.apache.hadoop.io.ArrayWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.BooleanWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.BytesWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.CompressedWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.FloatWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.GenericWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.IntWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.LongWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.MD5Hash
 
readFields(DataInput) - Method in class org.apache.hadoop.io.NullWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.ObjectWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.SequenceFile.Metadata
 
readFields(DataInput) - Method in class org.apache.hadoop.io.Text
deserialize
readFields(DataInput) - Method in class org.apache.hadoop.io.TwoDArrayWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.UTF8
Deprecated.  
readFields(DataInput) - Method in class org.apache.hadoop.io.VersionedWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.VIntWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.VLongWritable
 
readFields(DataInput) - Method in interface org.apache.hadoop.io.Writable
Reads the fields of this object from in.
readFields(DataInput) - Method in class org.apache.hadoop.mapred.ClusterStatus
 
readFields(DataInput) - Method in class org.apache.hadoop.mapred.Counters
 
readFields(DataInput) - Method in class org.apache.hadoop.mapred.FileSplit
 
readFields(DataInput) - Method in class org.apache.hadoop.mapred.JobProfile
 
readFields(DataInput) - Method in class org.apache.hadoop.mapred.JobStatus
 
readFields(DataInput) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
 
readFields(DataInput) - Method in class org.apache.hadoop.mapred.TaskReport
 
readFields(DataInput) - Method in class org.apache.hadoop.record.Record
 
readFieldsCompressed(DataInput) - Method in class org.apache.hadoop.io.CompressedWritable
Subclasses implement this instead of CompressedWritable.readFields(DataInput).
readFloat(byte[], int) - Static method in class org.apache.hadoop.io.WritableComparator
Parse a float from a byte array.
readFloat(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
readFloat(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
readFloat(String) - Method in interface org.apache.hadoop.record.RecordInput
Read a single-precision float from serialized record.
readFloat(byte[], int) - Static method in class org.apache.hadoop.record.Utils
Parse a float from a byte array.
readFloat(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
readFully(long, byte[], int, int) - Method in class org.apache.hadoop.fs.FSDataInputStream
 
readFully(long, byte[]) - Method in class org.apache.hadoop.fs.FSDataInputStream
 
readFully(long, byte[], int, int) - Method in class org.apache.hadoop.fs.FSInputStream
 
readFully(long, byte[]) - Method in class org.apache.hadoop.fs.FSInputStream
 
readFully(long, byte[], int, int) - Method in interface org.apache.hadoop.fs.PositionedReadable
Read the specified number of bytes, from a given position within a file.
readFully(long, byte[]) - Method in interface org.apache.hadoop.fs.PositionedReadable
Read a number of bytes equal to the length of the buffer, from a given position within a file.
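A small usage sketch of the positional-read methods above; FileSystem.get(Configuration) and FileSystem.open(Path) are assumed from elsewhere in the API, and the path is hypothetical.

    // Hypothetical path; FileSystem.get/open assumed from the FileSystem class.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    FSDataInputStream in = fs.open(new Path("/data/part-00000"));
    byte[] header = new byte[16];
    in.readFully(0L, header);                           // read exactly header.length bytes from offset 0
    int n = in.read(1024L, header, 0, header.length);   // read up to 16 bytes starting at offset 1024
    in.close();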
readInt(byte[], int) - Static method in class org.apache.hadoop.io.WritableComparator
Parse an integer from a byte array.
readInt(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
readInt(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
readInt(String) - Method in interface org.apache.hadoop.record.RecordInput
Read an integer from serialized record.
readInt(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
readLine() - Method in class org.apache.hadoop.mapred.LineRecordReader
 
readLine(InputStream, OutputStream) - Static method in class org.apache.hadoop.mapred.LineRecordReader
 
readLine(InputStream) - Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
Read a utf8 encoded line from a data input stream.
readLong(byte[], int) - Static method in class org.apache.hadoop.io.WritableComparator
Parse a long from a byte array.
readLong(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
readLong(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
readLong(String) - Method in interface org.apache.hadoop.record.RecordInput
Read a long integer from serialized record.
readLong(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
readObject(DataInput, Configuration) - Static method in class org.apache.hadoop.io.ObjectWritable
Read a Writable, String, primitive type, or an array of the preceding.
readObject(DataInput, ObjectWritable, Configuration) - Static method in class org.apache.hadoop.io.ObjectWritable
Read a Writable, String, primitive type, or an array of the preceding.
readString(DataInput) - Static method in class org.apache.hadoop.io.Text
Read a UTF8 encoded string from in
readString(DataInput) - Static method in class org.apache.hadoop.io.UTF8
Deprecated. Read a UTF-8 encoded string.
readString(DataInput) - Static method in class org.apache.hadoop.io.WritableUtils
 
readString(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
readString(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
readString(String) - Method in interface org.apache.hadoop.record.RecordInput
Read a UTF-8 encoded string from serialized record.
readString(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
readStringArray(DataInput) - Static method in class org.apache.hadoop.io.WritableUtils
 
readUnsignedShort(byte[], int) - Static method in class org.apache.hadoop.io.WritableComparator
Parse an unsigned short from a byte array.
readVInt(byte[], int) - Static method in class org.apache.hadoop.io.WritableComparator
Reads a zero-compressed encoded integer from a byte array and returns it.
readVInt(DataInput) - Static method in class org.apache.hadoop.io.WritableUtils
Reads a zero-compressed encoded integer from input stream and returns it.
readVInt(byte[], int) - Static method in class org.apache.hadoop.record.Utils
Reads a zero-compressed encoded integer from a byte array and returns it.
readVInt(DataInput) - Static method in class org.apache.hadoop.record.Utils
Reads a zero-compressed encoded integer from a stream and returns it.
readVLong(byte[], int) - Static method in class org.apache.hadoop.io.WritableComparator
Reads a zero-compressed encoded long from a byte array and returns it.
readVLong(DataInput) - Static method in class org.apache.hadoop.io.WritableUtils
Reads a zero-compressed encoded long from input stream and returns it.
readVLong(byte[], int) - Static method in class org.apache.hadoop.record.Utils
Reads a zero-compressed encoded long from a byte array and returns it.
readVLong(DataInput) - Static method in class org.apache.hadoop.record.Utils
Reads a zero-compressed encoded long from a stream and returns it.
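A round-trip sketch of the zero-compressed (variable-length) encoding, using DataOutputBuffer and DataInputBuffer from this index; the matching writers WritableUtils.writeVInt and writeVLong are assumed, and IOException handling is elided.

    // writeVInt/writeVLong are assumed counterparts of the readers listed above.
    DataOutputBuffer out = new DataOutputBuffer();
    WritableUtils.writeVInt(out, 300);
    WritableUtils.writeVLong(out, 1234567890123L);
    DataInputBuffer in = new DataInputBuffer();
    in.reset(out.getData(), out.getLength());
    int i = WritableUtils.readVInt(in);     // 300
    long l = WritableUtils.readVLong(in);   // 1234567890123L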
READY - Static variable in class org.apache.hadoop.mapred.jobcontrol.Job
 
Record() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
Record - Class in org.apache.hadoop.record
Abstract class that is extended by generated classes.
Record() - Constructor for class org.apache.hadoop.record.Record
 
RECORD_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
RecordComparator - Class in org.apache.hadoop.record
A raw record comparator base class
RecordComparator(Class) - Constructor for class org.apache.hadoop.record.RecordComparator
Construct a raw Record comparison implementation.
RecordInput - Interface in org.apache.hadoop.record
Interface that all the Deserializers have to implement.
RecordList() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
RecordOutput - Interface in org.apache.hadoop.record
Interface that all the serializers have to implement.
RecordReader - Interface in org.apache.hadoop.mapred
Reads key/value pairs from an input FileSplit.
RecordWriter - Interface in org.apache.hadoop.mapred
Writes key/value pairs to an output file.
redCmd_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
reduce(WritableComparable, Iterator, OutputCollector, Reporter) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
 
reduce(WritableComparable, Iterator, OutputCollector, Reporter) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
reduce(WritableComparable, Iterator, OutputCollector, Reporter) - Method in class org.apache.hadoop.examples.PiEstimator.PiReducer
Reduce method.
reduce(WritableComparable, Iterator, OutputCollector, Reporter) - Method in class org.apache.hadoop.examples.WordCount.Reduce
 
reduce(WritableComparable, Iterator, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorCombiner
Combines values for a given key.
reduce(WritableComparable, Iterator, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorMapper
Do nothing.
reduce(WritableComparable, Iterator, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorReducer
 
reduce(WritableComparable, Iterator, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
 
reduce(WritableComparable, Iterator, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.IdentityReducer
Writes all keys and values directly to output.
reduce(WritableComparable, Iterator, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.LongSumReducer
 
reduce(WritableComparable, Iterator, OutputCollector, Reporter) - Method in interface org.apache.hadoop.mapred.Reducer
Combines values for a given key.
reduce(WritableComparable, Iterator, OutputCollector, Reporter) - Method in class org.apache.hadoop.streaming.PipeReducer
 
reduceOutFieldSeparator - Variable in class org.apache.hadoop.streaming.PipeMapRed
 
reduceProgress() - Method in class org.apache.hadoop.mapred.JobStatus
 
reduceProgress() - Method in interface org.apache.hadoop.mapred.RunningJob
Returns a float between 0.0 and 1.0, indicating progress on the reduce portion of the job.
Reducer - Interface in org.apache.hadoop.mapred
Reduces a set of intermediate values which share a key to a smaller set of values.
ReflectionUtils - Class in org.apache.hadoop.util
General reflection utils
ReflectionUtils() - Constructor for class org.apache.hadoop.util.ReflectionUtils
 
refresh() - Method in class org.apache.hadoop.util.HostsFileReader
 
refreshNodes() - Method in class org.apache.hadoop.dfs.DFSAdmin
Command to ask the namenode to reread the hosts and excluded hosts file.
refreshNodes() - Method in class org.apache.hadoop.dfs.DistributedFileSystem
 
refreshNodes() - Method in class org.apache.hadoop.dfs.NameNode
 
RegexMapper - Class in org.apache.hadoop.mapred.lib
A Mapper that extracts text matching a regular expression.
RegexMapper() - Constructor for class org.apache.hadoop.mapred.lib.RegexMapper
 
regexpEscape(String) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
register(DatanodeRegistration, String) - Method in class org.apache.hadoop.dfs.NameNode
 
registerNotification(JobConf, JobStatus) - Static method in class org.apache.hadoop.mapred.JobEndNotifier
 
registerUpdater(Updater) - Method in interface org.apache.hadoop.metrics.MetricsContext
Registers a callback to be called at regular time intervals, as determined by the implementation-class specific configuration.
registerUpdater(Updater) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Registers a callback to be called at time intervals determined by the configuration.
ReInit(InputStream) - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
ReInit(InputStream, String) - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
ReInit(Reader) - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
ReInit(RccTokenManager) - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
ReInit(SimpleCharStream) - Method in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
ReInit(SimpleCharStream, int) - Method in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
ReInit(Reader, int, int, int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
ReInit(Reader, int, int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
ReInit(Reader) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
ReInit(InputStream, String, int, int, int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
ReInit(InputStream, int, int, int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
ReInit(InputStream, String) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
ReInit(InputStream) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
ReInit(InputStream, String, int, int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
ReInit(InputStream, int, int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
release(Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
 
release(Path) - Method in class org.apache.hadoop.fs.FileSystem
Deprecated. FS does not support file locks anymore.
release(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Deprecated. FS does not support file locks anymore.
release(Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
Deprecated.  
release(Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
Deprecated.  
release(Path) - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated.  
releaseCache(URI, Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
This is the opposite of getLocalCache.
releaseLock(String, String) - Method in class org.apache.hadoop.dfs.NameNode
Deprecated.  
remaining - Variable in class org.apache.hadoop.dfs.DatanodeInfo
 
RemoteException - Exception in org.apache.hadoop.ipc
 
RemoteException(String, String) - Constructor for exception org.apache.hadoop.ipc.RemoteException
 
remove() - Method in class org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator
 
remove() - Method in interface org.apache.hadoop.metrics.MetricsRecord
Removes, from the buffered data table, the row (if it exists) having tags that equal the tags that have been set on this record.
remove(MetricsRecordImpl) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Called by MetricsRecordImpl.remove().
remove() - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Removes the row, if it exists, in the buffered data table having tags that equal the tags that have been set on this record.
remove(MetricsRecordImpl) - Method in class org.apache.hadoop.metrics.spi.NullContext
Do-nothing version of remove
remove(DatanodeDescriptor) - Method in class org.apache.hadoop.net.NetworkTopology
Remove a data node; update the data node and rack counters if necessary.
removeAttribute(String) - Method in class org.apache.hadoop.metrics.ContextFactory
Removes the named attribute if it exists.
removeSuffix(String, String) - Static method in class org.apache.hadoop.io.compress.CompressionCodecFactory
Removes a suffix from a filename, if it has it.
rename(String, String) - Method in class org.apache.hadoop.dfs.NameNode
 
rename(Path, Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Rename files/dirs
rename(Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
Renames Path src to Path dst.
rename(Path, Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Renames Path src to Path dst.
rename(String, String) - Method in class org.apache.hadoop.fs.FsShell
Move files that match the file pattern srcf to a destination file.
rename(Path, Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
rename(Path, Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
rename(FileSystem, String, String) - Static method in class org.apache.hadoop.io.MapFile
Renames an existing map directory.
rename(Path, Path) - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated.  
renewLease(String) - Method in class org.apache.hadoop.dfs.NameNode
 
report() - Method in class org.apache.hadoop.contrib.utils.join.JobBase
log the counters
report() - Method in class org.apache.hadoop.dfs.DFSAdmin
Gives a report on how the FileSystem is doing.
reportBadBlocks(LocatedBlock[]) - Method in class org.apache.hadoop.dfs.NameNode
The client has detected an error on the specified located blocks and is reporting them to the server.
reportChecksumFailure(Path, FSDataInputStream, long, FSDataInputStream, long) - Method in class org.apache.hadoop.dfs.DistributedFileSystem
We need to find the blocks that didn't match.
reportChecksumFailure(Path, FSDataInputStream, long, FSDataInputStream, long) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Report a checksum error to the file system.
reportChecksumFailure(Path, FSDataInputStream, long, FSDataInputStream, long) - Method in class org.apache.hadoop.fs.LocalFileSystem
Moves files to a bad file directory on the same device, so that their storage will not be reused.
reportDiagnosticInfo(String, String) - Method in class org.apache.hadoop.mapred.TaskTracker
Called when the task dies before completion, and we want to report back diagnostic info
reporter - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
 
reporter - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
Reporter - Interface in org.apache.hadoop.mapred
Passed to application code to permit alteration of status.
reportTaskTrackerError(String, String, String) - Method in class org.apache.hadoop.mapred.JobTracker
 
requiresLayout() - Method in class org.apache.hadoop.mapred.TaskLogAppender
 
reserveSpaceWithCheckSum(Path, long) - Method in class org.apache.hadoop.fs.InMemoryFileSystem
Register a file with its size.
reset() - Method in class org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator
 
reset() - Method in interface org.apache.hadoop.contrib.utils.join.ResetableIterator
 
reset() - Method in interface org.apache.hadoop.io.compress.Compressor
Resets compressor so that a new set of input data can be processed.
reset() - Method in interface org.apache.hadoop.io.compress.Decompressor
Resets decompressor so that a new set of input data can be processed.
reset() - Method in class org.apache.hadoop.io.compress.lzo.LzoCompressor
 
reset() - Method in class org.apache.hadoop.io.compress.lzo.LzoDecompressor
 
reset() - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
 
reset() - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
reset(byte[], int) - Method in class org.apache.hadoop.io.DataInputBuffer
Resets the data that the buffer reads.
reset(byte[], int, int) - Method in class org.apache.hadoop.io.DataInputBuffer
Resets the data that the buffer reads.
reset() - Method in class org.apache.hadoop.io.DataOutputBuffer
Resets the buffer to empty.
reset() - Method in class org.apache.hadoop.io.MapFile.Reader
Re-positions the reader before its first key.
reset() - Method in class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
reset the aggregator
reset() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
reset the aggregator
reset() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
reset the aggregator
reset() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
reset the aggregator
reset() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
reset the aggregator
reset() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
reset the aggregator
reset() - Method in class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
reset the aggregator
reset() - Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregator
reset the aggregator
reset() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
reset the aggregator
reset() - Method in class org.apache.hadoop.record.Buffer
Reset the buffer to 0 size
ResetableIterator - Interface in org.apache.hadoop.contrib.utils.join
This interface defines an iterator that helps the reducer class re-group the values in the values iterator of the reduce method according to their source tags.
resetState() - Method in class org.apache.hadoop.io.compress.CompressionInputStream
Reset the decompressor to its initial state and discard any buffered data, as the underlying stream may have been repositioned.
resetState() - Method in class org.apache.hadoop.io.compress.CompressionOutputStream
Reset the compression to the initial state.
resetState() - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
 
resetState() - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
 
resume() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
resume the suspended thread
retrieveBlock(Block, long) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
retrieveINode(Path) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
RETRY_FOREVER - Static variable in class org.apache.hadoop.io.retry.RetryPolicies
Keep trying forever.
retryByException(RetryPolicy, Map<Class<? extends Exception>, RetryPolicy>) - Static method in class org.apache.hadoop.io.retry.RetryPolicies
Set a default policy with some explicit handlers for specific exceptions.
retryByRemoteException(RetryPolicy, Map<Class<? extends Exception>, RetryPolicy>) - Static method in class org.apache.hadoop.io.retry.RetryPolicies
A retry policy for RemoteException Set a default policy with some explicit handlers for specific exceptions.
RetryPolicies - Class in org.apache.hadoop.io.retry
A collection of useful implementations of RetryPolicy.
RetryPolicies() - Constructor for class org.apache.hadoop.io.retry.RetryPolicies
 
RetryPolicy - Interface in org.apache.hadoop.io.retry
Specifies a policy for retrying method failures.
RetryProxy - Class in org.apache.hadoop.io.retry
A factory for creating retry proxies.
RetryProxy() - Constructor for class org.apache.hadoop.io.retry.RetryProxy
 
retryUpToMaximumCountWithFixedSleep(int, long, TimeUnit) - Static method in class org.apache.hadoop.io.retry.RetryPolicies
Keep trying a limited number of times, waiting a fixed time between attempts, and then fail by re-throwing the exception.
retryUpToMaximumCountWithProportionalSleep(int, long, TimeUnit) - Static method in class org.apache.hadoop.io.retry.RetryPolicies
Keep trying a limited number of times, waiting a growing amount of time between attempts, and then fail by re-throwing the exception.
retryUpToMaximumTimeWithFixedSleep(long, long, TimeUnit) - Static method in class org.apache.hadoop.io.retry.RetryPolicies
Keep trying for a maximum time, waiting a fixed time between attempts, and then fail by re-throwing the exception.
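To illustrate how these policies are typically combined with RetryProxy, a sketch follows; the static RetryProxy.create(Class, Object, RetryPolicy) factory signature, the MyProtocol interface, and rawImpl are assumptions for illustration only.

    // MyProtocol and rawImpl are hypothetical; RetryProxy.create(...) is assumed.
    RetryPolicy policy =
        RetryPolicies.retryUpToMaximumCountWithFixedSleep(5, 2, TimeUnit.SECONDS);
    MyProtocol proxy =
        (MyProtocol) RetryProxy.create(MyProtocol.class, rawImpl, policy);
    proxy.doSomething();   // retried up to 5 times, waiting 2 seconds between attempts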
reverseDns(InetAddress, String) - Static method in class org.apache.hadoop.net.DNS
Returns the hostname associated with the specified IP address by the provided nameserver.
rjustify(String, int) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
rollEditLog() - Method in class org.apache.hadoop.dfs.NameNode
Roll the edit log.
rollFsImage() - Method in class org.apache.hadoop.dfs.NameNode
Roll the image
ROOT - Static variable in class org.apache.hadoop.net.NodeBase
 
RPC - Class in org.apache.hadoop.ipc
A simple RPC mechanism.
RPC.Server - Class in org.apache.hadoop.ipc
An RPC Server.
RPC.Server(Object, Configuration, String, int) - Constructor for class org.apache.hadoop.ipc.RPC.Server
Construct an RPC server.
RPC.Server(Object, Configuration, String, int, int, boolean) - Constructor for class org.apache.hadoop.ipc.RPC.Server
Construct an RPC server.
RPC.VersionMismatch - Exception in org.apache.hadoop.ipc
A version mismatch for the RPC protocol.
RPC.VersionMismatch(String, long, long) - Constructor for exception org.apache.hadoop.ipc.RPC.VersionMismatch
Create a version mismatch exception
run() - Method in class org.apache.hadoop.dfs.DataNode
No matter what kind of exception we get, keep retrying offerService().
run(Configuration) - Static method in class org.apache.hadoop.dfs.DataNode
Start datanode daemon.
run(String[]) - Method in class org.apache.hadoop.dfs.DFSAdmin
 
run(String[]) - Method in class org.apache.hadoop.dfs.DFSck
 
run(String[]) - Method in class org.apache.hadoop.dfs.NamenodeFsck
 
run() - Method in class org.apache.hadoop.dfs.SecondaryNameNode
 
run(String[]) - Method in class org.apache.hadoop.fs.FsShell
run
run(String[]) - Method in class org.apache.hadoop.fs.s3.MigrationTool
 
run(String[]) - Method in class org.apache.hadoop.mapred.JobClient
 
run() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
The main loop for the thread.
run() - Method in class org.apache.hadoop.mapred.JobHistory.HistoryCleaner
Cleans up history data.
run(RecordReader, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.MultithreadedMapRunner
 
run(RecordReader, OutputCollector, Reporter) - Method in interface org.apache.hadoop.mapred.MapRunnable
Called to execute mapping.
run(RecordReader, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.MapRunner
 
run() - Method in class org.apache.hadoop.mapred.TaskTracker
The server retry loop.
run(String[]) - Method in class org.apache.hadoop.util.CopyFiles
This is the main driver for recursively copying directories across file systems.
run(String[]) - Method in interface org.apache.hadoop.util.Tool
execute the command with the given arguments
RunJar - Class in org.apache.hadoop.util
Run a Hadoop job jar.
RunJar() - Constructor for class org.apache.hadoop.util.RunJar
 
runJob(JobConf) - Static method in class org.apache.hadoop.contrib.utils.join.DataJoinJob
Submit/run a map/reduce job.
runJob(JobConf) - Static method in class org.apache.hadoop.mapred.JobClient
Utility that submits a job, then polls for progress until the job is complete.
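A minimal job-submission sketch built from the JobConf setters listed elsewhere in this index; the JobConf(Class) constructor, the input/output paths, and the MyJob/MyMapper/MyReducer classes are assumptions for illustration.

    // Paths and MyJob/MyMapper/MyReducer are hypothetical.
    JobConf conf = new JobConf(MyJob.class);
    conf.setJobName("example");
    conf.setInputPath(new Path("/input"));
    conf.setOutputPath(new Path("/output"));
    conf.setMapperClass(MyMapper.class);
    conf.setReducerClass(MyReducer.class);
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);
    JobClient.runJob(conf);   // submit and poll until the job completes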
runJob(JobConf) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
Submit/run a map/reduce job.
RUNLENGTH_ENCODING - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
RUNNING - Static variable in class org.apache.hadoop.mapred.jobcontrol.Job
 
RUNNING - Static variable in class org.apache.hadoop.mapred.JobStatus
 
running_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
RunningJob - Interface in org.apache.hadoop.mapred
Includes details on a running MapReduce job.
runningJobs() - Method in class org.apache.hadoop.mapred.JobTracker
 

S

S3Exception - Exception in org.apache.hadoop.fs.s3
Thrown if there is a problem communicating with Amazon S3.
S3Exception(Throwable) - Constructor for exception org.apache.hadoop.fs.s3.S3Exception
 
S3FileSystem - Class in org.apache.hadoop.fs.s3
A FileSystem backed by Amazon S3.
S3FileSystem() - Constructor for class org.apache.hadoop.fs.s3.S3FileSystem
 
S3FileSystem(FileSystemStore) - Constructor for class org.apache.hadoop.fs.s3.S3FileSystem
 
S3FileSystemException - Exception in org.apache.hadoop.fs.s3
Thrown when there is a fatal exception while using S3FileSystem.
S3FileSystemException(String) - Constructor for exception org.apache.hadoop.fs.s3.S3FileSystemException
 
safeGetCanonicalPath(File) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
SafeModeException - Exception in org.apache.hadoop.dfs
This exception is thrown when the name node is in safe mode.
SafeModeException(String, FSNamesystem.SafeModeInfo) - Constructor for exception org.apache.hadoop.dfs.SafeModeException
 
SecondaryNameNode - Class in org.apache.hadoop.dfs
The Secondary NameNode is a helper to the primary NameNode.
SecondaryNameNode(Configuration) - Constructor for class org.apache.hadoop.dfs.SecondaryNameNode
Create a connection to the primary namenode.
SecondaryNameNode.GetImageServlet - Class in org.apache.hadoop.dfs
This class is used in Namesystem's jetty to retrieve a file.
SecondaryNameNode.GetImageServlet() - Constructor for class org.apache.hadoop.dfs.SecondaryNameNode.GetImageServlet
 
seek(long) - Method in class org.apache.hadoop.fs.FSDataInputStream
 
seek(long) - Method in class org.apache.hadoop.fs.FSInputStream
Seek to the given offset from the start of the file.
seek(long) - Method in interface org.apache.hadoop.fs.Seekable
Seek to the given offset from the start of the file.
seek(long) - Method in class org.apache.hadoop.io.ArrayFile.Reader
Positions the reader before its nth value.
seek(WritableComparable) - Method in class org.apache.hadoop.io.MapFile.Reader
Positions the reader at the named key, or if none such exists, at the first entry after the named key.
seek(long) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Set the current byte position in the input file.
seek(WritableComparable) - Method in class org.apache.hadoop.io.SetFile.Reader
 
seek(long) - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
 
Seekable - Interface in org.apache.hadoop.fs
Stream that permits seeking.
seekNextRecordBoundary() - Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
Implementation should seek forward in_ to the first byte of the next record.
seekNextRecordBoundary() - Method in class org.apache.hadoop.streaming.StreamXmlRecordReader
 
seekToNewSource(long) - Method in class org.apache.hadoop.fs.FSDataInputStream
 
seekToNewSource(long) - Method in class org.apache.hadoop.fs.FSInputStream
Seeks a different copy of the data.
seenPrimary_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
SEMICOLON_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
sendHeartbeat(DatanodeRegistration, long, long, int, int) - Method in class org.apache.hadoop.dfs.NameNode
A data node notifies the name node that it is alive; returns a block-oriented command for the datanode to execute.
SEPARATOR - Static variable in class org.apache.hadoop.fs.Path
The directory separator, a slash.
SEPARATOR_CHAR - Static variable in class org.apache.hadoop.fs.Path
 
SequenceFile - Class in org.apache.hadoop.io
Support for flat files of binary key/value pairs.
SequenceFile.CompressionType - Enum in org.apache.hadoop.io
The type of compression.
SequenceFile.Metadata - Class in org.apache.hadoop.io
The class encapsulating the metadata of a file.
SequenceFile.Metadata() - Constructor for class org.apache.hadoop.io.SequenceFile.Metadata
 
SequenceFile.Metadata(TreeMap<Text, Text>) - Constructor for class org.apache.hadoop.io.SequenceFile.Metadata
 
SequenceFile.Reader - Class in org.apache.hadoop.io
Reads key/value pairs from a sequence-format file.
SequenceFile.Reader(FileSystem, Path, Configuration) - Constructor for class org.apache.hadoop.io.SequenceFile.Reader
Open the named file.
SequenceFile.Sorter - Class in org.apache.hadoop.io
Sorts key/value pairs in a sequence-format file.
SequenceFile.Sorter(FileSystem, Class, Class, Configuration) - Constructor for class org.apache.hadoop.io.SequenceFile.Sorter
Sort and merge files containing the named classes.
SequenceFile.Sorter(FileSystem, WritableComparator, Class, Configuration) - Constructor for class org.apache.hadoop.io.SequenceFile.Sorter
Sort and merge using an arbitrary WritableComparator.
SequenceFile.Sorter.RawKeyValueIterator - Interface in org.apache.hadoop.io
The interface to iterate over raw keys/values of SequenceFiles.
SequenceFile.Sorter.SegmentDescriptor - Class in org.apache.hadoop.io
This class defines a merge segment.
SequenceFile.Sorter.SegmentDescriptor(long, long, Path) - Constructor for class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
Constructs a segment
SequenceFile.ValueBytes - Interface in org.apache.hadoop.io
The interface to 'raw' values of SequenceFiles.
SequenceFile.Writer - Class in org.apache.hadoop.io
Write key/value pairs to a sequence-format file.
SequenceFile.Writer(FileSystem, Configuration, Path, Class, Class) - Constructor for class org.apache.hadoop.io.SequenceFile.Writer
Create the named file.
SequenceFile.Writer(FileSystem, Configuration, Path, Class, Class, Progressable, SequenceFile.Metadata) - Constructor for class org.apache.hadoop.io.SequenceFile.Writer
Create the named file with write-progress reporter.
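A write-then-read sketch for the constructors above; FileSystem.get(Configuration), Writer.append(Writable, Writable), Reader.next(Writable, Writable), and close() are assumed from elsewhere in the API, and the path is hypothetical.

    // Hypothetical path; append/next/close are assumed Writer/Reader members.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/tmp/data.seq");
    SequenceFile.Writer writer =
        new SequenceFile.Writer(fs, conf, file, Text.class, IntWritable.class);
    writer.append(new Text("apple"), new IntWritable(3));
    writer.close();

    SequenceFile.Reader reader = new SequenceFile.Reader(fs, file, conf);
    Text key = new Text();
    IntWritable val = new IntWritable();
    while (reader.next(key, val)) {
      // process key/val
    }
    reader.close();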
SequenceFileAsTextInputFormat - Class in org.apache.hadoop.mapred
This class is similar to SequenceFileInputFormat, except that it generates SequenceFileAsTextRecordReader, which converts the input keys and values to their String forms by calling the toString() method.
SequenceFileAsTextInputFormat() - Constructor for class org.apache.hadoop.mapred.SequenceFileAsTextInputFormat
 
SequenceFileAsTextRecordReader - Class in org.apache.hadoop.mapred
This class converts the input keys and values to their String forms by calling the toString() method.
SequenceFileAsTextRecordReader(Configuration, FileSplit) - Constructor for class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
 
SequenceFileInputFilter - Class in org.apache.hadoop.mapred
A class that allows a map/red job to work on a sample of sequence files.
SequenceFileInputFilter() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter
 
SequenceFileInputFilter.Filter - Interface in org.apache.hadoop.mapred
filter interface
SequenceFileInputFilter.FilterBase - Class in org.apache.hadoop.mapred
Base class for filters.
SequenceFileInputFilter.FilterBase() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter.FilterBase
 
SequenceFileInputFilter.MD5Filter - Class in org.apache.hadoop.mapred
This class returns a set of records by examining the MD5 digest of each key against a filtering frequency f.
SequenceFileInputFilter.MD5Filter() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter.MD5Filter
 
SequenceFileInputFilter.PercentFilter - Class in org.apache.hadoop.mapred
This class returns a percentage of the records; the percentage is determined by a filtering frequency f using the criterion record# % f == 0.
SequenceFileInputFilter.PercentFilter() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter.PercentFilter
 
SequenceFileInputFilter.RegexFilter - Class in org.apache.hadoop.mapred
Filters records by matching the key against a regex.
SequenceFileInputFilter.RegexFilter() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter.RegexFilter
 
SequenceFileInputFormat - Class in org.apache.hadoop.mapred
An InputFormat for SequenceFiles.
SequenceFileInputFormat() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFormat
 
SequenceFileOutputFormat - Class in org.apache.hadoop.mapred
An OutputFormat that writes SequenceFiles.
SequenceFileOutputFormat() - Constructor for class org.apache.hadoop.mapred.SequenceFileOutputFormat
 
SequenceFileRecordReader - Class in org.apache.hadoop.mapred
A RecordReader for SequenceFiles.
SequenceFileRecordReader(Configuration, FileSplit) - Constructor for class org.apache.hadoop.mapred.SequenceFileRecordReader
 
serialize() - Method in class org.apache.hadoop.fs.s3.INode
 
serialize(RecordOutput, String) - Method in class org.apache.hadoop.record.Record
Serialize a record with a tag (usually the field name).
serialize(RecordOutput) - Method in class org.apache.hadoop.record.Record
Serialize a record without a tag
Server - Class in org.apache.hadoop.ipc
An abstract IPC service.
Server(String, int, Class, int, Configuration) - Constructor for class org.apache.hadoop.ipc.Server
Constructs a server listening on the named port and address.
set(String, Object) - Method in class org.apache.hadoop.conf.Configuration
Sets the value of the name property.
set(Writable[]) - Method in class org.apache.hadoop.io.ArrayWritable
 
set(boolean) - Method in class org.apache.hadoop.io.BooleanWritable
Set the value of the BooleanWritable
set(BytesWritable) - Method in class org.apache.hadoop.io.BytesWritable
Set the BytesWritable to the contents of the given newData.
set(byte[], int, int) - Method in class org.apache.hadoop.io.BytesWritable
Set the value to a copy of the given byte range
set(float) - Method in class org.apache.hadoop.io.FloatWritable
Set the value of this FloatWritable.
set(Writable) - Method in class org.apache.hadoop.io.GenericWritable
Set the instance that is wrapped.
set(int) - Method in class org.apache.hadoop.io.IntWritable
Set the value of this IntWritable.
set(long) - Method in class org.apache.hadoop.io.LongWritable
Set the value of this LongWritable.
set(MD5Hash) - Method in class org.apache.hadoop.io.MD5Hash
Copy the contents of another instance into this instance.
set(Object) - Method in class org.apache.hadoop.io.ObjectWritable
Reset the instance.
set(Text, Text) - Method in class org.apache.hadoop.io.SequenceFile.Metadata
 
set(String) - Method in class org.apache.hadoop.io.Text
Set to contain the contents of a string.
set(byte[]) - Method in class org.apache.hadoop.io.Text
Set to a utf8 byte array
set(Text) - Method in class org.apache.hadoop.io.Text
copy a text.
set(byte[], int, int) - Method in class org.apache.hadoop.io.Text
Set the Text to a range of bytes.
set(Writable[][]) - Method in class org.apache.hadoop.io.TwoDArrayWritable
 
set(String) - Method in class org.apache.hadoop.io.UTF8
Deprecated. Set to contain the contents of a string.
set(UTF8) - Method in class org.apache.hadoop.io.UTF8
Deprecated. Set to contain the contents of a string.
set(int) - Method in class org.apache.hadoop.io.VIntWritable
Set the value of this VIntWritable.
set(long) - Method in class org.apache.hadoop.io.VLongWritable
Set the value of this LongWritable.
set(byte[]) - Method in class org.apache.hadoop.record.Buffer
Use the specified bytes array as underlying sequence.
set(float) - Method in class org.apache.hadoop.util.Progress
Called during execution on a leaf node to set its progress.
setArchiveMd5(Configuration, String) - Static method in class org.apache.hadoop.filecache.DistributedCache
This is to check the md5 of the archives to be localized
setAttribute(String, Object) - Method in class org.apache.hadoop.mapred.StatusHttpServer
Set a value in the webapp context.
setAttribute(String, Object) - Method in class org.apache.hadoop.metrics.ContextFactory
Sets the named factory attribute to the specified value, creating it if it did not already exist.
setBoolean(String, boolean) - Method in class org.apache.hadoop.conf.Configuration
Sets the value of the name property to a boolean.
setCacheArchives(URI[], Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Set the configuration with the given set of archives
setCacheFiles(URI[], Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Set the configuration with the given set of files
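A sketch of wiring cache files and archives into a job configuration using the static setters above; the URIs are hypothetical and java.net.URI construction may throw URISyntaxException (handling elided).

    // Hypothetical URIs; exception handling elided.
    JobConf conf = new JobConf();
    DistributedCache.setCacheFiles(
        new URI[] { new URI("/cache/lookup.txt") }, conf);
    DistributedCache.setCacheArchives(
        new URI[] { new URI("/cache/dict.zip") }, conf);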
setCapacity(int) - Method in class org.apache.hadoop.io.BytesWritable
Change the capacity of the backing storage.
setCapacity(int) - Method in class org.apache.hadoop.record.Buffer
Change the capacity of the backing storage.
setClass(String, Class<?>, Class<?>) - Method in class org.apache.hadoop.conf.Configuration
Sets the value of the name property to the name of a class.
setClassLoader(ClassLoader) - Method in class org.apache.hadoop.conf.Configuration
Set the class loader that will be used to load the various objects.
setCodecClasses(Configuration, List<Class>) - Static method in class org.apache.hadoop.io.compress.CompressionCodecFactory
Sets a list of codec classes in the configuration.
setCombinerClass(Class<? extends Reducer>) - Method in class org.apache.hadoop.mapred.JobConf
 
setCompressionType(Configuration, SequenceFile.CompressionType) - Static method in class org.apache.hadoop.io.SequenceFile
Set the compression type for sequence files.
setCompressMapOutput(boolean) - Method in class org.apache.hadoop.mapred.JobConf
Should the map outputs be compressed before transfer? Uses the SequenceFile compression.
setCompressOutput(JobConf, boolean) - Static method in class org.apache.hadoop.mapred.OutputFormatBase
Set whether the output of the reduce is compressed
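A sketch of enabling compression for both the intermediate map output and the final job output, using the setters above together with codec classes listed elsewhere in this index; the choice of codecs is illustrative only.

    JobConf conf = new JobConf();
    conf.setCompressMapOutput(true);                                   // compress map output before transfer
    conf.setMapOutputCompressorClass(DefaultCodec.class);              // codec for map output
    OutputFormatBase.setCompressOutput(conf, true);                    // compress reduce output
    OutputFormatBase.setOutputCompressorClass(conf, GzipCodec.class);  // codec for job output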
setConf(Configuration) - Method in interface org.apache.hadoop.conf.Configurable
Set the configuration to be used by this object.
setConf(Configuration) - Method in class org.apache.hadoop.conf.Configured
 
setConf(Configuration) - Method in class org.apache.hadoop.io.compress.DefaultCodec
 
setConf(Configuration) - Method in class org.apache.hadoop.io.compress.LzoCodec
 
setConf(Configuration) - Method in class org.apache.hadoop.io.ObjectWritable
 
setConf(Configuration) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.MD5Filter
configure the filter according to configuration
setConf(Configuration) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.PercentFilter
configure the filter by checking the configuration
setConf(Configuration) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.RegexFilter
configure the Filter by checking the configuration
setConf(Configuration) - Method in class org.apache.hadoop.tools.Logalyzer.LogComparator
 
setConf(Configuration) - Method in class org.apache.hadoop.util.CopyFiles
 
setConf(Object, Configuration) - Static method in class org.apache.hadoop.util.ReflectionUtils
Check and set 'configuration' if necessary.
setConf(Configuration) - Method in class org.apache.hadoop.util.ToolBase
 
setContentionTracing(boolean) - Static method in class org.apache.hadoop.util.ReflectionUtils
 
setCorruptFiles(long) - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
 
setDebugStream(PrintStream) - Method in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
setDestdir(File) - Method in class org.apache.hadoop.record.compiler.ant.RccTask
Sets directory where output files will be generated
setDictionary(byte[], int, int) - Method in interface org.apache.hadoop.io.compress.Compressor
Sets preset dictionary for compression.
setDictionary(byte[], int, int) - Method in interface org.apache.hadoop.io.compress.Decompressor
Sets the preset dictionary for decompression.
setDictionary(byte[], int, int) - Method in class org.apache.hadoop.io.compress.lzo.LzoCompressor
 
setDictionary(byte[], int, int) - Method in class org.apache.hadoop.io.compress.lzo.LzoDecompressor
 
setDictionary(byte[], int, int) - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
 
setDictionary(byte[], int, int) - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
setDigest(String) - Method in class org.apache.hadoop.io.MD5Hash
Sets the digest value from a hex string.
setDisableHistory(boolean) - Static method in class org.apache.hadoop.mapred.JobHistory
Enable/disable history logging.
setDoubleValue(Object, double) - Method in class org.apache.hadoop.contrib.utils.join.JobBase
Set the given counter to the given value
setEventId(int) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
set event Id.
setFactor(int) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Set the number of streams to merge at once.
setFactory(Class, WritableFactory) - Static method in class org.apache.hadoop.io.WritableFactories
Define a factory for a class.
setFailonerror(boolean) - Method in class org.apache.hadoop.record.compiler.ant.RccTask
Given multiple files (via fileset), set the error handling behavior
SetFile - Class in org.apache.hadoop.io
A file-based set of keys.
SetFile() - Constructor for class org.apache.hadoop.io.SetFile
 
setFile(File) - Method in class org.apache.hadoop.record.compiler.ant.RccTask
Sets the record definition file attribute
SetFile.Reader - Class in org.apache.hadoop.io
Provide access to an existing set file.
SetFile.Reader(FileSystem, String, Configuration) - Constructor for class org.apache.hadoop.io.SetFile.Reader
Construct a set reader for the named set.
SetFile.Reader(FileSystem, String, WritableComparator, Configuration) - Constructor for class org.apache.hadoop.io.SetFile.Reader
Construct a set reader for the named set using the named comparator.
SetFile.Writer - Class in org.apache.hadoop.io
Deprecated. pass a Configuration too
SetFile.Writer(FileSystem, String, Class) - Constructor for class org.apache.hadoop.io.SetFile.Writer
Deprecated. Create the named set for keys of the named class.
SetFile.Writer(Configuration, FileSystem, String, Class, SequenceFile.CompressionType) - Constructor for class org.apache.hadoop.io.SetFile.Writer
Deprecated. Create a set naming the element class and compression type.
SetFile.Writer(Configuration, FileSystem, String, WritableComparator, SequenceFile.CompressionType) - Constructor for class org.apache.hadoop.io.SetFile.Writer
Deprecated. Create a set naming the element comparator and compression type.
setFileMd5(Configuration, String) - Static method in class org.apache.hadoop.filecache.DistributedCache
This is to check the md5 of the files to be localized
setFilterClass(Configuration, Class) - Static method in class org.apache.hadoop.mapred.SequenceFileInputFilter
set the filter class
setFrequency(Configuration, int) - Static method in class org.apache.hadoop.mapred.SequenceFileInputFilter.MD5Filter
set the filtering frequency in configuration
setFrequency(Configuration, int) - Static method in class org.apache.hadoop.mapred.SequenceFileInputFilter.PercentFilter
Set the frequency and store it in the configuration.
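A sketch of sampling a sequence-file input with the filter setters above; the frequency of 10 keeps roughly every tenth record and is only illustrative.

    JobConf conf = new JobConf();
    conf.setInputFormat(SequenceFileInputFilter.class);               // filtering input format
    SequenceFileInputFilter.setFilterClass(conf, SequenceFileInputFilter.PercentFilter.class);
    SequenceFileInputFilter.PercentFilter.setFrequency(conf, 10);     // keep record# % 10 == 0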
setHostName(String) - Method in class org.apache.hadoop.dfs.DatanodeInfo
 
setIndexInterval(int) - Method in class org.apache.hadoop.io.MapFile.Writer
Sets the index interval.
setInput(byte[], int, int) - Method in interface org.apache.hadoop.io.compress.Compressor
Sets input data for compression.
setInput(byte[], int, int) - Method in interface org.apache.hadoop.io.compress.Decompressor
Sets input data for decompression.
setInput(byte[], int, int) - Method in class org.apache.hadoop.io.compress.lzo.LzoCompressor
 
setInput(byte[], int, int) - Method in class org.apache.hadoop.io.compress.lzo.LzoDecompressor
 
setInput(byte[], int, int) - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
 
setInput(byte[], int, int) - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
setInputFormat(Class<? extends InputFormat>) - Method in class org.apache.hadoop.mapred.JobConf
 
setInputKeyClass(Class) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Not used
setInputPath(Path) - Method in class org.apache.hadoop.mapred.JobConf
 
setInputValueClass(Class) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Not used
setInt(String, int) - Method in class org.apache.hadoop.conf.Configuration
Sets the value of the name property to an integer.
setJar(String) - Method in class org.apache.hadoop.mapred.JobConf
 
setJarByClass(Class) - Method in class org.apache.hadoop.mapred.JobConf
Set the job's jar file by finding an example class location.
setJobConf(JobConf) - Method in class org.apache.hadoop.mapred.jobcontrol.Job
Set the mapred job conf for this job.
setJobConf() - Method in class org.apache.hadoop.streaming.StreamJob
 
setJobID(String) - Method in class org.apache.hadoop.mapred.jobcontrol.Job
Set the job ID for this job.
setJobName(String) - Method in class org.apache.hadoop.mapred.JobConf
Set the user-specified job name.
setJobName(String) - Method in class org.apache.hadoop.mapred.jobcontrol.Job
Set the job name for this job.
setKeepFailedTaskFiles(boolean) - Method in class org.apache.hadoop.mapred.JobConf
Set whether the framework should keep the intermediate files for failed tasks.
setKeepTaskFilesPattern(String) - Method in class org.apache.hadoop.mapred.JobConf
Set a regular expression for task names that should be kept.
setLanguage(String) - Method in class org.apache.hadoop.record.compiler.ant.RccTask
Sets the output language option
setLevel(int) - Method in class org.apache.hadoop.dfs.DatanodeInfo
 
setLocalArchives(Configuration, String) - Static method in class org.apache.hadoop.filecache.DistributedCache
Set the conf to contain the location for localized archives
setLocalFiles(Configuration, String) - Static method in class org.apache.hadoop.filecache.DistributedCache
Set the conf to contain the location for localized files
setLogsRetainHours(int) - Method in class org.apache.hadoop.mapred.TaskLogAppender
 
setLong(String, long) - Method in class org.apache.hadoop.conf.Configuration
Sets the value of the name property to a long.
setLongValue(Object, long) - Method in class org.apache.hadoop.contrib.utils.join.JobBase
Set the given counter to the given value
setMapOutputCompressionType(SequenceFile.CompressionType) - Method in class org.apache.hadoop.mapred.JobConf
Set the compression type for the map outputs.
setMapOutputCompressorClass(Class<? extends CompressionCodec>) - Method in class org.apache.hadoop.mapred.JobConf
Set the given class as the compression codec for the map outputs.
setMapOutputKeyClass(Class<? extends WritableComparable>) - Method in class org.apache.hadoop.mapred.JobConf
Set the key class for the map output data.
setMapOutputValueClass(Class<? extends Writable>) - Method in class org.apache.hadoop.mapred.JobConf
Set the value class for the map output data.
setMapperClass(Class<? extends Mapper>) - Method in class org.apache.hadoop.mapred.JobConf
 
setMapredJobID(String) - Method in class org.apache.hadoop.mapred.jobcontrol.Job
Set the mapred ID for this job.
setMapRunnerClass(Class<? extends MapRunnable>) - Method in class org.apache.hadoop.mapred.JobConf
 
setMaxMapAttempts(int) - Method in class org.apache.hadoop.mapred.JobConf
Expert: Set the maximum number of attempts that will be made to run a map task.
setMaxMapTaskFailuresPercent(int) - Method in class org.apache.hadoop.mapred.JobConf
Set the maximum percentage of map tasks that can fail without the job being aborted.
setMaxReduceAttempts(int) - Method in class org.apache.hadoop.mapred.JobConf
Expert: Set the maximum number of attempts that will be made to run a reduce task.
setMaxReduceTaskFailuresPercent(int) - Method in class org.apache.hadoop.mapred.JobConf
Set the maximum percentage of reduce tasks that can fail without the job being aborted.
setMaxTaskFailuresPerTracker(int) - Method in class org.apache.hadoop.mapred.JobConf
Set the maximum number of failures of a given job per tasktracker.
setMemory(int) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Set the total amount of buffer memory, in bytes.
setMessage(String) - Method in class org.apache.hadoop.mapred.jobcontrol.Job
Set the message for this job.
setMetric(String, int) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Sets the named metric to the specified value.
setMetric(String, short) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Sets the named metric to the specified value.
setMetric(String, byte) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Sets the named metric to the specified value.
setMetric(String, float) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Sets the named metric to the specified value.
setMetric(String, int) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Sets the named metric to the specified value.
setMetric(String, short) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Sets the named metric to the specified value.
setMetric(String, byte) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Sets the named metric to the specified value.
setMetric(String, float) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Sets the named metric to the specified value.
setMinSplitSize(long) - Method in class org.apache.hadoop.mapred.FileInputFormat
 
setMissingSize(long) - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
 
setName(Class, String) - Static method in class org.apache.hadoop.io.WritableName
Set the name that a class should be known as to something other than the class name.
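A small sketch of registering an alternate name for a Writable class; the legacy name used here is hypothetical.

    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.WritableName;

    public class WritableNameExample {
        public static void main(String[] args) {
            // Hypothetical alias: the class will be known by this name instead of its class name.
            WritableName.setName(Text.class, "example.legacy.TextRecord");
        }
    }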
setNoKeepSplits(int) - Method in class org.apache.hadoop.mapred.TaskLogAppender
 
setNumMapTasks(int) - Method in class org.apache.hadoop.mapred.JobConf
 
setNumReduceTasks(int) - Method in class org.apache.hadoop.mapred.JobConf
 
setObject(String, Object) - Method in class org.apache.hadoop.conf.Configuration
Sets the value of the name property.
setOutputCompressorClass(JobConf, Class) - Static method in class org.apache.hadoop.mapred.OutputFormatBase
Set the given class as the output compression codec.
setOutputFormat(Class<? extends OutputFormat>) - Method in class org.apache.hadoop.mapred.JobConf
 
setOutputKeyClass(Class<? extends WritableComparable>) - Method in class org.apache.hadoop.mapred.JobConf
 
setOutputKeyComparatorClass(Class<? extends WritableComparator>) - Method in class org.apache.hadoop.mapred.JobConf
 
setOutputPath(Path) - Method in class org.apache.hadoop.mapred.JobConf
 
setOutputValueClass(Class<? extends Writable>) - Method in class org.apache.hadoop.mapred.JobConf
 
setOutputValueGroupingComparator(Class) - Method in class org.apache.hadoop.mapred.JobConf
Set the user defined comparator for grouping values.
setOverReplicatedBlocks(long) - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
 
setParent(Node) - Method in class org.apache.hadoop.dfs.DatanodeInfo
 
setPartitionerClass(Class<? extends Partitioner>) - Method in class org.apache.hadoop.mapred.JobConf
 
setPattern(Configuration, String) - Static method in class org.apache.hadoop.mapred.SequenceFileInputFilter.RegexFilter
Define the filtering regex and store it in the conf.
setPeriod(int) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Sets the timer period
setPurgeLogSplits(boolean) - Method in class org.apache.hadoop.mapred.TaskLogAppender
 
setQuietMode(boolean) - Method in class org.apache.hadoop.conf.Configuration
Make this class quiet.
setReducerClass(Class<? extends Reducer>) - Method in class org.apache.hadoop.mapred.JobConf
 
setReplication(String, short) - Method in class org.apache.hadoop.dfs.NameNode
 
setReplication(int) - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
 
setReplication(Path, short) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Set replication for an existing file.
setReplication(Path, short) - Method in class org.apache.hadoop.fs.FileSystem
Set replication for an existing file.
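A brief sketch of changing the replication factor of an existing file through the FileSystem API; the path is hypothetical and a reachable file system (e.g. a running HDFS) is assumed.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SetReplicationExample {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());     // default file system
            Path file = new Path("/user/demo/part-00000");           // hypothetical path
            boolean changed = fs.setReplication(file, (short) 5);    // request 5 replicas
            System.out.println("replication changed: " + changed);
        }
    }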
setReplication(Path, short) - Method in class org.apache.hadoop.fs.FilterFileSystem
Set replication for an existing file.
setReplication(short, String, boolean) - Method in class org.apache.hadoop.fs.FsShell
Set the replication for files matching the file pattern srcf; if a match is a directory and recursive is true, also set the replication for all its subdirectories and the files under them.
setReplication(Path, short) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
Set the replication of the given file
setReplication(Path, short) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
Replication is not supported for S3 file systems since S3 handles it for us.
setReplication(Path, short) - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated.  
setRunState(int) - Method in class org.apache.hadoop.mapred.JobStatus
Change the current run state of the job.
setSafeMode(String[], int) - Method in class org.apache.hadoop.dfs.DFSAdmin
Safe mode maintenance command.
setSafeMode(FSConstants.SafeModeAction) - Method in class org.apache.hadoop.dfs.DistributedFileSystem
Enter, leave or get safe mode.
setSafeMode(FSConstants.SafeModeAction) - Method in class org.apache.hadoop.dfs.NameNode
 
setSize(int) - Method in class org.apache.hadoop.io.BytesWritable
Change the size of the buffer.
setSpeculativeExecution(boolean) - Method in class org.apache.hadoop.mapred.JobConf
Turn on or off speculative execution for this job.
setState(int) - Method in class org.apache.hadoop.mapred.jobcontrol.Job
Set the state for this job.
setStatus(String) - Method in interface org.apache.hadoop.mapred.Reporter
Alter the application's status description.
setStatus(String) - Method in class org.apache.hadoop.util.Progress
 
setTabSize(int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
setTag(Text) - Method in class org.apache.hadoop.contrib.utils.join.TaggedMapOutput
 
setTag(String, String) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Sets the named tag to the specified value.
setTag(String, int) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Sets the named tag to the specified value.
setTag(String, short) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Sets the named tag to the specified value.
setTag(String, byte) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Sets the named tag to the specified value.
setTag(String, String) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Sets the named tag to the specified value.
setTag(String, int) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Sets the named tag to the specified value.
setTag(String, short) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Sets the named tag to the specified value.
setTag(String, byte) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Sets the named tag to the specified value.
setTaskId(String) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
Sets task id.
setTaskId(String) - Method in class org.apache.hadoop.mapred.TaskLogAppender
 
setTaskOutputFilter(JobClient.TaskStatusFilter) - Method in class org.apache.hadoop.mapred.JobClient
Deprecated. 
setTaskOutputFilter(JobConf, JobClient.TaskStatusFilter) - Static method in class org.apache.hadoop.mapred.JobClient
Modify the JobConf to set the task output filter
setTaskStatus(TaskCompletionEvent.Status) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
Set task status.
setTaskTrackerHttp(String) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
Set task tracker http location.
setThreads(int, int) - Method in class org.apache.hadoop.mapred.StatusHttpServer
 
setTimeout(int) - Method in class org.apache.hadoop.ipc.Client
Sets the timeout used for network i/o.
setTimeout(int) - Method in class org.apache.hadoop.ipc.Server
Sets the timeout used for network i/o.
setTotalBlocks(long) - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
 
setTotalDirs(long) - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
 
setTotalFiles(long) - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
 
setTotalLogFileSize(long) - Method in class org.apache.hadoop.mapred.TaskLogAppender
 
setTotalSize(long) - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
 
setUnderReplicatedBlocks(long) - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
 
setup(Configuration, JobConf, String[], String, boolean) - Method in class org.apache.hadoop.util.CopyFiles.CopyFilesMapper
Interface to initialize *distcp* specific map tasks.
setup(Configuration, JobConf, String[], String, boolean) - Method in class org.apache.hadoop.util.CopyFiles.FSCopyFilesMapper
Initialize DFSCopyFileMapper specific job-configuration.
setup(Configuration, JobConf, String[], String, boolean) - Method in class org.apache.hadoop.util.CopyFiles.HTTPCopyFilesMapper
Initialize HTTPCopyFileMapper specific job.
setUser(String) - Method in class org.apache.hadoop.mapred.JobConf
Set the reported username for this job.
setUserJobConfProps(boolean) - Method in class org.apache.hadoop.streaming.StreamJob
Sets the user-specified jobconf variables passed via -jobconf key=value.
setValueClass(Class) - Method in class org.apache.hadoop.io.ArrayWritable
 
setVerbose(boolean) - Method in class org.apache.hadoop.streaming.JarBuilder
 
setWorkingDirectory(Path) - Method in class org.apache.hadoop.fs.FileSystem
Set the current working directory for the given file system.
setWorkingDirectory(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Set the current working directory for the given file system.
setWorkingDirectory(Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
Set the working directory to the given directory.
setWorkingDirectory(Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
setWorkingDirectory(Path) - Method in class org.apache.hadoop.mapred.JobConf
Set the current working directory for the default file system
shippedCanonFiles_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
shouldPreserveInput() - Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
 
shouldRetry(Exception, int) - Method in interface org.apache.hadoop.io.retry.RetryPolicy
Determines whether the framework should retry a method for the given exception, and the number of retries that have been made for that operation so far.
shutdown() - Method in class org.apache.hadoop.dfs.DataNode
Shut down this instance of the datanode.
shutdown() - Method in class org.apache.hadoop.dfs.SecondaryNameNode
Shut down this instance of the datanode.
shutdown() - Method in class org.apache.hadoop.mapred.TaskTracker
 
SimpleCharStream - Class in org.apache.hadoop.record.compiler.generated
An implementation of interface CharStream, where the stream is assumed to contain only ASCII characters (without unicode processing).
SimpleCharStream(Reader, int, int, int) - Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
SimpleCharStream(Reader, int, int) - Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
SimpleCharStream(Reader) - Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
SimpleCharStream(InputStream, String, int, int, int) - Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
SimpleCharStream(InputStream, int, int, int) - Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
SimpleCharStream(InputStream, String, int, int) - Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
SimpleCharStream(InputStream, int, int) - Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
SimpleCharStream(InputStream, String) - Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
SimpleCharStream(InputStream) - Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
simpleHostname(String) - Static method in class org.apache.hadoop.util.StringUtils
Given a full hostname, return the word up to the first dot.
size() - Method in class org.apache.hadoop.mapred.Counters.Group
Returns the number of counters in this group.
size() - Method in class org.apache.hadoop.mapred.Counters
Returns the total number of counters, by summing the number of counters in each group.
size() - Method in class org.apache.hadoop.util.PriorityQueue
Returns the number of elements currently stored in the PriorityQueue.
skip(long) - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
 
skip(DataInput) - Static method in class org.apache.hadoop.io.Text
Skips over one Text in the input.
skip(DataInput) - Static method in class org.apache.hadoop.io.UTF8
Deprecated. Skips over one UTF8 in the input.
skipCompressedByteArray(DataInput) - Static method in class org.apache.hadoop.io.WritableUtils
 
skipFully(DataInput, int) - Static method in class org.apache.hadoop.io.WritableUtils
Skip len bytes in the given input stream.
Sort - Class in org.apache.hadoop.examples
This is the trivial map/reduce program that does absolutely nothing other than use the framework to fragment and sort the input values.
Sort() - Constructor for class org.apache.hadoop.examples.Sort
 
sort(Path[], Path, boolean) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Perform a file sort from a set of input files into an output file.
sort(Path, Path) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
The backwards compatible interface to sort.
sortAndIterate(Path[], Path, boolean) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Perform a file sort from a set of input files and return an iterator.
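A hedged sketch of an external sort with SequenceFile.Sorter, assuming the input files are SequenceFiles keyed by Text with IntWritable values; the paths are hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class SorterExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            SequenceFile.Sorter sorter =
                new SequenceFile.Sorter(fs, Text.class, IntWritable.class, conf);
            Path[] inputs = { new Path("/tmp/in-0"), new Path("/tmp/in-1") }; // hypothetical inputs
            sorter.sort(inputs, new Path("/tmp/sorted"), false);     // false: keep the input files
        }
    }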
sortByDistance(DatanodeDescriptor, DatanodeDescriptor[]) - Method in class org.apache.hadoop.net.NetworkTopology
Sorts nodes array by their distances to reader.
sortNodeList(ArrayList<DatanodeDescriptor>, String, String) - Method in class org.apache.hadoop.dfs.JspHelper
 
SOURCE_TAGS_FIELD - Static variable in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
specialConstructor - Variable in exception org.apache.hadoop.record.compiler.generated.ParseException
This variable determines which constructor was used to create this object and thereby affects the semantics of the "getMessage" method (see below).
specialToken - Variable in class org.apache.hadoop.record.compiler.generated.Token
This field is used to access special tokens that occur prior to this token, but after the immediately preceding regular (non-special) token.
splitKeyVal(byte[], int, int, Text, Text, int) - Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
Split a UTF-8 byte array into key and value, assuming the delimiter is at splitpos.
splitKeyVal(byte[], Text, Text, int) - Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
Split a UTF-8 byte array into key and value, assuming the delimiter is at splitpos.
start() - Method in class org.apache.hadoop.ipc.Server
Starts the service.
start() - Method in class org.apache.hadoop.mapred.StatusHttpServer
Start the server.
startLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
 
startLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
Returns a local File that the user can write output to.
startLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Returns a local File that the user can write output to.
startLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.InMemoryFileSystem
 
startLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
startLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
startLocalOutput(Path, Path) - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated.  
startMap(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
startMap(TreeMap, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
startMap(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
startMap(TreeMap, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
startMap(String) - Method in interface org.apache.hadoop.record.RecordInput
Check the mark for start of the serialized map.
startMap(TreeMap, String) - Method in interface org.apache.hadoop.record.RecordOutput
Mark the start of a map to be serialized.
startMap(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
startMap(TreeMap, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
startMonitoring() - Method in class org.apache.hadoop.metrics.file.FileContext
Starts or restarts monitoring by opening, in append mode, the file specified by the fileName attribute, if specified.
startMonitoring() - Method in interface org.apache.hadoop.metrics.MetricsContext
Starts or restarts monitoring, the emitting of metrics records as they are updated.
startMonitoring() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Starts or restarts monitoring, the emitting of metrics records.
startMonitoring() - Method in class org.apache.hadoop.metrics.spi.NullContext
Do-nothing version of startMonitoring
startNextPhase() - Method in class org.apache.hadoop.util.Progress
Called during execution to move to the next phase at this level in the tree.
startNotifier() - Static method in class org.apache.hadoop.mapred.JobEndNotifier
 
startRecord(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
startRecord(Record, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
startRecord(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
startRecord(Record, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
startRecord(String) - Method in interface org.apache.hadoop.record.RecordInput
Check the mark for start of the serialized record.
startRecord(Record, String) - Method in interface org.apache.hadoop.record.RecordOutput
Mark the start of a record to be serialized.
startRecord(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
startRecord(Record, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
startTracker(Configuration) - Static method in class org.apache.hadoop.mapred.JobTracker
Start the JobTracker with given configuration.
startVector(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
startVector(ArrayList, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
startVector(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
startVector(ArrayList, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
startVector(String) - Method in interface org.apache.hadoop.record.RecordInput
Check the mark for start of the serialized vector.
startVector(ArrayList, String) - Method in interface org.apache.hadoop.record.RecordOutput
Mark the start of a vector to be serialized.
startVector(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
startVector(ArrayList, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
stateChangeLog - Static variable in class org.apache.hadoop.dfs.NameNode
 
staticFlag - Static variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
StatusHttpServer - Class in org.apache.hadoop.mapred
Create a Jetty embedded server to answer http requests.
StatusHttpServer(String, String, int, boolean) - Constructor for class org.apache.hadoop.mapred.StatusHttpServer
Create a status server on the given port.
StatusHttpServer.StackServlet - Class in org.apache.hadoop.mapred
A very simple servlet to serve up a text representation of the current stack traces.
StatusHttpServer.StackServlet() - Constructor for class org.apache.hadoop.mapred.StatusHttpServer.StackServlet
 
STILL_WAITING - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
stop() - Method in class org.apache.hadoop.dfs.NameNode
Stop all NameNode threads and wait for all to finish.
stop() - Method in class org.apache.hadoop.ipc.Client
Stop all threads related to this client.
stop() - Method in class org.apache.hadoop.ipc.Server
Stops the service.
stop() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
set the thread state to STOPPING so that the thread will stop when it wakes up.
stop() - Method in class org.apache.hadoop.mapred.StatusHttpServer
stop the server
stopClient() - Static method in class org.apache.hadoop.ipc.RPC
Stop all RPC client connections
stopMonitoring() - Method in class org.apache.hadoop.metrics.file.FileContext
Stops monitoring, closing the file.
stopMonitoring() - Method in interface org.apache.hadoop.metrics.MetricsContext
Stops monitoring.
stopMonitoring() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Stops monitoring.
stopNotifier() - Static method in class org.apache.hadoop.mapred.JobEndNotifier
 
stopTracker() - Static method in class org.apache.hadoop.mapred.JobTracker
 
storageID - Variable in class org.apache.hadoop.dfs.DatanodeID
 
storeBlock(Block, File) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
storeINode(Path, INode) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
StreamBaseRecordReader - Class in org.apache.hadoop.streaming
Shared functionality for hadoopStreaming formats.
StreamBaseRecordReader(FSDataInputStream, FileSplit, Reporter, JobConf, FileSystem) - Constructor for class org.apache.hadoop.streaming.StreamBaseRecordReader
 
streamBlockInAscii(InetSocketAddress, long, long, long, long, JspWriter) - Method in class org.apache.hadoop.dfs.JspHelper
 
StreamFile - Class in org.apache.hadoop.dfs
 
StreamFile() - Constructor for class org.apache.hadoop.dfs.StreamFile
 
StreamInputFormat - Class in org.apache.hadoop.streaming
An input format that selects a RecordReader based on a JobConf property.
StreamInputFormat() - Constructor for class org.apache.hadoop.streaming.StreamInputFormat
 
StreamJob - Class in org.apache.hadoop.streaming
All the client-side work happens here.
StreamJob(String[], boolean) - Constructor for class org.apache.hadoop.streaming.StreamJob
 
StreamLineRecordReader - Class in org.apache.hadoop.streaming
Deprecated.  
StreamLineRecordReader(Configuration, FileSplit) - Constructor for class org.apache.hadoop.streaming.StreamLineRecordReader
Deprecated.  
StreamOutputFormat - Class in org.apache.hadoop.streaming
Deprecated.  
StreamOutputFormat() - Constructor for class org.apache.hadoop.streaming.StreamOutputFormat
Deprecated.  
StreamSequenceRecordReader - Class in org.apache.hadoop.streaming
Deprecated.  
StreamSequenceRecordReader(Configuration, FileSplit) - Constructor for class org.apache.hadoop.streaming.StreamSequenceRecordReader
Deprecated.  
StreamUtil - Class in org.apache.hadoop.streaming
Utilities not available elsewhere in Hadoop.
StreamUtil() - Constructor for class org.apache.hadoop.streaming.StreamUtil
 
StreamXmlRecordReader - Class in org.apache.hadoop.streaming
A way to interpret XML fragments as Mapper input records.
StreamXmlRecordReader(FSDataInputStream, FileSplit, Reporter, JobConf, FileSystem) - Constructor for class org.apache.hadoop.streaming.StreamXmlRecordReader
 
STRING_VALUE_MAX - Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
STRING_VALUE_MIN - Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
stringifyException(Throwable) - Static method in class org.apache.hadoop.util.StringUtils
Make a string representation of the exception.
stringToPath(String[]) - Static method in class org.apache.hadoop.util.StringUtils
 
stringToURI(String[]) - Static method in class org.apache.hadoop.util.StringUtils
 
StringUtils - Class in org.apache.hadoop.util
General string utils
StringUtils() - Constructor for class org.apache.hadoop.util.StringUtils
 
StringValueMax - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements a value aggregator that maintains the biggest of a sequence of strings.
StringValueMax() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
the default constructor
StringValueMin - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements a value aggregator that maintains the smallest of a sequence of strings.
StringValueMin() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
the default constructor
submit() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
Submit this job to mapred.
submitAndMonitorJob() - Method in class org.apache.hadoop.streaming.StreamJob
 
submitJob(String) - Method in class org.apache.hadoop.mapred.JobClient
Submit a job to the MR system
submitJob(JobConf) - Method in class org.apache.hadoop.mapred.JobClient
Submit a job to the MR system
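A short sketch of submitting a configured job and blocking on it, combining submitJob(JobConf) with RunningJob.waitForCompletion(); the job configuration is assumed to be fully set up elsewhere, and the driver class name is hypothetical.

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.RunningJob;

    public class SubmitJobExample {
        public static void main(String[] args) throws Exception {
            JobConf job = new JobConf(SubmitJobExample.class);       // hypothetical: mapper, reducer,
                                                                     // and paths assumed already set
            JobClient client = new JobClient(job);                   // talks to the configured JobTracker
            RunningJob running = client.submitJob(job);              // returns immediately
            running.waitForCompletion();                             // block until the job finishes
        }
    }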
submitJob(String) - Method in interface org.apache.hadoop.mapred.JobSubmissionProtocol
Submit a Job for execution.
submitJob(String) - Method in class org.apache.hadoop.mapred.JobTracker
JobTracker.submitJob() kicks off a new job.
SUCCEEDED - Static variable in class org.apache.hadoop.mapred.JobStatus
 
SUCCESS - Static variable in class org.apache.hadoop.mapred.jobcontrol.Job
 
suffix(String) - Method in class org.apache.hadoop.fs.Path
Adds a suffix to the final name in the path.
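A tiny sketch of Path.suffix together with Path.toUri (also indexed below); the path value is hypothetical.

    import java.net.URI;
    import org.apache.hadoop.fs.Path;

    public class PathSuffixExample {
        public static void main(String[] args) {
            Path part = new Path("/user/demo/part-00000");           // hypothetical path
            Path crc = part.suffix(".crc");                          // /user/demo/part-00000.crc
            URI uri = part.toUri();
            System.out.println(crc + " " + uri);
        }
    }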
sum(Counters, Counters) - Static method in class org.apache.hadoop.mapred.Counters
Convenience method for computing the sum of two sets of counters.
suspend() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
suspend the running thread
SwitchTo(int) - Method in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
symLink(String, String) - Static method in class org.apache.hadoop.fs.FileUtil
Create a soft link between a src and destination only on a local disk.
sync(long) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Seek to the next sync mark past a given position.
sync() - Method in class org.apache.hadoop.io.SequenceFile.Writer
create a sync point
SYNC_INTERVAL - Static variable in class org.apache.hadoop.io.SequenceFile
The number of bytes between sync points.
syncSeen() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns true iff the previous call to next passed a sync mark.
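A hedged sketch of using sync marks when reading a SequenceFile from an arbitrary offset: sync(pos) seeks to the next sync mark past pos, and syncSeen() reports whether the previous next() crossed one. The path, offset, and key/value classes are assumptions about how the file was written.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class SyncReadExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            SequenceFile.Reader reader =
                new SequenceFile.Reader(fs, new Path("/tmp/sorted"), conf); // hypothetical file
            reader.sync(1024 * 1024);                                // resume at next sync mark past 1 MB
            Text key = new Text();
            IntWritable value = new IntWritable();
            while (reader.next(key, value)) {
                // reader.syncSeen() is true when the last next() passed a sync mark
                System.out.println(key + "\t" + value);
            }
            reader.close();
        }
    }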

T

tabSize - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
tag - Variable in class org.apache.hadoop.contrib.utils.join.TaggedMapOutput
 
TaggedMapOutput - Class in org.apache.hadoop.contrib.utils.join
This abstract class serves as the base class for the values that flow from the mappers to the reducers in a data join job.
TaggedMapOutput() - Constructor for class org.apache.hadoop.contrib.utils.join.TaggedMapOutput
 
TaskCompletionEvent - Class in org.apache.hadoop.mapred
This is used to track task completion events on job tracker.
TaskCompletionEvent() - Constructor for class org.apache.hadoop.mapred.TaskCompletionEvent
Default constructor for Writable.
TaskCompletionEvent(int, String, int, boolean, TaskCompletionEvent.Status, String) - Constructor for class org.apache.hadoop.mapred.TaskCompletionEvent
Constructor.
TaskCompletionEvent.Status - Enum in org.apache.hadoop.mapred
 
TaskLogAppender - Class in org.apache.hadoop.mapred
A simple log4j-appender for the task child's map-reduce system logs.
TaskLogAppender() - Constructor for class org.apache.hadoop.mapred.TaskLogAppender
 
TaskReport - Class in org.apache.hadoop.mapred
A report on the state of a task.
TaskReport() - Constructor for class org.apache.hadoop.mapred.TaskReport
 
TaskTracker - Class in org.apache.hadoop.mapred
TaskTracker is a process that starts and tracks MR Tasks in a networked environment.
TaskTracker(JobConf) - Constructor for class org.apache.hadoop.mapred.TaskTracker
Start with the local machine name, and the default JobTracker
TaskTracker.Child - Class in org.apache.hadoop.mapred
The main() for child processes.
TaskTracker.Child() - Constructor for class org.apache.hadoop.mapred.TaskTracker.Child
 
TaskTracker.MapOutputServlet - Class in org.apache.hadoop.mapred
This class is used in TaskTracker's Jetty to serve the map outputs to other nodes.
TaskTracker.MapOutputServlet() - Constructor for class org.apache.hadoop.mapred.TaskTracker.MapOutputServlet
 
taskTrackers() - Method in class org.apache.hadoop.mapred.JobTracker
 
Text - Class in org.apache.hadoop.io
This class stores text using standard UTF8 encoding.
Text() - Constructor for class org.apache.hadoop.io.Text
 
Text(String) - Constructor for class org.apache.hadoop.io.Text
Construct from a string.
Text(Text) - Constructor for class org.apache.hadoop.io.Text
Construct from another text.
Text(byte[]) - Constructor for class org.apache.hadoop.io.Text
Construct from a byte array.
Text.Comparator - Class in org.apache.hadoop.io
A WritableComparator optimized for Text keys.
Text.Comparator() - Constructor for class org.apache.hadoop.io.Text.Comparator
 
TextInputFormat - Class in org.apache.hadoop.mapred
An InputFormat for plain text files.
TextInputFormat() - Constructor for class org.apache.hadoop.mapred.TextInputFormat
 
TextOutputFormat - Class in org.apache.hadoop.mapred
An OutputFormat that writes plain text files.
TextOutputFormat() - Constructor for class org.apache.hadoop.mapred.TextOutputFormat
 
TextOutputFormat.LineRecordWriter - Class in org.apache.hadoop.mapred
 
TextOutputFormat.LineRecordWriter(DataOutputStream) - Constructor for class org.apache.hadoop.mapred.TextOutputFormat.LineRecordWriter
 
toArray() - Method in class org.apache.hadoop.io.ArrayWritable
 
toArray() - Method in class org.apache.hadoop.io.TwoDArrayWritable
 
token - Variable in class org.apache.hadoop.record.compiler.generated.Rcc
 
Token - Class in org.apache.hadoop.record.compiler.generated
Describes the input token stream.
Token() - Constructor for class org.apache.hadoop.record.compiler.generated.Token
 
token_source - Variable in class org.apache.hadoop.record.compiler.generated.Rcc
 
TokenCountMapper - Class in org.apache.hadoop.mapred.lib
A Mapper that maps text values into <token, freq> pairs.
TokenCountMapper() - Constructor for class org.apache.hadoop.mapred.lib.TokenCountMapper
 
tokenImage - Variable in exception org.apache.hadoop.record.compiler.generated.ParseException
This is a reference to the "tokenImage" array of the generated parser within which the parse error occurred.
tokenImage - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
TokenMgrError - Error in org.apache.hadoop.record.compiler.generated
 
TokenMgrError() - Constructor for error org.apache.hadoop.record.compiler.generated.TokenMgrError
 
TokenMgrError(String, int) - Constructor for error org.apache.hadoop.record.compiler.generated.TokenMgrError
 
TokenMgrError(boolean, int, int, int, String, char, int) - Constructor for error org.apache.hadoop.record.compiler.generated.TokenMgrError
 
Tool - Interface in org.apache.hadoop.util
A tool interface that supports generic options handling.
ToolBase - Class in org.apache.hadoop.util
This is a base class to support generic command-line options.
ToolBase() - Constructor for class org.apache.hadoop.util.ToolBase
 
top() - Method in class org.apache.hadoop.util.PriorityQueue
Returns the least element of the PriorityQueue in constant time.
toString() - Method in class org.apache.hadoop.conf.Configuration
 
toString() - Method in class org.apache.hadoop.dfs.DataNode
 
toString() - Method in class org.apache.hadoop.dfs.DatanodeID
 
toString() - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
 
toString() - Method in class org.apache.hadoop.fs.DF
 
toString() - Method in class org.apache.hadoop.fs.Path
 
toString() - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
toString() - Method in class org.apache.hadoop.fs.s3.Block
 
toString() - Method in class org.apache.hadoop.io.BytesWritable
Generate the stream of bytes as hex pairs separated by ' '.
toString() - Method in class org.apache.hadoop.io.compress.CompressionCodecFactory
Print the extension map out as a string.
toString() - Method in class org.apache.hadoop.io.FloatWritable
 
toString() - Method in class org.apache.hadoop.io.IntWritable
 
toString() - Method in class org.apache.hadoop.io.LongWritable
 
toString() - Method in class org.apache.hadoop.io.MD5Hash
Returns a string representation of this object.
toString() - Method in class org.apache.hadoop.io.SequenceFile.Metadata
 
toString() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns the name of the file.
toString() - Method in class org.apache.hadoop.io.Text
Convert text back to string
toString() - Method in class org.apache.hadoop.io.UTF8
Deprecated. Convert to a String.
toString() - Method in exception org.apache.hadoop.io.VersionMismatchException
Returns a string representation of this object.
toString() - Method in class org.apache.hadoop.io.VIntWritable
 
toString() - Method in class org.apache.hadoop.io.VLongWritable
 
toString() - Method in class org.apache.hadoop.mapred.FileSplit
 
toString() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
toString() - Method in class org.apache.hadoop.mapred.lib.aggregate.UserDefinedValueAggregatorDescriptor
 
toString() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
 
toString() - Method in class org.apache.hadoop.net.NetworkTopology
convert a network tree to a string
toString() - Method in class org.apache.hadoop.net.NodeBase
Return this node's string representation
toString() - Method in class org.apache.hadoop.record.Buffer
 
toString(String) - Method in class org.apache.hadoop.record.Buffer
Convert the byte buffer to a string using a specific character encoding.
toString() - Method in class org.apache.hadoop.record.compiler.CodeBuffer
 
toString() - Method in class org.apache.hadoop.record.compiler.generated.Token
Returns the image.
toString() - Method in class org.apache.hadoop.record.Record
 
toString() - Method in class org.apache.hadoop.util.Progress
 
toStrings() - Method in class org.apache.hadoop.io.ArrayWritable
 
touch(File) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
toUri() - Method in class org.apache.hadoop.fs.Path
Convert this to a URI.
toURI(String) - Static method in class org.apache.hadoop.util.CopyFiles
 
transform(InputStream, InputStream, Writer) - Static method in class org.apache.hadoop.util.XMLUtils
Transform input xml given a stylesheet.
Trash - Class in org.apache.hadoop.fs
Provides a trash feature.
Trash(Configuration) - Constructor for class org.apache.hadoop.fs.Trash
Construct a trash can accessor.
truncate() - Method in class org.apache.hadoop.record.Buffer
Change the capacity of the backing store to match the current count of the buffer.
TRY_ONCE_DONT_FAIL - Static variable in class org.apache.hadoop.io.retry.RetryPolicies
Try once, and fail silently for void methods, or by re-throwing the exception for non-void methods.
TRY_ONCE_THEN_FAIL - Static variable in class org.apache.hadoop.io.retry.RetryPolicies
Try once, and fail by re-throwing the exception.
TwoDArrayWritable - Class in org.apache.hadoop.io
A Writable for 2D arrays containing a matrix of instances of a class.
TwoDArrayWritable(Class) - Constructor for class org.apache.hadoop.io.TwoDArrayWritable
 
TwoDArrayWritable(Class, Writable[][]) - Constructor for class org.apache.hadoop.io.TwoDArrayWritable
 
Type() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
TYPE_SEPARATOR - Static variable in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorDescriptor
 

U

UNIQ_VALUE_COUNT - Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
UniqValueCount - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements a value aggregator that dedupes a sequence of objects.
UniqValueCount() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
the default constructor
unJar(File, File) - Static method in class org.apache.hadoop.util.RunJar
Unpack a jar file into a directory.
unregisterUpdater(Updater) - Method in interface org.apache.hadoop.metrics.MetricsContext
Removes a callback, if it exists.
unregisterUpdater(Updater) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Removes a callback, if it exists.
unZip(File, File) - Static method in class org.apache.hadoop.fs.FileUtil
Given a File input, unzip the file into the unzip directory passed as the second parameter.
update() - Method in interface org.apache.hadoop.metrics.MetricsRecord
Updates the table of buffered data which is to be sent periodically.
update(MetricsRecordImpl) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Called by MetricsRecordImpl.update().
update() - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Updates the table of buffered data which is to be sent periodically.
update(MetricsRecordImpl) - Method in class org.apache.hadoop.metrics.spi.NullContext
Do-nothing version of update
UpdateLineColumn(char) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
Updater - Interface in org.apache.hadoop.metrics
Call-back interface.
uriToString(URI[]) - Static method in class org.apache.hadoop.util.StringUtils
 
usage() - Static method in class org.apache.hadoop.record.compiler.generated.Rcc
 
UserDefinedValueAggregatorDescriptor - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements a wrapper for a user defined value aggregator descriptor.
UserDefinedValueAggregatorDescriptor(String, JobConf) - Constructor for class org.apache.hadoop.mapred.lib.aggregate.UserDefinedValueAggregatorDescriptor
 
userJobConfProps_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
USTRING_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
UTF8 - Class in org.apache.hadoop.io
Deprecated. replaced by Text
UTF8() - Constructor for class org.apache.hadoop.io.UTF8
Deprecated.  
UTF8(String) - Constructor for class org.apache.hadoop.io.UTF8
Deprecated. Construct from a given string.
UTF8(UTF8) - Constructor for class org.apache.hadoop.io.UTF8
Deprecated. Construct from a given string.
UTF8.Comparator - Class in org.apache.hadoop.io
Deprecated. A WritableComparator optimized for UTF8 keys.
UTF8.Comparator() - Constructor for class org.apache.hadoop.io.UTF8.Comparator
Deprecated.  
UTF8ByteArrayUtils - Class in org.apache.hadoop.streaming
General utilities for byte arrays containing UTF-8 encoded strings.
UTF8ByteArrayUtils() - Constructor for class org.apache.hadoop.streaming.UTF8ByteArrayUtils
 
utf8Length(String) - Static method in class org.apache.hadoop.io.Text
For the given string, returns the number of UTF-8 bytes required to encode the string.
Util - Class in org.apache.hadoop.metrics.spi
Static utility methods
Utils - Class in org.apache.hadoop.record
Various utility functions for the Hadoop record I/O runtime.

V

validateInput(JobConf) - Method in class org.apache.hadoop.mapred.FileInputFormat
 
validateInput(JobConf) - Method in interface org.apache.hadoop.mapred.InputFormat
Are the input directories valid? This method is used to test the input directories when a job is submitted so that the framework can fail early with a useful error message when the input directory does not exist.
validateInput(JobConf) - Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
This implementation always returns true.
validateUTF8(byte[]) - Static method in class org.apache.hadoop.io.Text
Check if a byte array contains valid UTF-8.
validateUTF8(byte[], int, int) - Static method in class org.apache.hadoop.io.Text
Check to see if a byte array is valid UTF-8.
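A small sketch exercising the Text byte-level helpers indexed above; the sample string is arbitrary.

    import org.apache.hadoop.io.Text;

    public class TextUtf8Example {
        public static void main(String[] args) throws Exception {
            String s = "naïve";
            int encodedLength = Text.utf8Length(s);                  // bytes needed to encode: 6
            byte[] raw = s.getBytes("UTF-8");
            Text.validateUTF8(raw);                                  // throws if the bytes are not valid UTF-8
            System.out.println(encodedLength + " == " + raw.length);
        }
    }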
VALUE_HISTOGRAM - Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
ValueAggregator - Interface in org.apache.hadoop.mapred.lib.aggregate
This interface defines the minimal protocol for value aggregators.
ValueAggregatorBaseDescriptor - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements the common functionalities of the subclasses of ValueAggregatorDescriptor class.
ValueAggregatorBaseDescriptor() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
ValueAggregatorCombiner - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements the generic combiner of Abacus.
ValueAggregatorCombiner() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorCombiner
 
ValueAggregatorDescriptor - Interface in org.apache.hadoop.mapred.lib.aggregate
This interface defines the contract a value aggregator descriptor must support.
ValueAggregatorJob - Class in org.apache.hadoop.mapred.lib.aggregate
This is the main class for creating a map/reduce job using Abacus framework.
ValueAggregatorJob() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
 
ValueAggregatorJobBase - Class in org.apache.hadoop.mapred.lib.aggregate
This abstract class implements some common functionalities of the generic mapper, reducer and combiner classes of Abacus.
ValueAggregatorJobBase() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJobBase
 
ValueAggregatorMapper - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements the generic mapper of Abacus.
ValueAggregatorMapper() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorMapper
 
ValueAggregatorReducer - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements the generic reducer of Abacus.
ValueAggregatorReducer() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorReducer
 
ValueHistogram - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements a value aggregator that computes the histogram of a sequence of strings.
ValueHistogram() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
 
valueOf(String) - Static method in enum org.apache.hadoop.dfs.DatanodeInfo.AdminStates
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.dfs.FSConstants.NodeType
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.dfs.FSConstants.SafeModeAction
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.dfs.FSConstants.StartupOption
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.io.compress.lzo.LzoCompressor.CompressionStrategy
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.io.compress.lzo.LzoDecompressor.CompressionStrategy
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionHeader
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionLevel
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionStrategy
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.io.compress.zlib.ZlibDecompressor.CompressionHeader
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.io.SequenceFile.CompressionType
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.mapred.JobClient.TaskStatusFilter
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.mapred.JobHistory.Keys
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.mapred.JobHistory.RecordTypes
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.mapred.JobHistory.Values
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.mapred.TaskCompletionEvent.Status
Returns the enum constant of this type with the specified name.
values() - Static method in enum org.apache.hadoop.dfs.DatanodeInfo.AdminStates
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.dfs.FSConstants.NodeType
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.dfs.FSConstants.SafeModeAction
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.dfs.FSConstants.StartupOption
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.io.compress.lzo.LzoCompressor.CompressionStrategy
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.io.compress.lzo.LzoDecompressor.CompressionStrategy
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionHeader
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionLevel
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionStrategy
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.io.compress.zlib.ZlibDecompressor.CompressionHeader
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.io.SequenceFile.CompressionType
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.mapred.JobClient.TaskStatusFilter
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.mapred.JobHistory.Keys
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.mapred.JobHistory.RecordTypes
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.mapred.JobHistory.Values
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.mapred.TaskCompletionEvent.Status
Returns an array containing the constants of this enum type, in the order they're declared.
Vector() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
VECTOR_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
verbose - Variable in class org.apache.hadoop.streaming.JarBuilder
 
verbose_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
verifyRequest(DatanodeRegistration) - Method in class org.apache.hadoop.dfs.NameNode
Verify request.
verifyVersion(int) - Method in class org.apache.hadoop.dfs.NameNode
Verify version.
VersionedProtocol - Interface in org.apache.hadoop.ipc
Superclass of all protocols that use Hadoop RPC.
VersionedWritable - Class in org.apache.hadoop.io
A base class for Writables that provides version checking.
VersionedWritable() - Constructor for class org.apache.hadoop.io.VersionedWritable
 
versionID - Static variable in interface org.apache.hadoop.mapred.JobSubmissionProtocol
 
VersionInfo - Class in org.apache.hadoop.util
This class finds the package info for Hadoop and the HadoopVersionAnnotation information.
VersionInfo() - Constructor for class org.apache.hadoop.util.VersionInfo
 
VersionMismatchException - Exception in org.apache.hadoop.fs.s3
Thrown when Hadoop cannot read the version of the data stored in S3FileSystem.
VersionMismatchException(String, String) - Constructor for exception org.apache.hadoop.fs.s3.VersionMismatchException
 
VersionMismatchException - Exception in org.apache.hadoop.io
Thrown by VersionedWritable.readFields(DataInput) when the version of an object being read does not match the current implementation version as returned by VersionedWritable.getVersion().
VersionMismatchException(byte, byte) - Constructor for exception org.apache.hadoop.io.VersionMismatchException
 
versionRequest() - Method in class org.apache.hadoop.dfs.NameNode
 
VIntWritable - Class in org.apache.hadoop.io
A WritableComparable for integer values stored in variable-length format.
VIntWritable() - Constructor for class org.apache.hadoop.io.VIntWritable
 
VIntWritable(int) - Constructor for class org.apache.hadoop.io.VIntWritable
 
VLongWritable - Class in org.apache.hadoop.io
A WritableComparable for longs in a variable-length format.
VLongWritable() - Constructor for class org.apache.hadoop.io.VLongWritable
 
VLongWritable(long) - Constructor for class org.apache.hadoop.io.VLongWritable
 

W

waitForCompletion() - Method in interface org.apache.hadoop.mapred.RunningJob
Blocks until the job is complete.
waitForProxy(Class, long, InetSocketAddress, Configuration) - Static method in class org.apache.hadoop.ipc.RPC
 
WAITING - Static variable in class org.apache.hadoop.mapred.jobcontrol.Job
 
windowBits() - Method in enum org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionHeader
 
windowBits() - Method in enum org.apache.hadoop.io.compress.zlib.ZlibDecompressor.CompressionHeader
 
WithinMultiLineComment - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
WithinOneLineComment - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
WordCount - Class in org.apache.hadoop.examples
This is an example Hadoop Map/Reduce application.
WordCount() - Constructor for class org.apache.hadoop.examples.WordCount
 
WordCount.MapClass - Class in org.apache.hadoop.examples
Counts the words in each line.
WordCount.MapClass() - Constructor for class org.apache.hadoop.examples.WordCount.MapClass
 
WordCount.Reduce - Class in org.apache.hadoop.examples
A reducer class that just emits the sum of the input values.
WordCount.Reduce() - Constructor for class org.apache.hadoop.examples.WordCount.Reduce
 
Writable - Interface in org.apache.hadoop.io
A simple, efficient, serialization protocol, based on DataInput and DataOutput.
WritableComparable - Interface in org.apache.hadoop.io
An interface which extends both Writable and Comparable.
WritableComparator - Class in org.apache.hadoop.io
A Comparator for WritableComparables.
WritableComparator(Class) - Constructor for class org.apache.hadoop.io.WritableComparator
Construct for a WritableComparable implementation.
WritableFactories - Class in org.apache.hadoop.io
Factories for non-public writables.
WritableFactory - Interface in org.apache.hadoop.io
A factory for a class of Writable.
WritableName - Class in org.apache.hadoop.io
Utility to permit renaming of Writable implementation classes without invalidating files that contain their class name.
WritableUtils - Class in org.apache.hadoop.io
 
WritableUtils() - Constructor for class org.apache.hadoop.io.WritableUtils
 
write(OutputStream) - Method in class org.apache.hadoop.conf.Configuration
Writes non-default properties in this configuration.
write(DataOutput) - Method in class org.apache.hadoop.dfs.DatanodeID
 
write(DataOutput) - Method in class org.apache.hadoop.dfs.DatanodeInfo
 
write(DataOutput) - Method in class org.apache.hadoop.io.ArrayWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.BooleanWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.BytesWritable
 
write(byte[], int, int) - Method in class org.apache.hadoop.io.compress.CompressionOutputStream
Write compressed bytes to the stream.
write(int) - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
 
write(byte[], int, int) - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
 
write(DataOutput) - Method in class org.apache.hadoop.io.CompressedWritable
 
write(DataInput, int) - Method in class org.apache.hadoop.io.DataOutputBuffer
Writes bytes from a DataInput directly into the buffer.
write(DataOutput) - Method in class org.apache.hadoop.io.FloatWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.GenericWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.IntWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.LongWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.MD5Hash
 
write(DataOutput) - Method in class org.apache.hadoop.io.NullWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.ObjectWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.SequenceFile.Metadata
 
write(DataOutput) - Method in class org.apache.hadoop.io.Text
Serialize: write this object to out; the length uses zero-compressed encoding.
write(DataOutput) - Method in class org.apache.hadoop.io.TwoDArrayWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.UTF8
Deprecated.  
write(DataOutput) - Method in class org.apache.hadoop.io.VersionedWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.VIntWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.VLongWritable
 
write(DataOutput) - Method in interface org.apache.hadoop.io.Writable
Writes the fields of this object to out.
write(DataOutput) - Method in class org.apache.hadoop.mapred.ClusterStatus
 
write(DataOutput) - Method in class org.apache.hadoop.mapred.Counters
 
write(DataOutput) - Method in class org.apache.hadoop.mapred.FileSplit
 
write(DataOutput) - Method in class org.apache.hadoop.mapred.JobProfile
 
write(DataOutput) - Method in class org.apache.hadoop.mapred.JobStatus
 
write(WritableComparable, Writable) - Method in interface org.apache.hadoop.mapred.RecordWriter
Writes a key/value pair.
write(DataOutput) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
 
write(DataOutput) - Method in class org.apache.hadoop.mapred.TaskReport
 
write(WritableComparable, Writable) - Method in class org.apache.hadoop.mapred.TextOutputFormat.LineRecordWriter
 
write(DataOutput) - Method in class org.apache.hadoop.record.Record
 
WRITE_COMPLETE - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
writeBool(boolean, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
writeBool(boolean, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
writeBool(boolean, String) - Method in interface org.apache.hadoop.record.RecordOutput
Write a boolean to serialized record.
writeBool(boolean, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
writeBuffer(Buffer, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
writeBuffer(Buffer, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
writeBuffer(Buffer, String) - Method in interface org.apache.hadoop.record.RecordOutput
Write a buffer to serialized record.
writeBuffer(Buffer, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
writeByte(byte, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
writeByte(byte, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
writeByte(byte, String) - Method in interface org.apache.hadoop.record.RecordOutput
Write a byte to serialized record.
writeByte(byte, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
writeCompressed(DataOutput) - Method in class org.apache.hadoop.io.CompressedWritable
Subclasses implement this instead of CompressedWritable.write(DataOutput).
writeCompressedByteArray(DataOutput, byte[]) - Static method in class org.apache.hadoop.io.WritableUtils
 
writeCompressedBytes(DataOutputStream) - Method in interface org.apache.hadoop.io.SequenceFile.ValueBytes
Write compressed bytes to outStream.
writeCompressedString(DataOutput, String) - Static method in class org.apache.hadoop.io.WritableUtils
 
writeCompressedStringArray(DataOutput, String[]) - Static method in class org.apache.hadoop.io.WritableUtils
 
writeDouble(double, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
writeDouble(double, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
writeDouble(double, String) - Method in interface org.apache.hadoop.record.RecordOutput
Write a double-precision floating-point number to the serialized record.
writeDouble(double, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
writeEnum(DataOutput, Enum) - Static method in class org.apache.hadoop.io.WritableUtils
Writes the String value of an enum to a DataOutput.
writeFile(SequenceFile.Sorter.RawKeyValueIterator, SequenceFile.Writer) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Writes records from a RawKeyValueIterator into the file represented by the passed writer.
writeFloat(float, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
writeFloat(float, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
writeFloat(float, String) - Method in interface org.apache.hadoop.record.RecordOutput
Write a single-precision float to the serialized record.
writeFloat(float, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
writeInt(int, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
writeInt(int, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
writeInt(int, String) - Method in interface org.apache.hadoop.record.RecordOutput
Write an integer to the serialized record.
writeInt(int, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
writeLong(long, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
writeLong(long, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
writeLong(long, String) - Method in interface org.apache.hadoop.record.RecordOutput
Write a long integer to the serialized record.
writeLong(long, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
writeObject(DataOutput, Object, Class, Configuration) - Static method in class org.apache.hadoop.io.ObjectWritable
Write a Writable, String, primitive type, or an array of the preceding.
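A rough usage sketch, assuming ObjectWritable.readObject(DataInput, Configuration) as the matching reader; the surrounding class is hypothetical.

    import java.io.*;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.ObjectWritable;
    import org.apache.hadoop.io.Text;

    public class ObjectWritableSketch {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();

        // Serialize a String and a Writable along with their declared classes.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        ObjectWritable.writeObject(out, "hello", String.class, conf);
        ObjectWritable.writeObject(out, new Text("world"), Text.class, conf);
        out.flush();

        // Read them back; readObject(DataInput, Configuration) is assumed here.
        DataInputStream in =
            new DataInputStream(new ByteArrayInputStream(bytes.toByteArray()));
        String first = (String) ObjectWritable.readObject(in, conf);
        Text second = (Text) ObjectWritable.readObject(in, conf);
        System.out.println(first + " " + second);
      }
    }
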
writeString(DataOutput, String) - Static method in class org.apache.hadoop.io.Text
Write a UTF-8 encoded string to out.
writeString(DataOutput, String) - Static method in class org.apache.hadoop.io.UTF8
Deprecated. Write a UTF-8 encoded string.
writeString(DataOutput, String) - Static method in class org.apache.hadoop.io.WritableUtils
 
writeString(String, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
writeString(String, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
writeString(String, String) - Method in interface org.apache.hadoop.record.RecordOutput
Write a Unicode string to the serialized record.
writeString(String, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
writeStringArray(DataOutput, String[]) - Static method in class org.apache.hadoop.io.WritableUtils
 
writeUncompressedBytes(DataOutputStream) - Method in interface org.apache.hadoop.io.SequenceFile.ValueBytes
Writes the uncompressed bytes to the outStream.
writeVInt(DataOutput, int) - Static method in class org.apache.hadoop.io.WritableUtils
Serializes an integer to a binary stream with zero-compressed encoding.
writeVInt(DataOutput, int) - Static method in class org.apache.hadoop.record.Utils
Serializes an int to a binary stream with zero-compressed encoding.
writeVLong(DataOutput, long) - Static method in class org.apache.hadoop.io.WritableUtils
Serializes a long to a binary stream with zero-compressed encoding.
writeVLong(DataOutput, long) - Static method in class org.apache.hadoop.record.Utils
Serializes a long to a binary stream with zero-compressed encoding.
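A small round-trip sketch of the zero-compressed encoding, assuming the matching WritableUtils.readVInt and readVLong readers; the wrapper class is hypothetical.

    import java.io.*;
    import org.apache.hadoop.io.WritableUtils;

    public class VIntSketch {
      public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);

        // Small values occupy a single byte; larger ones grow only as needed.
        WritableUtils.writeVInt(out, 42);
        WritableUtils.writeVLong(out, 1234567890123L);
        out.flush();

        DataInputStream in =
            new DataInputStream(new ByteArrayInputStream(bytes.toByteArray()));
        System.out.println(WritableUtils.readVInt(in));   // 42
        System.out.println(WritableUtils.readVLong(in));  // 1234567890123
      }
    }
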

X

xceiverCount - Variable in class org.apache.hadoop.dfs.DatanodeInfo
 
XmlRecordInput - Class in org.apache.hadoop.record
XML Deserializer.
XmlRecordInput(InputStream) - Constructor for class org.apache.hadoop.record.XmlRecordInput
Creates a new instance of XmlRecordInput
XmlRecordOutput - Class in org.apache.hadoop.record
XML Serializer.
XmlRecordOutput(OutputStream) - Constructor for class org.apache.hadoop.record.XmlRecordOutput
Creates a new instance of XmlRecordOutput
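A rough sketch of driving XmlRecordOutput by hand. In practice the serialize method of a generated record class makes these calls, typically bracketed by startRecord/endRecord framing that is omitted here, so the exact output framing may differ; the tags and the wrapper class are hypothetical.

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import org.apache.hadoop.record.XmlRecordOutput;

    public class XmlRecordOutputSketch {
      public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        XmlRecordOutput out = new XmlRecordOutput(bytes);

        // Each write takes the value plus a tag naming the field in the XML output.
        out.writeInt(7, "count");
        out.writeString("example", "name");
        out.writeBool(true, "valid");

        System.out.println(bytes.toString());
      }
    }
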
XMLUtils - Class in org.apache.hadoop.util
General XML utilities.
XMLUtils() - Constructor for class org.apache.hadoop.util.XMLUtils
 

Z

ZlibCompressor - Class in org.apache.hadoop.io.compress.zlib
A Compressor based on the popular zlib compression algorithm.
ZlibCompressor(ZlibCompressor.CompressionLevel, ZlibCompressor.CompressionStrategy, ZlibCompressor.CompressionHeader, int) - Constructor for class org.apache.hadoop.io.compress.zlib.ZlibCompressor
Creates a new compressor with the specified compression level, strategy, header format, and direct buffer size.
ZlibCompressor() - Constructor for class org.apache.hadoop.io.compress.zlib.ZlibCompressor
Creates a new compressor with the default compression level.
ZlibCompressor.CompressionHeader - Enum in org.apache.hadoop.io.compress.zlib
The type of header for compressed data.
ZlibCompressor.CompressionLevel - Enum in org.apache.hadoop.io.compress.zlib
The compression level for the zlib library.
ZlibCompressor.CompressionStrategy - Enum in org.apache.hadoop.io.compress.zlib
The compression strategy for the zlib library.
ZlibDecompressor - Class in org.apache.hadoop.io.compress.zlib
A Decompressor based on the popular zlib compression algorithm.
ZlibDecompressor(ZlibDecompressor.CompressionHeader, int) - Constructor for class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
Creates a new decompressor.
ZlibDecompressor() - Constructor for class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
ZlibDecompressor.CompressionHeader - Enum in org.apache.hadoop.io.compress.zlib
The headers to detect from compressed data.
ZlibFactory - Class in org.apache.hadoop.io.compress.zlib
A collection of factories to create the right zlib/gzip compressor/decompressor instances.
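A hedged round-trip sketch using ZlibCompressor and ZlibDecompressor directly through the Compressor and Decompressor interfaces (ZlibFactory is the usual entry point). It assumes the native zlib bindings are loadable and that setInput, finish, compress, decompress, and finished behave as in the standard codec framework; the wrapper class is hypothetical.

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import org.apache.hadoop.io.compress.Compressor;
    import org.apache.hadoop.io.compress.Decompressor;
    import org.apache.hadoop.io.compress.zlib.ZlibCompressor;
    import org.apache.hadoop.io.compress.zlib.ZlibDecompressor;

    public class ZlibRoundTrip {
      public static void main(String[] args) throws IOException {
        byte[] input = "some repetitive text some repetitive text".getBytes();

        // Compress with the default level (requires the native zlib library).
        Compressor compressor = new ZlibCompressor();
        compressor.setInput(input, 0, input.length);
        compressor.finish();
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!compressor.finished()) {
          int n = compressor.compress(buf, 0, buf.length);
          compressed.write(buf, 0, n);
        }
        compressor.end();

        // Decompress back to the original bytes.
        byte[] deflated = compressed.toByteArray();
        Decompressor decompressor = new ZlibDecompressor();
        decompressor.setInput(deflated, 0, deflated.length);
        ByteArrayOutputStream restored = new ByteArrayOutputStream();
        while (!decompressor.finished()) {
          int n = decompressor.decompress(buf, 0, buf.length);
          restored.write(buf, 0, n);
        }
        decompressor.end();

        System.out.println(new String(restored.toByteArray()));
      }
    }
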
ZlibFactory() - Constructor for class org.apache.hadoop.io.compress.zlib.ZlibFactory
 


Copyright © 2006 The Apache Software Foundation