Core | |
---|---|
org.apache.hadoop | |
org.apache.hadoop.conf | Configuration of system parameters. |
org.apache.hadoop.contrib.utils.join | |
org.apache.hadoop.dfs | A distributed implementation of FileSystem. |
org.apache.hadoop.filecache | |
org.apache.hadoop.fs | An abstract file system API. |
org.apache.hadoop.fs.s3 | A distributed implementation of FileSystem that uses Amazon S3. |
org.apache.hadoop.io | Generic I/O code for use when reading and writing data to the network, to databases, and to files. |
org.apache.hadoop.io.compress | |
org.apache.hadoop.io.compress.lzo | |
org.apache.hadoop.io.compress.zlib | |
org.apache.hadoop.io.retry | A mechanism for selectively retrying methods that throw exceptions under certain circumstances. |
org.apache.hadoop.ipc | Tools to help define network clients and servers. |
org.apache.hadoop.mapred | A system for scalable, fault-tolerant, distributed computation over large data collections. |
org.apache.hadoop.mapred.jobcontrol | Utilities for managing dependent jobs. |
org.apache.hadoop.mapred.lib | Library of generally useful mappers, reducers, and partitioners. |
org.apache.hadoop.mapred.lib.aggregate | Classes for performing various counting and aggregation operations. |
org.apache.hadoop.metrics | This package defines an API for reporting performance metric information. |
org.apache.hadoop.metrics.file | Implementation of the metrics package that writes the metrics to a file. |
org.apache.hadoop.metrics.ganglia | Implementation of the metrics package that sends metric data to Ganglia. |
org.apache.hadoop.metrics.spi | The Service Provider Interface for the Metrics API. |
org.apache.hadoop.net | Network-related classes. |
org.apache.hadoop.record | Hadoop record I/O contains classes and a record description language translator for simplifying serialization and deserialization of records in a language-neutral manner. |
org.apache.hadoop.record.compiler | This package contains classes needed for code generation from the Hadoop record compiler. |
org.apache.hadoop.record.compiler.ant | |
org.apache.hadoop.record.compiler.generated | This package contains code generated by JavaCC from the Hadoop record syntax file rcc.jj. |
org.apache.hadoop.tools | |
org.apache.hadoop.util | Common utilities. |
Examples | |
---|---|
org.apache.hadoop.examples | Hadoop example code. |
contrib: Streaming | |
---|---|
org.apache.hadoop.streaming | |
Hadoop is a distributed computing platform.
Hadoop primarily consists of a distributed filesystem (DFS, in org.apache.hadoop.dfs) and an implementation of a MapReduce distributed data processor (in org.apache.hadoop.mapred).
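To give a flavor of the mapred API, here is a minimal word-count mapper sketch. It assumes the pre-generics Mapper interface of this era and a release that includes org.apache.hadoop.io.Text (earlier releases used UTF8); the class name WordCountMapper is hypothetical, and exact signatures vary between releases.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Hypothetical word-count mapper: map() is called once per record of
// input (here, a line of text), emitting a (word, 1) pair per token.
public class WordCountMapper extends MapReduceBase implements Mapper {
  private static final LongWritable ONE = new LongWritable(1);

  public void map(WritableComparable key, Writable value,
                  OutputCollector output, Reporter reporter) throws IOException {
    StringTokenizer tokens = new StringTokenizer(value.toString());
    while (tokens.hasMoreTokens()) {
      output.collect(new Text(tokens.nextToken()), ONE);
    }
  }
}
```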
First, you need to get a copy of the Hadoop code.
You can download a nightly build from http://cvs.apache.org/dist/lucene/hadoop/nightly/. Unpack the release and change into its top-level directory.
Or, check out the code from subversion and build it with Ant.
Edit the file conf/hadoop-env.sh to define at least JAVA_HOME.
Try the following command:
bin/hadoop
This will display the documentation for the Hadoop command script.
By default, Hadoop is configured to run things in a non-distributed mode, as a single Java process. This is useful for debugging, and can be demonstrated as follows:
mkdir input
cp conf/*.xml input
bin/hadoop org.apache.hadoop.mapred.demo.Grep input output 'dfs[a-z.]+'
This will display counts for each match of the regular expression.
Note that input is specified as a directory containing input files and that output is also specified as a directory where parts are written.
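A job like the demo above is wired together with JobConf and submitted with JobClient. A sketch using the hypothetical WordCountMapper from the introduction (method names follow this era's mapred API and may differ between releases):

```java
import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.LongSumReducer;

// Sketch: running the hypothetical WordCountMapper as a job. As in the
// command-line demo, "input" and "output" name directories.
public class WordCountJob {
  public static void main(String[] args) throws IOException {
    JobConf job = new JobConf(WordCountJob.class);
    job.setInputPath(new Path("input"));    // a directory of input files
    job.setOutputPath(new Path("output"));  // a directory where parts are written
    job.setMapperClass(WordCountMapper.class);
    job.setReducerClass(LongSumReducer.class); // stock summing reducer from mapred.lib
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(LongWritable.class);
    JobClient.runJob(job);                  // submit and wait for completion
  }
}
```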
To run in pseudo-distributed mode on a single host, set the following in conf/hadoop-site.xml (a sketch of these settings follows the list):

- NameNode (Distributed Filesystem master) host and port. This is specified with the configuration property fs.default.name.
- JobTracker (MapReduce master) host and port. This is specified with the configuration property mapred.job.tracker.
- (We also set the DFS replication level to 1 in order to reduce warnings when running on a single node.)
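For illustration, the same three settings expressed through the org.apache.hadoop.conf API; the class name PseudoDistributedConf and the host:port values are hypothetical.

```java
import org.apache.hadoop.conf.Configuration;

// Sketch: the pseudo-distributed settings above, expressed through the
// org.apache.hadoop.conf API instead of conf/hadoop-site.xml.
// The host:port values are illustrative.
public class PseudoDistributedConf {
  public static Configuration create() {
    Configuration conf = new Configuration();
    conf.set("fs.default.name", "localhost:9000");    // NameNode (DFS master)
    conf.set("mapred.job.tracker", "localhost:9001"); // JobTracker (MapReduce master)
    conf.set("dfs.replication", "1");                 // single node: fewer replication warnings
    return conf;
  }
}
```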
Now check that the command ssh localhost does not require a password. If it does, execute the following commands:
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
A new distributed filesystem must be formatted with the following command, run on the master node:
bin/hadoop namenode -format
The Hadoop daemons are started with the following command:
bin/start-all.sh
Daemon log output is written to the logs/ directory.
Input files are copied into the distributed filesystem as follows:
bin/hadoop dfs -put input input
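The same copies can be made programmatically; a sketch against the org.apache.hadoop.fs API (the class name DfsCopy is hypothetical), equivalent to the dfs -put above and the dfs -get used below:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: copying files in and out of the configured distributed filesystem.
public class DfsCopy {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());         // the configured filesystem
    fs.copyFromLocalFile(new Path("input"), new Path("input"));  // like: bin/hadoop dfs -put input input
    fs.copyToLocalFile(new Path("output"), new Path("output"));  // like: bin/hadoop dfs -get output output
  }
}
```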
Things are run as before, but output must be copied locally to examine it:
bin/hadoop org.apache.hadoop.mapred.demo.Grep input output 'dfs[a-z.]+'
bin/hadoop dfs -get output output
When you're done, stop the daemons with:
bin/stop-all.sh
Distributed operation is just like the pseudo-distributed operation described above, except that fs.default.name and mapred.job.tracker must name the master server's host and port, and the slave machines' hostnames must be listed, one per line, in the conf/slaves file.