The Apache™ Hadoop™ project develops open-source software for reliable, scalable, distributed computing.
The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using a simple programming model. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, delivering a highly available service on top of a cluster of computers, each of which may be prone to failures.


Hadoop is a powerful framework for the automatic parallelization of computing tasks. Unfortunately, programming for it poses certain challenges: Hadoop programs can be hard to understand and debug. One way to make development easier is to run a simplified version of a Hadoop cluster locally on the developer's machine. This tutorial describes how to set up such a cluster on a computer running Microsoft Windows, and how to integrate that cluster with Eclipse, a widely used Java development environment.
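
To give a concrete sense of the "simple programming model" mentioned above, the sketch below shows the canonical word-count job written against Hadoop's MapReduce API: the mapper emits (word, 1) pairs and the reducer sums them, while the framework handles splitting the input, scheduling the tasks, and recovering from failures. This is only an illustrative sketch based on the standard Hadoop tutorial example; the class name and the input/output paths passed on the command line are placeholders, not part of the setup steps that follow.

// Minimal word-count job; illustrative only, paths and class names are placeholders.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (word, 1) for every token in the input line.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: sum the counts emitted for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    // args[0] = input path, args[1] = output path (placeholders).
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Even in a small job like this, stepping through the mapper and reducer is much easier against a single-machine cluster than against a remote one, which is exactly what the local setup described in this tutorial provides.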