Hadoop: Components and Working

Nitika Arora

Abstract

Nowadays, companies need to process multi-petabyte datasets efficiently, and in a large system the data may not have a fixed schema. Building reliability into each individual application that processes petabytes of data has become costly. In a cluster of this size, nodes fail every day, so failure should be treated as expected rather than exceptional; moreover, the number of nodes in a cluster is not constant. There is therefore a need for a common infrastructure that is efficient, reliable, and open source under the Apache License. The Hadoop platform was designed to solve problems where you have a lot of data, perhaps a mixture of complex and structured data, that does not fit neatly into tables. It is intended for analysis that is deep and computationally extensive, such as clustering and targeting. That is exactly what Google was doing when it indexed the web and examined user behavior to improve the performance of its algorithms. Hadoop has its origins in Apache Nutch, an open-source web search engine that was part of the Lucene project. Building a web search engine from scratch was an ambitious goal: not only is the software required to index websites complex to write, but it is also a challenge to run without a dedicated operations team, since there are so many moving parts.
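For concreteness, a minimal sketch of this processing model follows: the classic MapReduce word-count job written in Java against Hadoop's org.apache.hadoop.mapreduce API. It is an illustrative example added alongside this abstract, not code from the article itself; the class names WordCount, TokenizerMapper, and IntSumReducer are illustrative.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // Map phase: emit (word, 1) for every token in an input line.
      public static class TokenizerMapper
          extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one);
          }
        }
      }

      // Reduce phase: sum the counts emitted for each word.
      public static class IntSumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // local pre-aggregation
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input dir
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output dir
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

Packaged as a JAR, such a job would typically be launched with hadoop jar wordcount.jar WordCount <input> <output>; the framework splits the input across the cluster, reruns failed tasks on other nodes, and stores data redundantly in HDFS, which is how Hadoop treats node failure as expected rather than exceptional.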

Keywords: Hadoop, different domains, volume, variety, velocity, value, veracity
