Big Data & Analytics

Query, Analyze and Visualize Big Data With Minimal Cost - Build Hadoop on Cloud Architecture

Hadoop is an open source project from Apache that has evolved rapidly into a major technology movement. It has emerged as one of the leading ways to handle massive amounts of data, not only structured data but also complex, unstructured data. Its popularity is due in part to its ability to store, analyze and access large amounts of data quickly and cost-effectively across clusters of commodity hardware.
Apache Hadoop is not a single product but a collection of components, including the following:

MapReduce

A framework for writing applications that process large amounts of structured and unstructured data in parallel across large clusters of machines in a reliable, fault-tolerant manner.
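
To make the programming model concrete, here is a minimal sketch of the classic word-count job written against the Hadoop MapReduce Java API. The class names and the command-line input/output paths are purely illustrative.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emit (word, 1) for every token in each input line
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer: sum the counts emitted for each word
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}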

Hadoop Distributed File System (HDFS)

A reliable and distributed Java-based file system that allows large volumes of data to be stored and rapidly accessed across large clusters of commodity servers.
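
The sketch below shows how an application might write and read a small file through the HDFS Java API. The NameNode address (hdfs://namenode-host:8020) and the file path are placeholders; on a real cluster fs.defaultFS normally comes from core-site.xml.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsExample {
  public static void main(String[] args) throws Exception {
    // Placeholder NameNode address for illustration only
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://namenode-host:8020");

    try (FileSystem fs = FileSystem.get(conf)) {
      Path file = new Path("/tmp/hello.txt");

      // Write a small file to HDFS (overwrite if it exists)
      try (FSDataOutputStream out = fs.create(file, true)) {
        out.write("hello from hdfs\n".getBytes(StandardCharsets.UTF_8));
      }

      // Read it back
      try (BufferedReader in = new BufferedReader(
               new InputStreamReader(fs.open(file), StandardCharsets.UTF_8))) {
        System.out.println(in.readLine());
      }
    }
  }
}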

Hive

Built on the MapReduce framework, Hive is a data warehouse that enables easy data summarization and ad-hoc queries via an SQL-like interface for large datasets stored in HDFS.
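
As an illustration, the following sketch submits an ad-hoc, SQL-like query to Hive over JDBC. The HiveServer2 address, the credentials and the web_logs table are assumptions made for the example.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveQueryExample {
  public static void main(String[] args) throws Exception {
    // Register the Hive JDBC driver; host, port and database are placeholders
    Class.forName("org.apache.hive.jdbc.HiveDriver");
    String url = "jdbc:hive2://hiveserver-host:10000/default";

    try (Connection conn = DriverManager.getConnection(url, "hive", "");
         Statement stmt = conn.createStatement()) {

      // Ad-hoc summarization over data stored in HDFS
      String query =
          "SELECT page, COUNT(*) AS hits " +
          "FROM web_logs " +
          "GROUP BY page " +
          "ORDER BY hits DESC " +
          "LIMIT 10";

      try (ResultSet rs = stmt.executeQuery(query)) {
        while (rs.next()) {
          System.out.println(rs.getString("page") + "\t" + rs.getLong("hits"));
        }
      }
    }
  }
}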

Pig

A platform for processing and analyzing large data sets. Pig consists of a high-level language (Pig Latin) for expressing data analysis programs, paired with the MapReduce framework for processing these programs.
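
For illustration, the sketch below embeds a few lines of Pig Latin in a Java program through Pig's PigServer API, running in local mode. The input path and field names are assumptions; on a cluster the script would typically run in MapReduce mode instead.

import org.apache.pig.PigServer;

public class PigExample {
  public static void main(String[] args) throws Exception {
    // "local" mode runs on the local machine; paths are placeholders
    PigServer pig = new PigServer("local");

    // Pig Latin statements: load, group, count, store
    pig.registerQuery("logs = LOAD 'input/web_logs.tsv' AS (user:chararray, page:chararray);");
    pig.registerQuery("by_page = GROUP logs BY page;");
    pig.registerQuery("hits = FOREACH by_page GENERATE group AS page, COUNT(logs) AS n;");
    pig.store("hits", "output/page_hits");

    pig.shutdown();
  }
}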

HBase

A column-oriented NoSQL data storage system that provides random real-time read/write access to big data for user applications.
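
The following sketch shows a random real-time write and read against HBase using its Java client. The ZooKeeper quorum address and the users table with a profile column family are assumptions for the example; the table is assumed to already exist.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseExample {
  public static void main(String[] args) throws Exception {
    // Placeholder ZooKeeper quorum used by the HBase client to find the cluster
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "zk-host");

    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("users"))) {

      // Random real-time write: one row keyed by user id
      Put put = new Put(Bytes.toBytes("user-42"));
      put.addColumn(Bytes.toBytes("profile"), Bytes.toBytes("city"), Bytes.toBytes("Austin"));
      table.put(put);

      // Random real-time read of the same row
      Result result = table.get(new Get(Bytes.toBytes("user-42")));
      byte[] city = result.getValue(Bytes.toBytes("profile"), Bytes.toBytes("city"));
      System.out.println("city = " + Bytes.toString(city));
    }
  }
}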

ZooKeeper

A highly available system for coordinating distributed processes. Distributed applications use ZooKeeper to store and mediate updates to important configuration information.
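
As a minimal sketch, the code below connects to a ZooKeeper ensemble (the address is a placeholder) and stores and reads back a small piece of configuration under a znode, which any process in the cluster could then read or watch.

import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ZooKeeperConfigExample {
  public static void main(String[] args) throws Exception {
    // Placeholder ensemble address; wait until the session is connected
    CountDownLatch connected = new CountDownLatch(1);
    ZooKeeper zk = new ZooKeeper("zk-host:2181", 30000, event -> {
      if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
        connected.countDown();
      }
    });
    connected.await();

    // Store a small piece of shared configuration under a znode
    String path = "/app-config";
    if (zk.exists(path, false) == null) {
      zk.create(path, "batch.size=128".getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    }

    // Any process in the cluster can read the same value
    byte[] data = zk.getData(path, false, null);
    System.out.println(new String(data));

    zk.close();
  }
}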

Ambari

An open source system for installation, lifecycle management, administration and monitoring of Apache Hadoop clusters.
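
Ambari also exposes a REST API alongside its web console. The sketch below lists the clusters an Ambari server manages, assuming a server at a placeholder host with default credentials.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class AmbariClustersExample {
  public static void main(String[] args) throws Exception {
    // Placeholder host and credentials for illustration only
    String auth = Base64.getEncoder().encodeToString("admin:admin".getBytes());

    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("http://ambari-host:8080/api/v1/clusters"))
        .header("Authorization", "Basic " + auth)
        .GET()
        .build();

    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());

    // The response body is a JSON listing of managed clusters
    System.out.println(response.body());
  }
}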

HCatalog

A table and metadata management service that provides a centralized way for data processing systems to understand the structure and location of the data stored within Apache Hadoop.
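
As a rough sketch, the code below uses the HCatalog Java client to ask the shared metadata service where a table lives and what columns it has. The metastore URI and the web_logs table are assumptions, and exact API details may vary by version.

import org.apache.hadoop.conf.Configuration;
import org.apache.hive.hcatalog.api.HCatClient;
import org.apache.hive.hcatalog.api.HCatTable;
import org.apache.hive.hcatalog.data.schema.HCatFieldSchema;

public class HCatalogSchemaExample {
  public static void main(String[] args) throws Exception {
    // Placeholder Hive metastore URI
    Configuration conf = new Configuration();
    conf.set("hive.metastore.uris", "thrift://metastore-host:9083");

    HCatClient client = HCatClient.create(conf);
    try {
      // Look up the storage location and column layout of a table
      HCatTable table = client.getTable("default", "web_logs");
      System.out.println("location: " + table.getLocation());
      for (HCatFieldSchema field : table.getCols()) {
        System.out.println(field.getName() + "\t" + field.getTypeString());
      }
    } finally {
      client.close();
    }
  }
}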

Apache Hadoop is generally not a direct replacement for enterprise data warehouses, data marts and other data stores commonly used to manage structured or transactional data. Instead, it augments enterprise data architectures by providing an efficient and cost-effective means of storing, processing, managing and analyzing the ever-increasing volumes of semi-structured and unstructured data produced daily.

Apache Hadoop can be useful across a range of use cases spanning virtually every vertical industry. It is becoming popular anywhere that you need to store, process, and analyze large volumes of data. Examples include digital marketing automation, fraud detection and prevention, social network and relationship analysis, predictive modeling for new drugs, retail in-store behavior analysis, and mobile device location-based marketing.

Apache Hadoop is widely deployed at organizations around the globe, including many of the world’s leading Internet and social networking businesses. At Yahoo!, Apache Hadoop is behind every click, processing and analyzing petabytes of data to better detect spam, predict user interests, target ads and determine ad effectiveness. Many of the key architects and Apache Hadoop committers from Yahoo! founded Hortonworks to further accelerate development and adoption and to help organizations achieve similar business value.

