Apache Hadoop is a framework that offers its users a range of solutions and tools for processing Big Data. The framework is primarily used to analyze Big Data and make decisions based on it. The main components of Hadoop are the storage unit, HDFS (NameNode, DataNode), and the processing framework, YARN (ResourceManager, NodeManager).
- Why are nodes added and removed frequently in a Hadoop cluster?
Because Hadoop clusters run on commodity hardware, “DataNode” crashes are quite frequent. Hadoop is also designed to scale easily as data volume grows. These two factors require the Hadoop administrator to add and remove “DataNodes” in a Hadoop cluster frequently.
- Can NameNode and DataNode be commodity hardware?
DataNodes are similar to personal laptops and computers: they store the data and are needed in large numbers, so commodity hardware is suitable. The NameNode is the master node that stores metadata about all the blocks stored in HDFS. Since this requires a large amount of memory, the NameNode should be a high-end machine with plenty of memory.
- Can you define Rack Awareness in Hadoop?
Rack Awareness is the algorithm through which the “NameNode” decides how blocks and their replicas are placed. Placement is decided based on rack definitions in order to minimize network traffic between “DataNodes” on different racks.
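The idea can be sketched with a toy placement function. This is an illustration of HDFS's default policy (first replica on the writer's node, second on a different rack, third on the same remote rack), not Hadoop's actual code; the function name and the rack-map structure are assumptions made for the example.

```python
# Illustrative sketch of the default HDFS replica placement policy.
# Assumes every rack in nodes_by_rack has at least two nodes.
def place_replicas(writer_node, nodes_by_rack):
    """nodes_by_rack: dict mapping rack name -> list of node names."""
    writer_rack = next(r for r, ns in nodes_by_rack.items() if writer_node in ns)
    placements = [(writer_rack, writer_node)]          # replica 1: writer's own node
    remote_rack = next(r for r in nodes_by_rack if r != writer_rack)
    remote_nodes = nodes_by_rack[remote_rack]
    placements.append((remote_rack, remote_nodes[0]))  # replica 2: a different rack
    placements.append((remote_rack, remote_nodes[1]))  # replica 3: same remote rack
    return placements
```

For example, `place_replicas("n1", {"rack1": ["n1", "n2"], "rack2": ["n3", "n4"]})` keeps one replica local and places the other two together on the remote rack, so only one copy of the block crosses the rack boundary during a write.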
- What are the three modes in which Hadoop can run?
Standalone or Local Mode
Pseudo-Distributed Mode
Fully Distributed Mode
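The mode is selected mainly through configuration. As a typical example (values taken from Hadoop's single-node setup conventions; your port and hostname may differ), `fs.defaultFS` in `core-site.xml` distinguishes local from pseudo-distributed operation:

```xml
<!-- core-site.xml: standalone (local) mode uses the local filesystem -->
<property>
  <name>fs.defaultFS</name>
  <value>file:///</value>
</property>

<!-- pseudo-distributed mode points at a single-node HDFS instead -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9000</value>
</property>
```

In fully distributed mode the same property points at the real NameNode's hostname, and the daemons run across multiple machines.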
- What is MapReduce, and what is its syntax?
A framework for processing large data sets in parallel across a group of computers. A job is submitted with: hadoop jar <jar_file> /input_path /output_path.
- Why do you use “RecordReader” in Hadoop?
The “RecordReader” loads data from its source and converts it into key-value pairs suitable for reading by the “Mapper” task. The “RecordReader” instance is defined by the “InputFormat”.
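Conceptually, a line-oriented RecordReader works like the sketch below, which is modeled on the behavior of Hadoop's LineRecordReader (byte offset as key, line contents as value) but is not its actual API:

```python
import io

def line_records(stream):
    # Turn a raw character stream into (byte offset, line) key-value
    # pairs, the shape a Mapper consumes from a text input split.
    offset = 0
    for line in stream:
        yield (offset, line.rstrip("\n"))
        offset += len(line)
```

So `list(line_records(io.StringIO("ab\ncd\n")))` produces `[(0, "ab"), (3, "cd")]`: the Mapper never sees raw bytes, only ready-made key-value pairs.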
- What do you know about “SequenceFileInputFormat”?
An input format for reading sequence files. Sequence files are a compressed binary file format best suited for passing data from the output of one “MapReduce” job to the input of another “MapReduce” job. Sequence files can also be generated as the output of other MapReduce tasks.
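The underlying idea is a binary container of key-value records. The sketch below illustrates that idea with a simple length-prefixed record layout; it is NOT the actual on-disk SequenceFile format, and the function names are assumptions made for the example.

```python
import io
import struct

def write_records(buf, records):
    # Each record: 4-byte key length, 4-byte value length, then the bytes.
    for key, value in records:
        k, v = key.encode(), value.encode()
        buf.write(struct.pack(">II", len(k), len(v)))
        buf.write(k)
        buf.write(v)

def read_records(buf):
    # Read records back until the stream is exhausted.
    while True:
        header = buf.read(8)
        if not header:
            return
        klen, vlen = struct.unpack(">II", header)
        yield (buf.read(klen).decode(), buf.read(vlen).decode())
```

Writing records to an `io.BytesIO` buffer and reading them back round-trips the key-value pairs; a binary layout like this is cheap to parse and split, which is why chained MapReduce jobs prefer it over re-parsing text.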
- What are the different relational operations in “Pig Latin” you know about?
Different relational operators are:
- For each
- Order by
- Filters
- Group
- Distinct
- Join
- Limit
- What is “WAL” in HBase?
WAL, or Write-Ahead Log, is a file attached to every Region Server in the distributed environment. The WAL stores new data that has not yet been committed to permanent storage, and it is used to recover that data if the Region Server fails before the data is flushed.
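The mechanism can be sketched as follows. This is a conceptual illustration of write-ahead logging in general, not HBase's implementation; the class and method names are assumptions made for the example.

```python
class SimpleWAL:
    def __init__(self):
        self.log = []        # stand-in for an append-only log file (durable)
        self.memstore = {}   # in-memory store, lost on a crash

    def put(self, key, value):
        self.log.append((key, value))  # 1. append the edit to the log first
        self.memstore[key] = value     # 2. only then apply it in memory

    def recover(self):
        # After a crash, replay the log to rebuild the un-flushed edits.
        self.memstore = {}
        for key, value in self.log:
            self.memstore[key] = value
```

Because every edit hits the log before the in-memory store, a crash that wipes `memstore` loses nothing: `recover()` replays the log and restores the state.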
- Do you have working knowledge of Hadoop?
Answer this a little diplomatically, mentioning the live projects you have worked on. If you have completed the certification but have yet to apply the knowledge practically, now is the time to take on live projects.