Sample Questions
1. A client finishes writing a file to HDFS and closes the write stream. What happens next?
A.    Once the write stream closes on the DataNode, the DataNode immediately initiates a block report to the NameNode.
B.    The change is written to the NameNode disk.
C.    The metadata in the RAM on the NameNode is flushed to disk.
D.    The metadata in RAM on the NameNode is flushed to disk.
E.    The metadata in RAM on the NameNode is updated.
F.    The change is written to the edits file.

2. How does HDFS Federation help HDFS scale horizontally?
A.    HDFS Federation improves the resiliency of HDFS in the face of network issues by removing the NameNode as a single-point-of-failure.
B.    HDFS Federation allows the Standby NameNode to automatically resume the services of an active NameNode.
C.    HDFS Federation provides cross-data center (non-local) support for HDFS, allowing a cluster administrator to split the Block Storage outside the local cluster.
D.    HDFS Federation reduces the load on any single NameNode by using multiple, independent NameNodes to manage individual parts of the filesystem namespace.
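For reference, a federated cluster is configured by declaring several nameservice IDs in hdfs-site.xml, with each NameNode managing its own portion of the namespace. A minimal sketch (the nameservice IDs ns1/ns2 and the hostnames are hypothetical):

```xml
<!-- hdfs-site.xml: two independent, federated NameNodes (example hosts) -->
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>ns1,ns2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1</name>
    <value>nn1.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns2</name>
    <value>nn2.example.com:8020</value>
  </property>
</configuration>
```

Each DataNode registers with every NameNode, but each NameNode serves only its own slice of the namespace, so metadata load is divided rather than shared.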

3. What is the recommended disk configuration for slave nodes in your Hadoop cluster with 6 x 2 TB hard drives?
A.    RAID 10
B.    JBOD
C.    RAID 5
D.    RAID 1+0

4. Your developers request that you enable them to use Hive on your Hadoop cluster. What do you install and/or configure?
A.    Install the Hive interpreter on the client machines only, and configure a shared remote Hive Metastore.
B.    Install the Hive Interpreter on the client machines and all the slave nodes, and configure a shared remote Hive Metastore.
C.    Install the Hive interpreter on the master node running the JobTracker, and configure a shared remote Hive Metastore.
D.    Install the Hive interpreter on the client machines and all nodes in the cluster.
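A shared remote metastore is typically wired up in hive-site.xml on the client machines by pointing Hive at the metastore's Thrift endpoint. A minimal sketch (the hostname is hypothetical; 9083 is the conventional metastore port):

```xml
<!-- hive-site.xml on client machines: use a shared remote Hive Metastore -->
<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://metastore.example.com:9083</value>
  </property>
</configuration>
```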

5. Which command does Hadoop offer to discover missing or corrupt HDFS data?
A.    The map-only checksum utility.
B.    Fsck
C.    Du
D.    Dskchk
E.    Hadoop does not provide any tools to discover missing or corrupt data; there is no need because three replicas are kept for each data block.


Answers:      1 (A), 2 (D), 3 (B), 4 (A), 5 (B)
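As a quick illustration of question 5, fsck is run from the command line against a path; it reports missing, corrupt, and under-replicated blocks without modifying any data. A sketch (the paths are illustrative):

```shell
# Check the entire filesystem and summarize block health
hadoop fsck /

# List only files that have corrupt blocks
hadoop fsck / -list-corruptfileblocks

# Show block-level detail (files, block IDs, replica locations) for one path
hadoop fsck /user/data -files -blocks -locations
```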

More Practice Tests:

https://www.vskills.in/practice/quiz/Big-Data-and-Apache-Hadoop

Apply for Certification

https://www.vskills.in/certification/Certified-Big-Data-and-Apache-Hadoop-Developer