
Spark Application with Kafka Consumer and HBase
1. Maven Project pom.xml. Import all required dependencies, and package all dependencies and resources into a single jar. This is the best option; otherwise you will […]
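As a rough illustration of the application the title describes, here is a minimal Java sketch that consumes from Kafka with the spark-streaming-kafka-0-10 integration and writes each record into HBase. The broker address, topic, table name, and column family are placeholders, not values taken from the original post.

import java.util.*;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.*;
import org.apache.spark.streaming.kafka010.*;

public class KafkaToHBaseApp {
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("KafkaToHBase");
        JavaStreamingContext ssc = new JavaStreamingContext(conf, Durations.seconds(10));

        // Kafka consumer configuration (broker, group id and topic are illustrative)
        Map<String, Object> kafkaParams = new HashMap<>();
        kafkaParams.put("bootstrap.servers", "localhost:9092");
        kafkaParams.put("key.deserializer", StringDeserializer.class);
        kafkaParams.put("value.deserializer", StringDeserializer.class);
        kafkaParams.put("group.id", "spark-hbase-demo");
        Collection<String> topics = Collections.singletonList("events");

        JavaInputDStream<ConsumerRecord<String, String>> stream =
            KafkaUtils.createDirectStream(
                ssc,
                LocationStrategies.PreferConsistent(),
                ConsumerStrategies.<String, String>Subscribe(topics, kafkaParams));

        // For each micro-batch, open an HBase connection per partition and write the records
        stream.foreachRDD(rdd -> rdd.foreachPartition(records -> {
            Configuration hbaseConf = HBaseConfiguration.create();
            try (Connection connection = ConnectionFactory.createConnection(hbaseConf);
                 Table table = connection.getTable(TableName.valueOf("events"))) {
                while (records.hasNext()) {
                    ConsumerRecord<String, String> record = records.next();
                    // Fall back to the offset as row key when the Kafka key is null
                    String rowKey = record.key() != null ? record.key() : String.valueOf(record.offset());
                    Put put = new Put(Bytes.toBytes(rowKey));
                    put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("value"),
                                  Bytes.toBytes(record.value()));
                    table.put(put);
                }
            }
        }));

        ssc.start();
        ssc.awaitTermination();
    }
}

Packaging a class like this together with its dependencies into a single jar (for example with the Maven Shade plugin) is what makes the spark-submit deployment described above straightforward.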
Step 1: Install and set up the Hadoop cluster. Follow the link here.
Step 2: Install and set up HBase. Follow the link here.
Step 3: Install and […]
Log4j has been a very popular logging library in the Java world for years. Log4j 2 is even better. In August 2015 the Log4j development team officially announced end of life for […]
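For reference, basic Log4j 2 usage looks like the sketch below; the class name and messages are illustrative.

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class Log4j2Example {
    // One logger per class, named after the class
    private static final Logger LOGGER = LogManager.getLogger(Log4j2Example.class);

    public static void main(String[] args) {
        LOGGER.info("Application started");
        // Parameterized messages avoid string concatenation when the level is disabled
        LOGGER.debug("Processing record {}", 42);
        try {
            throw new IllegalStateException("example failure");
        } catch (IllegalStateException e) {
            LOGGER.error("Something went wrong", e);
        }
    }
}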
The HBase root znode path is configurable using hbase-site.xml, and by default the location is “/hbase”. All the znodes referenced below will be prefixed using […]
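To inspect those znodes, one can list the children of the root znode with the plain ZooKeeper Java client; a minimal sketch, assuming a local quorum on port 2181 and the default "/hbase" root:

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ListHBaseZnodes {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        // "/hbase" is the default root znode, configurable via zookeeper.znode.parent in hbase-site.xml
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();
        for (String child : zk.getChildren("/hbase", false)) {
            System.out.println("/hbase/" + child);
        }
        zk.close();
    }
}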
Currently, there are two ways to write to and read from Kafka: via the producer and consumer APIs, or via Kafka Streams. Data is written once to Kafka via […]
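A minimal sketch of the first approach (plain producer and consumer) using the Java kafka-clients API; the broker address, topic, and group id are illustrative:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerConsumerExample {
    public static void main(String[] args) {
        // Producer: write a record to the topic
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>("demo-topic", "key1", "hello"));
        }

        // Consumer: read records back from the same topic
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "demo-group");
        consumerProps.put("auto.offset.reset", "earliest");
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(Collections.singletonList("demo-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d key=%s value=%s%n",
                        record.offset(), record.key(), record.value());
            }
        }
    }
}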
1: Download the code and extract. Download the 2.0.0 release and un-tar it.
cd /usr/local
tar -xzf kafka_2.12-2.0.0.tgz
ln -s kafka_2.12-2.0.0 kafka
cd kafka
mkdir […]
1. Connect to HBase.
$ hbase shell
hbase(main):001:0>
Display HBase Shell Help Text. Type help and press Enter to display some basic usage information for […]
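The same basic put/get operations are also available programmatically through the HBase Java client; a minimal sketch, with the table name, column family, and values chosen purely for illustration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseClientExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();  // reads hbase-site.xml from the classpath
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("test_table"))) {
            // Write one cell: row key, column family, qualifier, value
            Put put = new Put(Bytes.toBytes("row1"));
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col1"), Bytes.toBytes("value1"));
            table.put(put);

            // Read it back
            Result result = table.get(new Get(Bytes.toBytes("row1")));
            byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("col1"));
            System.out.println(Bytes.toString(value));
        }
    }
}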
This is the preferred method for Spark installation: since YARN is already running as the ResourceManager in Hadoop, it reduces the time required when installing Spark […]
HBase – Pseudo-Distributed mode of installation: In this mode, HBase and all of its daemons run on a single host. Below are the steps […]
We are going to set up the NameNode, DataNode, ResourceManager, and NodeManager on a single machine.
Step 1: Create the user and group:
groupadd hadoop
useradd […]