A practical guide to implementing your enterprise data lake using Lambda Architecture as the base
- Build a full-fledged data lake for your organization with popular big data technologies using the Lambda architecture as the base
- Delve into the big data technologies required to meet modern-day business strategies
- A highly practical guide to implementing enterprise data lakes, with lots of examples and real-world use cases
The term “Data Lake” has recently become prominent in the big data industry. Data scientists can use a data lake to derive meaningful insights that businesses can then use to redefine or transform the way they operate. The Lambda architecture is likewise emerging as one of the most prominent patterns in the big data landscape, as it not only helps derive useful information from historical data but also correlates real-time data, enabling businesses to make critical decisions. This book brings these two important aspects – the data lake and the Lambda architecture – together.
This book is divided into three main sections. The first introduces the concept of data lakes and their importance in enterprises, and brings you up to speed with the Lambda architecture. The second delves into the principal components of building a data lake using the Lambda architecture, introducing popular big data technologies such as Apache Hadoop, Spark, Sqoop, Flume, and Elasticsearch. The third is a highly practical demonstration of putting it all together: it shows you how an enterprise data lake can be implemented, along with several real-world use cases, and how other peripheral components can be added to the lake to make it more efficient.
By the end of this book, you will be able to choose the right big data technologies and apply the Lambda architecture pattern to build your enterprise data lake.
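To make the Lambda architecture described above concrete, here is a minimal, self-contained sketch of its three layers: a batch layer that recomputes views over all historical data, a speed layer that incrementally tracks recent events, and a serving layer that merges the two at query time. All class and method names are illustrative assumptions for this sketch, not APIs from the book or from any library.

```python
from collections import defaultdict


class BatchLayer:
    """Holds the immutable master dataset and recomputes views from all history."""

    def __init__(self):
        self.master_dataset = []  # append-only list of (key, value) events

    def append(self, event):
        self.master_dataset.append(event)

    def batch_view(self):
        # Recompute a per-key total over the complete historical dataset.
        view = defaultdict(int)
        for key, value in self.master_dataset:
            view[key] += value
        return dict(view)


class SpeedLayer:
    """Incrementally maintains a view over recent, not-yet-batched events."""

    def __init__(self):
        self.realtime_view = defaultdict(int)

    def update(self, event):
        key, value = event
        self.realtime_view[key] += value


class ServingLayer:
    """Answers queries by merging the batch view with the real-time view."""

    def query(self, batch_view, realtime_view, key):
        return batch_view.get(key, 0) + realtime_view.get(key, 0)
```

In a real deployment the batch layer would be a store such as Hadoop processed by Spark, the speed layer a stream processor fed by Kafka or Flume, and the serving layer an indexed store such as Elasticsearch; the point here is only the merge of historical and real-time views.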
What You Will Learn
- Build an enterprise-level data lake using the relevant big data technologies
- Understand the core of the Lambda architecture and how to apply it in an enterprise
- Learn the technical details around Sqoop and its functionalities
- Integrate Kafka with Hadoop components to acquire enterprise data
- Use Flume with streaming technologies for stream-based processing
- Understand stream-based processing with Apache Spark Streaming
- Incorporate Hadoop components and know the advantages they provide for enterprise data lakes
- Build fast, streaming, high-performance applications using Elasticsearch
- Make your data ingestion process consistent across various data formats with configurability
- Process your data to derive intelligence using machine learning algorithms
Who This Book Is For
Java developers and architects who would like to implement a data lake for their enterprise will find this book useful. It will also help you if you want to get hands-on experience with the Lambda architecture and big data technologies by implementing a practical solution with them.
Table of Contents
- Introduction to Data
- Comprehensive Concepts of a Data Lake
- Lambda Architecture as a Pattern for Data Lake
- Applied Lambda for Data Lake
- Data Acquisition of Batch Data using Apache Sqoop
- Data Acquisition of Stream Data using Apache Flume
- Messaging Layer using Apache Kafka
- Data Processing using Apache Flink
- Data Store Using Apache Hadoop
- Indexed Data Store using Elasticsearch
- Data Lake Components Working Together
- Data Lake Use Case Suggestions