Building Modern Data Applications Using Databricks Lakehouse: Develop, Optimize, and Monitor Data Pipelines in Databricks
Citations
APA Citation (style guide): Girten, W. (2024). Building modern data applications using Databricks Lakehouse: Develop, optimize, and monitor data pipelines in Databricks (1st edition). Packt Publishing Ltd.
Chicago / Turabian - Author Date Citation, 17th Edition (style guide): Girten, Will. 2024. Building Modern Data Applications Using Databricks Lakehouse: Develop, Optimize, and Monitor Data Pipelines in Databricks. Birmingham, UK: Packt Publishing Ltd.
Chicago / Turabian - Humanities (Notes and Bibliography) Citation, 17th Edition (style guide): Girten, Will. Building Modern Data Applications Using Databricks Lakehouse: Develop, Optimize, and Monitor Data Pipelines in Databricks. Birmingham, UK: Packt Publishing Ltd, 2024.
Harvard Citation (style guide): Girten, W. (2024). Building modern data applications using Databricks Lakehouse: develop, optimize, and monitor data pipelines in Databricks. 1st edn. Birmingham, UK: Packt Publishing Ltd.
MLA Citation, 9th Edition (style guide): Girten, Will. Building Modern Data Applications Using Databricks Lakehouse: Develop, Optimize, and Monitor Data Pipelines in Databricks. 1st edition, Packt Publishing Ltd., 2024.
Staff View
Grouping Information
| Field | Value |
|---|---|
| Grouped Work ID | cc8f7c9d-1578-4f3b-c9c8-683268fcaddc-eng |
| Full title | building modern data applications using databricks lakehouse develop optimize and monitor data pipelines in databricks |
| Author | girten will |
| Grouping Category | book |
| Last Update | 2025-01-24 12:33:29PM |
| Last Indexed | 2025-05-03 03:33:34AM |
Book Cover Information
| Field | Value |
|---|---|
| Image Source | default |
| First Loaded | Dec 18, 2024 |
| Last Used | Apr 19, 2025 |
Marc Record
| Field | Value |
|---|---|
| First Detected | Dec 16, 2024 11:30:34 PM |
| Last File Modification Time | Dec 17, 2024 08:39:35 AM |
| Suppressed | Record had no items |
MARC Record
LEADER | 05034cam a22004337a 4500 | ||
---|---|---|---|
001 | on1463664486 | ||
003 | OCoLC | ||
005 | 20241217082913.0 | ||
006 | m o d | ||
007 | cr |n||||||||| | ||
008 | 241026s2024 enk o 000 0 eng d | ||
019 | |a 1463769218 | ||
020 | |a 9781804612873|q (electronic bk.) | ||
020 | |a 1804612871|q (electronic bk.) | ||
035 | |a (OCoLC)1463664486|z (OCoLC)1463769218 | ||
037 | |a 10740990|b IEEE | ||
037 | |a 9781801073233|b O'Reilly Media | ||
040 | |a YDX|b eng|c YDX|d OCLCO|d EBLCP|d OCLCQ|d IEEEE|d ORMDA|d OCLCO | ||
049 | |a MAIN | ||
050 | 4 | |a TK5105.88813|b .G57 2024 | |
082 | 0 | 4 | |a 006.7/6|2 23/eng/20241112 |
100 | 1 | |a Girten, Will,|e author. | |
245 | 1 | 0 | |a BUILDING MODERN DATA APPLICATIONS USING DATABRICKS LAKEHOUSE :|b develop, optimize, and monitor data pipelines in databricks /|c Will Girten. |
250 | |a 1st edition. | ||
260 | |a Birmingham, UK :|b Packt Publishing Ltd.,|c 2024. | ||
300 | |a 1 online resource | ||
505 | 0 | |a An Introduction to Delta Live Tables -- Applying Data Transformations Using Delta Live Tables -- Managing Data Quality Using Delta Live Tables -- Scaling DLT Pipelines -- Mastering Data Governance in the Lakehouse with Unity Catalog -- Managing Data Locations in Unity Catalog -- Viewing Data Lineage Using Unity Catalog -- Deploying, Maintaining, and Administrating DLT Pipelines Using Terraform -- Leveraging Databricks Asset Bundles to Streamline Data Pipeline Deployment -- Monitoring Data Pipelines in Production. | |
520 | |a Get up to speed with the Databricks Data Intelligence Platform to build and scale modern data applications, leveraging the latest advancements in data engineering. Key Features: Learn how to work with real-time data using Delta Live Tables; unlock insights into the performance of data pipelines using Delta Live Tables; apply your knowledge to Unity Catalog for robust data security and governance. Purchase of the print or Kindle book includes a free PDF eBook. Book Description: With so many tools to choose from in today's data engineering stack, and so much operational complexity, data engineers are often overwhelmed, spending less time gleaning value from their data and more time maintaining complex data pipelines. Guided by a lead specialist solutions architect at Databricks with 10+ years of experience in data and AI, this book shows you how the Delta Live Tables framework simplifies data pipeline development by allowing you to focus on defining input data sources, transformation logic, and output table destinations. This book gives you an overview of the Delta Lake format, the Databricks Data Intelligence Platform, and the Delta Live Tables framework. It teaches you how to apply data transformations by implementing the Databricks medallion architecture and continuously monitor the data quality of your pipelines. You'll learn how to handle incoming data using the Databricks Auto Loader feature and automate real-time data processing using Databricks Workflows. You'll master how to recover from runtime errors automatically. By the end of this book, you'll be able to build a real-time data pipeline from scratch using Delta Live Tables, leverage CI/CD tools to deploy data pipeline changes automatically across deployment environments, and monitor, control, and optimize cloud costs. What you will learn: Deploy near-real-time data pipelines in Databricks using Delta Live Tables; orchestrate data pipelines using Databricks Workflows; implement data validation policies and monitor/quarantine bad data; apply slowly changing dimensions (SCD), Type 1 and Type 2, to lakehouse tables; secure data access across different groups and users using Unity Catalog; automate continuous data pipeline deployment by integrating Git with build tools such as Terraform and Databricks Asset Bundles. Who this book is for: This book is for data engineers looking to streamline data ingestion, transformation, and orchestration tasks. Data analysts responsible for managing and processing lakehouse data for analysis, reporting, and visualization will also find this book beneficial. Additionally, DataOps/DevOps engineers will find this book helpful for automating the testing and deployment of data pipelines, optimizing table tasks, and tracking data lineage within the lakehouse. Beginner-level knowledge of Apache Spark and Python is needed to make the most out of this book. (A brief illustrative Delta Live Tables sketch follows the MARC record below.) | ||
590 | |a O'Reilly|b O'Reilly Online Learning: Academic/Public Library Edition | ||
650 | 0 | |a Microsoft Azure (Computing platform)|9 422702 | |
650 | 0 | |a Databases. | |
650 | 0 | |a Electronic data processing.|9 37046 | |
655 | 0 | |a Electronic books. | |
776 | 0 | 8 | |c Original|z 1801073236|z 9781801073233|w (OCoLC)1461741971 |
856 | 4 | 0 | |u https://library.access.arlingtonva.us/login?url=https://learning.oreilly.com/library/view/~/9781801073233/?ar|x O'Reilly|z eBook |
938 | |a YBP Library Services|b YANK|n 306734943 | ||
938 | |a ProQuest Ebook Central|b EBLB|n EBL31735853 | ||
994 | |a 92|b VIA | ||
999 | |c 361355|d 361355 |
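The summary field above characterizes Delta Live Tables as a declarative framework: you define input data sources, transformation logic, and output tables, and the pipeline enforces data-quality expectations. As a rough illustration only (not taken from the book), the following minimal Python sketch shows that pattern; the table names, landing path, and expectation rule are hypothetical placeholders, and the `dlt` and `spark` objects are provided by the Delta Live Tables runtime when the file is attached to a pipeline.

```python
# Minimal Delta Live Tables sketch (illustrative only; names and paths are hypothetical).
import dlt
from pyspark.sql.functions import col

@dlt.table(comment="Bronze: raw orders ingested with Auto Loader")
def orders_bronze():
    # Auto Loader (cloudFiles) incrementally picks up new JSON files from the landing path.
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/main/demo/raw_orders")  # hypothetical landing location
    )

@dlt.expect_or_drop("valid_amount", "amount > 0")  # data-quality expectation: drop rows that fail
@dlt.table(comment="Silver: cleaned orders")
def orders_silver():
    # Read the upstream bronze table and apply a simple transformation.
    return (
        dlt.read_stream("orders_bronze")
        .withColumn("amount", col("amount").cast("double"))
    )
```

In practice such a source file would be registered with a DLT pipeline, for example through the Databricks UI, Terraform, or a Databricks Asset Bundle as the chapter list above suggests, and the runtime would resolve table dependencies and run the bronze-to-silver flow continuously or on a schedule.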