Beginning Apache Spark Using Azure Databricks
- Paperback: 292 pages
- Publisher: Apress; 1st edition (June 12, 2020)
- Language: English
- ISBN-10: 1484257804
- ISBN-13: 978-1484257807
Beginning Apache Spark Using Azure Databricks: Unleashing Large Cluster Analytics in the Cloud
Analyze vast amounts of data in record time using Apache Spark with Databricks in the cloud. Learn the fundamentals, and more, of running analytics on large clusters in Azure and AWS, using Apache Spark with Databricks on top. Discover how to squeeze the most value out of your data at a fraction of what classical analytics solutions cost, while getting the results you need faster.
This book explains how the confluence of these pivotal technologies gives you enormous power over huge datasets, and at low cost. You will begin by learning how cloud infrastructure makes it possible to scale your code across large numbers of processing units without having to pay for the machinery in advance. From there you will learn how Apache Spark, an open source framework, can put all those CPUs to work for data analytics. Finally, you will see how services such as Databricks provide the power of Apache Spark without requiring you to know anything about configuring hardware or software. By removing the need for expensive experts and hardware, your resources can instead be allocated to actually finding business value in the data.
What You Will Learn
- Discover the value of big data analytics that leverage the power of the cloud
- Get started with Databricks using SQL and Python in either Microsoft Azure or AWS
- Understand the underlying technology, and how the cloud and Apache Spark fit into the bigger picture
- See how these tools are used in the real world
- Run basic analytics, including machine learning, on billions of rows at a fraction of the cost, or even for free
The book also guides you through advanced topics such as analytics in the cloud, data lakes, data ingestion, architecture, and machine learning, and covers tools including Apache Spark, Apache Hadoop, Apache Hive, Python, and SQL. Valuable exercises help reinforce what you have learned.