Data Engineering with AWS

This is the code repository for Data Engineering with AWS, published by Packt.

Learn how to design and build cloud-based data transformation pipelines using AWS

What is this book about?

Knowing how to architect and implement complex data pipelines is a highly sought-after skill. Data engineers are responsible for building these pipelines that ingest, transform, and join raw datasets - creating new value from the data in the process.

Amazon Web Services (AWS) offers a range of tools to simplify a data engineer's job, making it the preferred platform for performing data engineering tasks. This book will take you through the services and the skills you need to architect and implement data pipelines on AWS. You'll begin by reviewing important data engineering concepts and some of the core AWS services that form a part of the data engineer's toolkit. You'll then architect a data pipeline, review raw data sources, transform the data, and learn how the transformed data is used by various data consumers. The book also teaches you about populating data marts and data warehouses along with how a data lakehouse fits into the picture. Later, you'll be introduced to AWS tools for analyzing data, including those for ad-hoc SQL queries and creating visualizations. In the final chapters, you'll understand how the power of machine learning and artificial intelligence can be used to draw new insights from data.

By the end of this AWS book, you'll be able to carry out data engineering tasks and implement a data pipeline on AWS independently.

This book covers the following exciting features:

  • Understand data engineering concepts and emerging technologies
  • Ingest streaming data with Amazon Kinesis Data Firehose
  • Optimize, denormalize, and join datasets with AWS Glue Studio
  • Use Amazon S3 events to trigger a Lambda process to transform a file
  • Run complex SQL queries on data lake data using Amazon Athena
  • Load data into a Redshift data warehouse and run queries
  • Create a visualization of your data using Amazon QuickSight
  • Extract sentiment data from a dataset using Amazon Comprehend

If you feel this book is for you, get your copy today!

https://www.packtpub.com/

Instructions and Navigation

All of the code is organized into folders.

The code will look like the following:

import boto3                             # AWS SDK for Python
import awswrangler as wr                 # AWS SDK for pandas (helpers for S3, Glue, Athena, etc.)
from urllib.parse import unquote_plus    # decodes URL-encoded S3 object keys from event notifications
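
To show how imports like these typically come together, here is a minimal, hypothetical sketch (not taken from any chapter) of an S3-event-triggered Lambda function that uses awswrangler to read an uploaded CSV file and rewrite it as Parquet. The handler name, bucket names, and the CSV-to-Parquet transformation are illustrative placeholders:

import awswrangler as wr                 # AWS SDK for pandas
from urllib.parse import unquote_plus    # S3 event notification keys arrive URL-encoded

def lambda_handler(event, context):
    # Each record in the event describes an object that landed in the source bucket
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])

        # Read the newly uploaded CSV into a pandas DataFrame
        df = wr.s3.read_csv(path=f"s3://{bucket}/{key}")

        # Write it back out as Parquet to a placeholder "clean zone" bucket
        wr.s3.to_parquet(
            df=df,
            path="s3://example-clean-zone-bucket/output/",  # placeholder location
            dataset=True,
        )

This mirrors the "Use Amazon S3 events to trigger a Lambda process to transform a file" item listed above; the chapter code in this repo may structure the transformation differently.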

Following is what you need for this book: it is aimed at data engineers, data analysts, and data architects who are new to AWS and want to extend their skills to the AWS cloud. Anyone who is new to data engineering and wants to learn the foundational concepts while gaining practical experience with common data engineering services on AWS will also find this book useful. A basic understanding of big data-related topics and Python coding will help you get the most out of the book, but neither is required. Familiarity with the AWS console and core services is also useful but not necessary.

With the following software and hardware list, you can run all the code files present in the book (Chapters 1-14).

Software and Hardware List

Chapter | Software required | OS required
1-14 | Amazon Web Services (AWS) with a recent version of a modern web browser (Chrome, Edge, etc.) | Any OS

We also provide a PDF file that has color images of the screenshots/diagrams used in this book. Click here to download it.

Related products

Get to Know the Author

Gareth Eagar has worked in the IT industry for over 25 years, starting in South Africa, then working in the United Kingdom, and now based in the United States. In 2017, he started working at Amazon Web Services (AWS) as a solution architect, working with enterprise customers in the NYC metro area. Gareth has become a recognized subject matter expert for building data lakes on AWS, and in 2019 he launched the Data Lake Day educational event at the AWS Lofts in NYC and San Francisco. He has also delivered a number of public talks and webinars on topics relating to big data, and in 2020 Gareth transitioned to the AWS Professional Services organization as a senior data architect, helping customers architect and build complex data pipelines.

Note from the author:

You can use the resources provided in this GitHub repo as you work through the hands-on activities included in each chapter of the book. The repo is organized with resources matched to each chapter, such as the JSON used to define IAM policies, sample files, relevant links, and so on.

Download a free PDF

If you have already purchased a print or Kindle version of this book, you can get a DRM-free PDF version at no cost.
Simply click on the link to claim your free PDF.

https://packt.link/free-ebook/9781800560413
