How to read data from Google Sheets
Connecting your Google Sheets to Data Pipelines
Data Pipelines allows you to mix pipeline builder widgets and native Spark SQL in the same pipeline
Updating a data pipeline that is already scheduled is a single-step process
Increase your data pipeline's efficiency by using an initial SQL query to load data.
By default, partition files output by your data pipelines are named dynamically. This tutorial shows you how to predefine a fixed name.
Use the generated DAG to get an overview of your data pipelines
Data Pipelines offers tiered pricing to suit everyone's use case and budget.
Data Pipelines lets users connect to various SQL databases via JDBC
Mapping and adding columns with our built-in widget is a powerful way to build your data pipelines
Incorporate Apache Spark SQL directly in your pipeline
Have your reports delivered directly to AWS S3 via Data Pipelines
Learn how to use dynamic external variables as part of a collaborative data pipeline process
Data Pipelines lets you move and combine data between AWS DynamoDB and Google Sheets.
Learn how to change the structure of your connected data using our no-code, SQL-themed tools
Create a simple data pipeline in a few clicks
Organize your data pipelines into logical groups
Shared resources within your Data Pipelines organization
Important information about disconnecting Google services
How data connections work
How Data Pipelines makes pipeline building efficient by using a cache
Run recurring workloads using Data Pipelines' built-in scheduler
How to interpret data pipeline error messages and debug your pipeline
Take advantage of data pipeline JSON definitions
Share pipelines and collaborate
Use views to load multiple tables with wildcards
Learn how to connect Google Analytics to Data Pipelines.
Learn how to connect Google BigQuery to Data Pipelines.
Use Data Pipelines to connect S3 to Google Sheets & schedule data to feed reports.