Tutorials
Learn how to use Data Accelerator step by step and get your big data pipeline set up in minutes. Data Accelerator provides all the tools necessary to go from simple to complex requirements, all within an easy-to-use portal.
To unleash the full power of Data Accelerator, deploy to Azure and follow the cloud mode tutorials below. We have also enabled a "hello world" experience that you can try out locally by running a Docker container. When running locally there are no dependencies on Azure; however, the functionality is limited and is only meant to give you a cursory overview of Data Accelerator. Deploy locally using these instructions and then check out the local mode tutorials below.
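For orientation, local mode boils down to running a single container and opening the portal in a browser. The sketch below is illustrative only: the image name and port are placeholders, not the project's published values, so use the exact command from the linked local deployment instructions.

```bash
# Minimal sketch of a local "hello world" deployment.
# <data-accelerator-image> and the port are assumptions for illustration;
# see the official local deployment instructions for the real command.
docker run -d -p 2020:2020 --name datax-local <data-accelerator-image>

# Once the container is running, open the portal in a browser,
# e.g. http://localhost:2020 (port depends on the image's configuration).
```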
The tutorials below walk you through both local mode and cloud mode, step by step.
Local mode
- Running samples
- Create a pipeline locally with no cloud dependencies in 5 minutes!
- Set up simple alert without writing any code
- Set up aggregated alert without writing any code
- Output to disk
- Tagging - Simple Rules
- Tagging - Aggregate Rules
- SQL queries - More powerful queries using SQL
- Create new Metric chart
- Debug jobs using Spark logs
- Use Reference Data to augment streaming data
- Windowing functions
- Use UDF and UDAF in your code
- Customize the schema
- Scale docker host
Cloud mode
- Create a pipeline in 5 minutes!
- Live Query - Save hours by validating queries in seconds!
- Set up simple alert without writing any code
- Set up aggregate alert without writing any code
- Set up new outputs without writing any code
- Tagging - Simple Rules
- Tagging - Aggregate Rules
- Tagged data flowing to CosmosDB
- SQL Queries - More powerful queries using SQL
- Create new Metric chart
- Windowing functions
- Using Reference data
- Use UDF, UDAF, Azure Functions in your query
- Use Accumulator to store data in-memory for jobs
- Scale up a deployment
- Diagnose issues using Spark logs
- Diagnose issues using Telemetry
- Inviting others and role-based access
- Generate custom data with the Simulator
- Customize a Cloud Deployment
- Use input EventHub/IotHub in a different tenant
- Local Cloud Debugging
- Schedule a batch job
- Output data to Azure SQL Database
- Run Data Accelerator Flows on Databricks