- Consulting services
3 DAY SYNAPSE ANALYTICS WORKSHOP
PRACTICAL WORKSHOP TO UNDERSTAND SYNAPSE ESSENTIALS: ARCHITECTURE, ENVIRONMENT, DATA FLOWS AND MONITORING
Azure Synapse is an enterprise analytics service that accelerates the time required to gain insights from big data and data warehouse systems. Azure Synapse brings together the best of SQL technologies used in enterprise data warehouse and Spark technologies used for big data, Data Explorer for time-series and log analytics, Pipelines for data integration and ETL/ELT, and deep integration with other Azure services such as Power BI, CosmosDB and AzureML.
It integrates the entire data lifecycle into a single tool, from ingestion through transformation to making up-to-date information available to analysts and data scientists, in real time if needed.
You will learn about the different components so you can identify the most appropriate one in each case, adjusting compute power and cost to the needs of each company.
With this 3-day workshop, you will learn in 9 hours (three 3-hour sessions) the advantages of working with a tool based on Apache Spark versus a classic SQL-based data warehouse.
Synapse is the preferred tool for the different data roles (Engineers, Scientists and Analysts): it allows each one to work in the language they feel most comfortable with, such as SQL, .NET, Python or Spark.
Schedule:
● Day 1 - Connectors and Data Ingestion
  ∘ Typical architectures.
  ∘ In your Azure cloud or ours, we will create together a Synapse Analytics resource on which we will run the whole workshop.
  ∘ We will use test data or data from your own company to create ingestion processes and explain the different options offered by Synapse Analytics.
  ∘ Installation of an integration runtime (IR) to retrieve data from on-premises or private networks, and how to retrieve it from another cloud (AWS, Google Cloud, etc.).
  ∘ Incremental loads.
  ∘ Secure connections with Key Vault.
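The incremental loads covered on Day 1 typically follow a watermark pattern: each run copies only the rows modified since the last run, then advances the stored watermark. In a Synapse pipeline this is built from Lookup and Copy activities; the sketch below shows the same logic in plain Python, with hypothetical field names, purely as an illustration of the pattern.

```python
from datetime import datetime

def incremental_load(source_rows, watermark):
    """Return rows modified after the watermark, plus the new watermark."""
    new_rows = [r for r in source_rows if r["modified"] > watermark]
    # Advance the watermark to the newest row seen; keep it unchanged
    # if nothing new arrived.
    new_watermark = max((r["modified"] for r in new_rows), default=watermark)
    return new_rows, new_watermark

rows = [
    {"id": 1, "modified": datetime(2023, 1, 1)},
    {"id": 2, "modified": datetime(2023, 1, 5)},
    {"id": 3, "modified": datetime(2023, 1, 9)},
]

# First run: the initial watermark predates all rows, so everything is copied.
loaded, wm = incremental_load(rows, datetime(2022, 12, 31))

# A new row arrives at the source; the next run copies only that row.
rows.append({"id": 4, "modified": datetime(2023, 1, 12)})
delta, wm = incremental_load(rows, wm)
```

In a real pipeline the watermark would be persisted (e.g. in a control table) between runs rather than held in a variable.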
● Day 2 - Data Transformation - Data Lake - Spark Pool
  ∘ Deploying the Spark pool; Spark setup, options and resources.
  ∘ Transformations with Synapse notebooks in Python using the PySpark library.
  ∘ Parquet format in Delta Lake (ACID transactions, etc.).
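The Day 2 transformations are written in Synapse notebooks with PySpark. As a minimal stand-in (table and column names are made up for illustration), the typical filter-then-aggregate chain can be sketched in plain Python; in a notebook the same steps would be a `df.filter(...).groupBy("country").sum("amount")` chain on a Spark DataFrame.

```python
from collections import defaultdict

# Toy input standing in for raw records read from the data lake.
sales = [
    {"country": "ES", "amount": 120.0, "status": "ok"},
    {"country": "ES", "amount": 80.0,  "status": "ok"},
    {"country": "FR", "amount": 200.0, "status": "cancelled"},
    {"country": "FR", "amount": 50.0,  "status": "ok"},
]

# filter: keep only completed orders.
valid = [s for s in sales if s["status"] == "ok"]

# groupBy + sum: total revenue per country.
revenue = defaultdict(float)
for s in valid:
    revenue[s["country"]] += s["amount"]
```

In Synapse, the aggregated result would then be written back to the lake in Parquet/Delta format for downstream consumption.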
● Day 3 - Consume Data with Views & Power BI and Process Monitoring
  ∘ Lakehouse concept.
  ∘ Creating views.
  ∘ Accessing data with SQL.
  ∘ Connecting to Power BI.
  ∘ Monitoring pipelines, triggers, usage, etc.
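Day 3 exposes lake data through SQL views so that analysts and Power BI query a stable, aggregated interface rather than raw files. As a rough sketch of that view-then-query pattern, the example below uses the standard-library `sqlite3` module in place of a Synapse SQL pool; all table, view and column names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (country TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("ES", 120.0), ("ES", 80.0), ("FR", 50.0)],
)

# Create a view that aggregates the raw data for reporting tools.
conn.execute("""
    CREATE VIEW revenue_by_country AS
    SELECT country, SUM(amount) AS revenue
    FROM sales
    GROUP BY country
""")

# Analysts (or Power BI, via its SQL connector) query the view,
# not the underlying table.
result = dict(conn.execute("SELECT country, revenue FROM revenue_by_country"))
```

The same separation applies in Synapse: raw lake files stay untouched while views present a clean, queryable model to every consumer.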