Navigating the Azure Data Factory v2 Enhancements

Tags: Azure

Azure Data Factory (ADF) was originally released as an Azure platform service in 2015 – the same year it became Generally Available (GA) to end-users. The service was positioned as the leading resource for all data orchestration activities in the cloud. Whether the requirement is to simply copy data from source to destination or to kick off a Data Lake Analytics transformation job, ADF is the answer. ADF is a fully managed cloud service built for complex hybrid ETL (Extract, Transform, Load), ELT (Extract, Load, Transform) and data integration processing.

At the end of 2017, Microsoft publicly released ADF version 2 (v2), which introduced various enhancements and incorporated customer feedback gathered since the initial release. While still in public preview, it’s open for all to test the functionality and changes from ADF v1. Let’s walk through some of the key enhancements – including SSIS capabilities (finally!) – in the sections below.

Capabilities Introduced in ADF v2

The overall concept of datasets, activities and pipelines remains intact within v2; however, the new version brings a few changes. A few highlights include:

  • Control Flow Concepts: Common ETL concepts such as chaining activities, branching activities, pipeline parameterization, custom state passing, looping containers, delta workflows, get metadata, web activity and many more. Let’s dig deeper into a few of these concepts.
    • Chaining activities: The new dependsOn activity property lets the user name an upstream activity that must complete before the current activity runs.
    • Branching activities: Similar to programming languages, the new If-condition property allows the user to evaluate a Boolean expression to determine downstream activity processing based on an output of ‘True’ or ‘False’.
    • Looping containers: The ForEach activity is a repeating control flow that iterates over a specified collection and executes the contained activities for each item. Similarly, the Until activity loops until the provided condition evaluates to ‘True’.
    • Delta workflows: Common in ETL processing, delta loads are implemented to only read in data which has changed since the last execution. ADF v2 includes a lookup activity which allows for the natural implementation of delta loads.
  • SSIS Functionality: Much anticipated through customer feedback, users can “lift and shift” SSIS solutions from on-premises to the cloud with ADF v2. The service allows the user to spin up an Azure-SSIS Integration Runtime – essentially a fully managed cluster of VMs dedicated to running the packages. SSISDB can be hosted on a PaaS instance of Azure SQL DB to orchestrate project deployments. Within the ADF visual authoring tool, a pipeline can be created containing a Stored Procedure activity that executes packages in SSISDB. The pipeline can then be tied to a trigger, which will schedule all future executions.
  • Flexible Scheduling: Triggers contain properties which determine when pipelines need to be kicked off and executed. There are two types of triggers used in ADF v2:
    • Schedule Trigger: triggers based on wall-clock schedules
    • Tumbling Window Trigger: triggers which operate on a periodic interval while retaining state
  • Visual Authoring & Monitoring: In early 2018, based on customer feedback, Microsoft released rich interactive visuals for authoring and monitoring ADF pipelines. This allows users to publish pipelines without writing a single line of code. Integration with VSTS Git for source control allows for full transparency.
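Behind the visual authoring tool, these control flow concepts are expressed in a pipeline’s JSON definition. The sketch below – with hypothetical pipeline, dataset and table names – chains a Lookup activity to an If-condition via dependsOn, then loops over a parameter list with ForEach (a Wait activity stands in for the real per-table work):

```json
{
  "name": "DeltaLoadPipeline",
  "properties": {
    "parameters": {
      "lastRunTime": { "type": "String" },
      "tableList": {
        "type": "Array",
        "defaultValue": [ "dbo.Orders", "dbo.Customers" ]
      }
    },
    "activities": [
      {
        "name": "GetLastWatermark",
        "type": "Lookup",
        "typeProperties": {
          "source": {
            "type": "SqlSource",
            "sqlReaderQuery": "SELECT MAX(ModifiedDate) AS Watermark FROM dbo.WatermarkTable"
          },
          "dataset": { "referenceName": "WatermarkDataset", "type": "DatasetReference" }
        }
      },
      {
        "name": "CheckWatermark",
        "type": "IfCondition",
        "dependsOn": [
          { "activity": "GetLastWatermark", "dependencyConditions": [ "Succeeded" ] }
        ],
        "typeProperties": {
          "expression": {
            "value": "@greater(activity('GetLastWatermark').output.firstRow.Watermark, pipeline().parameters.lastRunTime)",
            "type": "Expression"
          },
          "ifTrueActivities": [
            {
              "name": "LoadEachTable",
              "type": "ForEach",
              "typeProperties": {
                "items": { "value": "@pipeline().parameters.tableList", "type": "Expression" },
                "activities": [
                  {
                    "name": "PerTableWorkPlaceholder",
                    "type": "Wait",
                    "typeProperties": { "waitTimeInSeconds": 1 }
                  }
                ]
              }
            }
          ]
        }
      }
    ]
  }
}
```

The dependsOn array on CheckWatermark is what chains it behind the Lookup; swapping “Succeeded” for other dependency conditions (e.g. “Failed”, “Completed”) changes when the downstream activity fires.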
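The two trigger types are also defined as JSON resources that reference a pipeline. A minimal sketch of each, assuming a pipeline named DeltaLoadPipeline with windowStart/windowEnd parameters:

```json
{
  "name": "DailyScheduleTrigger",
  "properties": {
    "type": "ScheduleTrigger",
    "typeProperties": {
      "recurrence": {
        "frequency": "Day",
        "interval": 1,
        "startTime": "2018-04-01T06:00:00Z",
        "timeZone": "UTC"
      }
    },
    "pipelines": [
      { "pipelineReference": { "referenceName": "DeltaLoadPipeline", "type": "PipelineReference" } }
    ]
  }
}
```

```json
{
  "name": "HourlyTumblingTrigger",
  "properties": {
    "type": "TumblingWindowTrigger",
    "typeProperties": {
      "frequency": "Hour",
      "interval": 1,
      "startTime": "2018-04-01T00:00:00Z",
      "maxConcurrency": 1
    },
    "pipeline": {
      "pipelineReference": { "referenceName": "DeltaLoadPipeline", "type": "PipelineReference" },
      "parameters": {
        "windowStart": "@trigger().outputs.windowStartTime",
        "windowEnd": "@trigger().outputs.windowEndTime"
      }
    }
  }
}
```

Note the state the tumbling window trigger retains: each run receives its window boundaries via @trigger().outputs, which is what makes it a natural fit for the delta workflows described above; a schedule trigger, by contrast, can fan out to multiple pipelines but carries no window state.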

An Overview of the Version Differences

The following table introduces high-level differences between the two ADF services.

| Feature | ADF v1 | ADF v2 |
| --- | --- | --- |
| Datasets | Refer to source & destination data stores (tables, files, folders, etc.); the availability property defines the processing window time slice (hourly, daily, monthly) | Availability property removed – triggers replace this need |
| Linked Services | Connection string information for external resources | Adds the connectVia property to utilize Integration Runtimes |
| Pipelines | Logical grouping of activities with properties for start time, end time and paused state | Start time, end time and paused state properties removed – triggers are used instead |
| Activities | Actions to perform within a pipeline: data movement and transformation | Adds control flow activities |
| Hybrid Data Movement | Data Management Gateways orchestrate on-premises-to-cloud data transfer | Integration Runtimes: Azure (cloud only), Self-hosted (hybrid on-premises & cloud), Azure-SSIS (SSIS execution) |
| Parameters | N/A | Key-value parameters passed to pipeline activities via manual execution or triggers |
| Expressions | Built-in system variables & functions | JSON values which are evaluated at runtime and return another JSON value |
| Pipeline Runs | N/A | Single instance of a pipeline execution, assigned a unique GUID value |
| Activity Runs | N/A | Instance of an activity execution within a pipeline |
| Trigger Runs | N/A | Instance of a trigger execution |
| Scheduling | Pipeline start & end time, dataset availability | Trigger executions |

One of the larger changes is the shift from time slices and dataset availability to a more traditional ETL scheduling approach. Instead of waiting for a dataset to become available before an activity can run, the pipeline itself is triggered and kicks off the activity regardless of the state of the dataset.

The Integration Runtimes (IR) are the compute infrastructure used by ADF v2 for data movement, activity execution and SSIS package execution. The IR provides the bridge between an activity and the linked services it references: the linked service references an IR, which supplies the compute environment where the activity runs – in the region nearest the target data store, for the most efficient performance.

  • Azure IR instances, as mentioned in the table above, can perform activities between cloud data stores only. The Azure IR is a fully managed, serverless compute in Azure.
  • Self-hosted IR can perform activities between cloud data stores and data stores in a private network. It can be installed on-premises or in a virtual private network.
  • Azure-SSIS IR instances are built specifically for executing SSIS packages. To read on-premises data, the IR must join a Virtual Network (VNet) that connects to the on-premises network.
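The connectVia property mentioned in the table above is how a linked service selects its IR. A minimal sketch – with a hypothetical self-hosted IR name and connection string – of an on-premises SQL Server linked service routed through a self-hosted IR:

```json
{
  "name": "OnPremSqlServer",
  "properties": {
    "type": "SqlServer",
    "typeProperties": {
      "connectionString": {
        "type": "SecureString",
        "value": "Server=myserver;Database=mydb;Integrated Security=True"
      }
    },
    "connectVia": {
      "referenceName": "MySelfHostedIR",
      "type": "IntegrationRuntimeReference"
    }
  }
}
```

If connectVia is omitted, the activity resolves to the default Azure IR, which is why cloud-only copies need no IR configuration at all.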

The Future of Azure Data Factory

The introduction of native SSIS capabilities in ADF v2 was a key addition to the cloud data orchestration service. It provides a stepping stone for customers to get off their on-premises servers and move to a cloud-first strategy, rather than completely rearchitecting their existing data integration process from SSIS to ADF v1. Along with the SSIS integration, many other features such as control flow tasks and triggers allow for greater flexibility in pipeline executions. As ADF v2 approaches GA and exits public preview, users can continue to submit feedback to Microsoft for further enhancements to the service.

About The Author

Data Solutions Consultant

Jeff is a member of the Data Solutions team in the Nashville branch of Cardinal Solutions. He has comprehensive experience across the complete Business Intelligence lifecycle, while also leading various clients in their migration from on-premise solutions to the cloud utilizing Microsoft Azure services.