In my last article, I discussed monitoring Azure Synapse using the alerts feature. Today we will achieve the same thing using the Log Analytics feature. Log Analytics will monitor the Synapse pipelines and give us more insight once a job fails.
The Azure Synapse integration with Log Analytics is particularly useful in the following scenarios:
- You want to write complex queries on a rich set of metrics that are published by Azure Synapse to Log Analytics. Custom alerts on these queries can be created via Azure Monitor.
- You want to monitor across workspaces. You can route data from multiple workspaces to a single Log Analytics workspace.
Setting up the Log Analytics workspace
To begin with, create a Log Analytics workspace; we will come back to it later.
Browse to your Synapse workspace → Monitoring → Diagnostic settings, as shown below. The Add diagnostic setting option pushes the logs from your Synapse workspace to Log Analytics. Below are the log categories available for you to choose from based on your requirements.
Once you have made your selection, provide a name and save it.
Once done, go to the Log Analytics workspace, General → Logs, and select the scope. With this, the integration between the Synapse workspace and the Log Analytics workspace is complete, but you won't be able to see any logs yet, simply because the pipelines have not been triggered. The diagnostic setting only records logs for future runs; it does not pick up any historical data.
I am going to test this by re-executing the pipelines we have in our Synapse workspace.
Now go back to the Logs tab of the Log Analytics workspace, where you can see the table that was created. The table holds key information like activity name, run ID, type, category, etc.
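A quick way to inspect what has landed in the table is to pull the most recent rows. As a sketch, this assumes the pipeline-runs diagnostic category created a table named SynapseIntegrationPipelineRuns; the exact table name in your workspace depends on the log categories you selected.

```kusto
// Show the 10 most recent pipeline-run log entries.
// Table name is an assumption based on the pipeline-runs log category.
SynapseIntegrationPipelineRuns
| sort by TimeGenerated desc
| take 10
```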
Querying the Logs from ‘Log Analytics’
All the logs have started to flow into Log Analytics; the next step is to query this log information. To query it in the Log Analytics workspace, you use the Kusto Query Language (KQL).
As per the official Microsoft documentation, a KQL query is basically a read-only request to process data and return results. The request is stated in plain text, using a data-flow model that is easy to read, author, and automate. Kusto queries are made of one or more query statements.
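The data-flow model means a query starts from a table and pipes the data through a chain of operators. A minimal sketch (again assuming the SynapseIntegrationPipelineRuns table exists in your workspace):

```kusto
// Count pipeline-run records from the last 24 hours, grouped by status.
// Table and Status column names are assumptions; adjust to your workspace.
SynapseIntegrationPipelineRuns
| where TimeGenerated > ago(24h)
| summarize count() by Status
```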
I have queried the logs to filter only the rows where the OperationName column contains 'InProgress'. You can query based on your needs; I just wanted to demo a filter where every returned row contains the word 'InProgress' in the OperationName column.
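That filter can be expressed with the where operator. This is a sketch; the table name is an assumption, while the OperationName column and the 'InProgress' value come from the demo above.

```kusto
// Keep only rows whose OperationName contains "InProgress".
SynapseIntegrationPipelineRuns
| where OperationName contains "InProgress"
```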
If you want to see only a limited number of columns, use the project keyword.
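For example, extending the previous filter with project trims the output to a handful of columns. The column names here (TimeGenerated, RunId, Status) are assumptions for illustration; pick whichever columns your table exposes.

```kusto
// Return only the columns we care about.
SynapseIntegrationPipelineRuns
| where OperationName contains "InProgress"
| project TimeGenerated, OperationName, RunId, Status
```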
You can use the + New alert rule option to set up an email alert. The steps to create an alert and send email notifications were explained in the previous article.
In the subsequent steps you must provide the Scope, Condition, and Actions for the alert. Most fields are pre-populated, except for a few minor ones you must fill in manually.
We can see that the log query is pre-populated with the query we wrote to extract a subset of the data. In the Alert logic section we specify our criteria; I have selected that the alert should fire if the query returns more than 0 results, which means a failure has been reported.
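For a failure alert, the underlying query typically filters on failed runs, and the alert logic checks that the number of results is greater than 0. As a sketch, assuming the same table and a Status column:

```kusto
// Failed pipeline runs in the evaluation window; the alert rule fires
// when this query returns more than 0 rows.
SynapseIntegrationPipelineRuns
| where Status == "Failed"
```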
An Action Group is where we group actions, i.e., send an email or SMS, run an Azure Function, call a webhook, etc. I already have an action group available, so I am choosing that; if you don't have one, you can create it on the go.
Alert Rule Name – Provide a rule name; it will also appear in the email.
Severity – As the name suggests, the severity of the alert (1, 2, or 3 based on its criticality).
Enable Alert on Completion – The alert is enabled automatically once created; otherwise you need to enable it manually after creation.
With everything now set, let's trigger the pipeline and wait a few minutes to check whether we receive the email alert.
After a couple of minutes, I successfully received the alert, including details like the violation count and the search query.
Summary: This is the second method of monitoring Synapse Analytics, using Log Analytics. In the previous blog I wrote about monitoring Synapse using Azure Monitor; I strongly suggest you take a look at that too.