Using Azure CLI 2.0 behind a web proxy with mitmproxy or Fiddler

The Azure CLI is a wonderful tool to manage Azure resources, but at times you'll run into a bizarre error (or want to reverse engineer what API call is being made for a given command) and need more information. HTTP session capture tools like Fiddler or mitmproxy are excellent for tracing HTTP calls, but since the Azure CLI constructs requests directly using the requests Python library, it ignores the Windows or macOS default proxy settings.

Here's how you can call the Azure CLI forcing it to use the HTTP web proxy:

export HTTP_PROXY="http://localhost:8080" HTTPS_PROXY="http://localhost:8080"
az rest --debug --method put --uri "$URL" --body "$BODY"

Note that unless you only want to capture plain HTTP traffic, mitmproxy or Fiddler will also be intercepting HTTPS requests and presenting its own certificate. Even if you trust that certificate in the system certificate store, Python's requests library uses its own certificate bundle, resulting in an error message like this:

HTTPSConnectionPool(host='', port=443): Max retries exceeded with url: /subscriptions/subid/resourceGroups/vmname/providers/microsoft.Security/locations/westus2/jitNetworkAccessPolicies/default/Initiate?api-version=2015-06-01-preview (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate verify failed')])")))

Update June 2021: Azure CLI has now published guidance on this scenario, and permits customization of the certificate authority bundle by setting REQUESTS_CA_BUNDLE - see here for details.
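For example, a session using the updated approach could look like the following sketch - it assumes mitmproxy's default CA certificate location (~/.mitmproxy); the path will differ for Fiddler or a custom setup:

```shell
# Route CLI traffic through the local proxy and trust the proxy's root CA
# instead of disabling certificate verification
export HTTP_PROXY="http://localhost:8080"
export HTTPS_PROXY="http://localhost:8080"
export REQUESTS_CA_BUNDLE="$HOME/.mitmproxy/mitmproxy-ca-cert.pem"
# az rest --debug --method put --uri "$URL" --body "$BODY"
```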

Disabling SSL certificate verification entirely, as originally described below, should no longer be used unless you are stuck on an old version of the Azure CLI:

Set AZURE_CLI_DISABLE_CONNECTION_VERIFICATION=1 in the environment to disable SSL certificate verification for the Azure CLI.


Good to go!


Automating database restores with SQL Managed Instance

Provisioning is an often overlooked aspect when architecting a SaaS solution, but it's one of the most important ones. How do you make sure new users and tenants can be automatically created and ready to use?

Recently, I worked on a provisioning API for Azure SQL Managed Instance and ran into some interesting quirks.

Database restores

When restoring databases, SQL Managed Instance will not provide any STATS or restore completion events to the user; this is because internally, the SQL Managed Instance has a background worker that does the actual restore - this is well documented by Brent Ozar on his blog. You can see this in action by restoring a large backup (say, >2GB) and then querying active restores:

-- Query active restores
SELECT query = a.text, start_time, percent_complete, eta = dateadd(second,estimated_completion_time/1000, getdate())
FROM sys.dm_exec_requests r CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) a WHERE r.command IN ('RESTORE DATABASE', 'DROP DATABASE');

for example:

name database_id query percent_complete
master 1 RESTORE DATABASE [9f288db4-12af-49a6-ab9d-b414afaf467a] FROM URL = N'' WITH STATS=10, BUFFERCOUNT=8, MAXTRANSFERSIZE=3145728, NORECOVERY, REPLACE, MOVE N'v3starterdb_data' TO N'C:\WFRoot\DB16C.2\Fabric\work\Applications\Worker.CL_App15\work\data\9f288db4-12af-49a6-ab9d-b414afaf467a.mdf' , [truncated for brevity] 37.5055

In short, SQL MI accepts a restore command and begins a restore to a database named after a random GUID. Once completed, it renames it to the user-requested database name.

If you need to map the physical database name to the logical (user-facing) database name, check the physical_database_name column value from sys.databases:

SELECT name, physical_database_name FROM sys.databases

Note that this value only becomes available after the restore is complete; as a result, detecting restore progress programmatically is impractical (it's theoretically possible if you're OK with throwing a bunch of string-matching kludges into the query above... but I'm not).

Detecting restore completion

As a result of the physical vs logical DB name mappings, detecting database restore progress or completion is non-trivial and can be a blocker if you have downstream tasks to perform, e.g. running some T-SQL against your newly restored DB to prepare it for use.

When a restore is completed and the physical (GUID) database is renamed to the user-requested one, it appears to be online:

SELECT name,
    DATABASEPROPERTYEX(name,'Updatability') AS 'Updatability',
    DATABASEPROPERTYEX(name,'Status') AS 'Status'
FROM dbo.sysdatabases;

name Updatability Status
mydb READ_WRITE ONLINE

However, attempting to query that database (either read or write) will fail with: Unable to access database 'mydb' because it lacks a quorum of nodes for high availability. Try the operation again later. So what gives?

SQL MI's Business Critical tier uses something similar to Always On availability groups to prevent data loss. Those availability groups are configured for you automatically - but while initial replication occurs and until quorum is established, the database is present on the master but not queryable!

You can use the following query to determine if a DB is still under replication, and therefore ready or not on the Business Critical tier:

-- Query synchronization state
SELECT sdb.database_id, sdb.name, sdb.physical_database_name, t.is_ready
FROM sys.databases sdb
INNER JOIN (
    SELECT r.database_id, is_ready = CASE WHEN COUNT(CASE WHEN r.synchronization_state = 2 THEN 1 ELSE NULL END) > 1 THEN 1 ELSE 0 END
    FROM sys.dm_hadr_database_replica_states r
    GROUP BY r.database_id
) t ON t.database_id = sdb.database_id
WHERE name LIKE 'Works_tid%'

Once is_ready = 1, your database is good to go, and you can safely execute additional queries against it. Note that this applies during normal operation - if you are doing something fancy like resizing your instance, the ready detection may not work as expected.
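If downstream provisioning steps are blocked on this, a simple polling loop does the job. A minimal sketch - check_is_ready is a placeholder callable standing in for executing the synchronization-state query above and returning whether is_ready = 1:

```python
import time

def wait_until_ready(check_is_ready, timeout=600, interval=5):
    """Poll until the database reports is_ready = 1, or give up after timeout.

    check_is_ready is a placeholder for running the synchronization-state
    query above and returning True once is_ready = 1.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check_is_ready():
            return True
        time.sleep(interval)
    return False
```

Once this returns True, it is safe to run the post-restore T-SQL against the database.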

One last word of caution regarding SqlConnection and SMO

This is not specific to SQL Managed Instance, but a word of caution for those automating query execution with the C# SqlConnection class from System.Data.SqlClient: it does not necessarily open a new physical connection (see connection pooling for details) and it can only support one command at a time.

Attempting to re-use a SqlConnection (e.g. via multiple RestoreAsync() calls in SMO) before it is ready can result in very bizarre behavior (the SqlConnection closes itself and the active thread may stop executing). Best practice is to:

  1. Avoid SMO altogether in favor of T-SQL, which is what SMO produces anyways
  2. Create a new SqlConnection instance for every T-SQL command you wish to execute, and close the connection after. Wrapping the creation in a using statement will do this for you.

Automatically and securely mounting encrypted ZFS filesystems at boot with Azure Key Vault

The need for automation

As noted in my prior blogs, I use ZFS on Linux for my home fileserver and have been very impressed - it's been extremely stable and versatile, and the command line utilities have a simple syntax that works exactly as you'd expect.

A few months back, native encryption was introduced into the master branch for testing (you can read more here), and I have been using it to encrypt all my data. I chose not to encrypt my root drive since it doesn't host any user data, and I do not want my boot to be blocked on password input - for example, what if there's a power failure while I'm travelling for work?

However, that still leaves two nagging problems:
1. It became tedious to manually SSH into my machine every time it restarts to type in numerous encrypted filesystem passphrases
2. A bunch of my systemd services depend on user data; an issue in systemd (#8587) prevents using auto-generated mount dependencies to wait for the filesystems to be mounted, so I have to start them manually.

Introducing zfs-keyvault

I decided to kill two birds with one stone and am happy to introduce zfs-keyvault, available on GitHub. It provides both a systemd service that can be depended upon by other services, as well as automation for securely mounting encrypted ZFS filesystems.

On the client (with ZFS filesystems), a zkv utility is installed that can be used to manage an encrypted repository containing one or more ZFS filesystems' encryption keys. This repository is stored locally and its encryption key is placed in an Azure Key Vault.

On your preferred webhost or public cloud, a small Flask webserver called zkvgateway gates access to this repository key in Key Vault and can release it under certain conditions.

On boot, the systemd service runs zkv which will reach out to the gateway, who in turn SMSs you with a PIN for approval. The inclusion of a PIN stops people from blindly hitting your endpoint to approve requests, and also prevents replay attacks. The gateway is also rate-limited to 1 request/s to stop brute-force attacks.

Once the PIN is confirmed over SMS, the repository key is released from Azure Key Vault, and the zkv utility can then decrypt the locally stored ZFS filesystem encryption keys and begin mounting the filesystems. The filesystem encryption keys never leave your machine!

I've uploaded the server-side components as a Docker image named stewartadam/zkvgateway so it can be pulled and run easily. Enjoy!

Connecting Azure IoT Hub to Azure Functions and other Event Hub-compatible services

IoT Hub is a great, scalable way to manage your IoT devices. Did you know you can trigger an Azure Function when receiving a device-to-cloud message the same way you can for an Event Hub message?

Building the Event Hub connection string

Every IoT hub offers a read-only Event Hub-compatible endpoint that can be used with normal Event Hub consumers. This includes Azure Functions, which provides an Event Hub trigger.

You can view your IoT Hub's Event Hub-compatible settings by opening your IoT Hub resource in the Azure portal and navigating to the Endpoints blade, then clicking the Events (messages/events) endpoint. You will find the Event Hub-compatible name and the Event Hub-compatible endpoint (in the format sb://...) - keep note of these for later.

IoT Hub - Access Policy

Next, navigate to the IoT hub's Shared Access Policies blade. You will need to either use one of the existing policies or add your own, and the policy you choose must have the Service connect permission. Click any policy to view its associated primary key.

IoT Hub - Access Policy

Now, we can use these four pieces of information to construct an Event Hub connection string, which follows the format Endpoint=sb://EH_COMPATIBLE_ENDPOINT;EntityPath=EH_COMPATIBLE_NAME;SharedAccessKeyName=IOTHUB_POLICY_NAME;SharedAccessKey=IOTHUB_POLICY_KEY.
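The assembly is plain string concatenation; as a sketch, here it is in Python - the endpoint, name, policy and key values below are placeholders for the ones collected from your own IoT hub:

```python
def build_eventhub_connection_string(endpoint, name, policy_name, policy_key):
    # Combine the four pieces collected from the portal into the
    # Event Hub-compatible connection string format described above.
    return (
        f"Endpoint={endpoint};"
        f"EntityPath={name};"
        f"SharedAccessKeyName={policy_name};"
        f"SharedAccessKey={policy_key}"
    )

# Placeholder values - substitute your own
conn = build_eventhub_connection_string(
    "sb://example.servicebus.windows.net/", "my-iot-hub", "service", "KEY"
)
```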

Configuring Azure Functions

We will now set up an Azure Function with a trigger using the Event Hub-compatible information.

Open the Azure Function resource and switch to the Integrate tab. Add a new trigger and choose Event Hub, which will reveal the following settings page:

Azure Functions - Trigger configuration

Enter the Event Hub-compatible name collected from IoT Hub earlier under Event Hub name and then press new to enter the Event Hub connection string derived earlier.

Azure Functions - Configure connection string

Lastly, we need to add code under the Develop tab with a parameter that matches the Event parameter name field, so use the following:

using System;

public static void Run(string myEventHubMessage, TraceWriter log)
{
    log.Info($"C# Queue trigger function processed: {myEventHubMessage}");
}

Testing the connection

Under the Function's Develop tab, click the Logs button in the toolbar to open the execution log. Next, send a device-to-cloud message -- if you haven't set up a device, simple_sample_device.js from the IoT Hub Node.js SDK does nicely.

After sending your message, you should see that your Function triggered successfully.

What about in ARM templates?

This manual setup is fine for testing, but what about automation for when moving to production?

ARM templates provide a reference() function that resolves a resource's runtime state. It's perfect for obtaining the Event Hub-compatible name that was provisioned when the IoT hub was created:

    "parameters": {
        "iotHubName": {
            "defaultValue": "iothub2functions",
            "type": "String"
        }
    },
    "variables": {
        "iotHubApiVersion": "2016-02-03",
        "iotHubPolicyName": "service"
    },
    "outputs": {
        "eventHubConnectionString": {
            "type": "string",
            "value": "[concat('Endpoint=',reference(resourceId('Microsoft.Devices/IoTHubs',parameters('iotHubName'))).eventHubEndpoints.events.endpoint,';EntityPath=',reference(resourceId('Microsoft.Devices/IoTHubs',parameters('iotHubName'))).eventHubEndpoints.events.path,';SharedAccessKeyName=',variables('iotHubPolicyName'),';SharedAccessKey=',listKeys(resourceId('Microsoft.Devices/IotHubs/Iothubkeys', parameters('iotHubName'),variables('iotHubPolicyName')), variables('iotHubApiVersion')).primaryKey)]"
        }
    }

Getting started with the Skype for Business SDKs

The Skype for Business App SDK and Skype Web SDK are great ways to build the unified communication experience provided by the Skype for Business (previously Lync) server directly into web and mobile applications and deliver new interactive and collaborative experiences into your application.

If you're getting started with the Skype SDKs and/or Office 365 for the first time, the documentation available on The Skype Developer Platform is a great place to start, but there are a number of steps you'll need to work through before coding.

In this post, I'll detail how to get set up with Skype for Business Online (Office 365) using a developer tenant and get coding as quickly as possible. This is also a good time to note that, at this time, some features of the SDKs are in preview. More features are coming out with every release!

Good to know: compatibility & platform features

As of writing (Jan. 2017), both the App SDK and Web SDK support connecting to on-premise and online (via Office 365) deployments.

However, at this time there are some important distinctions between the capabilities of the SDKs on the two platforms and in compatibility with O365 vs on-premise for meeting join. These differences will be rounded out in future SDK releases (for details on the roadmap, see Andrew Bybee's Ignite 2016 talk).


The App SDK is available for iOS and Android apps, and supports the unauthenticated workflow. Meeting join, IM and audio-video communication are possible but done anonymously, using only a meeting link. There is no opportunity for a user to sign in, so tasks like contact management are not available.


Contrary to the App SDK, the Web SDK has better support for authenticated workflows. Contact management, 1:1 conversations and more are available from the Web SDK after the user has authenticated. Anonymous meeting join, where the end-user joins a conference without signing in, is available but only when connecting to on-premise Skype for Business deployments at this time.

It's also important to note that both Chrome and Firefox have announced the deprecation of NPAPI plugins, which prevents the use of the Skype for Business Web App Plug-In. As such, audio-video functionality will be limited in Chrome and Firefox at this time.

IE11 and Safari both work with the web plug-in, and Edge supports A/V calling without any plugins via its built-in ORTC stack.

Setup an Office developer tenant

In order to get started, you'll need to be an administrator on an Office 365 tenant and have 2-3 users licensed with Skype for Business. If you do not already have such an account, sign up to obtain a free Office developer account.

Once your account is activated, visit the portal, click the Admin tile, then add 2-3 users and assign them a developer license.


If you intend to build a mobile app, you are now ready to go - all you need is your app and a meeting link! Check out the meeting join samples for Android and iOS, and the App SDK documentation.

You can generate a meeting link using the Skype for Business integration with Outlook, the desktop clients, or the Lync Web Scheduler. To create them programmatically, use the UCWA API. Meeting links follow the format


Because the Web SDK supports authenticated workflows, it requires some additional setup in order to authenticate users against Azure AD.

Creating an Azure AD Application

Registering an application will allow the Web SDK to perform OAuth authentication against users in an Azure AD directory. Previously, managing Azure AD applications required delving into the classic portal (instructions here), but fortunately Azure AD management is now available (in preview) in the new portal:

  1. Go to the Azure portal
  2. Select Azure Active Directory > App registrations > Add
    Azure Portal - AAD
  3. Choose a human-readable name for your application and a unique sign-on URL. If unsure, use a placeholder URL - you can change it later once you move to production. Leave the type as Web app / API.

    Azure Portal - AAD - Create

  4. Select the Manifest button to open the manifest editor:

    Azure Portal - AAD - Manage

    Then ensure oauth2AllowImplicitFlow is set to true:

    Azure Portal - AAD - Manifest

  5. Next, navigate to the application's Properties blade and change Multi-tenanted to Yes. This will ensure that people outside of your Office tenant can use this application.

    Enabling multi-tenancy on the application

  6. Navigate to the Reply URLs blade and enter the URLs at which your application will be hosted. Note that while developing your application, adding http://localhost is convenient, but be aware that it means anyone who stands up their own site on http://localhost and knows your client ID can authenticate against your application. Tread carefully!

  7. Navigate to the Required Permissions blade and add delegated permissions for Skype for Business Online:

    Azure Portal - AAD - Permissions

    Azure Portal - AAD - Permissions 2

That's it - your application is now configured. However, before it can authenticate users with the Web SDK, you must consent to the application using an admin user from your Office tenant. Open an In-Private/Incognito browsing window and visit this URL, replacing both CLIENT_ID and REDIRECT_URI as configured above:

Now check out the Web SDK documentation, try running the Web SDK samples, or use the interactive Web SDK sample.