Last month at TechEd Microsoft announced the public preview of the Azure File Service (or Azure Files, or DFS-as-a-Service). This new feature allows us to access a Storage Account over the SMB protocol, which means you can work with your Storage Account as if it were a network share.
The storage team recently blogged about some basic guidance to use the File Service in Virtual Machines and in Web/Worker Roles. In this article we’ll be looking in detail at how you can use the File Service in your Web and Worker Roles.
Before we start
Keep in mind that this is still a preview feature, so in order to use the new File Service you’ll need to sign up for the preview on the Preview features page:
Once the preview feature is activated go ahead and create a new Storage Account. After the account is created you’ll see an additional endpoint showing up (*.file.core.windows.net):
Mounting Shares
The first thing you’ll need to do after your account has been created is to create a share (CloudFileShare). A share is a top-level entity in the File Service. You could for example have a reports share in which your application will be saving reports generated by users. In this share you’ll be able to create directories, upload files, … But the most important part is that you can mount a share as a mapped drive, allowing you to access the data from multiple Web/Worker Role Instances and Virtual Machines. If you want to use this from Web Sites or on-premises (which cannot use the SMB feature at the moment), you can do so through the REST API (or use the Storage SDK, PowerShell, …).
Let’s start by creating the reports share (you will need to update the Storage SDK to version 4.0.0 or higher):
var share = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"))
    .CreateCloudFileClient()
    .GetShareReference("reports");
share.CreateIfNotExists();
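For completeness, here is a minimal sketch of working with the share through the Storage SDK alone, the route you would take from Web Sites or on-premises where SMB isn’t available. The file name is just illustrative; it assumes the reports share created above:

```csharp
// Assumes Storage SDK 4.0+ and the "reports" share created above.
var client = CloudStorageAccount
    .Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"))
    .CreateCloudFileClient();
var share = client.GetShareReference("reports");
var rootDirectory = share.GetRootDirectoryReference();

// Upload a report over the REST API (no SMB involved).
var file = rootDirectory.GetFileReference("monthly-report.txt");
using (var stream = new MemoryStream(Encoding.UTF8.GetBytes("Report contents")))
{
    file.UploadFromStream(stream);
}

// List everything in the root of the share.
foreach (var item in rootDirectory.ListFilesAndDirectories())
{
    Trace.WriteLine(item.Uri);
}
```

This is exactly the same data you’ll see later through the mapped drive; REST and SMB are just two views on the same share.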
Now that our reports share exists we’ll be able to mount it as a mapped drive and upload/download files, create directories, … In order to do this from code we’ll use some P/Invoke magic to create or remove a mapped drive. You might be wondering why we’re not using a startup task for this, but things will get clear when we start to talk about the context of a mapped drive.
internal static class NetworkApi
{
    [DllImport("mpr.dll", EntryPoint = "WNetAddConnection2")]
    public static extern uint WNetAddConnection2(NETRESOURCE lpNetResource, string lpPassword, string lpUsername, uint dwFlags);

    [DllImport("mpr.dll", EntryPoint = "WNetCancelConnection2")]
    public static extern uint WNetCancelConnection2(string lpName, uint dwFlags, bool fForce);

    [DllImport("mpr.dll", CharSet = CharSet.Auto, SetLastError = true)]
    public static extern int WNetGetConnection([MarshalAs(UnmanagedType.LPTStr)] string localName, [MarshalAs(UnmanagedType.LPTStr)] StringBuilder remoteName, ref int length);
}
And here is some sample code that will allow you to mount and unmount the mapped drives:
/// <summary>
/// Create a mapped drive pointing to Azure Files.
/// </summary>
/// <param name="driveLetter">Drive letter to mount the share on (e.g. "Z:").</param>
/// <param name="filesPath">UNC path of the share.</param>
/// <param name="accountName">Name of the Storage Account.</param>
/// <param name="accountKey">Key of the Storage Account.</param>
/// <param name="force">Close an existing connection on the drive letter first.</param>
public static void Mount(string driveLetter, string filesPath, string accountName, string accountKey, bool force = true)
{
    if (String.IsNullOrEmpty(filesPath))
        throw new ArgumentException("The filesPath is required.", "filesPath");
    if (String.IsNullOrEmpty(accountName))
        throw new ArgumentException("The accountName is required.", "accountName");
    if (String.IsNullOrEmpty(accountKey))
        throw new ArgumentException("The accountKey is required.", "accountKey");

    driveLetter = ParseDriveLetter(driveLetter);

    // Define the new resource.
    var resource = new NETRESOURCE
    {
        dwScope = (ResourceScope)2,
        dwType = (ResourceType)1,
        dwDisplayType = (ResourceDisplayType)3,
        dwUsage = (ResourceUsage)1,
        lpRemoteName = filesPath,
        lpLocalName = driveLetter
    };

    // Close the connection if it exists.
    if (force)
    {
        NetworkApi.WNetCancelConnection2(driveLetter, 0, true);
    }

    // Create the connection.
    var result = NetworkApi.WNetAddConnection2(resource, accountKey, accountName, 0);
    if (result != 0)
    {
        throw new FilesMappedDriveException(String.Format(MountError, driveLetter, filesPath, (SYSTEM_ERROR)result), result);
    }
}

/// <summary>
/// Unmount a mapped drive.
/// </summary>
/// <param name="driveLetter">Drive letter of the mapped drive (e.g. "Z:").</param>
public static void Unmount(string driveLetter)
{
    driveLetter = ParseDriveLetter(driveLetter);

    // Unmount.
    var result = NetworkApi.WNetCancelConnection2(driveLetter, 0, true);
    if (result != 0)
    {
        throw new FilesMappedDriveException(String.Format(UnmountError, driveLetter, (SYSTEM_ERROR)result), result);
    }
}
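With these helpers in place, mounting the reports share comes down to a single call. This is just a usage sketch; the account key is a placeholder for your own:

```csharp
// Mount the reports share on Z: (placeholder credentials).
Mount("Z:", @"\\sandibox.file.core.windows.net\reports", "sandibox", "<account-key>");

// ... work with Z:\ through System.IO ...

// Clean up when we're done.
Unmount("Z:");
```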
The context is king
Here’s the challenging part: a mapped drive always exists in the context of a specific user. And on a single machine a user can have one or more contexts in which mapped drives are defined. A user could have more than one context when, for example, some tools run in limited mode and others in elevated mode (e.g. running cmd.exe normally versus right-clicking cmd.exe and choosing “Run as administrator”).
Because of this we’ll need to take a closer look at how this works when deploying Web Roles that use the File Service. When your Web Role is deployed, your code runs in two processes: WaIISHost.exe (which runs the code in your WebRole.cs when the instance starts) and w3wp.exe (which runs your web application).
When exactly will the context be the same for these two processes, and when will it be different? By default the context is the same, because for both your WebRole.cs and your web application the current user will be something like RD0003FF412670$ (a computer account).
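If you want to verify this yourself, a quick way is to log the current identity from both processes; when the names match, a drive mounted in one will be visible in the other. This is just a diagnostic sketch:

```csharp
// Log the identity of the current process.
// Run this both in WebRole.OnStart and in a controller action:
// if both log the same account (e.g. a computer account like RD0003FF412670$),
// a mapped drive created in one will be visible in the other.
using (var identity = System.Security.Principal.WindowsIdentity.GetCurrent())
{
    Trace.WriteLine(String.Format("Running as: {0}", identity.Name));
}
```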
So this means we’ll be able to mount the mapped drive in the OnStart method of our WebRole.cs:
public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Define the new resource.
        var resource = new NETRESOURCE
        {
            dwScope = (ResourceScope)2,
            dwType = (ResourceType)1,
            dwDisplayType = (ResourceDisplayType)3,
            dwUsage = (ResourceUsage)1,
            lpRemoteName = @"\\sandibox.file.core.windows.net\reports",
            lpLocalName = "Z:"
        };

        // Create the connection (the password is the account key, the username the account name).
        var result = NetworkApi.WNetAddConnection2(resource,
            "aaaaabbbbbF90SFqOSV5k336akFF/ay/Q4dKL8qHHv1EV6y3msgAalO8sBsOm4h4yebOwof0UMLQDs0R1wZ5rQ==", "sandibox", 0);
        if (result != 0)
        {
            throw new FilesMappedDriveException(String.Format(MountError, resource.lpLocalName, resource.lpRemoteName, (SYSTEM_ERROR)result), result);
        }

        return base.OnStart();
    }
}
After that, in our web application we’ll be able to access the share and work with it as if it were a local disk:
The following code, which uses the Directory class in System.IO, can now be used to list all files in the share in my Storage Account:
public ActionResult ShowFiles()
{
    var files = Directory.GetFiles("Z:\\");
    return View(files);
}
Can you imagine how powerful this is? You get the scalability, reliability, size, speed, … of Azure Storage while using the traditional System.IO API (FileStream, FileInfo, DirectoryInfo, …). This is also an advantage for applications that need to run both on-premises and in the cloud. Or consider legacy applications you want to migrate to Cloud Services: even if they (or a third-party component) depend heavily on the local file system, you’ll be able to easily lift and shift them to Cloud Services thanks to these new features.
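To make that concrete, here’s a small sketch that writes a generated report to the mounted share using nothing but System.IO (the drive letter and file name are just illustrative):

```csharp
// Save a report to the mounted share exactly as if it were a local disk.
var reportPath = Path.Combine(@"Z:\", "report-" + DateTime.UtcNow.Ticks + ".txt");
File.WriteAllText(reportPath, "Generated report contents");

// Any other instance that mounted the same share can now read the same file.
var contents = File.ReadAllText(reportPath);
Trace.WriteLine(contents);
```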
In the past the Cloud Drive (or XDrive) was also a solution for this problem, but the disadvantage was that a disk could only be mounted to one instance at a time, while the File Service lets you mount a mapped drive on multiple instances. The same limitation applies to Data Disks in Virtual Machines, so even there the File Service can be an advantage.
But let’s get back to the context part. There will be times that your WebRole.cs and your web application could run in different contexts:
- The executionContext of your role is set to elevated. In that case the code in your WebRole.cs will run under the SYSTEM account, while your web application will remain under the computer account.
- You change the identity of the application pool of your web application.
If we run the same code in the OnStart of our WebRole.cs, the following will happen:
Since our WebRole.cs runs under the SYSTEM context, the mapped drive will only be available in that context; our web application will not be able to see or access it. In that case, if we want the share to be available in our web application (e.g. to upload files, download files, list files, …), we’ll also need to mount the mapped drive when the web application starts.
In addition to mapping the drive in the OnStart of my WebRole.cs, this means I’ll need to set up the mapping when the application starts, for example in the Application_Start of my MvcApplication.
public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Define the new resource.
        var resource = new NETRESOURCE
        {
            dwScope = (ResourceScope)2,
            dwType = (ResourceType)1,
            dwDisplayType = (ResourceDisplayType)3,
            dwUsage = (ResourceUsage)1,
            lpRemoteName = @"\\sandibox.file.core.windows.net\reports",
            lpLocalName = "Z:"
        };

        // Create the connection (the password is the account key, the username the account name).
        var result = NetworkApi.WNetAddConnection2(resource,
            "aaaaabbbbbF90SFqOSV5k336akFF/ay/Q4dKL8qHHv1EV6y3msgAalO8sBsOm4h4yebOwof0UMLQDs0R1wZ5rQ==", "sandibox", 0);
        if (result != 0)
        {
            throw new FilesMappedDriveException(String.Format(MountError, resource.lpLocalName, resource.lpRemoteName, (SYSTEM_ERROR)result), result);
        }

        AreaRegistration.RegisterAllAreas();
        FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
        RouteConfig.RegisterRoutes(RouteTable.Routes);
        BundleConfig.RegisterBundles(BundleTable.Bundles);
    }
}
And that’s it. If you want your Worker Role to also use the share, you can add the same logic in the OnStart method of your Worker Role. Oh, and this will also work for your applications running in Virtual Machines (but not Web Sites).
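For a Worker Role the pattern is identical. Here’s a sketch of what OnStart could look like, using the Mount helper shown earlier (the account key is a placeholder):

```csharp
public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Mount the reports share so the Run loop can use System.IO against Z:\.
        // Placeholder credentials; a Worker Role has a single process,
        // so one mount in OnStart is enough.
        Mount("Z:", @"\\sandibox.file.core.windows.net\reports", "sandibox", "<account-key>");
        return base.OnStart();
    }
}
```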
Introducing RedDog.Storage
The code samples in this article all mention the P/Invoke signatures and the native calls to mpr.dll required to create mapped drives. To save time and reduce the friction of setting up the P/Invoke calls I created a repository on GitHub (https://github.com/sandrinodimattia/RedDog) that contains all the code you need to get started with the Azure File Service. The goal of this repository is to have somewhere I can put all my helpers, extension methods, tools, … This will make it easier for me to use my own tools in current and future Azure projects, but hopefully you’ll also find it useful for what you’re doing in Microsoft Azure.
PM> Install-Package RedDog.Storage
So the first thing I did was create a few helpers that make it easy to mount an Azure Files share as a mapped drive. These methods will make it easy to mount, unmount and list mapped drives without having to worry about any native P/Invoke calls. And remember that this will work for Web/Worker Roles but also for Virtual Machines.
using RedDog.Storage.Files;

namespace MyWebRole
{
    public class WebRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // Mount a drive.
            FilesMappedDrive.Mount("P:", @"\\sandibox.file.core.windows.net\reports", "sandibox",
                "aaaaabbbbbF90SFqOSV5k336akFF/ay/Q4dKL8qHHv1EV6y3msgAalO8sBsOm4h4yebOwof0UMLQDs0R1wZ5rQ==");

            // Unmount a drive.
            FilesMappedDrive.Unmount("P:");

            // Mount a drive for a CloudFileShare.
            CloudFileShare share = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"))
                .CreateCloudFileClient()
                .GetShareReference("reports");
            share.Mount("P:");

            // List drives mapped to an Azure Files share.
            foreach (var mappedDrive in FilesMappedDrive.GetMountedShares())
            {
                Trace.WriteLine(String.Format("{0} - {1}", mappedDrive.DriveLetter, mappedDrive.Path));
            }

            return base.OnStart();
        }
    }
}
Enjoy!