
Today Microsoft announced, on the team blog, new service tiers for Azure SQL Database, now in preview. SQL Database currently offers two types of service, Web and Business, with database sizes ranging from 100 MB to 150 GB. The new service tiers raise the maximum database size to 500 GB and add many useful features. The one I'm most interested in is point-in-time restore (a.k.a. PITR).

 

Create a SQL Database in the New Editions

First of all, we need to sign up for this preview feature. Go to the Azure account page, select the preview features tab, find "New Service Tiers for SQL Database" and click "try it now". Below is the page showing my subscription approved for this feature.

[Screenshot: preview feature approval page]

 

Once the preview feature is available we can go back to the portal and create a new SQL Database. In the popup dialog we will find three new editions next to the original "web" and "business": these are the new service tiers introduced in this update, "basic", "standard" and "premium".

[Screenshot: create database dialog with the new editions]

Below is the official description of these editions.

1. Basic: Designed for applications with a light transactional workload. Performance objectives for Basic provide a predictable hourly transaction rate.
2. Standard: Standard is the go-to option for getting started with cloud-designed business applications. It offers mid-level performance and business continuity features. Performance objectives for Standard deliver predictable per-minute transaction rates.
3. Premium: Designed for mission-critical databases, Premium offers the highest performance levels and access to advanced business continuity features. Performance objectives for Premium deliver predictable per-second transaction rates.

The database size limit for "basic" is 100 MB - 2 GB, "standard" goes up to 250 GB, while "premium" goes up to 500 GB. I believe 500 GB is big enough for most services.

Besides the database size, each tier offers a different performance level, measured in a unit named DTU (Database Throughput Unit). A DTU combines CPU, memory, physical reads, etc. into a single number, so the editions can be compared easily. More information about DTU is described here.

The performance levels available for each edition are listed below; we select one when creating a new database and can modify it while the database is running.

Edition     Performance Level   DTU   Max Database Size
Basic       Basic               1     2 GB
Standard    S1                  5     250 GB
Standard    S2                  25    250 GB
Premium     P1                  100   500 GB
Premium     P2                  200   500 GB
Premium     P3                  800   500 GB

Below I selected the "Premium" edition with the P1 performance level, which provides 100 times the throughput of the "Basic" edition.

[Screenshot: selecting Premium edition with P1]

Next I need to create a new SQL Database server, making sure the server supports the new editions. You will receive an error message if you attempt to create a database in one of the new editions on a server that only supports the old ones.

[Screenshot: error when the server doesn't support the new editions]

Then, after several minutes, our database (and the server) will be created and ready to use.

[Screenshot: the new database online]

As shown below, I created a new table named "account" and inserted some data.

[Screenshot: creating the "account" table and inserting data]
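The statements in that screenshot were roughly along these lines; here is a hedged ADO.NET sketch (the connection string placeholders and the two-column schema are my assumptions, not the exact script I ran):

    using System.Data.SqlClient;

    class CreateAccountTable
    {
        static void Main()
        {
            // placeholder connection string; copy the real one from the portal's "Connection Strings" page
            const string connectionString =
                "Server=tcp:{YOUR-SERVER}.database.windows.net,1433;Database={YOUR-DB};" +
                "User ID={YOUR-USER};Password={YOUR-PASSWORD};Encrypt=True;";

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // create the "account" table and insert a few sample rows
                var command = connection.CreateCommand();
                command.CommandText =
                    @"CREATE TABLE account (id INT PRIMARY KEY, name NVARCHAR(100));
                      INSERT INTO account (id, name) VALUES (1, 'shaun'), (2, 'ziyan');";
                command.ExecuteNonQuery();
            }
        }
    }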

 

Restore a Database to a Previous Point in Time

With the new editions, SQL Database provides an automatic backup feature: our database is backed up by Azure, and we can restore it in case the data gets broken, for example if we forget the "WHERE" clause when deleting some records.

The backup runs automatically in the background, without any manual effort, and stores the database in Azure Storage with geo-replication. The "Basic" edition provides a daily backup and keeps backups for the past 24 hours. Both the "Standard" and "Premium" editions provide point-in-time backup; the only difference is that "Standard" keeps backups for the past 7 days while "Premium" keeps them for 35 days.

 

Now let's go back to the portal; on the SQL Database list page we will find a button named "Restore".

[Screenshot: the "Restore" button]

When I click it, a dialog appears where I can specify the point in time to which I'd like to restore this database. Below, I am restoring my database to the earliest point.

[Screenshot: choosing the restore point]

Then a new SQL database appears in the list: the one being restored. This is another feature, called "side-by-side restore", which means SQL Database restores into a new database instead of overwriting the original. With this feature we can restore one database to multiple points in time and compare the data between them.

[Screenshot: the restored database appearing in the list]

After several minutes the restore operation finishes, and the result is a full, dedicated SQL Database that we can use.

[Screenshot: the restored database online]

For example, we can run a query against it; as you can see, since I restored to the earliest point in time, the "account" table I created is not there.

[Screenshot: querying the restored database, no "account" table]

 

Restore a Deleted Database

Besides point-in-time restore, SQL Database also lets us restore a deleted database. For example, I deleted my two databases from the portal.

[Screenshot: deleting the databases]

Then if we open the "Deleted Databases" tab we will find these two databases listed.

[Screenshot: the "Deleted Databases" tab]

We can select the database we want to restore, click the "Restore" button and specify the desired point in time, and SQL Database will restore it for us.

[Screenshot: restoring a deleted database]

Below is a screenshot of a query I ran on the database restored from the deleted list; as you can see, the table and data are all back.

[Screenshot: the restored table and data]

If we want to remove all databases and their backups, we need to delete the SQL Database server.

 

Summary

In this post I described the new service tiers for Azure SQL Database, especially the automatic backup feature.

We need to be very careful when dealing with data, especially in a production environment. I recommend backing up all data before performing any action, so that we can recover from any mistake. If we forget to back up, we are in big trouble when we delete or drop the wrong thing.

The new SQL Database editions provide point-in-time restore, which protects us: backups are taken automatically, so that we can restore the database if anything goes wrong.

 

Hope this helps,

Shaun



Over the last several weeks I have been writing unit tests against a legacy class. It was designed as a static class and had no tests at all. When I started adding tests I found that, since it's a static class, state left over from one test method could make subsequent test methods fail. So I needed to reset all of its state (public and private) before each test method.

 

If the state is stored in public properties of the static class, it is very easy to reset: just set the property to null in a method marked with the TestInitialize attribute. Resetting state stored in private fields is almost as easy: we can use .NET reflection to overwrite them, with code like the following.

    // locate the private static field, then overwrite its value
    // (pass null as the instance argument for static members)
    var field = typeof(MyStaticClass).GetField(
        "_privateVariant",
        System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Static);
    field.SetValue(null, new InstanceOfThisPrivateVar());
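Putting both together, a reset method might look like the sketch below (MyStaticClass, SomePublicProperty and _privateVariant are stand-in names for the real legacy members):

    [TestInitialize]
    public void ResetStaticState()
    {
        // public state can be reset directly
        MyStaticClass.SomePublicProperty = null;   // hypothetical property

        // private static state is reset through reflection
        var field = typeof(MyStaticClass).GetField(
            "_privateVariant",
            System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Static);
        field.SetValue(null, new InstanceOfThisPrivateVar());
    }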

But if we need to verify the behavior of static event handlers, things get a little more difficult. For example, in the first test I want to check that an event is raised with the proper event argument, so I add an event handler with some assertions inside.

    MyStaticClass.AStaticEvent += (sender, e) =>
    {
        Assert.IsNotNull(e);
        Assert.AreEqual("Expected value", e.SomeValue);
    };

In the next test I'd like to verify that, in another scenario, the event argument equals a different value.

    MyStaticClass.AStaticEvent += (sender, e) =>
    {
        Assert.IsNotNull(e);
        Assert.AreEqual("Another expected value", e.SomeValue);
    };

If I run these two tests separately, each of them passes. But if I run them in one batch, one of them fails. This is because, when the batch runs, the first test adds its delegate to the event and performs its assertions.

[Diagram: the first test's handler attached to the static event]

When the second test executes, the event handler defined in the first test is still attached to the event, and the second handler is added alongside it. When the event fires, both handlers are triggered, and the first one fails.

[Diagram: both handlers attached; the first test's assertion fails]

The solution is to remove all handlers from this event, but that is not easy unless we store every handler somewhere ourselves. After about 2 days of research I found a way to remove the handlers of an event through reflection. The code is very simple.

    public static class EventHandlerExtensions
    {
        // search everything: public and non-public, static and instance
        // (the original snippet referenced this constant without defining it)
        private const BindingFlags AllBindingFlags =
            BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Static | BindingFlags.Instance;

        public static void RemoveAllEventHandlers(this Type self)
        {
            foreach (var ei in self.GetEvents(AllBindingFlags))
            {
                // a compiler-generated event is backed by a private field with the same name
                var declaringType = ei.DeclaringType;
                var field = declaringType.GetField(ei.Name, AllBindingFlags);
                if (field != null)
                {
                    var del = field.GetValue(null) as Delegate;
                    if (del != null)
                    {
                        // detach every delegate in the invocation list
                        foreach (var sub in del.GetInvocationList())
                        {
                            ei.RemoveEventHandler(null, sub);
                        }
                    }
                }
            }
        }
    }

This extension method removes all delegates registered on every static event of the type.

    typeof(MyStaticClass).RemoveAllEventHandlers();

If you like, you can add a condition to filter events by name and remove handlers only from particular events, as sketched below.
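For example, a sketch of such a filtered variant, living in the same static class (the predicate parameter is my own addition, not part of the original extension method):

    public static void RemoveEventHandlers(this Type self, Func<string, bool> eventNameFilter)
    {
        foreach (var ei in self.GetEvents(AllBindingFlags))
        {
            // only touch the events the caller asked for
            if (!eventNameFilter(ei.Name))
            {
                continue;
            }

            var field = ei.DeclaringType.GetField(ei.Name, AllBindingFlags);
            var del = field == null ? null : field.GetValue(null) as Delegate;
            if (del != null)
            {
                foreach (var sub in del.GetInvocationList())
                {
                    ei.RemoveEventHandler(null, sub);
                }
            }
        }
    }

Usage would then be, for instance: typeof(MyStaticClass).RemoveEventHandlers(name => name == "AStaticEvent");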

 

Hope this helps,

Shaun



In my project there is a platform component responsible for controlling and monitoring all other components, and it needs to load every other component's metadata when it starts. We define metadata in each component assembly with attributes, which are retrieved through .NET reflection. At the beginning of this project, about 3 years ago, there were only 5 - 6 components and it took about 10 - 20 seconds to reflect over them. But more and more components have been introduced, and currently more than 30 components are installed in our system. So we hit a performance problem at system start-up: scanning all those assemblies to retrieve the metadata takes 60 - 80 seconds on my development workstation, and several minutes on some of our customers' and testers' machines with weaker hardware (slower CPUs and less memory).

After reviewing the code I found no problem in our implementation. The main problem is that we use System.Reflection to retrieve the metadata, and the performance of System.Reflection is not satisfying. After some investigation and research I found an alternative: using Mono.Cecil to reflect over .NET assemblies.

We thought the best solution would be to move all metadata from assembly attributes into a database or a configuration file, so we could load it without any reflection at all. But since we also need to maintain backward compatibility, we must allow all existing components to work with the new platform component, so we have to keep using attributes to store metadata.

 

We can find Mono.Cecil here. We can also add Mono.Cecil to our .NET project references through NuGet package management in Visual Studio.

[Screenshot: adding the Mono.Cecil NuGet package]

Below are the Mono.Cecil operations we use to replace their System.Reflection counterparts.

System.Reflection                                    Mono.Cecil
Assembly.LoadFrom()                                  AssemblyDefinition.ReadAssembly()
CustomAttributeData.GetCustomAttributes(Assembly)    AssemblyDefinition.CustomAttributes
Assembly.GetTypes()                                  AssemblyDefinition.Modules.SelectMany(m => m.GetTypes())
Type.GetInterfaces()                                 TypeDefinition.Interfaces
CustomAttributeData.GetCustomAttributes(Type)        TypeDefinition.CustomAttributes
CustomAttributeData.Constructor.GetParameters()      CustomAttribute.Constructor.Resolve().Parameters

Most of them are easy to learn and use. We load an assembly through the `AssemblyDefinition` class and its static `ReadAssembly` method, passing the file path. Unlike System.Reflection, it does NOT lock the assembly file. This means that with Mono.Cecil we don't need to create a new AppDomain to load the assembly and unload it at the end.

With the `AssemblyDefinition` we can retrieve attributes and types. Then we can use the `TypeDefinition` and `CustomAttribute` instances to retrieve a type's name, the interfaces it implements and the values defined in its attributes. A minimal sketch follows.
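Here is a small sketch of that flow, assuming a hypothetical MyComponent.dll sitting next to the executable:

    using System;
    using System.Linq;
    using Mono.Cecil;

    class CecilQuickStart
    {
        static void Main()
        {
            // read the assembly metadata; the file is not loaded into the AppDomain and is not locked
            var assembly = AssemblyDefinition.ReadAssembly(@".\MyComponent.dll");

            foreach (var type in assembly.Modules.SelectMany(m => m.GetTypes()))
            {
                Console.WriteLine(type.FullName);

                // attribute names are available without resolving the defining assemblies
                foreach (var attribute in type.CustomAttributes)
                {
                    Console.WriteLine("  [{0}]", attribute.AttributeType.Name);
                }
            }
        }
    }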

One thing needs to be highlighted here: by default Mono.Cecil reflects only limited information from the target assembly. For example, let's say I have an assembly `MyClass.dll` containing a class that has a custom attribute attached and implements an interface, both of which are defined in separate assemblies.

[Diagram: MyClass.dll referencing an attribute and an interface defined in other assemblies]

When we invoke `AssemblyDefinition.ReadAssembly` on `MyClass.dll`, Mono.Cecil loads only this assembly, and does NOT lock it.

[Diagram: only MyClass.dll loaded]

Now we can retrieve the types defined in this assembly along with the attributes and interfaces associated with them. But since the attribute and the interface are defined in other assemblies, `CustomAttributes` and `Interfaces` return only the metadata that can be found in this assembly. For example, we can retrieve the constructor of `MyAttrib` as a `MethodReference`, which contains the argument values, but we cannot retrieve the parameter names, since the definition of this attribute lives in another assembly. To retrieve more information we need to invoke the `Resolve` method, which makes Mono.Cecil find the defining assembly of the type and load more information. We can also control how Mono.Cecil locates the relevant assemblies, i.e. which paths it should search; I will show that code later.

[Diagram: references resolved into their defining assemblies]
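For example, following the mapping table above, recovering an attribute's constructor parameter names requires a Resolve call:

    // attribute is a Mono.Cecil.CustomAttribute taken from type.CustomAttributes
    var ctorDefinition = attribute.Constructor.Resolve();   // loads the defining assembly
    foreach (var parameter in ctorDefinition.Parameters)
    {
        Console.WriteLine(parameter.Name);   // parameter names live in the definition, not the reference
    }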

Similarly, if we want more information about an interface our class implements, we need to invoke `TypeReference.Resolve()`.

 

Below is the code I implemented in my project that uses Mono.Cecil to retrieve metadata from assemblies. In order to decouple it from my business logic, I created a set of interfaces representing all the operations I need for reflection.

The `IReflector` interface is the main entry point, letting the user load an assembly.

    public interface IReflector
    {
        IAssemblyReflector LoadAssembly(string path);
    }

The `IAssemblyReflector` interface isolates the operations that interact with an assembly, such as retrieving attributes, types, the name, the file path, etc.

    public interface IAssemblyReflector
    {
        IEnumerable<IAttributeReflector> GetAttributes<T>() where T : Attribute;

        IEnumerable<ITypeReflector> GetTypes();

        string Location { get; }

        string FileName { get; }

        string FullName { get; }
    }

With the `ITypeReflector` interface we can retrieve the attributes attached to a type as well as the interfaces it implements.

    public interface ITypeReflector
    {
        IEnumerable<ITypeReflector> GetInterfaces();

        IEnumerable<IAttributeReflector> GetAttributes<T>() where T : Attribute;

        string FullName { get; }

        string Name { get; }
    }

With `IAttributeReflector` we can get an attribute's constructor argument values as well as its named property values.

    public interface IAttributeReflector
    {
        IDictionary<string, string> Values { get; }
    }

 

Below is the Mono.Cecil implementation of these interfaces. Notice that I invoke `Resolve` to load the defining assembly when retrieving interfaces and attribute values, and I tell Mono.Cecil to look up relevant assemblies in the same folder as the assembly being loaded.

    public class MonoReflector : IReflector
    {
        public IAssemblyReflector LoadAssembly(string path)
        {
            // look up referenced assemblies in the same folder as the assembly being loaded
            var resolver = new DefaultAssemblyResolver();
            resolver.AddSearchDirectory(Path.GetDirectoryName(path));
            var reader = new ReaderParameters()
            {
                AssemblyResolver = resolver
            };

            var assembly = AssemblyDefinition.ReadAssembly(path, reader);
            return new MonoAssemblyReflector(assembly);
        }
    }

    public class MonoAssemblyReflector : IAssemblyReflector
    {
        private AssemblyDefinition _assembly;

        public MonoAssemblyReflector(AssemblyDefinition assembly)
        {
            _assembly = assembly;
        }

        public IEnumerable<IAttributeReflector> GetAttributes<T>() where T : Attribute
        {
            if (_assembly.HasCustomAttributes)
            {
                var expectedTypeName = typeof(T).Name;
                return _assembly.CustomAttributes
                    .Where(a => a.AttributeType.Name == expectedTypeName)
                    .Select(a => new MonoAttributeReflector(a))
                    .ToList();
            }
            else
            {
                return new IAttributeReflector[] { };
            }
        }

        public IEnumerable<ITypeReflector> GetTypes()
        {
            var result = new List<ITypeReflector>();
            var modules = _assembly.Modules;
            foreach (var module in modules)
            {
                var types = module.GetTypes();
                foreach (var type in types)
                {
                    result.Add(new MonoTypeReflector(type));
                }
            }
            return result;
        }

        public string Location
        {
            get
            {
                return _assembly.MainModule.FullyQualifiedName;
            }
        }

        public string FileName
        {
            get
            {
                return _assembly.MainModule.Name;
            }
        }

        public string FullName
        {
            get
            {
                return _assembly.FullName;
            }
        }
    }

    public class MonoTypeReflector : ITypeReflector
    {
        private TypeDefinition _type;

        public MonoTypeReflector(TypeDefinition type)
        {
            _type = type;
        }

        public IEnumerable<ITypeReflector> GetInterfaces()
        {
            // Resolve() loads the full definition of each interface from its defining assembly
            return _type.Interfaces.Select(i => new MonoTypeReflector(i.Resolve()));
        }

        public IEnumerable<IAttributeReflector> GetAttributes<T>() where T : Attribute
        {
            if (_type.HasCustomAttributes)
            {
                var expectedTypeName = typeof(T).Name;
                return _type.CustomAttributes
                    .Where(a => a.AttributeType.Name == expectedTypeName)
                    .Select(a => new MonoAttributeReflector(a))
                    .ToList();
            }
            else
            {
                return new IAttributeReflector[] { };
            }
        }

        public string FullName
        {
            get
            {
                return _type.FullName;
            }
        }

        public string Name
        {
            get
            {
                return _type.Name;
            }
        }
    }

    public class MonoAttributeReflector : IAttributeReflector
    {
        private CustomAttribute _attribute;
        private IDictionary<string, string> _values;

        public MonoAttributeReflector(CustomAttribute attribute)
        {
            _attribute = attribute;
        }

        public IDictionary<string, string> Values
        {
            get
            {
                if (_values == null)
                {
                    _values = new Dictionary<string, string>();
                    // resolve the constructor to recover the parameter names,
                    // then pair them with the argument values
                    var parameterNames = _attribute.Constructor.Resolve().Parameters.Select(p => p.Name).ToList();
                    var argumentValues = _attribute.ConstructorArguments.Select(a => a.Value.ToString()).ToList();
                    for (var i = 0; i < parameterNames.Count; i++)
                    {
                        _values.Add(parameterNames[i], argumentValues[i]);
                    }
                    foreach (var prop in _attribute.Properties)
                    {
                        _values.Add(prop.Name, prop.Argument.Value.ToString());
                    }
                }
                return _values;
            }
        }
    }
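A short usage sketch tying the pieces together (the component path and ComponentMetadataAttribute are hypothetical names, not the real ones in my product):

    var reflector = new MonoReflector();
    var assembly = reflector.LoadAssembly(@".\Components\MyComponent.dll");

    // walk every type and print the metadata attributes attached to it
    foreach (var type in assembly.GetTypes())
    {
        foreach (var attribute in type.GetAttributes<ComponentMetadataAttribute>())
        {
            Console.WriteLine("{0}: {1}",
                type.FullName,
                string.Join("; ", attribute.Values.Select(v => v.Key + "=" + v.Value)));
        }
    }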

Below is the equivalent implementation based on System.Reflection.

    public class DotNetAssemblyReflector : IAssemblyReflector
    {
        private Assembly _assembly;

        public DotNetAssemblyReflector(Assembly assembly)
        {
            _assembly = assembly;
        }

        public virtual IEnumerable<IAttributeReflector> GetAttributes<T>() where T : Attribute
        {
            var returnValue = new List<CustomAttributeData>();
            var expectedAttributeType = typeof(T);

            foreach (CustomAttributeData customAttributeData in CustomAttributeData.GetCustomAttributes(_assembly))
            {
                if (customAttributeData.Constructor.DeclaringType.Name == expectedAttributeType.Name)
                {
                    returnValue.Add(customAttributeData);
                }
            }

            return returnValue.Select(x => new DotNetAttributeReflector(x)).ToList();
        }

        public string GetVersion()
        {
            string version = string.Empty;
            var assemblyFileVersionCustomAttributeData = GetAttributes<AssemblyFileVersionAttribute>();
            if (assemblyFileVersionCustomAttributeData.Count() == 1)
            {
                try
                {
                    var assemblyFileVersion = assemblyFileVersionCustomAttributeData.First().Values;
                    version = assemblyFileVersion["version"];
                }
                catch (FormatException)
                {
                    // the attribute arguments could not be read; leave the version empty
                }
            }
            return version;
        }

        public IEnumerable<ITypeReflector> GetTypes()
        {
            return _assembly.GetTypes().Select(t => new DotNetTypeReflector(t)).ToList();
        }

        public string Location
        {
            get
            {
                return _assembly.Location;
            }
        }

        public string FileName
        {
            get
            {
                return _assembly.ManifestModule.Name;
            }
        }

        public string FullName
        {
            get
            {
                return _assembly.FullName;
            }
        }
    }

    public class DotNetTypeReflector : ITypeReflector
    {
        private Type _type;

        public DotNetTypeReflector(Type type)
        {
            _type = type;
        }

        public IEnumerable<ITypeReflector> GetInterfaces()
        {
            return _type.GetInterfaces().Select(i => new DotNetTypeReflector(i)).ToList();
        }

        public IEnumerable<IAttributeReflector> GetAttributes<T>() where T : Attribute
        {
            var returnValue = new List<CustomAttributeData>();
            var expectedAttributeType = typeof(T);

            foreach (CustomAttributeData customAttributeData in CustomAttributeData.GetCustomAttributes(_type))
            {
                if (customAttributeData.Constructor.DeclaringType.Name == expectedAttributeType.Name)
                {
                    returnValue.Add(customAttributeData);
                }
            }

            return returnValue.Select(a => new DotNetAttributeReflector(a)).ToList();
        }

        public string FullName
        {
            get
            {
                return _type.FullName;
            }
        }

        public string Name
        {
            get
            {
                return _type.Name;
            }
        }
    }

    public class DotNetAttributeReflector : IAttributeReflector
    {
        private CustomAttributeData _attribute;
        private IDictionary<string, string> _values;

        public DotNetAttributeReflector(CustomAttributeData attribute)
        {
            _attribute = attribute;
        }

        public IDictionary<string, string> Values
        {
            get
            {
                if (_values == null)
                {
                    var returnValue = new Dictionary<string, string>();
                    try
                    {
                        ParameterInfo[] constructorParameters = _attribute.Constructor.GetParameters();
                        for (int i = 0; i < constructorParameters.Length; i++)
                        {
                            returnValue.Add(constructorParameters[i].Name, _attribute.ConstructorArguments[i].Value.ToString());
                        }
                    }
                    catch (KeyNotFoundException ex)
                    {
                        // constructor parameters and arguments do not match up;
                        // surface it as a FormatException for our purposes
                        throw new FormatException("Constructor parameters and arguments do not match.", ex);
                    }
                    foreach (CustomAttributeNamedArgument argument in _attribute.NamedArguments)
                    {
                        returnValue.Add(argument.MemberInfo.Name, argument.TypedValue.Value.ToString());
                    }
                    _values = returnValue;
                }
                return _values;
            }
        }
    }

    public class DotNetReflector : IReflector
    {
        public IAssemblyReflector LoadAssembly(string path)
        {
            return new DotNetAssemblyReflector(Assembly.LoadFrom(path));
        }
    }

 

Finally, let's look at the performance results. I used Mono.Cecil and System.Reflection to retrieve the metadata from all 38 components in my system. On both .NET 4.0 and 4.5.1, Mono.Cecil took about 4 seconds while System.Reflection took about 21 seconds.

[Chart: metadata load time, Mono.Cecil vs System.Reflection]
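The numbers above came from simply timing both implementations over the component folder; a rough sketch of such a measurement (the folder path and the metadata walk are simplified stand-ins for the real product code):

    using System;
    using System.Diagnostics;
    using System.IO;

    static class ReflectorBenchmark
    {
        // time how long a reflector takes to walk the metadata of every assembly in a folder
        static TimeSpan Measure(IReflector reflector, string componentFolder)
        {
            var watch = Stopwatch.StartNew();
            foreach (var path in Directory.GetFiles(componentFolder, "*.dll"))
            {
                var assembly = reflector.LoadAssembly(path);
                foreach (var type in assembly.GetTypes())
                {
                    foreach (var itf in type.GetInterfaces())
                    {
                        // touch the data so the work isn't optimized away
                        GC.KeepAlive(itf.FullName);
                    }
                }
            }
            watch.Stop();
            return watch.Elapsed;
        }

        static void Main()
        {
            Console.WriteLine("Mono.Cecil: {0}", Measure(new MonoReflector(), @".\Components"));
            Console.WriteLine("System.Reflection: {0}", Measure(new DotNetReflector(), @".\Components"));
        }
    }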

 

Hope this helps,

Shaun



On 16th Jan, the "Gu" announced several features in Windows Azure in his blog, one of them is "Website Staging Support ". With this feature we can deploy our application to the staging slot of our Windows Azure Website (a.k.a. WAWS) for test purpose. And if everything is fine we can simply "SWAP" to the production slot without any down time and within few seconds. If you have been working with Windows Azure Cloud Service, this feature is very similar.

But since this feature is still in preview, there are some restrictions. If you have specified a deployment source for your WAWS and then enable the staging feature, you might encounter a problem; in this post I'd like to explain it and the workaround.

 

Common Deployment Pattern before the Staging Feature

Before the staging feature was available, if we wanted to continuously deploy the beta version of our application to WAWS, we might use the branching feature of our source code provider. For example, with GitHub, I can create two branches in my repository, master and beta, and then create two websites, one connected to master and the other to beta.

[Diagram: two websites connected to the master and beta branches]

In this way I can deploy the new version to the beta branch and keep my beta WAWS deployed automatically for testing. If the tests pass, I merge my changes from the beta branch to master. This triggers a deployment of the master WAWS, so the new version becomes available on my production site.

[Diagram: merging beta into master triggers the production deployment]

 

Staging Slot and Swap

With the WAWS staging feature we no longer need to create two websites and two branches. First, let's enable the staging feature on my WAWS: on the general tab, click the "Enable staged publishing" button.

[Screenshot: the "Enable staged publishing" button]

You have to upgrade your WAWS to standard mode to enable this feature. The free and shared modes don't support staging right now.

Then in the WAWS list we will find another item appear under my original WAWS, with "staging" in its name and a URL ending in "-staging".

[Screenshot: the staging slot in the website list]

Then let's click into the staging item and set up deployment from my GitHub repository.

[Screenshot: configuring GitHub deployment on the staging slot]

Once I push any code change, the staging slot will pull the code and deploy it to the staging website. Below is an MVC website deployed to my staging slot.

[Screenshot: deployment history on the staging slot]

When I visited the staging website I found the application had been deployed. To make the rest of the discussion clear, I display the application version in the site content.

[Screenshot: version 1 running on the staging URL]

If we open the website's production URL, there is nothing there yet, since our code has not been deployed to that slot.

[Screenshot: empty production site]

Now I can click the "SWAP" button in the Azure portal. This switches my staging and production slots, very similar to the same feature in Cloud Services.

[Screenshot: the SWAP button]

After that, the website appears in my production slot while my staging slot turns empty.

[Screenshot: version 1 now on production, staging empty]

 

Problem with the Next Commit

Now let's change our code and commit again. Since I had enabled GitHub deployment on my staging slot, in theory it should be deployed to staging automatically. But after waiting several minutes, no deployment appeared.

[Screenshot: no new deployment on the staging slot]

And on the production slot, since we never enabled a deployment source there, there is no "deployments" tab to click at all.

[Screenshot: production slot without a deployments tab]

So this is the problem when using GitHub deployment together with the staging feature.

I didn't test other deployment sources such as TFS or local Git, but I think they should have the same problem, since all of them use the same deployment mechanism.

 

(My Guess) The Reason, and How Staging Is Implemented

From the behavior of swapping slots, I guess this is because the deployment repositories on WAWS are separate between the production and staging slots, and the swap operation just changes the DNS setting.

So when we enabled the staging feature on my WAWS and integrated it with GitHub, my staging slot contained the deployment repository and was registered as "integrated". When I committed code, the staging slot got updated successfully.

[Diagram: staging slot holding the repository and registered as integrated]

When I clicked "Swap" the DNS setting was changed.

image

Now if I commit a code change, the slot registered as integrated is still the staging one, but it is now linked with my production environment, so the source code cannot be pulled: there is no repository available there. My production slot is linked with the environment that contains the repository, but it is not registered as integrated. I think this is why my next code commit was not deployed after the swap.

[Diagram: integration registration and repository now on opposite slots]

If I swap back and commit another version, my staging slot deploys automatically again.

[Diagram: after swapping back, staging deploys again]

And the website is updated as well.

[Screenshot: the updated version on the website]

But this is NOT what I wanted. I want the staging slot to be deployed automatically, while the production slot should be updated only by swapping.

 

Workaround by David Ebbo

This problem was reported on the Windows Azure MVP mailing list. David Ebbo from Microsoft confirmed that this is an issue with the WAWS staging feature and will be solved in a future release. For now, he provided a workaround that keeps only the staging slot deploying from GitHub.

Let's create a new WAWS and enable the staging feature. After that, we need to enable GitHub deployment on both the production and the staging slot.

I also recommend creating a new, clean GitHub repository before you set up the deployment, because once deployment is set up, WAWS will fetch your code and run a deployment. If your repository already contains code, both staging and production will be deployed, which will cause problems later.

Next, we need to go to GitHub and remove the webhook pointing to the WAWS production slot.

[Screenshot: repository settings in GitHub]

Selected "Service Hooks" and "Webhooks URLs" there should be two items. We should find "__staging" string in one of them which is the hook to our WAWS staging slot, and the other one is the hook to production slot. What we should do is to remove the hook WITHOUT "__staging" and click "Update settings."

image

After this step, our GitHub repository hooks should look like this.

[Screenshot: only the staging webhook remaining]

Now let's create a new MVC website and push the code to GitHub; we will find that only the staging slot gets deployed.

[Screenshot: version 1 deployed to staging only]

Then swap the staging and production slots.

[Screenshot: version 1 swapped to production]

Now let's commit some changes and push to GitHub. The staging slot gets deployed with version 2 while the production slot is still on version 1.

[Screenshot: staging on version 2, production on version 1]

Now swap again to move version 2 to the production slot.

[Screenshot: version 2 on production]

Then update the code to version 3 and push; the staging slot gets deployed again successfully.

[Screenshot: version 3 on staging]

 

(My Guess) How the Workaround Works

The workaround above utilizes GitHub's webhook feature. First we enabled deployment on both the staging and production slots, so that on the Windows Azure side both of them can connect to GitHub to deploy our application.

[Diagram: both slots connected to GitHub]

Once we removed the production hook in GitHub, that slot can no longer be deployed automatically, because GitHub never notifies it.

[Diagram: production hook removed]

Then when we pushed a new version, only the staging slot got the hook notification and began a deployment.

[Diagram: only staging notified on push]

When we swapped the slots, the DNS was changed, but (I guess) the hook still points at the staging URL. So whichever slot sits behind the staging URL can still receive the GitHub hook notification.

[Diagram: hook still bound to the staging URL after the swap]

Now if we push a new version to GitHub, it notifies the hook at the staging URL, and the staging URL is now connected to the other slot (previously the production slot). So staging can be deployed again.

[Diagram: the other slot now deploying behind the staging URL]

And when we swap, the DNS switches again, so when we push version 3 the new version is once more deployed to staging.

[Diagram: version 3 deployed to staging after another swap]

Back in the Azure portal, the deployments are shown on our WAWS. On the staging slot, the deployments should be the initial commit, v1 and v3.

[Screenshot: staging deployment history showing initial commit, v1 and v3]

The production slot shows the initial commit and v2, just as I guessed above.

[Screenshot: production deployment history showing initial commit and v2]

 

Summary

To make testing in the cloud environment easier, Microsoft introduced the staging feature in Windows Azure Website. With this new feature we can deploy our application to the staging slot, test it, then swap it into the production slot within a few seconds.

If we want only the staging slot to deploy continuously from our source control, we need this workaround for now. But I think the problem will be fixed in the near future, perhaps when the feature reaches GA.

I only covered the GitHub deployment case, but I think the same approach can be applied to other source control services such as Visual Studio Online, CodePlex, etc.

 

Hope this helps,

Shaun



On Nov 5th, Microsoft announced a new service named Windows Azure Scheduler, which allows us to invoke actions (such as calling HTTP/S endpoints or posting a message to a storage queue) on any schedule. Then on Dec 13th, Scott Guthrie published a blog post introducing this new feature, demonstrating how to create a job that polls a blog every 5 minutes through the Windows Azure portal. At the end of his article he mentioned that this feature can also be used through a .NET API package.

I think using Windows Azure Scheduler from code will be more common than using it from the portal, since we normally need to define, update, enable or disable jobs from within our application, rather than asking end users to manage jobs in the Azure portal directly. So in this post I'd like to introduce how to use this feature from .NET code.

 

Windows Azure Scheduler Management Library on NuGet

Currently the Windows Azure Scheduler SDK is still in the preview phase on NuGet, so if we search for the package in the Visual Studio Manage NuGet Packages dialog we will not be able to find it.

Hence we must open the Package Manager Console window in Visual Studio and install it from the command line with the preview flag specified. Let's create a new console application, open the Package Manager Console and type the command shown below. Do not forget the trailing "-Pre" argument.

[Screenshot: installing the Scheduler package with -Pre from the Package Manager Console]

 

Create Cloud Service

If you have read Scott's blog you will know that in Windows Azure Scheduler each job is defined inside a job collection. But you might not know that each job collection must be located under a "Cloud Service". And, more confusingly, this "Cloud Service" is a totally different entity from the "Cloud Service" we are familiar with, which hosts web roles, worker roles and virtual machines.

In this case the "Cloud Service" is a container for resources intended to be more utilized in future versions of the service management API. And there are some resource providers that implement this new internal API such as Windows Azure store, HDInsight and some others. Windows Azure Scheduler is another resource that will be located in this "Cloud Service" so before we created job collection and job, we must firstly create a new Cloud Service, define the region where our scheduler will be hosted.

 

We will create the new cloud service through the Windows Azure Scheduler SDK. First of all, we need valid Windows Azure credentials with a management certificate configured properly. The code below retrieves the management certificate from the local machine and initializes a certificate cloud credential instance by its thumbprint.

Please refer to this document on how to create and upload a Windows Azure management certificate, and make sure you have installed this certificate on your machine under Current User.

    static void Main(string[] args)
    {
        // retrieve the Windows Azure management certificate from the current user store
        var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadWrite);
        var certificate = store.Certificates.Find(X509FindType.FindByThumbprint, "{THE-CERTIFICATE-THUMBPRINT}", false)[0];
        store.Close();
    }

Then I created a subscription credential from this certificate and my Windows Azure subscription ID.

    static void Main(string[] args)
    {
        // retrieve the Windows Azure management certificate from the current user store
        ... ...

        // create management credentials and the cloud service management client
        var credentials = new CertificateCloudCredentials("SUBSCRIPTION-ID", certificate);
        var cloudServiceMgmCli = new CloudServiceManagementClient(credentials);
    }

Now we are able to create a cloud service for our Windows Azure Scheduler from the management client. In the code below I created a new cloud service located in South Central US.

    static void Main(string[] args)
    {
        // retrieve the Windows Azure management certificate from the current user store
        ... ...

        // create management credentials and the cloud service management client
        ... ...

        // create the cloud service that will contain the job collection
        var cloudServiceCreateParameters = new CloudServiceCreateParameters()
        {
            Description = "shaun-wasch-demo",
            Email = "jfarrio@gmail.com",
            GeoRegion = "South Central US",
            Label = "shaun-wasch-demo"
        };
        var cloudService = cloudServiceMgmCli.CloudServices.Create("shaun-wasch-demo", cloudServiceCreateParameters);
    }

 

Create Job Collection and Job

Now that we have a cloud service available, the next step is to create a job collection inside it. In the code below I initialize a new management client, "SchedulerManagementClient", which is responsible for managing scheduler job collections as well as the resource provider.

You might need to invoke "SchedulerManagementClient.RegisterResourceProvider" to register the resource provider if you get the exception "The subscription is not entitled to use the resource" when executing the code below.

Regarding the resource provider problem, please refer to Sandrino Di Mattia's blog.

Once we have "SchedulerManagementClient" ready we can create a new job collection with the plan and quota specified. Since I used "Free" plan so the quote must be smaller than once per hour, and assigned to the cloud service I had just created.

    static void Main(string[] args)
    {
        // retrieve the Windows Azure management certificate from the current user store
        ... ...

        // create management credentials and the cloud service management client
        ... ...

        // create the cloud service
        ... ...

        // create the job collection with the "Free" plan and its quota
        var schedulerMgmCli = new SchedulerManagementClient(credentials);
        var jobCollectionIntrinsicSettings = new JobCollectionIntrinsicSettings()
        {
            Plan = JobCollectionPlan.Free,
            Quota = new JobCollectionQuota()
            {
                MaxJobCount = 5,
                MaxJobOccurrence = 1,
                MaxRecurrence = new JobCollectionMaxRecurrence()
                {
                    Frequency = JobCollectionRecurrenceFrequency.Hour,
                    Interval = 1
                }
            }
        };
        var jobCollectionCreateParameters = new JobCollectionCreateParameters()
        {
            IntrinsicSettings = jobCollectionIntrinsicSettings,
            Label = "jc1"
        };
        var jobCollectionCreateResponse = schedulerMgmCli.JobCollections.Create("shaun-wasch-demo", "jc1", jobCollectionCreateParameters);
    }

Finally I create a new job in this job collection. To manage jobs I need to initialize a "SchedulerClient", which can be used to create, update and delete jobs inside a job collection. In this example I define a job named "poll_blog" with an HTTP action that polls this blog website once an hour.

    static void Main(string[] args)
    {
        // retrieve the Windows Azure management certificate from the current user store
        ... ...

        // create management credentials and the cloud service management client
        ... ...

        // create the cloud service
        ... ...

        // create the job collection
        ... ...

        // create the job: poll this blog over HTTP once an hour
        var schedulerClient = new SchedulerClient(credentials, "shaun-wasch-demo", "jc1");
        var jobAction = new JobAction()
        {
            Type = JobActionType.Http,
            Request = new JobHttpRequest()
            {
                Uri = new Uri("http://blog.shaunxu.me"),
                Method = "GET"
            }
        };
        var jobRecurrence = new JobRecurrence()
        {
            Frequency = JobRecurrenceFrequency.Hour,
            Interval = 1
        };
        var jobCreateOrUpdateParameters = new JobCreateOrUpdateParameters()
        {
            Action = jobAction,
            Recurrence = jobRecurrence
        };
        var jobCreateResponse = schedulerClient.Jobs.CreateOrUpdate("poll_blog", jobCreateOrUpdateParameters);
    }

You might notice that I invoked "SchedulerClient.Jobs.CreateOrUpdate" instead of "SchedulerClient.Jobs.Create". This is because "CreateOrUpdate" triggers an HTTP PUT request to the Windows Azure management RESTful API with the job name provided, while "Create" triggers an HTTP POST request in which the job name is generated by Windows Azure automatically.

So if you use "SchedulerClient.Jobs.Create" you will find that the job name is a GUID, as in the sketch below.
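For comparison, a hedged sketch of the Create path; I'm assuming a JobCreateParameters type shaped like JobCreateOrUpdateParameters, so verify the exact shape against the preview SDK:

    // POST: Windows Azure generates the job name (a GUID) for us
    var jobCreateParameters = new JobCreateParameters()   // assumed type name
    {
        Action = jobAction,
        Recurrence = jobRecurrence
    };
    var createResponse = schedulerClient.Jobs.Create(jobCreateParameters);
    Console.WriteLine(createResponse.Job.Id);   // the generated name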

Now we have finished all the code to create a cloud service, a job collection and a job from the .NET SDK. The full code is listed below.

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.Management.Scheduler;
    using Microsoft.WindowsAzure.Management.Scheduler.Models;
    using Microsoft.WindowsAzure.Scheduler;
    using Microsoft.WindowsAzure.Scheduler.Models;
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Security.Cryptography.X509Certificates;
    using System.Text;
    using System.Threading.Tasks;

    namespace AzureSchedulerDemo
    {
        class Program
        {
            static void Main(string[] args)
            {
                // retrieve the Windows Azure management certificate from the current user store
                var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
                store.Open(OpenFlags.ReadWrite);
                var certificate = store.Certificates.Find(X509FindType.FindByThumbprint, "MANAGEMENT-CERTIFICATE-THUMBPRINT", false)[0];
                store.Close();

                // create management credentials and the cloud service management client
                var credentials = new CertificateCloudCredentials("SUBSCRIPTION-ID", certificate);
                var cloudServiceMgmCli = new CloudServiceManagementClient(credentials);

                // create the cloud service that will contain the job collection
                var cloudServiceCreateParameters = new CloudServiceCreateParameters()
                {
                    Description = "shaun-wasch-demo",
                    Email = "jfarrio@gmail.com",
                    GeoRegion = "South Central US",
                    Label = "shaun-wasch-demo"
                };
                var cloudService = cloudServiceMgmCli.CloudServices.Create("shaun-wasch-demo", cloudServiceCreateParameters);

                // create the job collection with the "Free" plan and its quota
                var schedulerMgmCli = new SchedulerManagementClient(credentials);
                var jobCollectionIntrinsicSettings = new JobCollectionIntrinsicSettings()
                {
                    Plan = JobCollectionPlan.Free,
                    Quota = new JobCollectionQuota()
                    {
                        MaxJobCount = 5,
                        MaxJobOccurrence = 1,
                        MaxRecurrence = new JobCollectionMaxRecurrence()
                        {
                            Frequency = JobCollectionRecurrenceFrequency.Hour,
                            Interval = 1
                        }
                    }
                };
                var jobCollectionCreateParameters = new JobCollectionCreateParameters()
                {
                    IntrinsicSettings = jobCollectionIntrinsicSettings,
                    Label = "jc1"
                };
                var jobCollectionCreateResponse = schedulerMgmCli.JobCollections.Create("shaun-wasch-demo", "jc1", jobCollectionCreateParameters);

                // create the job: poll this blog over HTTP once an hour
                var schedulerClient = new SchedulerClient(credentials, "shaun-wasch-demo", "jc1");
                var jobAction = new JobAction()
                {
                    Type = JobActionType.Http,
                    Request = new JobHttpRequest()
                    {
                        Uri = new Uri("http://blog.shaunxu.me"),
                        Method = "GET"
                    }
                };
                var jobRecurrence = new JobRecurrence()
                {
                    Frequency = JobRecurrenceFrequency.Hour,
                    Interval = 1
                };
                var jobCreateOrUpdateParameters = new JobCreateOrUpdateParameters()
                {
                    Action = jobAction,
                    Recurrence = jobRecurrence
                };
                var jobCreateResponse = schedulerClient.Jobs.CreateOrUpdate("poll_blog", jobCreateOrUpdateParameters);

                Console.WriteLine("Press any key to exit");
                Console.ReadKey();
            }
        }
    }

Now let's execute the application. It invokes the Windows Azure management API to create the job collection and the job. Back in the Windows Azure portal we can see that a new job collection has been created in the region we defined.

[Screenshot: the new job collection in the portal]

Clicking into this job collection, we find the plan and quota are the same as what we requested.

[Screenshot: the job collection's plan and quota]

And the job we created from our application is shown in the "Jobs" tab with the name we specified.

[Screenshot: the "poll_blog" job in the Jobs tab]

From the "History" tab we can see that this job has already executed at least once since we submitted it.

[Screenshot: the job execution history]

If we back to the "Dashboard" tab of this job collection, we will find the URI display at the right side of the page. The cloud service was shown inside the URI as below.

image

 

View Job History and More

Besides creating jobs, we can also retrieve a job's execution history through this SDK.

    // retrieve the job history
    var schedulerClient = new SchedulerClient(credentials, "shaun-wasch-demo", "jc1");
    var jobGetHistoryParameters = new JobGetHistoryParameters()
    {
        Skip = 0,
        Top = 100
    };
    var history = schedulerClient.Jobs.GetHistory("poll_blog", jobGetHistoryParameters);
    foreach (var action in history)
    {
        Console.WriteLine("{0}\t{1}\t{2}\t{3}\t{4}", action.Status, action.Message, action.RetryCount, action.RepeatCount, action.Timestamp);
    }

The history will be displayed in the command window.

[Screenshot: job history printed in the console]

This is the same as what we saw in the Windows Azure portal.

[Screenshot: the same history in the portal]

We can also retrieve the job collections of a given cloud service and the jobs of a given job collection, and we can update and delete job collections and jobs as well.

When defining a job, we can also specify a retry policy and an error action to take when a job execution fails. These two parameters can only be specified through the .NET SDK; see the sketch below.
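Here is a hedged sketch of what that could look like; the RetryPolicy, RetryType and JobErrorAction names are my recollection of the preview SDK, so treat them as assumptions to verify against the package:

    // retry a failed run up to 3 times, 5 minutes apart, then call a fallback endpoint
    var jobActionWithRetry = new JobAction()
    {
        Type = JobActionType.Http,
        Request = new JobHttpRequest()
        {
            Uri = new Uri("http://blog.shaunxu.me"),
            Method = "GET"
        },
        RetryPolicy = new RetryPolicy()          // assumed type name
        {
            RetryType = RetryType.Fixed,         // assumed enum
            RetryInterval = TimeSpan.FromMinutes(5),
            RetryCount = 3
        },
        ErrorAction = new JobErrorAction()       // assumed type name
        {
            Type = JobActionType.Http,
            Request = new JobHttpRequest()
            {
                Uri = new Uri("http://example.com/job-failed"),   // hypothetical endpoint
                Method = "GET"
            }
        }
    };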

 

Summary

Windows Azure Scheduler allows us to define time-based jobs on the Windows Azure platform. We can use it to invoke a website or web service through an HTTP request on a regular schedule. We can also have a job insert a message into a Windows Azure Storage queue so that background worker roles are triggered to start backend processes (see the sketch below), which might be even more useful in our application design and development.
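A sketch of such a queue-message action, under the same caveat: JobQueueMessage and its property names are assumptions from the preview API, not verified:

    // post a message to a storage queue instead of calling an HTTP endpoint
    var queueJobAction = new JobAction()
    {
        Type = JobActionType.StorageQueue,
        QueueMessage = new JobQueueMessage()           // assumed type name
        {
            StorageAccountName = "mystorageaccount",   // hypothetical account
            QueueName = "backend-jobs",                // hypothetical queue
            SasToken = "{SAS-TOKEN-FOR-THE-QUEUE}",
            Message = "start nightly processing"
        }
    };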

For example, in one of my projects there are backend jobs which retrieve data from web services and insert it into Windows Azure SQL Database, and other jobs that run hourly or monthly to process that data. In my original design, all jobs were stored in Windows Azure Table storage, and I used a dedicated worker role to check whether it was time to start a job and post a message to a queue, so that a related worker role could pick up the message and start processing the job. In this model I had to implement how to store jobs (action and recurrence settings) in Table storage, and how to walk the table and calculate when a job should start. I also had to deal with concurrency if more than one worker role was checking the jobs.

[Diagram: original design with a dedicated scheduling worker role]

Now, by using Windows Azure Scheduler, I can simplify my design as below. Concerns such as how to store the job definitions and how to start a job can be left to Windows Azure Scheduler; what I need to do is implement the relevant business logic for each job.

[Diagram: simplified design with Windows Azure Scheduler driving the jobs]

 

Hope this helps,

Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind.
Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.