Shaun Xu

The Sheep-Pen of the Shaun


Shaun, the author of this blog, is a semi-geek, clumsy developer, passionate speaker and incapable architect with about 10 years of experience in .NET. He hopes to prove that software development is art rather than manufacturing. He's into cloud computing platforms and technologies (Windows Azure, Aliyun) as well as WCF and ASP.NET MVC. Recently he's fallen in love with JavaScript and Node.js.

Currently Shaun is working at IGT Technology Development (Beijing) Co., Ltd. as the architect responsible for product framework design and development.


During the Chinese New Year holiday, Microsoft announced a new feature of the V12 SQL Database named Dynamic Data Masking. This feature limits sensitive data exposure by masking it to non-privileged users.

We often have a similar requirement in our projects: some users should only see masked values for certain data. For example, an email address may need to be displayed as j******@gmail.com on the user profile page for normal visitors. In this case, what we used to do is implement the masking logic in our own code. But this is not very secure, and it adds effort in the application layer.
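For illustration, such hand-rolled application-layer masking might look like the following sketch (a hypothetical helper of my own, shown in JavaScript, not code from any library):

```javascript
// Hypothetical application-layer masking helper: keeps the first character
// of the local part and the full domain, masks the rest with asterisks.
function maskEmail(email) {
    var at = email.indexOf('@');
    if (at <= 0) {
        return email; // not a usable email address, return unchanged
    }
    return email[0] + '******' + email.slice(at);
}

console.log(maskEmail('john.doe@gmail.com')); // j******@gmail.com
```

Every page that renders an email address has to remember to call a helper like this, which is exactly the effort and risk Dynamic Data Masking removes.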

SQL Database Dynamic Data Masking helps us prevent unauthorized access to sensitive data. Since it works inside SQL Database, there is almost no impact on the application layer.

 

Enable Dynamic Data Masking

To enable this feature, just open the SQL Database in the Azure preview portal, open the Dynamic Data Masking icon and enable it.

image

Please ensure your SQL Database supports V12 and the latest updates. This feature is generally available in some regions, but may still be in public preview in others. For example, it is in the preview stage in East Asia, so you have to check the item below.

image

And make sure the pricing tier you selected supports this feature.

image

Now everything is OK. We can create our tables and insert data records into this new SQL Database. Assuming we have a table named Contacts with several columns:

1, ID: Integer, no need to protect.

2, Name: String, user name, no need to protect.

3, Email: String, need to be masked for normal user.

4, Credit Card Number: String, need to be masked for normal user.

5, Password Hint: String, need to be masked for normal user.

 

Configure Masking Policy

Even though we already have data in the table and columns, we can add masking policies without modifying the data. Just configure the policy in the Azure portal by opening the Dynamic Data Masking icon.

First we need to define which SQL Server logins have permission to view unmasked data; these are called Privileged Logins. In this case I already have two logins on my SQL Database server: super_user and normal_user. I added super_user to the privileged logins.

image

Then specify the table and column names as well as the masking policy. For example, for the Email column I used the built-in email masking policy.

image

I can add more masking policies for columns I'd like to protect as below.

image

 

View Data from ADO.NET Client

Below I created a simple console application in C# and connected to the database I had just created. In order to make the dynamic data masking feature work, I need to use the security-enabled connection string rather than the original one.

image

The console application source code is very simple. Note that I'm using the security-enabled connection string with the super_user login.

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace shx_maskingdatademo
{
    class Program
    {
        static void Main(string[] args)
        {
            var connectionString = ""
                + "Server=tcp:insider.database.secure.windows.net,1433;"
                + "Database=shx-maskingdatademo;"
                + "User ID=superuser@insider;"
                + "Password={xxxxxxxxx};"
                + "Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;";
            var builder = new SqlConnectionStringBuilder(connectionString);
            using (var conn = new SqlConnection(connectionString))
            {
                using (var cmd = conn.CreateCommand())
                {
                    cmd.CommandText = "SELECT * FROM Contacts";
                    conn.Open();
                    Console.WriteLine("Server: '{0}'", builder.DataSource);
                    Console.WriteLine("Login : '{0}'", builder.UserID);
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("{0}\t{1}\t{2}\t{3}\t{4}", reader[0], reader[1], reader[2], reader[3], reader[4]);
                        }
                    }
                }
            }

            Console.WriteLine("Press any key to exit.");
            Console.ReadKey();
        }
    }
}

I can view all data without masking.

image

But here is what happens when I switch to normal_user.

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace shx_maskingdatademo
{
    class Program
    {
        static void Main(string[] args)
        {
            var connectionString = ""
                + "Server=tcp:insider.database.secure.windows.net,1433;"
                + "Database=shx-maskingdatademo;"
                + "User ID=normaluser@insider;"
                + "Password={xxxxxxxxx};"
                + "Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;";
            var builder = new SqlConnectionStringBuilder(connectionString);
            using (var conn = new SqlConnection(connectionString))
            {
                // ... ...
            }

            Console.WriteLine("Press any key to exit.");
            Console.ReadKey();
        }
    }
}

All sensitive data were masked automatically.

image

 

Security-Enabled Connection String Only

In order to make my masking policy effective, I need to connect to my database through the security-enabled connection string. If I use the original connection string, all sensitive data is returned as-is even though I'm using the normal_user login.

image

In order to protect my data in all cases, I went back to the Azure portal and switched the Security Enabled Access setting from "optional" to "required". This means my database only allows security-enabled connection strings.

image

Now if I try to connect to my database through the original connection string, I receive an exception.

image

 

Summary

SQL Database Dynamic Data Masking limits sensitive data exposure by masking it to non-privileged users. Dynamic data masking is in preview for the Basic, Standard, and Premium service tiers in the V12 version of Azure SQL Database. It's a policy-based security feature that hides sensitive data in the result set of a query over designated database fields, while the data in the database itself is not changed. This means we can have this kind of data protected by upgrading the pricing tier and enabling V12, without migrating it to another database and almost without any code changes.

 

Hope this helps,

Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind.
Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


Below are some gulp plugins I'm using in my Angular.JS website for build and deployment. Basically what I need are

1, Generate <script> and <link> elements in the "index.html" page based on packages installed through Bower.

2, Generate <script> elements for all Angular.JS JavaScript files we wrote.

3, Generate a configuration file based on environment variables.

4, Combine and minify JavaScript and CSS files (except those that have already been minified) in release mode, but NOT in debug mode.

Now let's go through the gulp plugins I'm using one by one.

 

main-bower-files

This plugin loads the "bower.json" file of my application and retrieves the files of each package, based on the "main" property defined in that package's own "bower.json", for later use. So if I have packages installed through the command "bower install [package-name] --save", I can retrieve the files they need in my gulp task and pipe them to the next step, for example generating <script> and <link> elements.

I can specify where the "bower.json" for my project is located through "{ paths: 'app' }", and skip reading the file contents with "{ read: false }" if I don't need to deal with the files' content.

var gulp = require('gulp');
var bower = require('main-bower-files');

gulp.task('TASKNAME', function() {
    return gulp.src(bower({ paths: 'app' }), { read: false })
        .pipe(/* next step */);
});

In some cases we need to specify which files of a package should be referenced. For example, by default only "jquery.js" is necessary for the jQuery package. But if we want to use "jquery.min.js" as well as "jquery.min.map", we can override it in our project-level "bower.json" through its "overrides" property, as below.

{
  "name": "app",
  "main": "app.js",
  "version": "0.0.0",
  "ignore": [
    "**/.*",
    "node_modules",
    "bower_components",
    "test",
    "tests"
  ],
  "dependencies": {
    "jquery": "~2.1.3",
    "bootstrap": "~3.3.1",
    "node-uuid": "~1.4.2",
    "signalr": "~2.2.0",
    "angular": "~1.3.9",
    "angular-ui-router": "~0.2.13",
    "angular-growl": "~0.4.0",
    "moment": "~2.9.0",
    "fontawesome": "~4.3.0"
  },
  "overrides": {
    "jquery": {
      "main": [
        "dist/jquery.min.js",
        "dist/jquery.min.map"
      ]
    }
  }
}

 

gulp-inject

This plugin reads the source files, transforms each of them into a string and injects them into placeholders in the target stream's files, such as an HTML file. I used it to generate <script> and <link> elements in the "index.html" file based on the files detected by "main-bower-files".

var gulp = require('gulp');
var bower = require('main-bower-files');
var inject = require('gulp-inject');

gulp.task('TASKNAME', function () {
    return gulp.src('index.tpl.html')
        .pipe(inject(
            gulp.src(bower({ paths: 'app' }), { read: false }),
            { name: 'bower', relative: true, transform: gulpInjectVersioningTranform }))
        .pipe(inject(
            gulp.src(javaScriptFiles, { read: false }),
            { relative: true, transform: gulpInjectVersioningTranform }))
        .pipe(inject(
            gulp.src(cssFiles, { read: false }),
            { relative: true, transform: gulpInjectVersioningTranform }))
        .pipe(/* next step */);
});

By default, gulp-inject will generate <script> elements in targeting file between comments

<!-- inject:js -->
<!-- endinject -->

and <link> elements between comments

<!-- inject:css -->
<!-- endinject -->

But we can specify more target placeholders through gulp-inject's name property. In the code above, the JavaScript and CSS elements detected by bower will be generated into the placeholders named "bower", while the others go to the defaults. The "index.tpl.html" would then look like this.

<head lang="en">
    <meta charset="UTF-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <title></title>
    <base href="/">

    <!-- bower:css -->
    <!-- <link> elements detected by bower will be here. -->
    <!-- endinject -->

    <!-- inject:css -->
    <!-- <link> elements specified in gulp will be here. -->
    <!-- endinject -->

    <!-- bower:js -->
    <!-- <script> elements detected by bower will be here. -->
    <!-- endinject -->

    <!-- inject:js -->
    <!-- <script> elements specified in gulp will be here. -->
    <!-- endinject -->
</head>

I also specified "relative: true", which means the <script> and <link> elements will use relative paths.

And in order to add a timestamp suffix to each element, I specified the transform function of the inject plugin. The function is very simple.

var path = require('path');

var gulpInjectVersioningTranform = function (filepath, i, length, sourceFile, targetFile) {
    var extname = path.extname(filepath);
    if (extname === '.js' || extname === '.css') {
        filepath += '?v=' + version;
        return inject.transform.apply(inject.transform, [filepath, i, length, sourceFile, targetFile]);
    }
    else {
        return inject.transform.apply(inject.transform, arguments);
    }
};
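The `version` variable used by the transform function is not defined in the snippet above. Judging from the "?v=20150216161421" suffixes in the generated HTML, it is presumably a yyyyMMddHHmmss timestamp. A sketch of how it might be produced (my own hypothetical helper, not code from the post):

```javascript
// Hypothetical helper producing a yyyyMMddHHmmss version stamp,
// matching the "?v=20150216161421" style suffixes in the output.
function buildVersionStamp(date) {
    var pad = function (n) { return (n < 10 ? '0' : '') + n; };
    return '' + date.getFullYear() +
        pad(date.getMonth() + 1) +
        pad(date.getDate()) +
        pad(date.getHours()) +
        pad(date.getMinutes()) +
        pad(date.getSeconds());
}

// Computed once per build, so every injected element shares one stamp.
var version = buildVersionStamp(new Date());
```

Computing the stamp once at the start of the build means all elements get the same suffix, so a single deployment busts the browser cache consistently.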

With these settings the output file content would be like this.

<head lang="en">
    <meta charset="UTF-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <title></title>
    <base href="/">

    <!-- bower:css -->
    <link rel="stylesheet" href="bower_components/bootstrap/dist/css/bootstrap.css?v=20150216161421">
    <link rel="stylesheet" href="bower_components/fontawesome/css/font-awesome.css?v=20150216161421">
    <!-- endinject -->

    <!-- inject:css -->
    <link rel="stylesheet" href="styles/kendo.common-bootstrap.min.css?v=20150216161421">
    <link rel="stylesheet" href="styles/kendo.bootstrap.min.css?v=20150216161421">
    <link rel="stylesheet" href="styles/app.css?v=20150216161421">
    <link rel="stylesheet" href="modules/module_1/k1.css?v=20150216161421">
    <link rel="stylesheet" href="modules/shared/style.css?v=20150216161421">
    <link rel="stylesheet" href="modules/shared/login/login.css?v=20150216161421">
    <link rel="stylesheet" href="modules/shared/validation/validation.css?v=20150216161421">
    <!-- endinject -->

    <!-- bower:js -->
    <script src="bower_components/jquery/dist/jquery.js?v=20150216161421"></script>
    <script src="bower_components/bootstrap/dist/js/bootstrap.js?v=20150216161421"></script>
    <script src="bower_components/node-uuid/uuid.js?v=20150216161421"></script>
    <script src="bower_components/signalr/jquery.signalR.js?v=20150216161421"></script>
    <script src="bower_components/angular/angular.js?v=20150216161421"></script>
    <script src="bower_components/angular-ui-router/release/angular-ui-router.js?v=20150216161421"></script>
    <script src="bower_components/angular-cookies/angular-cookies.js?v=20150216161421"></script>
    <script src="bower_components/angular-local-storage/dist/angular-local-storage.js?v=20150216161421"></script>
    <script src="bower_components/angular-growl/build/angular-growl.js?v=20150216161421"></script>
    <script src="bower_components/moment/moment.js?v=20150216161421"></script>
    <!-- endinject -->

    <!-- inject:js -->
    <script src="app.conf.js?v=20150216161421"></script>
    <script src="modules/module_1/module.conf.js?v=20150216161421"></script>
    <script src="modules/module_2/module.conf.js?v=20150216161421"></script>
    <script src="modules/shared/module.conf.js?v=20150216161421"></script>
    <script src="modules/module_1/controllers.js?v=20150216161421"></script>
    <script src="modules/module_2/controllers.js?v=20150216161421"></script>
    <script src="modules/shared/authorization.js?v=20150216161421"></script>
    <script src="modules/shared/loadingIndicator.js?v=20150216161421"></script>
    <script src="modules/shared/logger.js?v=20150216161421"></script>
    <script src="modules/shared/message.js?v=20150216161421"></script>
    <script src="modules/shared/security.js?v=20150216161421"></script>
    <script src="modules/shared/signalr.js?v=20150216161421"></script>
    <script src="modules/shared/utilities.js?v=20150216161421"></script>
    <script src="modules/shared/wix.js?v=20150216161421"></script>
    <script src="modules/shared/home/controller_home.js?v=20150216161421"></script>
    <script src="modules/shared/login/controller_login.js?v=20150216161421"></script>
    <script src="modules/shared/login/controller_session.js?v=20150216161421"></script>
    <script src="modules/shared/validation/validation.js?v=20150216161421"></script>
    <script src="modules/shared/task_status/task_status.js?v=20150216161421"></script>
    <script src="modules/shared/view1/controller_view1.js?v=20150216161421"></script>
    <script src="modules/shared/view2/controller_view2.js?v=20150216161421"></script>
    <script src="modules/shared/widgets/serverTimeWidget.js?v=20150216161421"></script>
    <script src="app.env.js?v=20150216161421"></script>
    <script src="app.js?v=20150216161421"></script>
    <script src="js/search.js?v=20150216161421"></script>
    <script src="js/layout.js?v=20150216161421"></script>
    <!-- endinject -->
</head>

 

gulp-rename

This plugin is very simple: it renames a file. In my project I have a template of "index.html" named "index.tpl.html", and the <script> and <link> elements are generated into this file stream in the previous step. Then I need to save the file content, renamed to "index.html", which is done by this plugin.

var gulp = require('gulp');
var bower = require('main-bower-files');
var inject = require('gulp-inject');
var rename = require('gulp-rename');

gulp.task('TASKNAME', function () {
    return gulp.src('index.tpl.html')
        .pipe(inject(
            gulp.src(bower({ paths: 'app' }), { read: false }),
            { name: 'bower', relative: true, transform: gulpInjectVersioningTranform }))
        .pipe(inject(
            gulp.src(javaScriptFiles, { read: false }),
            { relative: true, transform: gulpInjectVersioningTranform }))
        .pipe(inject(
            gulp.src(cssFiles, { read: false }),
            { relative: true, transform: gulpInjectVersioningTranform }))
        .pipe(rename(target))
        .pipe(gulp.dest('app'));
});

 

gulp-chmod

When we work with some version control systems, for example Team Foundation Server, if the workspace is set to Server mode, all local files will be read-only. When gulp-rename and the dest function write the output file, the read-only mode is maintained. This makes the gulp task fail the second time we run it, since you cannot overwrite a read-only file.

In this case we need this plugin to change the file mode. It uses Linux "chmod" argument syntax. So if I want to remove the read-only flag, I need the code below.

var gulp = require('gulp');
var bower = require('main-bower-files');
var inject = require('gulp-inject');
var rename = require('gulp-rename');
var chmod = require('gulp-chmod');

gulp.task('TASKNAME', function () {
    return gulp.src('index.tpl.html')
        .pipe(/* load script and css files then inject */)
        .pipe(rename(target))
        .pipe(chmod(666))
        .pipe(gulp.dest('app'));
});

 

gulp-concat, gulp-uglify and gulp-minify-css

For the release build, I need to combine all JavaScript and CSS files, minify them and inject them into "index.html". I used these three plugins for combination and minification.

var gulp = require('gulp');
var bower = require('main-bower-files');
var inject = require('gulp-inject');
var rename = require('gulp-rename');
var chmod = require('gulp-chmod');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');
var minifyCSS = require('gulp-minify-css');

gulp.task('TASKNAME1', function () {
    return gulp.src(javaScriptFiles)
        .pipe(uglify())
        .pipe(concat('app.min.js'))
        .pipe(chmod(666))
        .pipe(gulp.dest(build + '/js'));
});

gulp.task('TASKNAME2', function () {
    return gulp.src(cssFiles)
        .pipe(minifyCSS())
        .pipe(concat('app.min.css'))
        .pipe(chmod(666))
        .pipe(gulp.dest(build + '/css'));
});

 

gulp-filter

The code above works well for the JavaScript and CSS files we created, but not for the files installed through bower. Since the files detected by "main-bower-files" include both JavaScript and CSS files, we need to somehow filter them before running "gulp-uglify" and "gulp-minify-css".

"gulp-filter" enables us to work based on a subset of the original files by filtering them using globbing. Now we can get all JavaScript files from "main-bower-file", by specifying "**/*.js" into "gulp-filter", and pipe "gulp-uglify", while "**/*.css" and pipe "gulp-minifyCSS".

var gulp = require('gulp');
var bower = require('main-bower-files');
var inject = require('gulp-inject');
var rename = require('gulp-rename');
var chmod = require('gulp-chmod');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');
var minifyCSS = require('gulp-minify-css');
var filter = require('gulp-filter');

gulp.task('TASKNAME1', function () {
    return gulp.src(bower({ paths: 'app' }))
        .pipe(filter('**/*.js'))
        .pipe(uglify())
        .pipe(concat('bower.min.js'))
        .pipe(chmod(666))
        .pipe(gulp.dest('.build/js'));
});

gulp.task('TASKNAME2', function () {
    return gulp.src(bower({ paths: 'app' }))
        .pipe(filter('**/*.css'))
        .pipe(minifyCSS())
        .pipe(concat('bower.min.css'))
        .pipe(chmod(666))
        .pipe(gulp.dest('.build/css'));
});

 

gulp-if

Some bower packages specify original JavaScript and CSS files while others specify minified versions. I don't want to re-minify the already-minified files in my gulp task, so I need "gulp-if" to filter them out.

"gulp-if" allows me to use a function to check input files, pipe plugins for those pass the condition check. In this case I tested files' name, and perform "gulp-uglify" or "gulp-minifyCSS" only if their extension name were not "min.js" or "min.css".

var gulp = require('gulp');
var path = require('path');
var bower = require('main-bower-files');
var inject = require('gulp-inject');
var rename = require('gulp-rename');
var chmod = require('gulp-chmod');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');
var minifyCSS = require('gulp-minify-css');
var filter = require('gulp-filter');
var gulpif = require('gulp-if');

var isNotMinified = function (file) {
    var extname = path.extname(file.path);
    if (extname === '.js' || extname === '.css') {
        return path.extname(file.path.substr(0, file.path.length - extname.length)) !== '.min';
    }
    else {
        return false;
    }
};

gulp.task('TASKNAME1', function () {
    return gulp.src(bower({ paths: 'app' }))
        .pipe(filter('**/*.js'))
        .pipe(gulpif(isNotMinified, uglify()))
        .pipe(concat('bower.min.js'))
        .pipe(chmod(666))
        .pipe(gulp.dest('.build/js'));
});

gulp.task('TASKNAME2', function () {
    return gulp.src(bower({ paths: 'app' }))
        .pipe(filter('**/*.css'))
        .pipe(gulpif(isNotMinified, minifyCSS()))
        .pipe(concat('bower.min.css'))
        .pipe(chmod(666))
        .pipe(gulp.dest('.build/css'));
});

 

gulp-preprocess

In order to generate configuration files based on system environment variables, such as the WebAPI endpoint, protocol and debug flag, I need "gulp-preprocess".

var gulp = require('gulp');
var rename = require('gulp-rename');
var chmod = require('gulp-chmod');
var preprocess = require('gulp-preprocess');

gulp.task('app.env.js', function () {
    return gulp.src('app/app.env.tpl.js')
        .pipe(preprocess())
        .pipe(rename('app.env.js'))
        .pipe(chmod(666))
        .pipe(gulp.dest('app'));
});

The content of the template file "app.env.tpl.js" specifies which environment variables should be replaced.

(function (window) {
    angular.module('environment', [])
        /* @ifdef DEBUG */
        .value('debug', true)
        /* @endif */
        .factory('wixEndpoint', [ function () {
            var scheme = '/* @echo WIX_ENDPOINT_SCHEME */';
            var address = '/* @echo WIX_ENDPOINT_ADDRESS */';
            var port = '/* @echo WIX_ENDPOINT_PORT */';
            return scheme + '://' + address + ':' + port;
        }])
        .factory('apiEndpoint', [ 'wixEndpoint', function (wixEndpoint) {
            var api = '/* @echo WIX_ENDPOINT_API */';
            return wixEndpoint + api;
        }]);
})(window);

It will output the debug value if DEBUG is set in the environment. It will also load the values of WIX_ENDPOINT_SCHEME, WIX_ENDPOINT_ADDRESS, WIX_ENDPOINT_PORT and WIX_ENDPOINT_API from the environment variables and write them into this file. So the result in one of my development labs looks like this.

(function (window) {
    angular.module('environment', [])
        .value('debug', true)
        .factory('wixEndpoint', [ function () {
            var scheme = 'http';
            var address = '10.222.115.220';
            var port = '8080';
            return scheme + '://' + address + ':' + port;
        }])
        .factory('apiEndpoint', [ 'wixEndpoint', function (wixEndpoint) {
            var api = '/api';
            return wixEndpoint + api;
        }]);
})(window);

 

Hope this helps,

Shaun



One of my projects needs a C++ assembly for data encryption and decryption. We built that assembly with Visual Studio 2013 and tested it on a local machine. Everything ran well, but when I published it to a Microsoft Azure Website, it failed.

We spent half a day getting it resolved, and I think it's worth writing down what we tried for future reference.

 

Bad Image Format Exception

The first exception we met was BadImageFormatException (Exception from HRESULT: 0x8007000B). This is a common exception when an Azure application tries to load a C++ assembly. In Azure, our applications are deployed on an x64 Windows system, so if your C++ assembly was built for x86 you will see this exception.

One resolution, if you are building a web application deployed under IIS, is to set 'Enable 32-Bit Applications' in the advanced settings of your application pool.

image

If you are deploying your application as an Azure Website, you can log in to the management portal and switch your Website to 32-bit mode if its web hosting plan is Standard. This is the same as setting Enable 32-Bit Applications to True.

image

Unfortunately we cannot change this, since we also need some other assemblies in x64 mode. So we need to make sure the C++ assembly we built is x64.

 

Check Whether an Assembly Is x86 or x64

There are a lot of questions on StackOverflow asking how to find out whether a DLL was compiled as x86 or x64. You can use DUMPBIN with the /headers or /all flag; it will print "machine (x86)" or "machine (x64)".

If you have Cygwin installed, or have a Linux or Mac system available, you can use the "file" command to test the assembly more quickly.

Below I'm using the x86 and x64 versions of Internet Explorer as an example. As you can see, for the x86 assembly it returns "PE32" while for x64 it returns "PE32+".

image

 

File Not Found Exception

After we ensured our C++ assembly was built for x64 and published it to Azure, we got another exception: "System.IO.FileNotFoundException (Exception from HRESULT: 0x8007007E)". Some articles say this is because your application cannot find the assembly and you'd better put it into %windir%\system32. You can try that, but if it still says "FileNotFoundException", it is most likely because the assembly depends on something that is missing on your machine.

In order to check what was missing, we ran Dependency Walker on the Azure machine, and it reported that MSVCP120D.DLL and MSVCR120D.DLL were missing.

image

These DLLs are included in the Visual C++ Redistributable Package. But note that both names end with "D", which means they are debug-mode VC++ assemblies. These should not be necessary in a production environment; the reason our assembly needed them is that we built it in debug mode.

Now the resolution is clear: build the C++ assembly in x64 release mode and publish it, and everything works smoothly.

 

Summary

Loading a C++ assembly from a .NET project is very common, but it often introduces problems once published to Azure even though it works well locally. In this post I talked about what I met and how I solved this kind of problem. Basically, when working with C++ in Azure, we need to keep in mind:

1, Is it built in x86 or x64?

2, Is it built in release or debug?

3, Does the hosting environment support x86?

And in order to find problems as early as possible, we'd better have a dedicated local test machine:

1, Windows Server x64 (English), 2008 R2 or 2012 based on what we need.

2, .NET Framework 4 or 4.5.

3, DO NOT INSTALL Visual Studio or any other development packages.

 

PS: Happy Chinese New Year!

 

Hope this helps,

Shaun



I'm working at a company inside a corporate network in China. As you may know, in China there are some websites we cannot connect to, such as Facebook, Twitter and YouTube, and the connectivity to Google is always unstable too. As a software developer I need Google every day to search technical articles and look for resolutions and best practices. Besides, our corporate network only supports ports 80 and 443, which means I cannot use FTP, Remote Desktop, SSH, etc. from my workstation to services on the cloud. But that changed when I began to use Azure RemoteApp.

Azure RemoteApp is powered by application virtualization provided by Microsoft. It means we can use, in theory, any application installed in our Windows virtual machine in Azure from any of our devices: Windows or Mac OS, iOS or Android.

 

Create New RemoteApp Account

There is a blog post by Scott Guthrie introducing how to create a RemoteApp account. We just need to go to the Azure portal, select RemoteApp, and specify the name of the service, where it will be provisioned (region), the price plan and the template (Windows Server 2012 or Office 365). That's all; several minutes later it will be ready.

image

It's very simple to use RemoteApp. Just open it from the azure portal and download the client application.

image

By default it will show the client download link for the device we are currently using, but we can also check the downloads for all available clients here.

image

After downloading and installing the client we can launch it and log in with the Microsoft Account used in Azure. It will show some programs already published: Calculator, cmd, Internet Explorer, Paint, PowerShell and PowerShell ISE.

Now let me open Internet Explorer and check my IP. Since I selected the East Asia region when I created the account, the IP shows that I'm in Hong Kong. Also, this Internet Explorer is launched like a normal application on my workstation, with an icon in my task bar. The only difference is an indicator showing that this Internet Explorer is a remote application instead of a local one.

image

 

Publish More Applications

RemoteApp allows us to publish applications through the Start Menu or by path in the azure portal. In this case I want to view the file system of the virtual machine where my RemoteApp is located, so I need to publish Windows Explorer.

Go to my RemoteApp on the azure portal and select the "Publishing" tab; we will find the applications already exposed. Click the "Publish" button at the bottom and select "Publish program using path".

image

Specify the application name and path. We can use environment variables when specifying the path.

image

Then click "OK" and RemoteApp will try to find the application and publish it. After several seconds we can see Windows Explorer in the list.

If RemoteApp finds your application successfully, it will refresh the icon with the one the application uses. If you find the icon was not changed accordingly, it might be because the path you specified was incorrect.

image

Next, back in the RemoteApp client, refresh and we can see Windows Explorer has appeared.

image

 

Publish More 3rd Party Applications

As you can see, I have published some 3rd party applications in my RemoteApp account, such as PuTTY and FileZilla. This is very simple as well.

First of all, we need to ensure the application we want to publish can be used by simple copy-and-paste, which means we don't need to install it on the RemoteApp machine. This is because the virtual machine where the RemoteApp is located contains an administrator account named "RDSAdmin" whose credentials we don't have. The user account under which we launch a remote application is a guest account, which doesn't have permission to install or uninstall a program.

In detail: I logged in to the azure portal with shaun@live.com and created my RemoteApp account. Azure created a new virtual machine, installed and configured Remote Desktop Services, and added this account with guest permission to the system. When I launch any application from the RemoteApp client, it first accesses the virtual machine with this guest account in the background, launches the application I selected, and shows it on my desktop. But if I try to install something, UAC pops up asking for the password of the administrator named "RDSAdmin", which I don't know.

Second, get the application we want to publish. We can open Internet Explorer via RemoteApp and download it directly to the virtual machine. Alternatively, we can copy files from our local machine to the RemoteApp machine through the "remote-ed" Windows Explorer.

Finally, publish this application by path in azure portal, same as what we did for Windows Explorer.

 

For example, below I copied "putty.exe" from my local machine to the RemoteApp machine through the "remote-ed" Windows Explorer.

image

And published it on the azure portal as below.

image

Refresh the local RemoteApp client and launch PuTTY from my local machine.

image

Note that even though I'm inside the corporate network that doesn't allow port 22, I can use it to connect to a Linux machine in another Azure region, in West US.

image

 

Remote Desktop to Virtual Machine with App Installed

In the section above I mentioned that in RemoteApp we cannot install applications due to the guest account restriction. But we can work around it by introducing another virtual machine.

Normally we use RemoteApp as below.

image

Now let's create a new virtual machine in Azure. Since this is a standard virtual machine, we have administrator rights. So we can launch Remote Desktop from RemoteApp, connect to this virtual machine, install the applications we want and use them. In this mode we can run any application (copied or installed) and use it inside the corporate network via RemoteApp.

image

Let's launch "cmd" from my RemoteApp client and execute "mstsc".

image

Specify the virtual machine's IP address or DNS name with its Remote Desktop port (normally it's NOT the default 3389), type the login and password, and then we can see the desktop and launch Visual Studio.

image

 

Summary

In this post I introduced the Azure RemoteApp feature: how to create a new account and how to publish an existing application. I also showed how to publish 3rd party applications by copying them onto the virtual machine where the RemoteApp is located.

Finally, I shared how to use RemoteApp to connect to another virtual machine in Azure, on which we can install and launch even more applications.

Azure RemoteApp supports hybrid deployment, which allows us to publish applications that need to be installed, but it's complex and time-consuming. I think by using my approach (remoting into another VM) it becomes very easy to use any application, regardless of which device we are using and what kind of network and firewall sit in front of us.

 

Hope this helps,

Shaun



When developing "My DocumentDB" I decided to enhance the JSON input part by introducing a designer. After Google-ing I found that JSONEditor was a good fit. It's a web-based tool that allows viewing, editing and formatting JSON, with a simple API to integrate into a web page. So I decided to use this cool tool in my project.

 

Use Directive

I first created a directive that applies JSONEditor to its DOM element. The JSON data is specified through the "ng-model" attribute. The code is very simple, as below.

app.directive('uiJsonEditor', [function () {
    'use strict';
    return {
        restrict: 'A',
        scope: {
            json: '=ngModel'
        },
        link: function (scope, elem) {
            var opts = {
                change: function () {
                    if (scope.editor) {
                        scope.$apply(function () {
                            scope.json = scope.editor.get();
                        });
                    }
                }
            };
            scope.editor = new JSONEditor(elem[0], opts, scope.json || {});
        }
    };
}]);

One thing needs attention: I specified the JSONEditor "change" event handler so that it updates the JSON value back to the scope variable. Since this event is triggered outside of the AngularJS event loop, I need to wrap the update code in "scope.$apply".
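To see why the wrapper matters, here is a simplified, Angular-free sketch of the idea: watchers only run during a digest, and $apply is what triggers one. The makeScope helper below is purely illustrative and not the real AngularJS implementation.

```javascript
// Simplified illustration (not real AngularJS): a scope only notifies its
// watchers during a digest, and $apply is what triggers that digest.
function makeScope() {
  var scope = { json: null, watchers: [] };
  scope.$watch = function (fn) { scope.watchers.push(fn); };
  scope.$apply = function (update) {
    update();                                 // run the update...
    scope.watchers.forEach(function (w) {     // ...then "digest": notify watchers
      w(scope.json);
    });
  };
  return scope;
}

var scope = makeScope();
var seen = [];
scope.$watch(function (value) { seen.push(value); });

// A JSONEditor-style "change" callback fires outside the digest loop:
scope.json = { a: 1 };          // direct write: watchers never run
console.log(seen.length);       // 0

scope.$apply(function () {      // wrapped write: watchers are notified
  scope.json = { a: 2 };
});
console.log(seen.length);       // 1
```

This is why assigning scope.json directly inside the JSONEditor callback would leave the {{json}} binding in the page stale.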

Then I can use JSONEditor in my page. Below is the web page I used as a prototype. I attach this directive to a DIV, and I also display the scope variable to make sure the JSON value is updated accordingly.

<!DOCTYPE html>
<html ng-app="MyApp">
<head>
    <link rel="stylesheet" href="jsoneditor.css" />
</head>

<body>
    <h1>Hello AngularJS-JSONEditor</h1>

    <div ng-controller="MyCtrl">
        <div data-ui-json-editor data-ng-model="json"></div>
        <p>
            {{json}}
        </p>
    </div>

    <script src="http://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.1/jquery.js"></script>
    <script src="http://cdnjs.cloudflare.com/ajax/libs/angular.js/1.2.20/angular.js"></script>
    <script src="jsoneditor.js"></script>

    <script>
        var app = angular.module('MyApp', []);

        app.controller('MyCtrl', function ($scope) {
            $scope.json = {
                firstName: 'Shaun',
                lastName: 'Xu',
                skills: [
                    'C#',
                    'JavaScript'
                ],
                roles: [
                    'dev',
                    'speaker'
                ]
            };
        });

        app.directive('uiJsonEditor', [function () {
            'use strict';
            return {
                restrict: 'A',
                scope: {
                    json: '=ngModel'
                },
                link: function (scope, elem) {
                    var opts = {
                        change: function () {
                            if (scope.editor) {
                                scope.$apply(function () {
                                    scope.json = scope.editor.get();
                                });
                            }
                        }
                    };
                    scope.editor = new JSONEditor(elem[0], opts, scope.json || {});
                }
            };
        }]);
    </script>
</body>

</html>

After launching the web page, the JSON data is shown in both JSONEditor and the text area.

image

If I change something in JSONEditor, we will find the data is updated automatically.

image

 

Use Module

This is good for the "My DocumentDB" project, but I was thinking about turning it into a standalone UI control that can be used in any of my, or others', projects. This is not a big deal. In AngularJS we can use a module to group controllers, factories, services and directives. In this case what I need to do is create a module, put the directive into it, and then make my main module depend on it.

<script>
    angular.module('ui.jsoneditor', [])
        .directive('uiJsonEditor', [function () {
            'use strict';
            return {
                restrict: 'A',
                scope: {
                    json: '=ngModel'
                },
                link: function (scope, elem) {
                    var opts = {
                        change: function () {
                            if (scope.editor) {
                                scope.$apply(function () {
                                    scope.json = scope.editor.get();
                                });
                            }
                        }
                    };
                    scope.editor = new JSONEditor(elem[0], opts, scope.json || {});
                }
            };
        }]);
</script>

<script>
    var app = angular.module('MyApp', ['ui.jsoneditor']);
    app.controller('MyCtrl', function ($scope) {

        $scope.json = {
            firstName: 'Shaun',
            lastName: 'Xu',
            skills: [
                'C#',
                'JavaScript'
            ],
            roles: [
                'dev',
                'speaker'
            ]
        };
    });
</script>

I don't need to change anything in the HTML part; the page loads successfully and JSONEditor works well.

image

 

Better Configuration

This is better, but not perfect. JSONEditor allows the developer to specify some options. This can be done by introducing more scope variables into the directive. In the code below I added an "options" variable, so we can tell the directive which scope variable will be used as the JSONEditor configuration.

angular.module('ui.jsoneditor', [])
    .directive('uiJsonEditor', [function () {
        'use strict';
        return {
            restrict: 'A',
            scope: {
                json: '=ngModel',
                options: '=options'
            },
            link: function (scope, elem) {
                var opts = scope.options || {};
                opts.change = opts.change || function () {
                    if (scope.editor) {
                        scope.$apply(function () {
                            scope.json = scope.editor.get();
                        });
                    }
                };
                scope.editor = new JSONEditor(elem[0], opts, scope.json || {});
            }
        };
    }]);

In the HTML part I specify which scope variable will be used as the options, as below.

<div data-ui-json-editor data-ng-model="json" data-options="options"></div>
<p>
    {{json}}
</p>

And in the controller I specify the options, defining the root node text and the modes of JSONEditor.

var app = angular.module('MyApp', ['ui.jsoneditor']);
app.controller('MyCtrl', function ($scope) {

    $scope.options = {
        name: 'root',
        modes: ['tree', 'text']
    };

    $scope.json = {
        firstName: 'Shaun',
        lastName: 'Xu',
        skills: [
            'C#',
            'JavaScript'
        ],
        roles: [
            'dev',
            'speaker'
        ]
    };
});

Refresh the web page and we will see the options have been applied.

image

Now it's almost perfect. But if I have more than one JSONEditor control in my application, I might want to have default options. This can be done by using an AngularJS provider. The documentation says:

You should use the Provider recipe only when you want to expose an API for application-wide configuration that must be made before the application starts. This is usually interesting only for reusable services whose behavior might need to vary slightly between applications.

A provider holds some values which can be exposed through a special function named "$get". We can also define functions on it which can be used before the application starts, typically in our AngularJS main module's "config" function.

In our case we just need to create a provider, define a local variable that stores the JSONEditor options, and expose two functions: the "$get" function returns this variable, while the "setOptions" function sets options into it.
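Stripped of the Angular plumbing, the steps just described boil down to a two-phase pattern; here is a plain-JavaScript sketch of the idea (illustrative only, not the real AngularJS machinery):

```javascript
// The provider recipe in miniature: configuration happens first (config
// phase), then $get() produces the value everything else injects (run phase).
function OptionsProvider() {
  this.options = {};
  this.setOptions = function (value) { this.options = value; };  // config-time API
  this.$get = function () { return this.options; };              // run-time value
}

var provider = new OptionsProvider();

// Config phase: application-wide setup before anything runs.
provider.setOptions({ name: 'root', modes: ['text', 'tree'] });

// Run phase: what the directive receives when it injects 'jsoneditorOptions'.
var jsoneditorOptions = provider.$get();
console.log(jsoneditorOptions.name);   // 'root'
```

Angular enforces the same split for us: setOptions is only reachable in config blocks (via jsoneditorOptionsProvider), while the injected jsoneditorOptions is whatever $get returned.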

Then in the JSONEditor directive we reference the provider we just created, retrieve its value and merge it with the scope variable. Now we have global options, and a developer can still specify options for a particular JSONEditor control as well.

angular.module('ui.jsoneditor', [])
    .directive('uiJsonEditor', ['jsoneditorOptions', function (jsoneditorOptions) {
        'use strict';
        return {
            restrict: 'A',
            scope: {
                json: '=ngModel',
                options: '=options'
            },
            link: function (scope, elem) {
                var opts = angular.extend({}, jsoneditorOptions, scope.options);
                opts.change = opts.change || function () {
                    if (scope.editor) {
                        scope.$apply(function () {
                            scope.json = scope.editor.get();
                        });
                    }
                };
                scope.editor = new JSONEditor(elem[0], opts, scope.json || {});
            }
        };
    }])
    .provider('jsoneditorOptions', function () {
        'use strict';
        this.options = {};

        this.$get = function () {
            var opts = this.options;
            return opts;
        };

        this.setOptions = function (value) {
            this.options = value;
        };
    });

Then, back in the main module, we can define the default JSONEditor options in the "app.config" function.

app.config(['jsoneditorOptionsProvider', function (jsoneditorOptionsProvider) {
    jsoneditorOptionsProvider.setOptions({
        name: 'root',
        modes: ['text', 'tree']
    });
}]);

After refreshing the web page we will see that the JSONEditor options changed even though I had removed the options from the controller scope.

image

And if I specify the options in the controller scope they will be applied, while the global options still remain.

$scope.options = {
    name: 'this'
};

image
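The merge behavior at work here can be sketched in plain JavaScript. Object.assign is used below in place of angular.extend; both perform the same shallow, left-to-right merge for this case.

```javascript
// The option merge in the directive, shown with plain objects: instance
// options override the globally configured defaults key by key, and keys
// not overridden fall back to the global value.
var globalOptions = { name: 'root', modes: ['text', 'tree'] };
var instanceOptions = { name: 'this' };

// angular.extend({}, a, b) behaves like Object.assign({}, a, b) here:
var merged = Object.assign({}, globalOptions, instanceOptions);

console.log(merged.name);         // 'this' – overridden by the instance options
console.log(merged.modes);        // [ 'text', 'tree' ] – inherited from the defaults
console.log(globalOptions.name);  // 'root' – the global defaults stay untouched
```

Merging into a fresh {} target is what keeps the globally configured defaults untouched no matter what an individual control overrides.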

 

Summary

In this post I introduced how to create an AngularJS module that wraps a UI control. In AngularJS we should use a directive when dealing with the DOM. Then I moved the code into a standalone module to make it usable in any other project. At the end I added the functionality for global configuration.

The full sample code is as below.

<!DOCTYPE html>
<html ng-app="MyApp">
<head>
    <link rel="stylesheet" href="jsoneditor.css" />
</head>

<body>
    <h1>Hello AngularJS-JSONEditor</h1>

    <div ng-controller="MyCtrl">
        <p>
            Default Options
        </p>
        <div data-ui-json-editor data-ng-model="json"></div>
        <p>
            Instance Options
        </p>
        <div data-ui-json-editor data-ng-model="json" data-options="options"></div>
        <p>
            {{json}}
        </p>
    </div>

    <script src="http://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.1/jquery.js"></script>
    <script src="http://cdnjs.cloudflare.com/ajax/libs/angular.js/1.2.20/angular.js"></script>
    <script src="jsoneditor.js"></script>

    <script>
        angular.module('ui.jsoneditor', [])
            .directive('uiJsonEditor', ['jsoneditorOptions', function (jsoneditorOptions) {
                'use strict';
                return {
                    restrict: 'A',
                    scope: {
                        json: '=ngModel',
                        options: '=options'
                    },
                    link: function (scope, elem) {
                        var opts = angular.extend({}, jsoneditorOptions, scope.options);
                        opts.change = opts.change || function () {
                            if (scope.editor) {
                                scope.$apply(function () {
                                    scope.json = scope.editor.get();
                                });
                            }
                        };
                        scope.editor = new JSONEditor(elem[0], opts, scope.json || {});
                    }
                };
            }])
            .provider('jsoneditorOptions', function () {
                'use strict';
                this.options = {};

                this.$get = function () {
                    var opts = this.options;
                    return opts;
                };

                this.setOptions = function (value) {
                    this.options = value;
                };
            });
    </script>

    <script>
        var app = angular.module('MyApp', ['ui.jsoneditor']);

        app.config(['jsoneditorOptionsProvider', function (jsoneditorOptionsProvider) {
            jsoneditorOptionsProvider.setOptions({
                name: 'root',
                modes: ['text', 'tree']
            });
        }]);

        app.controller('MyCtrl', function ($scope) {

            $scope.options = {
                name: 'this'
            };

            $scope.json = {
                firstName: 'Shaun',
                lastName: 'Xu',
                skills: [
                    'C#',
                    'JavaScript'
                ],
                roles: [
                    'dev',
                    'speaker'
                ]
            };
        });
    </script>
</body>

</html>

And you can find the module I created on GitHub.

 

Hope this helps,

Shaun
