Electron, Win10, UWP

I have been following Microsoft's adventures with HTML5/JavaScript based desktop UIs for a while – I actually participated in the journey while working for Microsoft back in 2003, on a POC project codenamed IceFace that was doing something akin to websockets (using a proprietary ActiveX control and doing low level TCP from it) and something akin to an HTA application. In recent years I watched them announce WinRT with HTML5/JavaScript as one of its programming models; watched with awe as they failed to penetrate the world with WinJS; then watched UWP keep HTML5/JavaScript as one of the supported technologies. So when I heard last year that they were tackling this area again, I didn't have high hopes.

 

I was wrong. They actually listened to people – yes, they misinterpreted what people wanted the first few (?) times, and lost their expected leadership in the desktop app segment (which was quickly eaten into by HTML5 hybrid applications), but at last they stopped fighting the inevitable.

 

Why do I say this? Alongside the bridges Microsoft announced and later open sourced (Centennial for desktop apps, the later cancelled bridge for Android applications, and the bridges for hosted web applications, for iOS applications and for Silverlight applications), Microsoft sneaked in an announcement: Electron applications for the Windows Store. Using Centennial technologies (registry and file system access is virtualized, the application runs at full speed, but it still runs in a sandbox) you are able to 'compile' your Electron application into a Windows Store AppX (either for the public application store or an internal one). Moreover, using NodeRT (which can be installed through npm, and does some crazy magic of generating the C++ code of Node.js native addons using C# reflection over WinMD metadata files) you are able to access the same WinRT/UWP APIs as any other native application would – see Showing Native Windows Notifications from Electron Using NodeRT for details.

 

This makes it easy for applications to interact with the native Windows experience, like setting the lock screen image from JavaScript (TypeScript) – in the snippet below the image is taken from the user's Pictures library:

 

const {KnownFolders} = require('windows.storage')
const {LockScreen} = require('windows.system.userprofile')

// for this example, take 'image.jpg' from the user's Pictures library
// (NodeRT exposes the WinRT members with camelCased names)
const myFolder = KnownFolders.picturesLibrary

myFolder.getFileAsync('image.jpg', (err, file) => {
  if (err) return console.error(err)
  LockScreen.setImageFileAsync(file, (err) => { })
})

 

Or popping up a toast using https://github.com/felixrieseberg/electron-windows-notifications:

 

const appId = 'electron-windows-notifications'
const {ToastNotification} = require('electron-windows-notifications')

let notification = new ToastNotification({
    appId: appId,
    template: `<toast><visual><binding template="ToastText01"><text id="1">%s</text></binding></visual></toast>`,
    strings: ['Hi!']
})

notification.on('dismissed', () => console.log('Dismissed!'))
notification.show()

 

So yes, Windows 10 migration may still be far off for some applications and firms, but I think we will be prepared. We have yet to see how they will tackle the issues around the Electron security sandbox, but I feel that this time Microsoft might simply be doing, naturally, what was expected of them for a long time – and we can already see others stepping in the same direction and getting on the Electron bandwagon.

Sphinx, Pygments, and more

New year, new pet projects. One of them is moving existing documentation to an easier to manage, easier to use format. I looked around at what is being used internally right now, and saw many tools, from Doxygen to Sandcastle. One of the tools that caught my attention was Sphinx. Being a Python tool, it runs on both Windows and Linux, and looking around a little more I found support for editing in VSCode – I'm in.

 

So, what is Sphinx? Sphinx is a tool that makes it easy to create intelligent and beautiful documentation, written by Georg Brandl and licensed under the BSD license. It was originally created for the Python documentation, and it has excellent facilities for the documentation of software projects in a range of languages.

 

So I gave it a try, and ended up setting up something similar to GitHub Pages – I send a PR to a particular internal repo, and when it gets merged, it kicks off a build which automatically uploads the output as the new content.

 

Locally, I set up the following in my tasks.json:

 

{
    "version": "0.1.0",
    "command": "python",
    "isShellCommand": true,
    "args": [ "sphinx-build", "-b", "html", "-d", "build/doctrees", ".\\src\\sphinx\\source", ".\\doc"],
    "showOutput": "always"
}

 

The next thing was to set up my conf.py; I picked a few plugins I thought I'd use:

 

extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.intersphinx',
    'sphinx.ext.viewcode',
    # 'sphinx.ext.autosectionlabel' depends on a newer sphinx than I have:
    # http://www.sphinx-doc.org/en/1.5.1/ext/autosectionlabel.html
    'sphinx.ext.autosummary',
    # http://www.sphinx-doc.org/en/1.5.1/ext/autosummary.html
    'sphinx.ext.todo'
    # http://www.sphinx-doc.org/en/1.5.1/ext/todo.html
]

 

I then started playing around with the RST syntax, and very quickly figured out that there is good support for Python code coloring (no surprise there), but even when I set .. code:: csharp it wasn't being recognized.

 

So, time to write my first Sphinx plugin, in _ext/csharplexer.py:

 

def setup(app):
    # register the Pygments C# lexer under the name used by `.. code:: csharp`
    from pygments.lexers import CSharpLexer
    app.add_lexer('csharp', CSharpLexer())

 

then I can add 'csharplexer' to the extensions list (after making sure the _ext folder is on the path in conf.py, e.g. sys.path.append(os.path.abspath('_ext'))).

 

Next, I started looking at making the rendering a little nicer – I'll likely have longer listings as part of the docs, and I wanted a syntax for collapsible toggles. So, time for the next extension – a toggle. Taking the following source:

 

.. container:: toggle

    .. container:: header

        **Example to show how to add unitycontainer**

    .. code-block:: csharp
        :linenos:

        Assert(true);  // OK
        var i = 1;
        Console.WriteLine("Hello World!");

 

I could just add _static/custom.css:

 

.toggle .header {
    display: block;
    clear: both;
    cursor: pointer;
}

.toggle .header:after {
    content: " ▼";
}

.toggle .header.open:after {
    content: " ▲";
}

 

and _templates/page.html:

 

{% extends "!page.html" %}

{% set css_files = css_files + ["_static/custom.css"] %}

{% block footer %}
<script type="text/javascript">
    $(document).ready(function() {
        $(".toggle > *").hide();
        $(".toggle .header").show();
        $(".toggle .header").click(function() {
            $(this).parent().children().not(".header").toggle(400);
            $(this).parent().children(".header").toggleClass("open");
        })
    });
</script>
{% endblock %}

 

Which resulted in something I liked 🙂

 

To be continued – next time I'll try to get viewcode working.

32 bit vs 64, revisited (again)

I posted previously about 32 bit vs 64 bit through the magnifying glass of .NET – the good news is that it's now high time to scrap all those results and revisit the question. The reason is that one of the changes between .NET 4.5.2 and 4.6 (and therefore 4.6.1) is the introduction of a new JITter (RyuJIT), which should make each of us carefully revisit this question.

 

But let's not go that quickly; what is the problem we are trying to solve?

 

32 bit vs 64 bit

 

“I’m in .NET, why should I be interested in 32 bit vs 64 bit? Isn’t .NET bit agnostic?”

 

Yes, .NET itself is agnostic; however, some of the libraries and technologies you might use might not be. Think about technologies like P/Invoke, COM Interop, unsafe code, marshaling, serialization, managed C++, … So yes, if you happen to have 100% type-safe managed code, you can just copy your application from a 32 bit system to a 64 bit system and it will "just run" successfully under the 64 bit CLR. However, you are likely using some of the technologies just mentioned, so you should do your homework and investigate whether your code depends on the bit length. Be aware that unlike C++, .NET only changes the size of pointers (IntPtr) and not of the built-in value types (e.g. int stays the same size). So moving between the 32 and 64 bit worlds results either in no changes or in a set of changes related to pointers, third party libraries, marshaling, serialization and more; you can use System.IntPtr.Size and System.Reflection.Module.GetPEKind to determine the current bit length and/or to query a deployed assembly for platform affinity.
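As a quick illustration – a minimal sketch of checking these at runtime (the class name is just for the example, and Environment.Is64BitProcess/Is64BitOperatingSystem are extra convenience helpers next to the two APIs mentioned above):

using System;
using System.Reflection;

class BitnessCheck
{
    static void Main()
    {
        // Pointer size follows the process bitness...
        Console.WriteLine($"IntPtr.Size: {IntPtr.Size}");   // 4 on 32 bit, 8 on 64 bit
        // ...but built-in value types keep their size.
        Console.WriteLine($"sizeof(int): {sizeof(int)}");   // always 4
        Console.WriteLine($"64 bit process: {Environment.Is64BitProcess}");
        Console.WriteLine($"64 bit OS:      {Environment.Is64BitOperatingSystem}");

        // Query an assembly for platform affinity (AnyCPU, x86 required, x64, ...).
        PortableExecutableKinds peKind;
        ImageFileMachine machine;
        typeof(BitnessCheck).Module.GetPEKind(out peKind, out machine);
        Console.WriteLine($"PEKind: {peKind}, Machine: {machine}");
    }
}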

 

Why 64 bit? Actually, why 32 bit?

 

What does 64 bit allow you to do? Addressing (not necessarily accessing) a bigger chunk of memory. 32 bit applications are inherently (because of the pointers they use) limited to a 2 GB slice of memory by default; 64 bit applications don't have this limitation.

 

So, does that mean I should just specify that I want 64 bit and that's it? I'd have more memory and it would be faster? Actually, not necessarily. 64 bit pointers occupy more memory. Cache lines in the processor get evicted more often. The stack becomes bigger. Your application will likely (mileage may vary) occupy more memory, and there is a chance (mileage will vary) that it will perform worse – despite the fact that running 32 bit code on a 64 bit OS involves the WOW64 subsystem, which has its own performance hit.
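A rough way to see the reference overhead for yourself – a minimal, unscientific sketch that simply compares the GC heap size before and after allocating an array of object references:

using System;

class ReferenceOverhead
{
    static void Main()
    {
        // Each slot of an object[] is one managed reference, i.e. IntPtr.Size bytes:
        // roughly 4 MB on 32 bit and 8 MB on 64 bit for the same one million slots.
        long before = GC.GetTotalMemory(true);
        var refs = new object[1000000];
        long after = GC.GetTotalMemory(true);
        Console.WriteLine($"{after - before} bytes for {refs.Length} references ({IntPtr.Size} bytes each)");
    }
}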

 

So should I not move to 64 bit? You should measure; although, because of what was explained above, that might not be trivial, and you might not want to put the effort into it right now.

 

Why is this a topic now?

 

With .NET 4.6 a new JITter was introduced that is a significant rewrite of the existing one (and it caused some uproar when, just after the .NET 4.6 release, a problem with a tail call optimization caused issues). It is actually optimized to bring 64 bit nirvana to the masses by covering more use cases with SIMD and SSE. Yes, I'm going to talk about synthetic microbenchmarks here. Synthetic microbenchmarking is evil and you shouldn't trust any of the results below – rather, test your own code; mileage will vary.
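RyuJIT is also what hardware accelerates the explicit SIMD types in System.Numerics; a minimal sketch, assuming the System.Numerics.Vectors package on .NET 4.6 and a 64 bit build (the array length is chosen to be a multiple of Vector<float>.Count):

using System;
using System.Numerics;   // System.Numerics.Vectors NuGet package

class SimdSketch
{
    static void Main()
    {
        // True only when the JIT maps Vector<T> onto SSE/AVX registers (RyuJIT, 64 bit).
        Console.WriteLine($"Hardware accelerated: {Vector.IsHardwareAccelerated}");

        var a = new float[1024];
        var b = new float[1024];
        var sum = new float[1024];

        // Add Vector<float>.Count elements (typically 4 or 8) per iteration.
        for (int i = 0; i < a.Length; i += Vector<float>.Count)
        {
            var va = new Vector<float>(a, i);
            var vb = new Vector<float>(b, i);
            (va + vb).CopyTo(sum, i);
        }
    }
}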

 

There are many use cases – matrix multiplication, simple floating point arithmetic and more – where there is a significant speedup; we are talking 4-5x (due to better usage of registers and opcodes, better coalescing of arithmetic instructions and reordering of side effect free code). However, there are other use cases – just calling a static method, or calling a virtual method – that might slow down by the same 4-5x factor.
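If you want to poke at the call overhead claim yourself, here is a naive sketch of the kind of harness I mean – build it once as x86 and once as x64 on 4.6.1 and compare the numbers; the type and method names are made up for the example, and a serious benchmark would need far more care (statistics, inlining control, isolation):

using System;
using System.Diagnostics;
using System.Runtime.CompilerServices;

class CallOverhead
{
    [MethodImpl(MethodImplOptions.NoInlining)]
    static int StaticAdd(int x) => x + 1;

    class Base { public virtual int VirtualAdd(int x) => x + 1; }

    static void Main()
    {
        const int N = 100000000;
        var obj = new Base();
        int sink = 0;

        // Warm up both paths so JIT compilation is not part of the measurement.
        for (int i = 0; i < 1000; i++) { sink += StaticAdd(i); sink += obj.VirtualAdd(i); }

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++) sink += StaticAdd(i);
        Console.WriteLine($"static calls:  {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < N; i++) sink += obj.VirtualAdd(i);
        Console.WriteLine($"virtual calls: {sw.ElapsedMilliseconds} ms");

        Console.WriteLine(sink); // keep the results observable so the loops are not optimized away
    }
}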

 

Conclusion

 

Don't believe any of the results above – please measure yourself, and feel free to leave a comment below on whether you saw any performance improvement using .NET 4.6.1 and 64 bit over your 32 bit application. Also, if you are going over 2 GB of memory usage – is it possible that your application should be restructured so it doesn't hold all that data on the client side? Perhaps it's time to revisit the client-server interaction pattern.