Console Tabs Coming to Windows?

I was overjoyed when I first heard that Microsoft would be improving the console in Windows 10 (yeah, really) and we’ve had a steady stream of updates since then – including a load of improvements in the Creators Update.

All the updates we’ve had so far have been great and all… but I’ve been begging for tab support in the console. And yes, I voted for it on UserVoice!

This morning I saw a couple of interesting posts from Windows Central and MSPoweruser with reports of a “Tabbed Shell” interface coming to Windows 10 in a future update.

Obviously my nerdy mind immediately went to the command line. (See above for my mock-up of what it might look like.)

It suddenly becomes super-obvious that having tab support at the OS level makes so much sense. From an engineering point of view there is no point in having multiple teams working on their own implementation of tabs in Windows.

When something crashes you don’t want one tab to take out all of the others. With support built directly into the shell you can ensure stability and consistency across all applications.

Hopefully we’ll find out more at the Build conference next month… and hopefully I’ll be using PowerShell in a tabbed interface in the not too distant future!

Note: yes, I know there are apps that provide tabbed consoles in Windows. I’m really interested in Hyper, but it doesn’t feel ready for everyday use yet.

Surface Book

I usually push my computers pretty hard until they simply have to be replaced. Earlier this year it was time for my ageing Surface Pro 2 to make way for a shiny new Surface.

But which one?

There was no doubt I was going to get another Microsoft Surface, but it wasn’t until I started using a Surface Book at work that I decided to get myself “the ultimate laptop” rather than “the tablet that can replace your laptop”.


Ever since I first started using the original Surface I was convinced I would keep buying tablets in the Surface Pro line. I still believe they are the most forward-looking form factor, but the stability of the Surface Book’s laptop base has allowed me to get more done with the computer on my lap, rather than only feeling productive at a desk.

I decided it was worth the switch for now… but maybe in the not too distant future the Surface line will let you switch between whichever kind of base you want to use at any time, rather than making you choose between the Pro and Book lines. Why have a different tablet component for each? I guess that’s a discussion for another time, but I still very firmly believe in the tablet form factor.

Flexible computing

I am fortunate enough to use the same model of Surface Book in my day job as well as having my own for personal use. Thanks to the dock I can plug either of them into external devices really easily: power, network, keyboard, monitor, and mouse are all provided by one cable that can be used with either my personal or work computers. Very handy.


Most of the software development I did on my Surface Pro 2 was performed while it was docked, so I still have the same experience with the full-size Sculpt Ergonomic Keyboard and Mouse set that I was using before, and I’m still using the same ultra-wide monitor too. (I’m actually considering making some changes at my desk, but I’m not sure what yet.)

The real benefit of the Surface Book is that I feel just as productive when I am away from my desk. It is much larger than the Surface Pro 2 but it still feels light enough that I am not carrying around something huge like my old Dell workstation.

The biggest reason for this is that the Surface Book is primarily a laptop rather than a tablet. While Star Trek has taught me that tablets are the future, decades of history have shown me that laptops are the best form factor for getting things done on the move. Thankfully the Surface Book isn’t just a laptop: you can remove the screen and use it as a fully functional tablet too.

As almost all of the electronics are in the screen itself, the device would be top-heavy if it used a standard hinge. Microsoft’s solution to this problem is the Surface Book’s most striking feature: the dynamic fulcrum hinge.


The way the hinge closes leaves some space between the two sections, and I remember people questioning whether this was a good idea. Most arguments against the hinge centred on the supposition that loose items stored in a bag might get between the screen and the keyboard.

I am a sane human being… so I never put anything in the same bag compartment as the Surface Book itself.

While the gap may look striking in photographs, it very quickly becomes normal. In fact the last laptop I owned had an issue where the keys would touch the screen and would regularly need cleaning because of it.

Keyboard and touchpad

The backlit keyboard is great to type on, thanks both to the stability of the base and the overall feel of the metal keys. They’re raised from the base thanks to the aforementioned hinge.

The touchpad is also pretty amazing, certainly on par with Apple’s MacBook and light-years ahead of the fabric touchpad I was using on the Surface Pro 2.

Surface Keyboard

The gestures for switching between desktops have really changed the way I use Windows, and it all works together to make using the Surface Book as a laptop a really good experience.

The combination of the impressive keyboard and multi-touch touchpad has enabled me to be more productive while hot-desking and moving between meetings too.

Touch screen and pen

I was never going to buy a laptop that didn’t have a touch screen, and Microsoft was never going to make a Surface without one either.

Honestly I was kind of waiting for OLED technology to make its way to the Surface line, but after using a Surface Book I realised that the screen was so good it didn’t matter. (OLED isn’t ready yet either, apparently!)

The step up from my previous device is substantial, and I love how crisp everything looks.

Here you can see the difference between the 1920 × 1080 @ 150% desktop of the Surface Pro 2 compared to the 3000 × 2000 @ 200% desktop of the Surface Book.


These are the default settings and I’ve seen people tweak the settings to get larger working areas. I find 100% too small, but 150% seems okay. Either way there are a lot more pixels to work with and the aspect ratio is a lot more useful.
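Out of curiosity, the working-area difference is easy to quantify: Windows divides the physical resolution by the scale factor to get the logical desktop size. A quick back-of-the-envelope check (any POSIX shell will do; the figures are just the ones quoted above):

```shell
# logical size = physical pixels * 100 / scale percentage
echo "Surface Pro 2: $((1920 * 100 / 150)) x $((1080 * 100 / 150)) effective at 150%"
echo "Surface Book:  $((3000 * 100 / 200)) x $((2000 * 100 / 200)) effective at 200%"
```

So even at the higher scale factor, the Surface Book ends up with a 1500 × 1000 logical desktop against the Surface Pro 2’s 1280 × 720 – more room in both directions, plus the squarer 3:2 aspect ratio.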

Surface Pen

The Surface Pen was updated for the Surface Pro 4 and the Surface Book, but the technology inside is largely the same as was used on the Surface Pro 3.

It sports a different technology from the Wacom digitiser used on the original Surface Pro and the Surface Pro 2, which means my existing pens do not work. This hasn’t been much of an issue for me, as I think the pen that comes with the Surface Book feels superior to the Wacom pens I was using before.

As the button on the end works over Bluetooth, I must be careful not to confuse which pen is paired to which Surface. Amazingly I haven’t taken the wrong one to work… yet.

Windows Hello

One of the best things about the Surface Book is the way it can authenticate you by using an infrared camera.

The difference between the technology in the Surface Book and the Lumia 950 is night and day – using the Surface Book is absolutely fantastic and I rarely have to move just to be in the right position in normal use.

(For those rare times it is confused, you can always use the Jedi mind trick to get it to try again.)


Specifications and storage

One complaint is that the SD card slot is a bit dumb – like the MacBook Pro, it takes a full-size card which doesn’t go all the way in.

Obviously I don’t use full size SD cards (as it is 2016!) but I do use microSD cards.

I’ve got a little BaseQi adaptor in the side of the device and I highly recommend this to anyone who has a Surface Book. I tend to use this microSD card for things like ISO files – but no actual data as it is not encrypted like the built in SSD.


Finally, the device itself is super powerful and brilliant as a developer machine. I have the high-spec version, meaning there is an Nvidia GPU in the base, and lots of disk storage.

  • CPU: 6th Generation Intel Core i7-6600U CPU @ 2.60GHz
  • RAM: 16GB DDR3
  • Storage: 256GB SSD
  • Graphics: Intel HD graphics 520 and NVIDIA GeForce GPU with 1GB GDDR5 memory

Highlights from Build 2016


Even though I have never attended a Microsoft Build conference in person I always learn so much from them.

Every year there are new platforms to try, lots of documentation to read, and many presentations and recorded sessions to watch.

I still have a lot of videos to watch, but here are some of the top announcements from Build 2016 that matter to me the most as a developer.

Windows 10 as the best OS for Developers

A number of new features coming to Windows 10 in the “Anniversary” update were shown in the day-one keynote, and even more were shown in sessions throughout the conference. Solid improvements to inking, biometrics, and the Action Center were all well received.

Windows Ink

Many of the features shown help fix minor annoyances in the system. For example, pressing on a live tile showing a preview of a news article can now take you directly to it, and notifications dismissed on the PC or tablet will automatically get dismissed on the phone too.

One of the most exciting new features was the addition of Bash (on Ubuntu) on Windows which is both technically very interesting and extremely useful for many development workflows. The new Ubuntu subsystem will allow any (command-line) Linux application to run natively on Windows. This instantly unlocks a massive amount of tools and utilities for developers, making common scenarios significantly easier from Windows.
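To give a flavour of what this unlocks, here’s the kind of everyday pipeline (my own arbitrary example, not one from the keynote) that should now run unmodified inside the Ubuntu subsystem:

```shell
# Count duplicate lines using standard Unix tools – the sort of one-liner
# that previously needed Cygwin, MSYS, or a full Linux VM on Windows
printf 'apple\nbanana\napple\ncherry\n' | sort | uniq -c | sort -rn
```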

Bash on Windows

As a huge fan of command line interfaces I’m going to go into this in more detail in a future article – but essentially Microsoft are positioning Windows to be the ultimate developer platform, no matter what operating systems you use for your solutions.

Azure is growing up with more data centres and services

Microsoft would prefer you to use Azure when you deploy your applications though, and the day-two keynote showed that it is still serious about the cloud.

Improvements which interested me the most included Azure Functions, Service Fabric, Containers, DocumentDB, and much, much more.


Azure is the future of Microsoft, and by the numbers they are going strong. They’re expanding their data centres and really betting big on the cloud. This is no surprise to Microsoft watchers, but it’s good to see steady improvements here – many of which I will use.

Visual Studio keeps getting better

I spend absolutely huge amounts of time in Visual Studio so any improvements here have a very positive effect on my productivity.

Visual Studio 2015 Update 2 was released (with lots of improvements) and an early preview version of Visual Studio vNext was also shown. I’ve tried both and they’re definitely going in the right direction for me.

Visual Studio

I’m especially looking forward to some of the improvements coming to the Visual Studio installation experience. This should make setting up new development environments much faster, and side-by-side installations mean there’s much less risk when installing previews.

App development for Windows, iOS, Android

The mobile app development story from Microsoft is stronger than it ever has been. This year brings a number of improvements to the Universal Windows Platform (UWP) itself, and a more integrated store experience which now includes the apps on the Xbox One and HoloLens.

The Desktop App Converter lets you wrap up existing Win32 and .NET apps into UWP packages, allowing access to new features like UWP APIs – including Live Tiles. Even though I don’t currently develop any Win32 or .NET applications that I want to put in the store, this is an important step and I’m looking forward to the benefits of this as an app user.


For targeting non-Windows devices, the Xamarin platform is now the obvious choice. Having recently purchased Xamarin (and their amazing talent), Microsoft has decided to make it available at no extra charge with Visual Studio – and that includes shipping it with the free Community edition. Very cool.

The combination of UWP and Xamarin means I can directly apply my C# and .NET skills to making applications for a wide range of platforms, sharing many code components. It’s really coming together nicely.

.NET and the continued move into Open Source

As well as making Xamarin’s development tools free to Visual Studio users, the folks over at Microsoft also announced their intention to open source the Xamarin SDK (including the runtime, the libraries, and command line tools), and give the governance of it over to the .NET Foundation.

Mono, the cross-platform and open-source sibling of the full .NET Framework, has also been re-licensed to be even more permissive and given to the .NET Foundation. (To be honest, I thought this was already the case!)

.NET Core, the future replacement of both the .NET Framework and Mono, also saw steady improvements – my favourite of which was official F# language support:

$ cd hellofs/
$ ls
$ dotnet new --lang f#
Created new F# project in /home/julian/hellofs.
$ # I can now dotnet restore and run this F# app using .NET Core!

The Future of Cortana and Conversation as a Platform

So far everything I have mentioned has been mostly around solid updates to existing platforms, but this year’s Build included a slightly different way of thinking about productivity with the idea of Conversation as a Platform.

Conversation as a Platform

The Microsoft Bot Framework provides templates for creating bots with C# and JavaScript, as well as connectors to simplify their interaction with services like Slack and Skype. When linked with the new Cognitive Services, these bots can understand natural language and perform tasks for the user.


The demonstration of talking to Cortana through Skype was very interesting – where essentially Cortana can act as a broker between the user and other bots on the Internet which can act as experts in their field. I found this very compelling, and something I can see myself using.

As this is a subject that interests me greatly, I’ll be writing more about it over the next week or so.

And everything else…

Of course, there’s no way I could summarise everything I looked at so I have skipped a number of cool announcements ranging from Microsoft Graph to HoloLens.

The hard-working folk over at Channel 9 have videos for many of the events and topics, so be sure to check them out if you’re interested. I’m very thankful that these videos are all made available for everyone to watch – I really enjoy them.

Using Docker with Hyper-V: the easy way

Docker is pretty cool, but there’s one thing that always gets in the way for me.

The default installer sucks.

Installing Docker using the Windows version of the Docker Toolbox puts multiple things on your computer:

  • The Docker command-line tools (hooray)
  • VirtualBox (boo)
  • Kitematic, a user interface (which only works with VirtualBox… so also boo?)

As I use Hyper-V as my virtualization platform of choice, VirtualBox is completely redundant. More than that, it’s an annoyance: the two are incompatible with each other.

Obviously this was no good for me, so after a bit of messing around I have found my preferred method for getting Docker up and running on my own workstations using Hyper-V.

As is always the case with these things, this may not be the best way for everyone. But if you’re interested, here’s what I did:

Note: the following assumes you have some knowledge of Windows and PowerShell in order to keep the instructions brief.

Setup Hyper-V

  • Ensure you have Windows 8.1 or 10 Professional
  • Ensure you have Hyper-V installed
  • Create an external switch called “External Switch” (if you don’t have one)

Get the binaries

  1. Get docker.exe client from:
  2. Get docker-machine.exe from:
  3. Put them in a new ~\Scripts\Docker folder

Update your PowerShell profile.ps1 file

Make sure you can get to your new commands either by defining aliases or by adding the folder to your PATH; both work:

Set-Alias docker            "~\Scripts\Docker\docker.exe"
Set-Alias docker-machine    "~\Scripts\Docker\docker-machine.exe"

And then add some variables to be used by default, like the network switch and memory allocation:

$env:HYPERV_VIRTUAL_SWITCH  = "External Switch"
$env:HYPERV_MEMORY          = 2048

Create and configure your machine

After this you should be able to create a new boot2docker VM on Hyper-V:

PS> docker-machine create -d hyperv default

Once that’s up and running you’ll need to set your environment for the Docker client:

PS> docker-machine env --shell=powershell default | invoke-expression

Try out a container

Working? Great! Try running something, like the new .NET Core container:

PS> docker run -it microsoft/dotnet:latest


Looks like there will be an even easier way to do this moving forward, thanks to an upcoming release of Docker for Windows which will use Hyper-V by default. (Yesssss!!!)

Find out more through this handy article from Scott Hanselman.

Interactive showdown: csi.exe versus fsi.exe

Visual Studio 2015 Update 1 brings with it a nice little utility called C# Interactive which you can access from the View menu, but we’re going to have a look at the command line version which you can run from the Developer Command Prompt for VS2015.

Using this tool you can quickly run C# in a REPL environment – a fast and convenient way to explore APIs and frameworks, and to script existing code from .NET assemblies in a non-compiled way.

While this is new to Visual Studio and C#, we have enjoyed this functionality for a long time with F#.

(Technically this has been around for a while, but it’s officially shipping now!)

I decided to take a look at the new csi.exe application, and compare it to how I already use fsi.exe and see if it’s enough to make me switch my default command line tool.

C# Interactive

For me the most important way I’d use C# Interactive is via the command line, so it’s important to know what it’s capable of, even though you may not need to use the advanced features right away.

To find out the current version and get a list of the command line options in C# Interactive, just add the /? switch and read the output:

PS> csi /?
Microsoft (R) Visual C# Interactive Compiler version
Copyright (C) Microsoft Corporation. All rights reserved.

Usage: csi [option] ... [script-file.csx] [script-argument] ...


/help Display this usage message (alternative form: /?)
/i Drop to REPL after executing the specified script.
/r:<file> Reference metadata from the specified assembly file (alternative form: /reference)
/r:<file list> Reference metadata from the specified assembly files (alternative form: /reference)
/lib:<path list> List of directories where to look for libraries specified by #r directive. (alternative forms: /libPath /libPaths)
/u:<namespace> Define global namespace using (alternative forms: /using, /usings, /import, /imports)
@<file> Read response file for more options
-- Indicates that the remaining arguments should not be treated as options.

From a first look, I can see that csi.exe has all of the command line options I really want in normal use – I especially find /i to be useful, but we’ll come to that shortly.

F# Interactive

F# Interactive has been around for a lot longer and is built on different technology under the hood, so there are more options here – but we can take a look by providing a similar -? switch:

PS> fsi -?
Microsoft (R) F# Interactive version 14.0.23413.0
Copyright (c) Microsoft Corporation. All Rights Reserved.

Usage: fsi.exe <options> [script.fsx [<arguments>]]...

Input Files

--use:<file> Use the given file on startup as initial input
--load:<file> #load the given file on startup
--reference:<file> Reference an assembly (Short form: -r)
-- ... Treat remaining arguments as command line arguments, accessed using fsi.CommandLineArgs

Code Generation

--debug[+|-] Emit debug information (Short form: -g)
--debug:{full|pdbonly} Specify debugging type: full, pdbonly. ('full' is the default and enables attaching a debugger to a running program).
--optimize[+|-] Enable optimizations (Short form: -O)
--tailcalls[+|-] Enable or disable tailcalls
--crossoptimize[+|-] Enable or disable cross-module optimizations

Errors and Warnings

--warnaserror[+|-] Report all warnings as errors
--warnaserror[+|-]:<warn;...> Report specific warnings as errors
--warn:<n> Set a warning level (0-5)
--nowarn:<warn;...> Disable specific warning messages
--warnon:<warn;...> Enable specific warnings that may be off by default
--consolecolors[+|-] Output warning and error messages in color


Language

--checked[+|-] Generate overflow checks
--define:<string> Define conditional compilation symbols (Short form: -d)
--mlcompatibility Ignore ML compatibility warnings


Miscellaneous

--nologo Suppress compiler copyright message
--help Display this usage message (Short form: -?)


Advanced

--codepage:<n> Specify the codepage used to read source files
--utf8output Output messages in UTF-8 encoding
--fullpaths Output messages with fully qualified paths
--lib:<dir;...> Specify a directory for the include path which is used to resolve source files and assemblies (Short form: -I)
--noframework Do not reference the default CLI assemblies by default
--exec Exit fsi after loading the files or running the .fsx script given on the command line
--gui[+|-] Execute interactions on a Windows Forms event loop (on by default)
--quiet Suppress fsi writing to stdout
--readline[+|-] Support TAB completion in console (on by default)
--quotations-debug[+|-] Emit debug information in quotations
--shadowcopyreferences[+|-] Prevents references from being locked by the F# Interactive process

As you can see there are a lot more options for F#, but many of them are not needed for everyday use.

Quick Interactive Use

It’s fairly common that I use F# Interactive just to test out how part of the Framework behaves.

In this instance, I’ll use the HttpUtility.HtmlEncode method to see what output I get when one of my emoticons is encoded into HTML-friendly characters.

PS> fsi

Microsoft (R) F# Interactive version 14.0.23413.0
Copyright (c) Microsoft Corporation. All Rights Reserved.

For help type #help;;

> open System.Web;;
> let encode s = HttpUtility.HtmlEncode(s);;

val encode : s:string -> string

> encode "<(>_<)>";;
val it : string = "&lt;(&gt;_&lt;)&gt;"

This is how I’d do it in F# – we could call the HtmlEncode function directly, but creating functions is so easy with F# that we might as well shorten the name to make it nice and easy if we need to run it multiple times.

The function encode actually returns a string rather than printing it to the screen, but F# binds that result to a special identifier called it, which is used for displaying the value of the last expression on the screen. It’s handy, and you’ll see why.

Alright so here’s my first attempt to do something similar in C# Interactive.

PS> csi
Microsoft (R) Visual C# Interactive Compiler version
Copyright (C) Microsoft Corporation. All rights reserved.

Type "#help" for more information.
> using System.Web;
> HttpUtility.HtmlEncode("<(>_<)>");
(1,1): error CS0103: The name 'HttpUtility' does not exist in the current context

Ah. HttpUtility is missing because the classes from the System.Web.dll assembly haven’t been loaded. I didn’t notice on the first line because of the way namespaces work – the namespace exists, but not the class we want. No problem, we just reference the assembly using #r – you reference assemblies this way in F# too!

> #r "System.Web"
> HttpUtility.HtmlEncode("<(>_<)>");

This worked, and we have access to the static HttpUtility class and the HtmlEncode method – however, the output has not been displayed because C# Interactive doesn’t have the special it value that F# had.

I didn’t realise this at first, but in the absence of F#’s it value, the C# Interactive prompt introduces a slightly different syntax for when you want to see a result.

> HttpUtility.HtmlEncode("<(>_<)>");
> HttpUtility.HtmlEncode("<(>_<)>")

Notice the difference the semicolon makes? This is important, and something I missed when first trying out C# Interactive. Omitting the semicolon would normally be invalid C#, but here it’s a great way to view the output – as if you were typing into the Immediate Window in Visual Studio.

Let’s also create a function using normal C# syntax so that we don’t have so much typing to do. Notice that I’m going to call this function without the semicolon so that I can see the output.

> string encode(string s) { return HttpUtility.HtmlEncode(s); }
> encode("<(>_<)>")

Loading Scripts

Let’s keep things simple: we’ll take the functions we just created in each language and put them in script files so they can be loaded when we start an interactive session.

First of all, let’s do it with F#. Here’s the content of encode.fsx:

open System.Web
let encode s = HttpUtility.HtmlEncode(s)

And then we can run it from the command line using the --use switch. This will drop us into an interactive prompt after the code file has been loaded.

PS> fsi --use:.\encode.fsx

Microsoft (R) F# Interactive version 14.0.23413.0
Copyright (c) Microsoft Corporation. All Rights Reserved.

For help type #help;;

val encode : s:string -> string

> encode "<(>_<)>";;
val it : string = "&lt;(&gt;_&lt;)&gt;"
> encode "<(^o^)>";;
val it : string = "&lt;(^o^)&gt;"
> encode "<(T_T)>";;
val it : string = "&lt;(T_T)&gt;"

Not bad at all. So let’s do the same thing with C# Interactive, using a file called encode.csx:

#r "System.Web"
using System.Web;
string encode(string s)
{
    return HttpUtility.HtmlEncode(s);
}

I love that they used a similar extension! And again, we can run the code file and then get an interactive prompt as above using the /i switch.

PS> csi /i .\encode.csx
> encode("<(>_<)>")
"&lt;(&gt;_&lt;)&gt;"
> encode("<(^o^)>")
"&lt;(^o^)&gt;"
> encode("<(T_T)>")
"&lt;(T_T)&gt;"

We have the same end result, though like before the actual functions behave slightly differently. C# Interactive gives a cleaner output here, though you can always clean up the F# Interactive prompt a little bit by using the --nologo switch.

Use Inside PowerShell

Because I want to get access to both of these utilities as fast as possible, I have added a few lines to my PowerShell profile which will ease their use.

I’ve mentioned doing this kind of thing before – and I highly recommend that developers using Windows spend a good amount of time learning PowerShell – but here’s a little snippet that may be useful.

$PROGFILES32 = "C:\Program Files (x86)"
# create an alias to the full path of the executable
Set-Alias fsi "$PROGFILES32\Microsoft SDKs\F#\4.0\Framework\v4.0\fsi.exe"
Set-Alias csi "$PROGFILES32\MSBuild\14.0\Bin\amd64\csi.exe"
# add helpers which include common switches
function fsu ($fsx) { fsi --nologo --use:$fsx }
function csu ($csx) { csi /i $csx }

Adding this to my profile means I can just run them using fsu encode.fsx or csu encode.csx respectively. Very easy.