Using Docker with Hyper-V: the easy way

Docker is pretty cool, but there’s one thing that always gets in the way for me.

The default installer sucks.

Installing Docker using the Windows version of the Docker Toolbox puts multiple things on your computer:

  • The Docker command-line tools (hooray)
  • VirtualBox (boo)
  • Kitematic, a user interface (only works with VirtualBox… so also boo?)

As I use Hyper-V as my virtualization platform of choice, VirtualBox is completely redundant. More than that, it’s an annoyance: the two are incompatible with each other.

Obviously this was no good for me, so after a bit of messing around I have found my preferred method for getting Docker up and running on my own workstations using Hyper-V.

As is always the case with these things, this may not be the best way for everyone. But if you’re interested, here’s what I did:

Note: the following assumes you have some knowledge of Windows and PowerShell in order to keep the instructions brief.

Setup Hyper-V

  • Ensure you have Windows 8.1 or 10 Professional
  • Ensure you have Hyper-V installed
  • Create an external switch called “External Switch” (if you don’t have one) – see the PowerShell sketch below
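
If you’re starting from scratch, here’s roughly what those steps look like in an elevated PowerShell prompt – the adapter name is just an example, so use whatever Get-NetAdapter reports for your physical NIC:

# Enable the Hyper-V feature (needs a reboot afterwards)
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

# Create the external switch, bound to your physical adapter
New-VMSwitch -Name "External Switch" -NetAdapterName "Ethernet" -AllowManagementOS $true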

Get the binaries

  1. Get docker.exe client from: https://github.com/docker/docker/releases
  2. Get docker-machine.exe from: https://github.com/docker/machine/releases
  3. Put them in a new ~\Scripts\Docker folder (see the quick check below)
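
Once you’ve saved them into place, a quick check from PowerShell confirms the binaries respond (paths assume the folder above):

& "$HOME\Scripts\Docker\docker.exe" --version
& "$HOME\Scripts\Docker\docker-machine.exe" --version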

Update your PowerShell profile.ps1 file

Make sure you can get to your new commands by using an alias or by adding the folder to your PATH – either works:

Set-Alias docker            "~\Scripts\Docker\docker.exe"
Set-Alias docker-machine    "~\Scripts\Docker\docker-machine.exe"
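
Or, if you’d rather go the PATH route instead of aliases, a single line in the same profile does the job:

$env:PATH += ";$HOME\Scripts\Docker"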

And then add some variables to be used by default, like the network switch and memory allocation:

$env:HYPERV_VIRTUAL_SWITCH  = "External Switch"
$env:HYPERV_MEMORY          = 2048

Create and configure your machine

After this you should be able to create a new boot2docker VM on Hyper-V:

PS> docker-machine create -d hyperv default
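
If you’d rather not rely on the environment variables, the hyperv driver accepts the same settings as flags – something along these lines:

PS> docker-machine create -d hyperv --hyperv-virtual-switch "External Switch" --hyperv-memory 2048 default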

Once that’s up and running you’ll need to set your environment for the Docker client:

PS> docker-machine env --shell=powershell default | invoke-expression
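
All that pipeline does is set a handful of environment variables so the Docker client knows where the VM’s daemon lives – roughly equivalent to doing this by hand (your VM’s address will differ):

$env:DOCKER_TLS_VERIFY   = "1"
$env:DOCKER_HOST         = "tcp://192.168.0.10:2376"
$env:DOCKER_CERT_PATH    = "$HOME\.docker\machine\machines\default"
$env:DOCKER_MACHINE_NAME = "default"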

Try out a container

Working? Great! Try running something, like the new .NET Core container:

PS> docker run -it microsoft/dotnet:latest
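
If you want a quick sanity check before pulling a bigger image, these do the trick too:

PS> docker run hello-world
PS> docker ps -a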

Update

Looks like there will be an even easier way to do this moving forward, thanks to an upcoming release of Docker for Windows which will use Hyper-V by default. (Yesssss!!!)

Find out more through this handy article from Scott Hanselman.

Interactive showdown: csi.exe versus fsi.exe

Visual Studio 2015 Update 1 brings with it a nice little utility called C# Interactive which you can access from the View menu, but we’re going to have a look at the command line version which you can run from the Developer Command Prompt for VS2015.

Using this tool you can quickly run C# using a REPL environment – which is a fast and convenient way to explore APIs, frameworks, and script existing code from .NET assemblies in a non-compiled way.

While this is new to Visual Studio and C#, we have enjoyed this functionality for a long time with F#.

(Technically this has been around for a while, but it’s officially shipping now!)

I decided to take a look at the new csi.exe application, and compare it to how I already use fsi.exe and see if it’s enough to make me switch my default command line tool.

C# Interactive

For me the most important way I’d use C# Interactive is via the command line, so it’s worth knowing what it’s capable of, even though you may not need to use the advanced features right away.

To find out the current version and get a list of the command line options in C# Interactive, just add the /? switch and read the output:

PS> csi /?
Microsoft (R) Visual C# Interactive Compiler version 1.2.0.51106
Copyright (C) Microsoft Corporation. All rights reserved.

Usage: csi [option] ... [script-file.csx] [script-argument] ...

Options

/help Display this usage message (alternative form: /?)
/i Drop to REPL after executing the specified script.
/r:<file> Reference metadata from the specified assembly file (alternative form: /reference)
/r:<file list> Reference metadata from the specified assembly files (alternative form: /reference)
/lib:<path list> List of directories where to look for libraries specified by #r directive. (alternative forms: /libPath /libPaths)
/u:<namespace> Define global namespace using (alternative forms: /using, /usings, /import, /imports)
@<file> Read response file for more options
-- Indicates that the remaining arguments should not be treated as options.

From a first look, I can see that csi.exe has all of the command line options I really want in normal use – I especially find /i to be useful – but we’ll come to that shortly.

F# Interactive

F# Interactive has been around for a lot longer, and is built on different technology under the hood – so there are more options going on here, but we can take a look by providing a similar -? switch:

PS> fsi -?
Microsoft (R) F# Interactive version 14.0.23413.0
Copyright (c) Microsoft Corporation. All Rights Reserved.

Usage: fsi.exe <options> [script.fsx [<arguments>]]...

Input Files

--use:<file> Use the given file on startup as initial input
--load:<file> #load the given file on startup
--reference:<file> Reference an assembly (Short form: -r)
-- ... Treat remaining arguments as command line arguments, accessed using fsi.CommandLineArgs

Code Generation

--debug[+|-] Emit debug information (Short form: -g)
--debug:{full|pdbonly} Specify debugging type: full, pdbonly. ('full' is the default and enables attaching a debugger to a running program).
--optimize[+|-] Enable optimizations (Short form: -O)
--tailcalls[+|-] Enable or disable tailcalls
--crossoptimize[+|-] Enable or disable cross-module optimizations

Errors and Warnings

--warnaserror[+|-] Report all warnings as errors
--warnaserror[+|-]:<warn;...> Report specific warnings as errors
--warn:<n> Set a warning level (0-5)
--nowarn:<warn;...> Disable specific warning messages
--warnon:<warn;...> Enable specific warnings that may be off by default
--consolecolors[+|-] Output warning and error messages in color

Language

--checked[+|-] Generate overflow checks
--define:<string> Define conditional compilation symbols (Short form: -d)
--mlcompatibility Ignore ML compatibility warnings

Miscellaneous

--nologo Suppress compiler copyright message
--help Display this usage message (Short form: -?)

Advanced

--codepage:<n> Specify the codepage used to read source files
--utf8output Output messages in UTF-8 encoding
--fullpaths Output messages with fully qualified paths
--lib:<dir;...> Specify a directory for the include path which is used to resolve source files and assemblies (Short form: -I)
--noframework Do not reference the default CLI assemblies by default
--exec Exit fsi after loading the files or running the .fsx script given on the command line
--gui[+|-] Execute interactions on a Windows Forms event loop (on by default)
--quiet Suppress fsi writing to stdout
--readline[+|-] Support TAB completion in console (on by default)
--quotations-debug[+|-] Emit debug information in quotations
--shadowcopyreferences[+|-] Prevents references from being locked by the F# Interactive process

As you can see there are a lot more options for F#, but many of them are not needed for everyday use.

Quick Interactive Use

It’s fairly common that I use F# Interactive just to test out how part of the Framework behaves.

In this instance, I’ll use the HttpUtility.HtmlEncode method to see what output I get when one of my emoticons is encoded into HTML-friendly characters.

PS> fsi

Microsoft (R) F# Interactive version 14.0.23413.0
Copyright (c) Microsoft Corporation. All Rights Reserved.

For help type #help;;

> open System.Web;;
> let encode s = HttpUtility.HtmlEncode(s);;

val encode : s:string -> string

> encode "<(>_<)>";;
val it : string = "&lt;(&gt;_&lt;)&gt;"
>

This is how I’d do it in F# – we could call the HtmlEncode function directly, but creating functions is so easy with F# that we might as well shorten the name to make it nice and easy if we need to run it multiple times.

The function encode actually returns a string rather than printing it to the screen, but F# assigns that result to a special identifier called it, which is used to display the value of the last expression on the screen. It’s handy, and you’ll see why.

Alright so here’s my first attempt to do something similar in C# Interactive.

PS> csi
Microsoft (R) Visual C# Interactive Compiler version 1.1.0.51109
Copyright (C) Microsoft Corporation. All rights reserved.

Type "#help" for more information.
> using System.Web;
> HttpUtility.HtmlEncode("<(>_<)>");
(1,1): error CS0103: The name 'HttpUtility' does not exist in the current context
>

Ah. HttpUtility is missing because it hasn’t loaded the classes from the System.Web.dll assembly. I didn’t notice on the first line because of the way namespaces work – the namespace exists, but not the class we want. No problem, we just reference it using #r – you reference assemblies this way in F# too!

> #r "System.Web"
> HttpUtility.HtmlEncode("<(>_<)>");
>

This worked and we have access to the static HttpUtility class and the HtmlEncode method – however the output has not been displayed to the screen because C# Interactive doesn’t have the special it value F# has.

I didn’t realise this at first, but in the absence of F#’s it value, the C# Interactive prompt introduces a slightly different syntax for when you want to see the value.

> HttpUtility.HtmlEncode("<(>_<)>");
> HttpUtility.HtmlEncode("<(>_<)>")
"&lt;(&gt;_&lt;)&gt;"
>

Notice the difference a semicolon makes? This is important, and something I missed when first trying out C# Interactive. Avoiding the semicolon would normally result in invalid C#, but this is a great way to view the output as if you’re typing it into the Immediate Window in Visual Studio.

Let’s also create a function using normal C# syntax so that we don’t have so much typing to do. Notice that I’m going to call this function without the semicolon so that I can see the output.

> string encode(string s) { return HttpUtility.HtmlEncode(s); }
> encode("<(>_<)>")
"&lt;(&gt;_&lt;)&gt;"
>

Loading Scripts

Let’s keep things simple: we’ll take the functions we just created in each language and create a script file so that they can be loaded up when we start an interactive session.

First of all, let’s do it with F#. Here’s the content of encode.fsx:

open System.Web
 
let encode s =
    HttpUtility.HtmlEncode(s)

And then we can run it from the command line using the --use switch. This will drop us into an interactive prompt after the code file has been loaded.

PS> fsi --use:.\encode.fsx

Microsoft (R) F# Interactive version 14.0.23413.0
Copyright (c) Microsoft Corporation. All Rights Reserved.

For help type #help;;

>
val encode : s:string -> string

> encode "<(>_<)>";;
val it : string = "&lt;(&gt;_&lt;)&gt;"
> encode "<(^o^)>";;
val it : string = "&lt;(^o^)&gt;"
> encode "<(T_T)>";;
val it : string = "&lt;(T_T)&gt;"
>

Not bad at all. So let’s do the same thing with the C# interactive, using a file called encode.csx:

#r "System.Web"
using System.Web;
 
string encode(string s)
{
    return HttpUtility.HtmlEncode(s);
}

I love that they used a similar extension! And again, we can run the code file and then get an interactive prompt as above using the /i switch.

PS> csi /i .\encode.csx
> encode("<(>_<)>");
&lt;(&gt;_&lt;)&gt;
> encode("<(^o^)>");
&lt;(^o^)&gt;
> encode("<(T_T)>");
&lt;(T_T)&gt;
>

We have the same end result, though like before the actual functions behave slightly differently. C# Interactive gives a cleaner output here, though you can always clean up the F# Interactive prompt a little bit by using the --nologo switch.

Use Inside PowerShell

Because I want to get access to both of these utilities as fast as possible, I have added a few lines to my PowerShell profile which will ease their use.

I’ve mentioned doing this kind of thing before – and I highly recommend that developers using Windows spend a good amount of time learning PowerShell – but here’s a little snippet that may be useful.

$PROGFILES32 = "C:\Program Files (x86)"
 
# create an alias to the full path of the executable
Set-Alias fsi "$PROGFILES32\Microsoft SDKs\F#\4.0\Framework\v4.0\fsi.exe"
Set-Alias csi "$PROGFILES32\MSBuild\14.0\Bin\amd64\csi.exe"
 
# add helpers which include common switches
function fsu ($fsx) { fsi --nologo --use:$fsx }
function csu ($csx) { csi /i $csx }

Adding this to my profile means I can just run them using fsu encode.fsx or csu encode.csx respectively. Very easy.

Windows Server 2016 for Developers

Windows Server

It’s not often I get excited about new versions of Windows Server. It has been a long time since I have professionally managed any servers or worked in any kind of IT environment. It’s also been a long time since I’ve had my own personal servers at home. At one point, I had five Windows Server 2003 boxes in an Active Directory domain!

As a developer, many of the changes coming in Windows Server 2016 have got me excited.

The things I care most about are servers which power cloud applications, not the traditional view of a back-office server for files and printers – something Windows has been associated with since the 1990s.

With this in mind, here are the top three technologies I am most interested in as a software development engineer and a solution architect.

1. Nested Virtualisation

Nested Virtualisation

Virtualisation has always been something I have been keen on, and Microsoft’s main platform for this is Hyper-V, a powerful server-based virtualisation platform which works on the client, server, and cloud.

Because Hyper-V uses a hypervisor to directly access virtualisation-enabled hardware, there has always been a limitation stopping you from running hypervisor based virtualisation inside a machine which is already virtualised. With the latest version of Hyper-V shipping with Windows Server 2016 (and Windows 10) you can actually nest these hypervisors inside each other – essentially letting you run a virtual machine inside a virtual machine.

I use a virtual machine hosted on Azure as a developer platform, so the ability to use virtualisation technologies (including Windows and Android emulators) inside of that virtual machine would be very handy. At the moment I have to run these tools locally on my physical hardware.

Currently, virtual machines need to be manually tweaked to enable the nested virtualisation – so we’re not quite at the stage where it is completely seamless, but being able to run a Windows 10 Mobile emulator inside of a Windows 10 desktop virtual machine running inside of Windows Server doesn’t seem too far fetched.
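
For reference, the manual tweak in the current previews boils down to exposing the virtualisation extensions to the guest while it’s switched off – something like this, where the VM name is just an example:

PS> Set-VMProcessor -VMName "DevVM" -ExposeVirtualizationExtensions $true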

2. Containers

Windows Containers

Container technology is similar to virtualisation, but rather than having the overhead of virtualising the whole machine, applications can be sandboxed into their own execution environment while continuing to share system resources, like the file system.

This means these sandboxed applications can be started much faster and the overheads are smaller, allowing much higher density.

Windows Server 2016 brings container technology to Windows applications and also allows an extra level of separation by offering Hyper-V containers as well.

You can see why nested virtualisation is important.

Containers aren’t new; Linux has had support for them for a while now, and the recent popularity of Docker has made this technology a fantastic option for developers to design their applications to work inside these containers, and then share them on Docker Hub.
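
The workflow itself is simple enough – as a rough sketch, building an image from a Dockerfile, running it, and pushing it to Docker Hub looks something like this (the image name is illustrative):

PS> docker build -t myaccount/myapp .
PS> docker run -d -p 8080:80 myaccount/myapp
PS> docker push myaccount/myapp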

Microsoft recently announced a partnership with Docker and you can find plenty of material from the folks in Redmond showing how the Docker tools work with Windows containers. It’s important that Microsoft get this right, as they don’t want to miss out on this important change in the way developers build and ship solutions.

With the ability to have a full VM separation running in Hyper-V containers, it’s quite possible that Linux could run on top of the Windows Server container system. A single management interface to run mixed containers? Sign me up.

For this to really take off, developers would need to be able to do this on their own machines. Right now Docker on Windows is a pain if you use Hyper-V, as it’s incompatible with the current version of the Docker Toolbox. Microsoft must be trying to fix this with their partnership, and it’s likely ‘Barcelona’ is part of this.

Windows is most certainly my platform of choice for the desktop, but I want the applications I create to be cross platform. Being able to create Linux containers using the same management tools as Windows containers is a must.

3. Nano Server

Nano Server

When trying to increase the density of your containers, you want your operating system to be as compact as possible. Windows Server has always been quite a bit larger than Linux when used in its smallest configuration.

Nano Server is a new, highly cut-down version of Windows specifically designed for virtualisation, containers, and cloud environments. Its reduced feature set is a minimum bar for Windows containers to target – anything that runs on the Nano SKU can run on the Core SKU and above too. This new minimum bar cuts out many features which are unnecessary, including any UI. If you want to do anything on Nano Server you need to use PowerShell or SSH. (PowerShell Direct is an awesome new feature which ensures you can connect to a virtualised Nano Server even when it’s not connected to the TCP/IP network – very cool.)
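
PowerShell Direct, for what it’s worth, is as simple as opening a session from the Hyper-V host against the VM by name rather than by network address – something like this, with the VM name being just an example:

PS> Enter-PSSession -VMName "NanoServer01" -Credential (Get-Credential)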

Out of the box, Microsoft claims that Nano Server will have a VHD footprint over 90% smaller and require 80% fewer reboots than the current Windows Server. That’s a big improvement for both Hyper-V hosts and guests.

Roles like IIS can already be added to Nano Server, and Microsoft’s Tools for Docker already help you write ASP.NET, Node.js, or any other kind of application and directly target a Nano Server container. The tools are great for publishing, and remote debugging is supported, just as you’d expect.

Running a Nano Server in a Hyper-V container like this means the overhead on the developer’s machine is smaller, but it’s still running the real environment just as you’d get in production. Need a special version of a framework for a project? No problem – it’s a container running inside Nano Server which you can spin up as required. This makes me think that all three of these technologies born in Windows Server 2016 must be coming to Windows 10. You can’t expect a developer to run Windows Server 2016 on their Surface!

One Last Thing…

There is one part of this release which is bothering me.

Why call it Windows Server 2016?

I think the trend of having these year-based names must come to an end; it just doesn’t make sense anymore. I’d much rather see Microsoft brand the platform as Windows Server 10 or something similar. Think of how Ubuntu brands its server versions: 14.04 LTS, 14.10, 15.04, 15.10 etc. (LTS stands for Long Term Support, something Microsoft is now doing for their Windows 10 Enterprise customers.)

Currently, Nano Server skips all of this branding in Microsoft’s documentation. I don’t know if that’s just because it’s in preview, but I hope it’s a sign of changes to come. Recently, Microsoft decided to drop year-based naming from their Dynamics AX ERP product; I think they should do this for Windows Server too.

For me, the details are important. In a world where Microsoft is finally embracing new thinking, I feel this year-based branding is a tradition that should be let go.

Keep in mind it’s December 2015 and the current version of Windows Server is called “Windows Server 2012 R2 with Update” – seriously.

Windows Server 2016 is due to ship in late 2016.

Launch PowerShell with AutoHotkey

Sometimes nerds like me just need to open PowerShell as fast as possible.

This is very easy to achieve thanks to AutoHotkey – a very popular desktop automation application for Windows.

First, install AutoHotkey from their website. Modern Windows machines just want the x64 + Unicode option when installing; if in doubt, check their help documentation.

Once you’ve got it installed you need to create a new file for the script. For me, I created a new file called PowerShell.ahk in my scripts directory using gvim – but you can use your editor of choice and place it wherever you like.

Inside the file enter the following script:

#+p::
   Run, PowerShell
Return

The # is the symbol used for the Windows key, the + is the symbol used for shift, and the p stands for PowerShell. On the next line I’ve put Run, PowerShell and that’s it.

This means we are set up to run PowerShell when we press WIN + SHIFT + P.

Obviously you can do a lot more than just this, and for me starting PowerShell like that is not enough – I really dislike that blue background they use by default.

I have already set myself up with a nicely customised shortcut to PowerShell which I keep in my scripts folder and syncronise across machines. This includes the font and colour options I prefer.

#+p::
   Run, C:\Users\Julian\Scripts\PowerShell.lnk
Return

However you decide to script it, you just need to double click the PowerShell.ahk file when you’re done and AutoHotkey will register the combination for you.

There you have it! A super fast way to bring up a PowerShell prompt whenever you need it.

Being productive on Windows 10

I thought I’d write down some of my thoughts on how I’m productive on Windows 10 now that it has been out for a little while and all of my machines have been updated.

Including my phone and 7 inch tablet, I run Windows 10 on four machines. The following discussion is only about the first two – my Surface and my Azure-hosted virtual machine – which are both configured to be general purpose devices used for all sorts of tasks, including development and productivity. I’ll write about the phone and tablet another time.

Windows 10 Desktop

With Windows 10 the desktop is back on the PC and, as usual with my computers, there are no icons in sight. I use my desktop for temporary things, not as a place to keep anything for any extended amount of time. If I’m downloading a file to run it through a comparison tool or something like that, my desktop is fine.

I’m still using teal as the main colour for the user interface. I have used this on my workstations for a number of years now and, with Windows 10, the colour configuration is better than ever. You can choose to have it just as a highlight colour on top of black or have variations of the colour used throughout the Start Menu and Action Centre UI. I prefer the latter with this colour choice.

I feel like teal has worked really well for me; it’s fairly conservative and seems to fit multiple uses:

  • It is not too bright, and offers good contrast with both black and white
  • It works well in both cool or warm lighting environments
  • It doesn’t become too saturated when used with high F.lux settings

For my Surface, I have selected a nice ultra-wide space wallpaper which fits nicely with the colours I choose. This has been a real favourite of mine since I first started using it, but I am unsure who the original artist is. I’d love to give them credit.

Windows 10 Taskbar

I have no applications pinned on my taskbar so I get a really clean environment when I have nothing open. I launch all of my applications from the Start Menu or PowerShell.

I’ve loved using live tiles since they were first introduced on the phone. I enjoy the benefits you get from the glanceable information and I find the grid based organisational structure is way more useful than just a menu. My initial thoughts were that having the Start Menu in the corner may not be as good as having it full screen like on Windows 8, but I quickly changed my mind as soon as I started using it on the insider previews.

Right now, I have grouped the tiles into four main sections with the bottom right configured slightly differently depending on which machine I’m using.

Windows 10 Start Menu

My current setup of tiles and most used applications is pretty much a snapshot in time though – I don’t feel like I have had enough time to really know what I want to have pinned here. At the moment I’m enjoying having a mixture of glanceable information (Weather, Calendar, etc.), unread content counts (NextGen Reader, Mail, etc.), and launcher icons (Edge, Store, etc.).

I’m certain this will change quite a lot with use.

Windows 10 Cortana

Cortana has been a very welcome addition to the PC. I’ve been using Cortana on my phone since the original preview, and she’s very much a part of my computer use now. She has had numerous improvements over her first iteration and now that she’s available through all my personal Windows devices, using her for things like reminders and glanceable information has been easier than ever.

I use her on my Surface quite a bit, though I do sometimes have trouble with her listening to me when I say ‘Hey Cortana’, so I usually just press WIN + C to activate her, then she has no trouble understanding my requests.

All of my requests are typed when I use the Virtual Machine. Typing requests is as easy as pressing the Windows key. I find typing to be just as natural as speech, and really fast when I’m using a desktop keyboard. I also tend to use the VM when I’m in locations where speaking wouldn’t be very useful anyway.

I have had issues with using the location-based features on the VM, but I worked around it using a Fake GPS driver.

The Task View is another new addition to the Windows task bar, and even though I regularly use the key combination WIN + TAB to activate it, I still like to have the icon on the task bar anyway. This screen also includes the ability to add a number of virtual desktops. Surprisingly, I don’t use virtual desktops as much as I thought I would – but I am really glad they’re there when I do use them.

I originally thought I would always split things out every time I used the computer. For example, I thought that all my communications apps would always be in one desktop and development apps would belong in another. It just didn’t really happen that way. As I was regularly switching between them, I quickly got confused when I had more than a few apps open.

Virtual desktops become useful for me when I really want to concentrate on one or two different activities. I move their windows around on the Task View and put them into their own desktop to get a distraction-free environment when I need it. Ad hoc desktops to help me focus have been much more useful than trying to set rules for myself.

CTRL + WIN + LEFT and CTRL + WIN + RIGHT are used to switch back and forth between desktops. (I’d like to see better support for this with a three finger swipe on the trackpad please Microsoft!)

Windows 10 Notification Area

The Notification Area has been shuffled around a bit in Windows 10. The keyboard icon is now integrated and right next to the clock, and there’s now an additional new notification icon for the Action Centre.

I only show the very minimum of icons here – Process Explorer, Power, Network, Sound. I often use a FuzzyClock application to change how the time is displayed down here too. I am not a fan of using the notification area as a place to minimize windows, or launch applications.

Process Explorer is Microsoft’s ultra-nerdy replacement for the Task Manager and something I always use on my Windows machines. I find it to be way more detailed than the built in version and it includes many features developers find useful. As you can see from the screenshot, you also get a glanceable indicator of CPU usage here too. I find that CPU usage is often the most important metric for how the machine is doing, as I don’t really care how much RAM is being used unless I am having problems with something. If I do have a problem, full access to everything running on the machine is just a click away.

Windows 10 Action Centre

Action Centre is a welcome addition to Windows on the PC, and something I’m already well used to using, thanks to Windows Phone. The version that ships today is not perfect though. Over time I’d like to see better notification sync with the phone. I also find that having a solid icon isn’t enough to really draw attention to the fact that there is a new notification pending. I’d like to see options here for flashing or some other more substantial indicator, though I have to admit, I probably wouldn’t want it to be like that all the time.

In fact, when I’m trying to be super productive, I turn on Quiet Hours. I use this in combination with the Quiet Hours feature on my phone to ensure I don’t get annoyed with notifications when I don’t need them. But they’re still a click away.

The utilities I have mentioned above, like FuzzyClock and Process Explorer, are tiny portable executables and don’t require some system-changing installation mechanism. All these small applications I use are stored in a Scripts folder I have been maintaining for years.

This folder lives in my profile under C:\Users\Julian\Scripts and is synchronised to a private Git repository hosted on Visual Studio Online. Inside there are a number of scripts to run automated tasks and setup my PowerShell profile to be exactly the same across machines. In addition to these scripts, there’s a Tools folder which contains all of these small utility applications as well as some larger applications which have been modified to work in a ‘portable’ way.
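
A minimal sketch of the idea – the real profile does a lot more, but making a folder of portable tools usable everywhere can be as simple as a couple of lines like these:

# profile.ps1 – put the portable scripts and tools on PATH for each session
$scripts  = "$HOME\Scripts"
$env:PATH = "$scripts;$scripts\Tools;" + $env:PATH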

Windows 10 PowerShell

I spend a lot of my time in PowerShell and this folder is absolutely fundamental to how I complete many tasks on my Windows machines including, but not limited to:

  • Scripting languages and runtimes – Ruby, Python, IKVM
  • Text editors and UNIX utilities – Vim, grep, wget, curl
  • Windows Tools – Process Explorer, Autoruns
  • General Utilities – FileZilla, Far, WinMerge, Putty
  • Plus years of PowerShell and F# scripts, registry files and more

I could probably go into more detail around this in the future. If you are interested, let me know in the comments.

Not everything is installed this way though. Some of the biggest applications I use require installation from the web through subscriptions, like Office 365 and MSDN:

  • Outlook, OneNote, Visio and the rest of Office (from Office 365)
  • Visual Studio Enterprise (from MSDN)
  • Visual Studio Code, Node and Git (free)
  • Wunderlist, Slack and Skype (free)
  • 7-Zip, F.lux, Paint.NET (free)

And finally, there are a number of applications which either are preinstalled on Windows or I have to install from the Windows Store. The ones I use the most are:

  • Groove Music, Film & TV, Photos and other entertainment apps
  • MSN apps like Weather and Sports
  • Audible, Netflix
  • NextGen Reader

Applications installed through the Windows Store are super painless. I wish more applications could be installed this way. I’d like to see more parity with the phone too, and I’m sure that’ll be coming when Windows 10 Mobile ships at the end of the year.

Overall, I feel like I’m more productive on Windows 10 than I have been on any other operating system. I feel like things are only getting better in general – with things like SSH and containers coming soon, the future is pretty bright for Windows 10.