Why are there so few women in tech, particularly in systems and in infrastructure software?

As a 66.6(recurring)%* female team you might think we’d have the answer! But we don’t. Is it:
- Lack of senior women implying a poor career path?
- Perceived youth/presenteeism/male culture (20-hour days and hackathons)?
- Crazy superlatives in job descriptions subtly discouraging women, people with disabilities, or folk with caring responsibilities from even applying?
- More attractive alternative careers in Law, Accountancy or Finance?
- Is Sheryl right? Are we just not Leaning In?
- All of the above?
- Something else entirely?

*Excessive pedantry
Tech can be a great career. It’s hard, it’s creative, you get to make things, you meet bright people, it’s decently paid, it’s potentially very flexible and there are lots of interesting jobs available. So why so few women and so few people with disabilities? It’s clearly failing to be the career choice it could be.

What should we be doing better for everyone? We’re co-organising Coed:Code on the 3rd February with the lovely folk at Weaveworks, Redmonk, Canonical, Pivotal, Cloudsoft and ClusterHQ to find out. Let’s do something about it.

Come along and have your say, as well as chatting about Go, C and other favourite low-level languages with a panel of smart & pleasant** women in tech:
- Jenny Mulholland from Softwire
- Yanqing Cheng from MetaSwitch Networks
- Sue Spence of IndigoCode and London’s Women Who Go (proving that women in tech don’t shy away from the double entendre)
- Chair: Anne Currie of Force12.io

**Not calling anyone “Rockstars” because that sounds egomaniacal

Posted by Anne Currie at 11:25
Labels: diversity
Friday, 22 January 2016
Microscaling Unikernels
We were really happy yesterday to hear the news that Unikernel Systems have joined Docker. We’ve seen Anil and Justin present on unikernels many times and been excited by their vision. The hope is that by joining with Docker they can move unikernels from being a niche technology into widespread usage. Given Docker’s success in bringing containers into wider usage, there is reason to be optimistic.

We’re also excited about the possibilities for microscaling using unikernels. With Force12 we’re using the much faster startup times (1-2 secs) of containers compared to VMs. This makes it possible to scale containers in real time based on current demand. Unikernels start even faster and can be booted in milliseconds. This would make it possible to boot unikernels in response to incoming requests.
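
To make the argument concrete, here’s a back-of-the-envelope sketch in Python. The startup figures are the rough numbers from the text above (containers ~1-2s, unikernels milliseconds); the 200ms latency budget is invented purely for illustration:

```python
# Rough startup times in milliseconds (figures from the post; the VM
# number is a conservative illustration, not a measurement).
STARTUP_MS = {"vm": 60_000, "container": 1_500, "unikernel": 20}

def can_boot_per_request(kind, latency_budget_ms=200):
    """True if this packaging boots fast enough to start per incoming request."""
    return STARTUP_MS[kind] <= latency_budget_ms

print(can_boot_per_request("container"))  # -> False: pre-warm these instead
print(can_boot_per_request("unikernel"))  # -> True: boot on demand
```

Containers are fast enough to scale in near real time against demand trends, but only unikernels get under a per-request latency budget.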

However, working with unikernels is currently hard. Several of the major projects (MirageOS, LING and HalVM) are designed to run on the Xen hypervisor. EC2 also runs on Xen and supports running your own kernel. So it’s possible today to boot a unikernel on EC2, but it’s not easy and they certainly can’t be run in production.

This is where the acquisition by Docker really excites us. Workflow and open standards are an essential part of whether unikernels will be successful. Also not all workloads are suitable for unikernels. A lot of existing applications designed for the Linux or Windows kernel will never run as a unikernel. So a unified ecosystem for containers and unikernels makes a lot of sense.

For Force12 and microscaling in general this is great news, as we need to be able to launch an application quickly whether it’s packaged as a container or a unikernel.
Posted by Ross at 14:51
Labels: containers, microscaling, unikernels
Wednesday, 20 January 2016
Announcing the Force12 Alpha Programme
We’ve been blown away by the interest in what we’re doing with microscaling, from sign-ups to our Microscaling-in-a-Box demo, to invitations to speak at events, to people we respect saying lovely things about our ideas.

We’re starting to form a pretty good idea of what our initial product offering is going to be, making it really easy to scale your container workloads to adapt to demand.

You set up the priorities for each of your services, and performance targets that are important to your business such as web site response time. Force12 hooks into metrics on your system to see whether those targets are being met, and scales the number of running containers to optimize the performance of your deployment, according to the current load on the system.
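
As a rough illustration only (this is not the Force12 implementation; the function, thresholds and numbers are invented for the sketch), the scaling decision described above might look like:

```python
# Hypothetical sketch of the scaling decision described above -- not the
# actual Force12 code. Given a measured response time and a target, decide
# how many high-priority web containers to run, and hand whatever capacity
# is left over to low-priority background containers.

def scale(current_web, response_ms, target_ms, total_slots, min_web=1):
    """Return (web_count, background_count) for the next iteration."""
    if response_ms > target_ms:
        web = min(current_web + 1, total_slots)  # missing target: scale up
    elif response_ms < 0.5 * target_ms:
        web = max(current_web - 1, min_web)      # comfortably under: scale down
    else:
        web = current_web                        # within range: hold steady
    return web, total_slots - web                # leftovers go to background

# Example: 4 web containers, 350ms measured vs a 250ms target, 10 slots total.
print(scale(4, 350, 250, 10))  # -> (5, 5): add a web container
```

The real product hooks into your own system metrics rather than a single response-time number, but the shape of the trade-off is the same: high-priority work gets what it needs, background work soaks up the rest.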

But rather than build it in isolation and then discover what we’ve done wrong, we’re planning to work with a handful of early customers who’ll really help us shape the product.

A couple of slots have already been taken in this Alpha programme, but if you’ve got containers in production (or you’re close to doing so) and you’d like to get involved, we’d love to hear from you!
Posted by Liz at 09:41
Labels: force12
Wednesday, 13 January 2016
What is the difference between a Windows Native Container and a Hyper-V Container?

Docker on Windows
Microsoft do seem to have a (confusing?) variety of container offerings, all still in preview. I’ve been asked about this several times, so here’s my understanding.

We really have two ways to run containers on Windows:
Native Containers

Native containers run straight on the host using the Windows Docker Container Engine (basically Docker compiled for Windows, with a shim that starts and stops and otherwise controls local containers using the WinAPIs).

You build native container images using Dockerfiles and control the resulting containers using the Docker APIs, which is nice. I’ve played with these on Azure and they basically work as described.

Hyper-V Containers

I understand these are really just a native container (see above) running inside a lightweight Hyper-V VM on your host. Unlike normal VMs, though, you should be able to build these “Hyper-V containers” using Dockerfiles and use the Docker APIs to control them just like a native container, which is again nice. I haven’t been able to try this as they are not yet supported on Azure, but I have no reason to believe anyone’s lying 😉


The why is quite interesting. MS are pitching native containers for environments where all the containers are trusted, and Hyper-V containers for mixed setups where you don’t know and trust all your containerized neighbours, i.e. the VM-like wrapper provides better security.

This sounds a little like the setup at Google Container Engine, where I believe they run containers inside VMs inside containers (for ease of deployment, security and resource management respectively). It looks like MS are targeting ease of deployment and security with Hyper-V Containers.

So if you were running Hyper V containers on Azure you’d have containers on VMs on VMs so not miles from the Google Container Engine architecture 😉 This tech is definitely made for layering.

If you want to know more there’s an excellent overview by Mark Russinovich, which appears to match what I’ve seen 😉

Posted by Anne Currie at 09:58
Labels: docker, hyper-v, windows
Wednesday, 6 January 2016
Microscaling a web site behind a load balancer
One of the basic use cases for microscaling is to scale the number of user-facing containers that need to respond in real time – like your web site – and use whatever resources are left over for your background tasks. So I thought it would be a good idea to show that you can scale the number of web server containers behind a load balancer using Force12.io.

The key thing we need to make this work is the ability to keep the load balancer up-to-date with the containers that can handle web requests. Fortunately I came across docker-loadbalancer which gave me a solution:

- Use Registrator to spot when Docker starts and stops containers, and write them to Consul.
- Use consul-template to rewrite nginx’s config file when web services are added to or removed from the Consul store.
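
To illustrate what consul-template is doing for us, here’s a sketch in Python of just the rendering step: given the current service instances (as Registrator would have registered them in Consul), produce an nginx upstream block. The addresses and ports are made up for the example:

```python
# A rough sketch of the config rendering that consul-template does in this
# setup -- not consul-template itself. Take the current list of web-service
# instances and render an nginx upstream block from them.

def render_upstream(name, instances):
    """Render an nginx upstream block from (host, port) pairs."""
    lines = [f"upstream {name} {{"]
    for host, port in instances:
        lines.append(f"    server {host}:{port};")
    lines.append("}")
    return "\n".join(lines)

# Two hypothetical container instances of the web service:
instances = [("172.17.0.2", 5000), ("172.17.0.3", 5000)]
print(render_upstream("web", instances))
```

Every time a web container starts or stops, the upstream block is re-rendered and nginx is reloaded, so the load balancer always knows which containers can take requests.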

Here’s a hugely helpful post from Joseph Miller on how registrator & consul work together.

I created an unbelievably small Flask web server that simply shows the host name, which is the container ID you’re running in. I used this as the high-priority task in Microscaling-in-a-Box, to randomly vary the number of running instances of this server container. The other task is just a busybox to represent low priority work.
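
For illustration, here’s an equivalent of that tiny web server using only the Python standard library (the real demo used Flask; see the GitHub page for the actual code):

```python
# A stand-in for the Flask demo app, using only the standard library.
# It returns the host name on every GET -- inside a container that's the
# container ID, so you can see which instance answered each request.
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

class HostnameHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = socket.gethostname().encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output quiet

def run(port=5000):
    """Serve the host name forever on the given port."""
    HTTPServer(("", port), HostnameHandler).serve_forever()

# run(5000)  # uncomment to serve on port 5000
```

Behind the load balancer, refreshing the page shows different container IDs as requests are spread across the running instances.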

Here’s a short video showing it running.

If you’d like to run this yourself, all the instructions are on the GitHub page.

Next steps include distributing the containers across a cluster rather than a single machine (we’ve done this before with simple sleeper containers, without the load balancing aspect). And of course as part of the Force12 product we’re working on mapping performance metrics to the number of containers to run.

Posted by Liz at 11:49
Labels: linux containers, load balancer, microscaling, nginx
Tuesday, 5 January 2016
Microscaling in Native Windows Containers – Part 2
Using Docker Containers on Windows 2016 Server
In part 1 we set up a Windows 2016 Server VM with container support on Azure. In part 2 we are going to create some Windows Docker containers and run them natively on Windows using the Windows Docker Container Engine on Windows 2016.

As a reminder, we can’t run existing Linux containers on Windows, we need to create new Windows-specific containers.
Creating a Simple “BusyBox”-Style Docker Windows Container Image
By default, in the Force12 Microscaling-in-a-Box demo we run BusyBox-style Linux containers which do nothing much except run and sleep.

Now we need to create similar container images for Windows so we can run our demo on Windows 2016. Fortunately that’s quite easy.
Docker Images
First run docker images to see what you already have on the VM. You’ll see the 2 windowsservercore container images that come pre-installed with your Azure VM. These images are not useful on their own but they can be used as a base to build your own Windows containers. That’s what we’ll do.

We’ll create a simple container using the windowsservercore image as a base and running our own batch file.
First create a simple batch file
Create a subfolder called temp. We’ll do everything in there (you seem to get errors if you place a Dockerfile in your home directory)
md temp
cd temp
Now we’ll create a batch job to run in our “BusyBox” style container. Create and open a new file in the temp directory

run notepad busyrun.bat
You can put anything you like in this batch file, but we’ll start with a very simple job that loops forever, pinging a nonexistent IP address. Just cut and paste the code below into Notepad and save it.

REM loop forever, pinging a reserved (nonexistent) ip address as a delay
@echo off
:loop
echo busyrun: still running
ping -n 2 192.0.2.1 > nul
goto loop

Building a Docker Container Image
Now we need to build this batch file into a Docker Windows container image. First we need to create a dockerfile by running notepad dockerfile and pasting in the code below (you can change the maintainer to you)

FROM windowsservercore
MAINTAINER me "[email protected]"
# batch job that just runs forever occasionally echoing
ADD busyrun.bat /
ENTRYPOINT ["c:\\busyrun.bat"]

Finally we can run docker build from the temp directory to create an image with busyrun.bat in it that will execute busyrun.bat on creation (note the final full stop in the commands below)
docker build -t busyrunning1 .
I need 2 container images with different names so I’ll do this twice
docker build -t busyrunning2 .
Check this completed successfully by running docker images again; you should see your new images

You can now run these images using the standard docker run command. It works just like normal docker run, so you can look at the standard docker run reference for ideas.

Before you run this command, run start cmd.exe to start a second command prompt on your host. You’ll need it in a few seconds.
docker run --name myfirstwindowscontainer -it busyrunning1 cmd
Executing this command will start the container and change your current command prompt to be pointing at the c:\Windows\system32 directory in your new container, where you’ll see the output from the busyrun.bat file.

Below, container running busyrun.bat

You can kill your container from the second (host) command prompt you just created using
docker kill myfirstwindowscontainer
You can also use the second command prompt to run docker ps to see your new container running on your host

Below, using docker ps to view running containers on host and then docker kill to stop one

You can clean up your images using docker rmi. Most of the normal docker management commands will work; try them out.

You’ve now created a very simple Windows Docker container, run it and stopped it.

If you want to try more, you can see the Microsoft Docker Windows Containers Guide here

In part 3 we’ll look at getting our Microscaling-in-a-box application up and running in this environment

Posted by Anne Currie at 11:30
Labels: azure, containers, docker, windows 2016
Monday, 4 January 2016
Microscaling in Native Windows Containers – Part 1
Native Containers on Windows 2016
Until very recently you only had one OS option for running Docker containers – Linux. You couldn’t run containers natively on Mac or Windows.

Docker do a great job of allowing you to develop and test Linux Docker containers on Windows or Mac, but the Docker client and daemon are still running on a Linux OS inside the Boot2Docker VM.

Now there’s another option (at least in preview) – the Docker client and daemon will run natively on Windows 2016 Server and you can run Windows Docker containers there.

This post takes you through a worked example of how to port an application to run in a native Windows Container. I’ll use our application, Force12, as our example.

Part 1 covers how to set up an Azure Windows 2016 host VM to try out Windows Containers.
Windows Docker Containers?
First off, let’s be clear that Windows 2016 only runs Windows Docker containers natively. You can’t run your existing Linux Docker containers natively on Windows and you’ll never be able to. Let it go.

This article is about creating new Windows containers with Windows-compatible content and running them on Windows.
PowerShell Containers Vs Docker Containers
To confuse things further, Windows 2016 supports both Docker containers and PowerShell containers. These are not interchangeable either. If you create a container using Docker you have to manage it using Docker. If you create a container using the new PowerShell container commands you have to manage it using PowerShell commands.

This article is about creating, running and managing Docker containers on Windows.

We can worry about PowerShell containers another time.
Aside – How does this all work?
Just for background info: Microsoft formed a team of developers to work with Docker, including adding a shim into the Docker daemon to create, start and otherwise control containers on Windows. The shim is open source and you can see it in the Docker repo on GitHub. It’s written in Go and sits on top of the WinAPIs, using familiar calls like LoadLibrary to drive the Windows OS container functionality in a Docker-like way.
Getting Started
In part 1, we start with simply creating a Windows 2016 machine to play with.

To get started the first thing we’ll need is a Windows 2016 Server with the Windows Docker daemon installed. You can set up your own server, but the easiest way to do this is by creating a VM on Azure. (Azure has a free 1 month trial plan so this doesn’t need to cost anything).
Step 1 – Set up an Azure account
Visit https://azure.microsoft.com/en-gb/pricing/free-trial/ and follow the instructions
Step 2 – Set up a Windows 2016 Server Host VM
To create a test VM

Log into the Azure portal https://portal.azure.com using your account
Select “Virtual Machines” in the left hand navigation
Click the + Add button to create a new VM

Type “containers” in the search box
Select “Windows Server 2016 Core with Containers Tech Preview 4” from the returned options

Select the image, and click Create for the first of many times 😉

Give the Virtual Machine a name, select a user name and a password

Select Optional Configuration > Endpoints, and enter an HTTP endpoint with a private and public port of 80. Now click OK in both tiles

Click the create button to kick everything off

When the VM is ready and started, click on the running VM in “Virtual machines (classic)”, then click the Connect button to start an RDP session with the VM

You can log into the VM using the username and password you provided during creation. Once logged in you will be looking at a Windows command prompt. Yay!
You can type “start cmd.exe” into that command prompt to get another prompt, which will be handy later.

At this point brace yourself. This Windows 2016 Server variant DOES NOT COME WITH A GUI. This really is preview stuff. You’re going to have to use the command prompt and PowerShell to start with. Don’t panic 😉 We’ll step through it in Part 2.

Posted by Anne Currie at 14:52
Labels: azure, containers, docker, windows, windows 2016
Why do Linux Containers not run on Windows?
Windows Docker Containers
Windows 2016 only runs Windows Docker containers natively. You can’t run your existing Linux Docker containers on Windows.

The paragraph above has surprised people so I’ll quickly go over why.
What are Containers?
Containers are a way to:
- group together multiple executables and libraries of your choice
- associate them with all the other libraries and executables they need in order to run
- make the whole lot into one nice, tidy parcel.
Containers are a clever install/distribute/run/move/manage tool. That’s great – the concept has loads of very useful applications and the potential to be quite revolutionary in data centers.

However, containers are not VMs. The executables and libraries in your container run directly on your host. So the executables and libraries in a Linux container have to be able to run natively on the Linux host, and likewise everything in a Windows container has to be able to run natively on Windows. The containers aren’t interchangeable between the two; they have to be built from different binaries.
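
One concrete way to see this: Linux and Windows executables aren’t even the same file format. Linux binaries are ELF files and Windows binaries are PE files, and you can tell them apart from the first few bytes. A small Python sketch (the magic-byte signatures are real; the helper function is just for illustration):

```python
# Linux executables are ELF files; Windows executables are PE ("MZ") files.
# A container image is full of one or the other, which is why each kernel
# can only run its own kind of container natively.

def executable_os(first_bytes):
    """Guess the target OS of a binary from its magic bytes."""
    if first_bytes.startswith(b"\x7fELF"):
        return "linux"      # ELF header, used by Linux binaries
    if first_bytes.startswith(b"MZ"):
        return "windows"    # DOS/PE header, used by Windows binaries
    return "unknown"

print(executable_os(b"\x7fELF\x02\x01\x01"))  # -> linux
print(executable_os(b"MZ\x90\x00"))           # -> windows
```
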
Posted by Anne Currie at 14:51
Labels: containers, docker, VMs, windows