Magic with Juju

I’ve been working on a set of tutorials over the last few months on how to write “juju charms”.

If you haven’t checked out Juju yet – do it now.

Juju is an open source framework for operating small and large software stacks in any cloud environment. It works fairly transparently on well known clouds like AWS, Google Compute Engine, Oracle Cloud, Joyent, Rackspace, CloudSigma, Azure, vSphere, MAAS, Kubernetes, LXD and even custom clouds!

I touched base with Juju some five years ago and immediately understood that it was going to make an impact in the computer science domain. To me, the primary reasons were:

  1. Open source all the way.
  2. No domain specific languages required to work with it – use your preferred programming language to do your Infrastructure as Code / DevOps.
  3. Smart modular architecture (charms) that allows re-use and evolution of the best solutions in the Juju ecosystem.
  4. “Fairly” Linux agnostic – although the community is heavy on Ubuntu.
  5. Cloud agnostic – you can use your favorite public cloud provider or your own private cloud, like an OpenStack or even a VMware environment. A huge benefit to anyone looking to move into the cloud.
  6. Robust and maintained codebase backed by Canonical – if enterprise support matters to you. There are other companies, like OmniVector Solutions, also delivering commercial services around Juju if you don’t like Canonical. [Disclaimer: I’m biased by my affiliation with OmniVector.]
  7. A healthy, active and friendly open source community – if you, like me, think a living open source community is a key factor for success.

The factors above weigh heavily for me, and even though there are a few competitors to Juju, I think you can’t really ignore it if you are into DevOps, IaC, CI/CD, clouds and so on.

Juju means “magic”, which is an accurate description of what you can do with this tool.

So, here are a few decent starting points from my collection for getting started with Juju, plus a small command-line sketch after the list. Knock yourself out.

The juju community

Getting started with juju.

Tutorial “Getting started with juju hooks”.

Tutorial “Getting started with reactive charms”

Tutorial “Charms with snaps”
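
To give a flavor of how it feels in practice, here is a minimal sketch of bootstrapping a controller and deploying a charm. It assumes a machine with snap support and LXD available; the controller name and the “ubuntu” charm are just examples:

$ sudo snap install juju --classic
$ juju bootstrap localhost test-controller   # create a controller on the local LXD cloud
$ juju deploy ubuntu                         # deploy a minimal charm into the default model
$ juju status                                # watch the model converge

From there, the tutorials above take you into writing charms of your own.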

Massive open source win!

Now, this is good news!

The OpenChain project (backed by The Linux Foundation) helps companies all over the world comply with good practices for dealing with open source. In effect, it is fundamental to how they do business together within the software realm.

On December 6, 2018, a large number of major industry players made a massive joint announcement regarding this project!

I’ve been working for over 10 years, slowly pulling my employer Scania towards this path – and now we are there! Have a look at the map and you will also see the global reach. Given the massive support from huge players in many domains, it is foreseeable that most companies will, or will need to, follow suit – and I’m helping many more of them through the process.

What does this mean to you?

If you work for a (any) software company and want to sell to or do business with an OpenChain compliant company in the future, you need to show that your company follows the OpenChain standard, or you won’t be able to do business with us. The only exception will be if none of your code is open source, which in our world is more or less impossible.

OpenChain can be seen as somewhat similar to an ISO certification for software. This change will of course need a transition period, where exceptions will be needed for sorting out the proprietary mess, but we are getting there.

This framework will, in time, affect all software businesses, probably all over the world. The initiative comes not from politics but from a strong and healthy private sector, and it is a massive win for open source worldwide. As the chairman of open source at Scania, and its representative within the Volkswagen Group (640,000 employees), I’m more than happy to announce this message!

Now, go open source! Share this message with everyone who works with software.

Develop or die

I’ve been the technical lead for an HPC center for over a decade now, and for some 15 years I have watched paradigms come and go within the systems engineering domain. This text is about the future development of HPC in relation to the more general software domain. There are some very tough challenges ahead and I’ll try to explain what is going on.

But first, let me get you into my line of thinking with a bit of my own story.

Turning forty this year, after years of exploring and experimenting within computer science, I have landed in a very peculiar place. People reach out for advice on very difficult matters around building all sorts of crazy computer systems. Some set out to build successful businesses, others just want to build robots. I help out as much as I can, although my time is normally just gone. It’s a weird feeling to be referred to as senior. Where did those years go?

My experience in building HPC systems started in the early 2000s, and from a computer science perspective I’ve tried to stay aligned with developments in the field. This is never easy in large organizations, but having an employer bold enough to let me develop the field has made it possible not only to deliver enterprise grade HPC for over a decade, but also to develop some other parts of it in exciting ways.

It didn’t always look like this of course.

I’ve been that person, constantly promoting and adopting “open source” wherever I’ve gone. A concept that, at least in automotive (where I mostly roam), was pariah (at best) in my junior years. I came into contact with open source and Linux during my university studies and was convinced open source was going to take over the world. I could see Linux dominating the data centers from the single fact that the licensing model would allow it. There are of course tons of other reasons for that to happen, but the licensing model in itself was decisive.

I’m sorry, but a license model that doesn’t scale better than O(n) is just doomed in the face of O(1).

(I think I’m a nightmare to talk to sometimes.)

Anyway, telling my story of computer science and systems engineering can’t be done without reflecting on some of the technologies and methodologies that came about during the past decades. After all, important projections about systems engineering can’t be made without referencing technology paradigms.

In the mid to late 1990s, “Unix sysadmins” ruled the domain of HPC. Enterprise systems outside of HPC were Microsoft Windows. For heavyweight systems, VMS and the UNIX dialects prevailed. Proprietary software was what mattered to management (still is?), and everything was home-rolled. The concept of getting “locked in” by proprietary systems was poorly understood, which was reflected in a poor understanding of real costs and of what it takes to build high quality systems. Building HPC systems at this time was an art form. Web front ends didn’t exist. Most people were self-taught and HPC was very different from today. My first HPC cluster (for a large company) lived in wooden shelves from IKEA in a large wardrobe. The CTO of the IT division was a carpenter. I’ll refer to this era as the “Windows/Unix Era”.

The Windows/Unix Era started to morph during the 2000s into the Linux revolution. In just a few years, more and more of the available software and infrastructure (not only in HPC) was Linux. Other OS dialects such as HP-UX, VMS, AIX, Solaris and the mainframes ended up in phase-out, witnesses of an era slowly disappearing. Linux on client desktops entered the stage, and at this point the crafting of HPC systems was becoming an art of automation. Macs got thinner.

Sysadmin work meant, in practice, scripting: bash, ash, ksh and the like, plus tools such as CFEngine, Ansible and Puppet. Computer science was starting to become quasi-magic (which is how I think most people perceive computers in general) and advanced provisioning systems became popular. Automation!

I was myself using Rocks Clusters for my HPC platform, with some Red Hat Satellite assistance to achieve various forms of automation. The still ever popular DevOps appeared, mainly in the web services domain, and some people started to do Python in favor of bash (OMG!). Everything was now moving towards open source. It became impossible to ignore this development, also at my primary employer. I became the chairman of the open source forum and slowly started to formalize that field in a professional way. A decade behind giants such as Canonical, IBM or Red Hat, but hey – automotive is not IT, right?

However, time and DevOps development stood still in the HPC domain. I blame the performance loss in virtualization layers for that situation, or perhaps HPC being a victim of its own success. While VMware, Xen, KVM and the rest filled up the data centers in just a few years, virtualization never took off in HPC. Half a decade passed by like this, and so-called “clouds” started to appear.

I personally hate the term cloud. It blurs and confuses the true essence, which is perhaps better put as: “someone else’s computer”.

Recent developments in the cloud have exposed one of the core problems with cloud resources. I’m not going to make the full case here, but generally speaking, the access, integrity, safety and confidentiality of any data is almost impossible to safeguard – if you are using someone else’s computer. There is a fairly easily accessible remedy, which I’ll throw up a fancy quote for:

“Architect distributed systems, keep your data local & always stay in control of your own computing. Encrypt.”

I think time will prove me right, and I’ve challenged that heavily by taking an active part in a long running research project, LIM-IT, which sets out to map and understand lock-in effects of various kinds. The results from this project should be mandatory education for any IT manager who wants to stay on top of their game in my industry.

From a technology point of view, I can recommend looking at projects like Nextcloud, OpenStack, BitTorrent, etc. But there are many, many others that operate with this great mindset.

So, where are we now? First of all, HPC is in desperate need of ingesting technology from other domains, especially low hanging fruit like that from the Big Data domain. Just to take one conceptual example: making use of TCP/IP communication stacks to solve typical HPC problems. One of my research teams explored this two years ago by implementing a pCMC algorithm in Spark as part of a thesis program. The intention was to explore and prove that HPC systems indeed serve as excellent hybrid solutions for tackling typical data analytics problems, and vice versa. I’m sure MPI doctors will object, but frankly, that’s a lot of “Not Invented Here” syndrome. The results from our research thesis spoke for themselves. Oh, the code can be downloaded here under an open source license so you can try it yourself. (Credits to Tingwei Huang and Huijie Shen for excellent work on this.)

Now, there is a downside to everything, so also when opening up for a completely new domain of software stacks. Once you travel down the path of revamping your HPC environment with new technologies from, for example, “cloud” (oh, I hate that word) and go all in on “Everything as a Service”, you are basically faced with an avalanche of technology: Hadoop, Spark, ELK, OpenStack, Jenkins, K8s, Docker, LXC, Travis and so on. All of these stacks require years, if not decades, to learn. There exist few, if any, people who master even a handful of these stacks, and their skills are highly desired on a global market for computer wizards. Even more true is that they’re probably not on your payroll, because you just can’t afford them.

So, as a manager in HPC, or in some other sufficiently complex IT environment, you face a Gordian knot of a problem:

“How do I manage and operate an IT environment of vast complexity PLUS manage to keep my costs under some kind of control?”

Mark Shuttleworth talks brilliantly about this in his keynotes nowadays, by the way, and I’m happy to repeat it.

Most managers will, unfortunately, fail when facing this challenge, and they will fail in three ways:

  1. Managers in IT will fail to adopt the right kind of technologies and get “locked in” – data lock-in, license lock-in, vendor lock-in, technology lock-in or all of them – ending up in a budget disaster.
  2. Managers in IT will fail to recruit skilled enough people, or will recruit too many with the wrong skill set – they will fail to deliver relevant services to the market.
  3. Managers in IT will do both of the above – delivering sub-performing services too late to market, at extreme costs.

If I’m right in my dystopian projections, we will see quite a few companies within IT go down in the coming years. The market will be brutal to those failing to develop. Computer scientists will be able to ask for fantasy salaries and there will be a latent panic in management.

I’ve spent a significant amount of time researching and working on this very challenging problem, which is general in nature but definitely applicable to my own expert domain in HPC. In a few days, I’ll fly out to get the opportunity to present a snapshot of my work to one of Europe’s largest HPC centers in Germany – HLRS. It’s happy times indeed.

My native IPv6 – part 1

So, I’ve spent about two years arguing with my network provider Fibra to enable native IPv6 on my high speed internet connection. A struggle that paid off in January 2018. I was kindly offered to be included in their IPv6 PoC and I was thrilled.

Thank you Fibra for finally coming around on this.

I’m no guru on IPv6, but I feel comfortable navigating the fundamentals and have been running IPv6 tunnels for many years now. This is my, hopefully short, blog series on getting native IPv6 working at home.

My ambition at this point is to have a public IPv6 address assigned by my network provider, along with a routed 48 bit (/48) network prefix which will be split into 64 bit (/64) prefix delegations for my home network.

This seems like a very basic setup at the time of writing and suits my ambitions for now. Later, I will be running separate IPv6 prefixes for my other virtual environments, while keeping the home network separate.
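
Worth noting already now: on the OpenWRT LAN side, the delegation part typically boils down to a single option on the lan interface in /etc/config/network. A minimal sketch (not my final configuration) could look like this:

config interface 'lan'
 # assign one /64 from the delegated prefix to this interface
 option ip6assign '64'

The ip6assign option tells netifd how large a chunk of the delegated prefix this interface should get.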

My current WAN access setup is an OpenWRT Chaos Calmer TP-Link router, with the WAN interface obtaining public IPs from Bahnhof in Sweden. Fibra is the network provider that, for the purpose of the PoC, has turned on the switch port for IPv6 traffic for me.

For this to make sense to a reader, here is an extract of /etc/config/network that serves as a starting point for a native IPv6 setup.

config interface 'wan'
 option ifname 'eth0.2'
 option proto 'dhcp'
 option ipv6 '1'

config interface 'wan6'
 option _orig_ifname 'eth0.2'
 option _orig_bridge 'false'
 option ifname 'eth0.2'
 option proto 'dhcpv6'
 option reqaddress 'try'
 option reqprefix 'auto'

Bringing the interface down and up again should leave you with an IPv6 address on the eth0.2 interface. I did that maneuver, but no IPv6 address was handed out to me.
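
For completeness, that maneuver and the follow-up check are roughly the following (interface names as in the config above; use ifconfig instead if the ip tool isn’t installed):

$ ifdown wan6 && ifup wan6
$ ip -6 addr show dev eth0.2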

Basic debugging was needed; you would likely do the same if you get this result.

I started by turning off the firewall (to be sure no traffic was dropped before it hit the TCP/IP stack):

$ /etc/init.d/firewall stop

Then I ran tcpdump for IPv6 ICMP traffic. I expected to see some ICMPv6 traffic, which I did. (In an IPv6 network, ICMP traffic is chatty, which is a bit different from what you might be used to from IPv4 networks.)

$ tcpdump -i eth0.2 icmp6

I could see Router Advertisement messages from an interface not on my devices, so something was working on Fibra’s side. I needed to verify that my router actually sent out DHCPv6 messages, and to look for the replies that should follow.

So, I forced the IPv6 DHCP client (odhcp6c) to send router solicitation messages out on the interface that I want to handle the native IPv6 stack.

$ odhcp6c -s /lib/netifd/dhcpv6.script -Nforce -P48 -t120 eth0.2

[Screenshot: tcpdump output on eth0.2]

Nope, no reply from my network provider.

What should happen here is that the DHCPv6 server on my network provider Fibra’s side responds with a configuration packet (Advertise), but as you can see, it’s just spamming RA packets, seemingly ignoring my client’s Router Solicitations. This is not the way it’s supposed to be, and I sent Fibra the information above.

To be continued.

Update 1:

For the curious reader, a correct DHCPv6 exchange (RFC 3315) should look like this:
Client -> Solicit
Server -> Advertise
Client -> Request
Server -> Reply

Update 2:

Also, a note from my IPv6 guru friend Jimmy: if you need to see more details on the DHCPv6 messages coming from odhcp6c, you can expand the tcpdump filter like this:

$ tcpdump -i eth0.2 icmp6 or port 546 or 547

You should see packets coming out from your interface like this:

IP6 fe80::56e6:fcff:fe9a:246a.dhcpv6-client > ff02::1:2.dhcpv6-server: dhcp6 solicit

My blog

I’ll add some blogging here soon.