Why I like Juju…

by Canonical on 21 September 2015


A long time ago, Canonical (of Ubuntu fame) got in touch with us and asked if we'd like to bring Saiku to their new Juju platform. We were sceptical, semi-agreed, but ended up getting diverted to other things and never got started. Then a few months ago Canonical (still of Ubuntu fame) got in touch again and asked once more if we'd like to bring Saiku to the Juju Charm Store. Again we were sceptical, but this time they were a little more persistent, so we set about creating a deployment mechanism that would allow us to bring both Saiku CE and Saiku EE to Juju's Universal Cloud App Store.

So I guess at this point I should explain a little about Juju. Juju is a modelling platform that lets you deploy "charms" to servers automatically, either using scripts and the command line or using the Juju GUI. For example, if I wanted a LAMP stack, I could deploy an Apache HTTPD charm and a MySQL charm and have a web service up and running in the cloud in no time. The cool thing about Juju is that it can handle hardware and server provisioning for you as well: if you want a MySQL cluster, you just tell it to spin up a few more units. You can also do things like define EC2 instance size and use different cloud service providers. Or, like us, you can use manual or local mode to configure a single server or a couple of servers with the software of your choosing.
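To give a feel for it, here is a rough sketch of the sort of commands involved. The charm names come from the charm store of the era, but treat the exact names and constraints as illustrative rather than gospel:

    juju bootstrap                           # stand up the environment on your chosen cloud
    juju deploy apache2                      # the Apache HTTPD charm
    juju deploy mysql --constraints mem=4G   # ask for a machine with at least 4GB of RAM
    juju expose apache2                      # open the web server to the outside world
    juju add-unit mysql -n 2                 # want a cluster? just add more units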

You can think of it as being a little like Puppet or Chef, but not really: it's not that low-level. It is, however, absolutely fantastic at getting your software installed and deployed quickly and painlessly.

So why do I think Juju is great?

When we initially wanted to test Saiku over Hadoop, Spark and the like, I played around with Cloudera Manager, but for some reason it wouldn't install. I tried MapR's demo VM, but it didn't work with Drill like it was supposed to. I tried Amazon's Hadoop offering, but it was very out of date. So in the end I spun up a box at our lovely hosting provider and deployed my own single-node Hadoop setup with HBase, Spark and so on. It worked, but it was a jumble of different installs, hacked configs and stuff that would be a pain to upgrade in the future. Then I tested a Hadoop cluster provided by Juju, and I was stunned: within 10 minutes of installing Juju, I had connected it in manual mode to my remote server and started deploying my own Hadoop cluster, with YARN, Spark, the works. It even behaves like a cluster, because the charms deploy to their own LXC containers, which virtualise it all. What surprised me more was… it actually ran and did stuff, first time, without me touching a thing.
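For what it's worth, that manual-mode workflow looks roughly like this. The add-machine and container placement syntax is real Juju, but the Hadoop and Spark charm names below are illustrative placeholders rather than the exact names in the store:

    juju add-machine ssh:ubuntu@my.remote.server   # register the remote box with the manual environment
    juju deploy hadoop-master --to lxc:0           # charm name illustrative; lands in its own LXC container
    juju deploy spark --to lxc:0                   # ditto
    juju add-relation spark hadoop-master          # wire the pieces together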

So how has this affected what we do? We still build the same installer packages for Saiku, but we are now putting a lot of effort into the deployment mechanisms to make them more flexible. I can now deploy Saiku CE or EE in two commands, I can upload schemas, and I can seed the server with datasources. This means standing up servers for clients, demos and internal testing is easier than ever, and it can very much be used in production as well. We can also relate Saiku to other charms, which offer helpers and connection utilities for Mongo, Spark and others.
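As an illustration of what that looks like in practice (the charm name, action names and parameters here are hypothetical placeholders, not necessarily what you'll find in the store):

    juju deploy saiku                                          # charm name assumed; there is an EE variant too
    juju expose saiku                                          # that's the deployment done
    juju action do saiku/0 load-schema file=foodmart.xml       # hypothetical action to upload a schema
    juju action do saiku/0 add-datasource name=sales url=...   # hypothetical action to seed a datasource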

The other worry I had was that we would have to start maintaining .deb packages and build scripts to facilitate this. That couldn't be further from the truth; in fact, all we did was write an install script in Bash and a few hooks and actions, also in Bash.
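For the curious, a charm's install hook really is just a script. The sketch below shows roughly what a Bash install hook looks like; it is not the actual Saiku charm code, and the download URL is a placeholder:

    #!/bin/bash
    # hooks/install -- minimal sketch of a Bash install hook (not the real Saiku charm)
    set -e
    juju-log "Installing Java and the Saiku server"
    apt-get update -q
    apt-get install -y openjdk-7-jre-headless wget
    # fetch and unpack the application (URL is a placeholder)
    wget -q -O /tmp/saiku.tar.gz http://example.com/saiku-server.tar.gz
    tar -xzf /tmp/saiku.tar.gz -C /opt
    # tell Juju which port the service listens on
    open-port 8080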

I'm sure Juju will have a number of detractors in the sysops world who will complain that they find it limiting or don't see the point. But for me, the fact that I can both stand up and tear down entire services with the click of a button or a one-line command is great, and hopefully, with the added CentOS and Windows tooling, adoption will grow and Canonical will be known for more than just being the folks who make Ubuntu.

Canonical are very kindly doing a demo of Saiku at Strata Conf, so if you are there, swing by their booth to find out what the fuss is about.

About the author

Tom is the founder and technical director of Meteorite.bi, a consulting company specialising in the Saiku Analytics platform. His weekly duties include BI consulting, Scala & Java programming and tinkering with System Administration frameworks. In his spare time Tom is a regular blogger and open source committer. You can read more about Tom on the Meteorite.bi blog.
