Posts

  • Ubuntu Jammy disables ssh-rsa

Have you upgraded to Ubuntu Jammy lately, only to have SSH access or git break? If so, you have come to the right place!

  • Huge packet losses with OVN

    A service provided by the Nectar Cloud is ‘Tenant Networks’, where a user can create their own networks in their tenancy to connect VMs together. Tenant Networks have the following features:

  • Australia Day

    On my first year in Australia, I was pretty excited when Australia Day came around and mentioned it to a colleague in passing. Unexpectedly, the colleague scoffed and said “Bogan Day”.

  • How we broke our national object storage and no one noticed

During the last 5 years in Nectar, I admit we’ve broken a number of things. However, one of the most memorable incidents in Nectar occurred just last week, when we messed up our national swift cluster. Fortunately, we did not lose any data (fingers crossed), and no one really noticed.

  • Using CoreOS on OpenStack

Most instances on the Nectar Cloud run Linux (Ubuntu, CentOS). On Nectar’s Linux images, a provisioning tool called cloud-init runs on first boot, which inserts your SSH key and other user data into the instance. This allows you to log in to your instance securely using SSH keys, and also run any scripts for software installation when your instance first boots up.
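As an illustrative sketch (not taken from the post itself), the user data handed to cloud-init is often a `#cloud-config` YAML document like the following; the package and command are placeholders:

```yaml
#cloud-config
# Illustrative user data for cloud-init: the package and the command
# below are examples, not anything the post prescribes.
packages:
  - git
runcmd:
  - echo "provisioned on first boot" >> /var/log/first-boot.log
```

cloud-init reads this on the instance’s first boot, installs the listed packages, and runs each `runcmd` entry once.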

  • State of the Cloud 2019

    Now that 2020 is upon us, I thought it might be a good idea to generate some statistics about the Nectar Cloud for 2019.

  • The year in review

Things are winding down at the end of the year, so I thought it might be helpful to jot down some of what I did this year, to gauge whether I am growing professionally.

  • Passing entropy to virtual machines

    Recently, when we were working on testing new images with Magnum, I found that the newest Fedora Atomic 29 images were taking a long time to boot up. A closer look using nova console-log revealed that they were getting stuck at boot with the following error.

  • Kubernetes With Loadbalancer

In Kubernetes Part I, we’ve discussed how to spin up a Kubernetes cluster easily on Nectar. In this post, we will discuss how to host an application and access it externally.

  • GitLab and Kubernetes Integration

  • Kubernetes now available on Nectar!

We’ve just deployed OpenStack Magnum (Container Infrastructure as a Service) on the Nectar Cloud. This allows a user to spin up a container cluster (Kubernetes or Docker Swarm) on Nectar.

  • Tracking down mysterious swift failures in Nectar

    A while ago, there were a few support tickets about instance snapshots not working. Looking into them, we could see that the snapshots were created, but uploads to our storage were failing randomly.

  • Password manager with Pass, Keybase and per device PGP keys

    Not having a cross platform, easy to use, open sourced password manager has always been a pet peeve of mine. I started playing with Keybase and pass a while back, and was wondering whether I could build the password manager I wanted out of these pieces.

  • Terraform for Nectar

If you are interested in using Terraform with Nectar as one of your cloud providers, below is how you can get started.
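The full post presumably walks through this in detail; as a hedged sketch, a minimal Terraform configuration for an OpenStack cloud like Nectar might look like the following. All names, credentials, and the auth URL are placeholders, not Nectar’s actual values:

```hcl
# Minimal Terraform sketch using the OpenStack provider.
# Every value below is a placeholder -- substitute the credentials and
# Keystone endpoint from your own openrc file.
provider "openstack" {
  user_name   = "your-username"
  password    = "your-password"
  tenant_name = "your-project"
  auth_url    = "https://keystone.example.org:5000/v3"
}

resource "openstack_compute_instance_v2" "example" {
  name        = "terraform-test"
  image_name  = "ubuntu-20.04"
  flavor_name = "m1.small"
  key_pair    = "my-keypair"
}
```

With a configuration like this in place, `terraform init` fetches the provider and `terraform apply` creates the instance.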

  • Upgrading NeCTAR to Puppet 4

We’ve been running Puppet 3 for the longest time. With Puppet 3 at EOL, and OpenStack going to Puppet 4 for Ocata/Pike, we really had little choice but to make ourselves Puppet 4 ready too. After a couple of months on this project, we are (finally!) seeing light at the end of the tunnel. It wasn’t too difficult, more grunt work really. Here’s how we did it; hopefully it’ll help someone needing to do the same. Notes are to be used in tandem with PuppetLabs’ official guide.

  • Patched NeCTAR images for CVE-2016-5195 (Dirty Cow)

    We have patched NeCTAR Images for CVE-2016-5195. The official updated images are available right now. If you are unsure whether you are running a patched version, you can check the kernel by running uname -a and compare it to the list below.
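The check the excerpt describes boils down to one command; the output is then compared against the post’s list of patched kernel versions:

```shell
# Print the running kernel release so it can be compared with the
# patched versions listed in the post.
uname -r
```

If the release string does not match a patched version, reboot into an updated kernel or rebuild from the patched images.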

  • Tips for running NeCTAR VMs

It has been a real privilege to work on the NeCTAR Research Cloud over the last year and a half. It has given me much experience in how Melbourne Node’s hardware and OpenStack work together. With this knowledge, I’ve put together some tips on spinning up a NeCTAR VM, or “What should I do when I launch a NeCTAR VM?”

  • Running Tempest for NeCTAR

Over the past year, we (the Melbourne node of NeCTAR) have pushed in 2 new batches of compute hardware. One of the things that bugged me was that the testing of compute hardware was terribly inefficient - operators were manually creating new instances, volumes, attaching volumes to instances, etc, to make sure that each host was working before we moved it to production. On top of being a terrible waste of time, we were also prone to missing different test cases (boot from volume? oops! boot from resized volume? oops!). This led us to look for a better way to do testing when we started a hardware refresh this year (+4000 VCPUs yay!).

  • Run just one tox test

    If you are using tox, you can run just one tox test by doing:
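The excerpt ends before the command itself; as an illustrative guess at the usual shape of it (the environment name and test identifier here are placeholders, not the post’s actual example), arguments after `--` are passed through to the underlying test runner:

```shell
# Run a single test via tox: "py3" and the test path are illustrative
# placeholders -- substitute your own tox environment and test name.
tox -e py3 -- tests.unit.test_example.TestExample.test_one
```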

  • git review unpack failed

    Recently, I ran into a problem doing a git review with just a changed commit message (no changed files). The error message was like:

  • Puppet development in production

Over here at the NeCTAR Research Cloud, we have a bunch of hosts, mainly computes and a few other control plane hosts. When I started, there wasn’t a great puppet infrastructure set up - people were still editing directly on the production puppet master. Given that this host holds all the configuration for NeCTAR RC, and one mistake could stop the whole cloud, it was something we thought would be good to change from day one.

  • Migrating Cinder to multiple backend

OpenStack cinder supports multiple backends, which is quite useful if you are running out of space on one type of storage and need to bring in additional or replacement storage.
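A minimal sketch of what multi-backend looks like in `cinder.conf` (the section names, driver, and volume groups below are illustrative placeholders, not Nectar’s actual configuration):

```ini
# Illustrative cinder.conf fragment: one section per backend, listed in
# enabled_backends. All names and values here are placeholders.
[DEFAULT]
enabled_backends = lvm-old,lvm-new

[lvm-old]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes-old
volume_backend_name = lvm-old

[lvm-new]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes-new
volume_backend_name = lvm-new
```

Volume types are then mapped to a backend via the `volume_backend_name` extra spec, so new volumes land on the new storage while existing ones can be migrated over.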

  • Benchmarking NeCTAR Cloud

    NeCTAR is made up of 8 nodes around Australia. Each node has quite a bit of discretion in choosing hardware and technologies to support their implementation of OpenStack. For example, Pawsey and Tasmania are both using Ceph as their cinder backend, whereas Melbourne is using a NetApp cluster for the same. As a result, it can be expected that performance will vary across the different nodes.

  • Trip to Vancouver!

It’s off to Vancouver for a week to attend the 2015 OpenStack Summit!

subscribe via RSS