Posts

Running SaltStack in multi-master mode

Recently I ran into some problems when trying to use multiple Salt masters in combination with the mine, and thought I'd share my experiences.

Background

Like Puppet

The Salt mine can be compared to the PuppetDB; it's a place to store things like (custom) facts for use on other nodes/instances. The classic use case for this is a monitoring setup like Nagios that needs to be configured for each additional deployed service/instance with the particulars of that service/instance. So let's say you deploy a new MySQL instance and automatically slave it to an existing cluster: you want the IP address, database name and maybe some other details to be configured on the monitoring server. Puppet does this with something called "exported resources", which basically sends data to a central location (the PuppetDB) that can then be collected by other machines when Puppet runs there.

But different

Where Puppet uses a database (PostgreSQL by default)
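To make that comparison a bit more concrete: a minimal sketch (hypothetical minion names, and assuming the MySQL minions publish network.ip_addrs to the mine via mine_functions) of how the monitoring minion could pull that data out of the mine with Salt's Python API could look like this; in a state you would do the same thing with mine.get from a template:

    # A minimal sketch: on the monitoring minion, ask the master's mine
    # for the IP addresses published by the MySQL minions.
    # Minion names and the mine function are hypothetical and assume
    # "mine_functions: network.ip_addrs" is configured on the MySQL minions.
    import salt.client

    caller = salt.client.Caller()

    # Returns a dict of {minion_id: [ip, ip, ...]} for every matching minion
    mysql_ips = caller.cmd('mine.get', 'mysql*', 'network.ip_addrs')

    for minion, addrs in mysql_ips.items():
        print('%s -> %s' % (minion, ', '.join(addrs)))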

When all you have is a hammer.....

Introducing Ballista

Some time ago I created a few tools and a library to interface with the Satellite 6 installation at a client and made some posts about that. While these tools got some use, we found that what we really wanted was a tool that was easily extended and consistent in its use for manipulating our Satellite 6/Katello installations. So, together with my colleague Joey Loman, I set out to create a comprehensive tool, provide an infrastructure for future functionality and supplement the existing hammer-cli interface. You can check it out at https://github.com/RedHatSatellite/ballista

Cleaning up unused Content View versions in Satellite 6

The problem

When using the whole concept of Content Views in Satellite 6 the way it is intended, you can end up with a lot of unused versions; every time a new version is promoted to all your environments, you are left with an unused one. When using a lot of content views, and especially when using Composite Content Views that get a new version every time one of their components is updated, you end up spending a lot of time removing them via the GUI.

The solution (well, a solution anyway)

While you can clean them up using hammer in a loop, you need to make sure you only remove versions that are not currently used by any environment. Since I already made a Python library for various tasks, it was trivial to add a small script that removes every version of a content view that is not in use. We use this at my current client and it saves a lot of time; maybe someone else can find a use case for it too :) You can find it at https://github.com/yhekma/satellite6_tools (the one c
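To illustrate the idea (this is not the script from satellite6_tools itself), a minimal sketch against the Katello REST API could look like the following; the hostname, credentials and organization id are placeholders, and a version that is not promoted to any lifecycle environment shows up with an empty environment_ids list:

    # A minimal sketch, NOT the satellite6_tools implementation:
    # delete every content view version that is not in any environment.
    # Hostname, credentials and organization_id are placeholders.
    import requests

    SAT = 'https://satellite.example.com'
    AUTH = ('admin', 'changeme')

    def get_json(path, **params):
        resp = requests.get(SAT + path, auth=AUTH, params=params, verify=False)
        resp.raise_for_status()
        return resp.json()

    views = get_json('/katello/api/content_views',
                     organization_id=1, per_page=1000)['results']

    for view in views:
        for version in view.get('versions', []):
            # versions still promoted somewhere have environment_ids set
            if version.get('environment_ids'):
                continue
            print('removing %s version %s' % (view['name'], version['version']))
            resp = requests.delete(
                '%s/katello/api/content_view_versions/%s' % (SAT, version['id']),
                auth=AUTH, verify=False)
            resp.raise_for_status()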

Syncing repositories that need authentication using a proxy in Satellite 6

At my current client we found that, when using a proxy without authentication, we could not sync external repositories that required authentication in Satellite 6. After a trek down pulp-code-lane with a colleague (check http://binbash.org/ for his blog) we found the problem to be a simple Python statement inside the pulp-nectar code. After submitting a pull request ( https://github.com/pulp/nectar/pull/47 ) and mentioning it to Red Hat via a support case, I have been told that they have created an internal Bugzilla entry and will fix it in an upcoming release (thanks for the quick response!). Until then, if you get authentication errors when you try to sync external repositories (like https://username:password@repo.org) and you use a proxy without authentication, take a look at the patch; it's really as simple as it seems :)

Recursively update Composite Content Views in Satellite 6/Katello

The basic idea

Satellite 6 (and Katello for that matter) has a new way of dealing with content, whether that be Puppet modules, RPMs or Docker images. Below I will focus on RPMs, which I think will be the use case for most people. A content view can contain one or more RPM repositories at a specific point in time, and can consist of multiple versions. So let's say I create a content view named RHEL7_BASE on Monday containing 2 repositories I just synced: rhel7_server and rhel7_epel. Version 1 of that content view points to the packages as they are on that Monday. Now the next Friday I do a sync of my repositories so I get the latest versions and patches and whatnot, but note that version 1 of my RHEL7_BASE content view is unchanged, and any servers that are using this version will not have access to the new packages. In order to make these new packages available, I need to publish a new version of the view and promote this version to the environment that con
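For reference, a minimal sketch (not the tool from this post) of that publish-and-promote step against the Katello REST API might look like this; the hostname, credentials, content view id, version id and environment id are all placeholder values:

    # A minimal sketch with placeholder hostname, credentials and ids:
    # publish a new version of a content view, then promote it.
    import requests

    SAT = 'https://satellite.example.com'
    AUTH = ('admin', 'changeme')

    def post(path, **data):
        resp = requests.post(SAT + path, auth=AUTH, json=data, verify=False)
        resp.raise_for_status()
        return resp.json()

    # Publish a new version of content view 5 (e.g. RHEL7_BASE above).
    # This returns an asynchronous task that has to finish first.
    task = post('/katello/api/content_views/5/publish',
                description='Friday repository sync')
    print('publish task: %s' % task.get('id'))

    # Once published, promote the new version (id 42 here) to the lifecycle
    # environment with id 3. Note: older releases may expect the singular
    # environment_id parameter instead.
    post('/katello/api/content_view_versions/42/promote', environment_ids=[3])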

Mounting data onto your filesystem for fun and unfortunately no profit

A little more than a year ago I was working for a client that wanted a simple way to do an inventory of their Linux servers (running SLES). Their DTAP network was set up so that every environment was reachable only via a bastion host, and only via that host. Luckily you can do a lot with Ansible in conjunction with ProxyCommands (see https://en.wikibooks.org/wiki/OpenSSH/Cookbook/Proxies_and_Jump_Hosts ), so reaching the servers was not really a problem, and since Ansible's excellent setup module provides a wealth of information, I had all the ingredients I needed; I just needed to connect the dots. Since I wanted to learn more about filesystems (and FUSE in particular) I thought it would be a nice exercise to try and "map" the collected data from Ansible's setup module onto a mountpoint to make it easy to grep and parse and all that jazz. Since the data format Ansible uses is JSON anyway, I thought it would be best to first focus on creating a script
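To give an impression of the FUSE side of that idea, here is a minimal read-only sketch (using the fusepy library, with a hypothetical facts.json taken from the setup module's output); it is a simplified illustration rather than the actual tool, exposing a nested dict as directories and everything else as small text files:

    # A minimal sketch (assuming fusepy): mount a nested dict of Ansible
    # facts as a read-only filesystem. Dicts become directories, all other
    # values become files containing their JSON representation.
    import errno
    import json
    import stat
    import sys
    import time

    from fuse import FUSE, FuseOSError, Operations


    class FactsFS(Operations):
        def __init__(self, facts):
            self.facts = facts
            self.now = time.time()

        def _lookup(self, path):
            node = self.facts
            for part in filter(None, path.split('/')):
                if not isinstance(node, dict) or part not in node:
                    raise FuseOSError(errno.ENOENT)
                node = node[part]
            return node

        def _dump(self, node):
            return json.dumps(node).encode('utf-8')

        def getattr(self, path, fh=None):
            node = self._lookup(path)
            base = dict(st_atime=self.now, st_mtime=self.now, st_ctime=self.now)
            if isinstance(node, dict):
                return dict(base, st_mode=stat.S_IFDIR | 0o555, st_nlink=2)
            return dict(base, st_mode=stat.S_IFREG | 0o444, st_nlink=1,
                        st_size=len(self._dump(node)))

        def readdir(self, path, fh):
            node = self._lookup(path)
            if not isinstance(node, dict):
                raise FuseOSError(errno.ENOTDIR)
            return ['.', '..'] + list(node.keys())

        def read(self, path, size, offset, fh):
            return self._dump(self._lookup(path))[offset:offset + size]


    if __name__ == '__main__':
        # usage: python factsfs.py facts.json /mnt/facts
        # facts.json is assumed to hold the ansible_facts dict of one host
        with open(sys.argv[1]) as fh:
            facts = json.load(fh)
        FUSE(FactsFS(facts), sys.argv[2], foreground=True, ro=True)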

First!

Well, I finally bit the bullet and started a blog. Here I will try to share some of the problems and solutions I have encountered or devised in my daily work, mainly with Linux, as a DevOps Linux engineer.