Quick Code: Repo List

So I ran into an interesting problem over the weekend: I forgot my 2FA token for Gitlab at home while I was away. My laptop’s SSH key was already loaded into Gitlab, so I knew I could clone any of my repositories if only I could remember the exact name. That, of course, turned out to be the problem: I couldn’t remember the name of the specific repository I wanted to work on. I even tried throwing a bunch of guesses at git clone and still had no luck. Enter the Gitlab API:

#!/usr/bin/env python3
import requests
from tabulate import tabulate

personal_token = 'asdfqwerzxcv1234'
user_id = 'dword4'

base_url = 'https://gitlab.com/api/v4/'
repo_url = 'users/'+user_id+'/projects'

full_url = base_url + repo_url + '?private_token=' + personal_token

res = requests.get(full_url).json()
table = []
for project in res:
    name = project['name']
    name_spaced = project['name_with_namespace']
    path = project['path']
    path_spaced = project['path_with_namespace']
    if project['description'] is None:
        description = ''
    else:
        description = project['description']
    #print(name,'|', description)
    table.append([name, description])

print(tabulate(table, headers=["name","description"]))

This is of course super simplistic and does virtually no error checking, fancy formatting, etc. However, with a quick alias I can now get a list of my repositories even when I do flake out and forget my token at home.
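If you did want a little more robustness, here is a slightly hardened sketch of the same idea: the token goes in a params dict instead of string concatenation, HTTP errors fail loudly, and GitLab's pagination headers are followed so you see every repository, not just the first page. The token and user below are placeholders.

```python
import requests

def rows_from_projects(projects):
    """Turn the API's project dicts into [name, description] rows."""
    return [[p['name'], p.get('description') or ''] for p in projects]

def list_projects(user_id, token):
    """Fetch all projects for a user, following GitLab pagination."""
    url = 'https://gitlab.com/api/v4/users/' + user_id + '/projects'
    params = {'private_token': token, 'per_page': 100}
    rows = []
    while url:
        res = requests.get(url, params=params)
        res.raise_for_status()  # fail loudly on a bad token or typo'd URL
        rows += rows_from_projects(res.json())
        # GitLab sets X-Next-Page to '' on the last page
        next_page = res.headers.get('X-Next-Page')
        params['page'] = next_page
        url = url if next_page else None
    return rows

if __name__ == '__main__':
    from tabulate import tabulate
    print(tabulate(list_projects('dword4', 'asdfqwerzxcv1234'),
                   headers=["name", "description"]))
```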

Terraform – Reference parent resources

Sometimes things get complicated in Terraform, like when I touch it and make a proper mess of the code. Here is a fairly straightforward example of how to reference parent resources from a child.

$ pwd
/Users/dword4/Terraform
$ tree
.
├── Child
│   └── main.tf
└── main.tf

1 directory, 2 files

First let's look at what should be in the top-level main.tf file. The substance of it is not super important, other than to have a rough idea of what you want/need:

provider "aws" {
  region = "us-east-2"
  profile = "lab-profile"
}

terraform {
  backend "s3" {}
}

# lets create an ECS cluster

resource "aws_ecs_cluster" "goats" {
  name = "goat-herd"
}

output "ecs_cluster_id" {
  value = aws_ecs_cluster.goats.id
}

What this does is create an ECS cluster named “goat-herd” in us-east-2 and then output ecs_cluster_id, which contains the ID of the cluster. While we don’t necessarily need to see the value printed, we need the output block because it makes the data available to other modules, including children. Now let's take a look at what should be in Child/main.tf

provider "aws" {
  region = "us-east-2"
  profile = "lab-profile"
}

terraform {
  backend "s3" {}
}
module "res" {
  source = "../../Terraform"
}
output "our_cluster_id" {
  value = module.res.ecs_cluster_id
}

What is going on in this file is that it defines a module called res, sourced from the parent directory where the other main.tf resides. This lets us reference the module and the outputs it houses, giving us access to the ecs_cluster_id value for use in other resources as necessary.
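To see why that matters, the child can feed the parent's cluster ID straight into its own resources. A sketch of what that might look like; the service and the task definition reference here are hypothetical, made up purely to show the module reference in use:

```hcl
# Hypothetical: launch a service into the cluster created by the parent.
# aws_ecs_task_definition.feeder is a placeholder and would need to be
# defined elsewhere in this configuration.
resource "aws_ecs_service" "goat_feeder" {
  name            = "goat-feeder"
  cluster         = module.res.ecs_cluster_id
  task_definition = aws_ecs_task_definition.feeder.arn
  desired_count   = 1
}
```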

Managing a Growing Project

I am no Project Manager in even the loosest sense of the word. Despite that, I find myself learning more and more of the processes of PM, especially as projects start to expand and grow. Specifically, I am speaking about the NHL API project I started almost two years ago. It led me down the rabbit hole of permissions and how to manage the project overall going forward. The project's roots are very rough; even today I still generally commit directly to master. Now the repository has grown to over 70 commits, two distinct files, and 17 contributors.

Balance

One thing I am constantly trying to be cognizant of is becoming overly possessive of the project. While it may have started as a one-man show, I want and enjoy contributions from others. The flip side of worrying about being possessive is that there are times when steering is necessary. One instance that comes to mind is the suggestion of including example code. The goal of the project is documentation, so I declined such suggestions: unmaintained code becomes a hindrance over time, and I don't want to add that complexity to the project.

Growth

There is often pressure to grow projects, to make them expand and change over time. It's common for businesses to always want growth, and that mentality seems to have spread to software. Something like the NHL API is a very slow-changing thing; just looking at the commit history shows this. Weeks and months go by without new contributions or even me looking at the API itself. I dabbled with ideas such as using Swagger to generate more appealing documentation, but every time I tried to add something new and unique it felt forced. This ultimately led me to accept that growth will not be happening; the project has likely reached its zenith.

Looking Forward

The next steps are likely small quality-of-life things such as the recent Gitter.im badge. Things that make it easier for people to interact but don’t change the project overall. My knowledge of the API makes for fast answers so I try to help out when I am able.

Youtube Essential Ripping Platform

So I have been a longtime user of youtube-dl to archive some things (obscure music, recordings of tech talks) and figured it was worth taking some time to build a simple, easy-to-use way to do this that others could benefit from. More simply put, I created a front-end with Python and Flask that sits on top of youtube-dl and makes the process easy enough for non-technical people to use. Thus YERP was born (https://gitlab.com/dword4/yerp) to fill that role. I know there are tons of competing projects out there doing the exact same thing, but I wanted to take a crack at it for my own home network and simplify it to the point that all you have to do is run a Dockerfile and it springs into existence without configuration.

The project is VERY green right now, and things are moving around and changing a lot (even in my head, before code is committed to the repository), so don't bank on things staying how they are. There are tons of little features I want to put in, like folder organization, backups, and flags for filetypes, which will take quite a while to figure out how to implement. So if you do run the program, just beware; and if you find something that can be done better, feel free to submit a PR. I will gladly bring other code into the project, since I am only one person and not exactly a professional at this to begin with.

Winter Improvements for Hockey-Info

Finally got around to a rather large update for this project. Fixed some small bugs, such as the L10 data being way wrong (it was showing win-loss-OT for the entire season), added in missing stats for goalies, and made the display of previous game results more sensible. Also redid about 95% of the interface to use Bootstrap 4, which has made the look more uniform throughout. If you are interested in the code itself you can see that here, or if you just want to check out the live site, which I also host, head over to http://hockey-info.online.

Hockey Records

The NHL was kind enough to release records.nhl.com to the public, a place to browse more interesting stats than just game-by-game data: things like players who have hit the 1000-point milestone and other trivia-friendly factoids. Naturally the spidey senses went to tingling as soon as I saw the news on Reddit, so I ran off to start poking at it, and lo and behold it actually hits what appears to be the same data source as nhl.com/stats/rest, but with all sorts of extra endpoints to try out. This time around I attempted to be slightly clever and looked at https://records.nhl.com/static/js/client.bundle.js to save myself the trial-and-error process I used on a lot of the Stats API. Turns out this was actually a smart move, and it probably would have let me document a lot of this stuff sooner had I thought to spend some time poking around the code of the stats website. No matter; what counts is that there is now a rough outline of the Records API, and it has been rolled into the NHLAPI repo on Gitlab. Just like before, if you see something I missed feel free to open a PR, and if I don't happen to see it right away, @ me on Twitter; I try to respond fairly quickly.

Footnote: https://beautifier.io/ is fantastic; it let me unmangle the client.bundle.js file so it was readable.
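If you want to poke at the Records API yourself, here is a minimal sketch. The base path is the one the site's own JavaScript hits; the 'franchise' endpoint used in the example is just one I happened to try, so substitute whichever endpoint you are exploring, and treat any filter parameters as assumptions:

```python
import requests

BASE = 'https://records.nhl.com/site/api'

def records_url(endpoint, **filters):
    """Build a Records API URL; keyword args become query parameters."""
    url = BASE + '/' + endpoint.lstrip('/')
    if filters:
        url += '?' + '&'.join(f'{k}={v}' for k, v in sorted(filters.items()))
    return url

if __name__ == '__main__':
    # Fetch the list of franchises as a quick smoke test
    print(requests.get(records_url('franchise')).json())
```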

Simple Icinga2 Plugin

I’ve seen bits and pieces of the process of creating an Icinga2 (or Nagios) plugin, so here are my notes dumped straight from my brain.

First and foremost we need a script to call from Icinga; in this case I created a very simple Python script to get the version of LibreNMS running on my monitoring system.

#!/usr/bin/python
import argparse
import requests
import json
import sys

parser = argparse.ArgumentParser(description='Check the LibreNMS version via its API.')

parser.add_argument('-H', action="store",dest="host", help='name of host to check')

#parser.add_argument('token', metavar='token', help='API token')
token = 'yourAPItokenGOEShere'
args = parser.parse_args()

host_check = 'http://'+args.host+'/api/v0/system'
headers = {'X-Auth-Token': token }
r = requests.get(host_check, headers=headers,verify=False)

#print(r.json())

json_string = r.text
parsed_json = json.loads(json_string)

system_status = parsed_json['status']
system_ver = parsed_json['system'][0]['local_ver']

if system_status == 'ok':
	ret = "status: "+system_status+" version:"+system_ver
	print(ret)
	sys.exit(0)
else:
	ret = "status: "+system_status+" version:"+system_ver
	print(ret)
	sys.exit(3)

This is a pretty simple script; you can call it with ./check_lnms_ver.py -H 192.168.1.100 to see how it works. With the script working, the next portion is done on the command line. First create the directory that will later be referenced as CustomPluginDir:

# mkdir -p /opt/monitoring/plugins

Now we need to tell Icinga2 about the directory; this is done in a few different places.

In /etc/icinga2/constants.conf add the following:

const CustomPluginDir = "/opt/monitoring/plugins"

and in /etc/icinga2/conf.d/commands.conf we add the following block

object CheckCommand "check-lnms" {
    command = [ CustomPluginDir + "/check_librenms.py" ]

    arguments = {
        "-H" = "$address$"
    }
}

The block above defines the custom command, specifies the script we created first, and passes the correct flags.  Now it's time to add the check to the hosts.conf file, so place the following block into /etc/icinga2/conf.d/hosts.conf:

object Host "itsj-lnms" {
        address = "192.168.1.85"
        check_command = "check-lnms"
}

And with that we wait for the next polling cycle and should see something like the screenshot below

This is a highly simplistic example, but figuring it out was necessary for me: I had to port some existing code from Ruby to Python, so I wanted to know exactly how a plugin was created, what values it returned, and how it all fits together.
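What makes this a plugin rather than just a script is the exit code: Icinga2 (and Nagios) interpret 0 as OK, 1 as WARNING, 2 as CRITICAL, and 3 as UNKNOWN. A minimal sketch of that convention, mirroring the 'ok' status check in the script above:

```python
# Nagios/Icinga plugin exit-code convention:
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def exit_code_for(system_status):
    """Map a LibreNMS status string onto a plugin exit code.
    Anything we don't recognize is UNKNOWN, like the script above."""
    return OK if system_status == 'ok' else UNKNOWN

# sys.exit(exit_code_for(parsed_json['status'])) could replace the
# whole if/else block at the end of the script.
```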

So long Github!

So Microsoft bought Github for a moderate mountain of money, and now everyone is fleeing before the deal has even been approved by regulatory bodies. Some folks are calling it overreacting, but the reality is that Microsoft has a terrible track record (Nokia, Skype, Codeplex) and has at times been outright antagonistic to Open Source as a whole. Given that purchases lately are often about getting access to data, I really don't feel like providing useful metrics to Microsoft about the projects I work on, no matter how small and insignificant they may be, so all new work will appear on my Gitlab account. I went through by hand and tried to find all the places I linked my code here, but if I happened to miss something, either leave a comment or hit me up on Twitter and I will update the links.

How fast does the NATO phonetic alphabet go through letters?

So I saw a thread on Reddit about people using phrases like the usual "quick brown fox" to test out fountain pens, and it got me thinking: I normally use the NATO phonetic alphabet to test my pens, but how fast does that go through all the letters of the alphabet? After some banging around I came up with code that figures it out, without all the hassle of actually trying to time it.

#!/usr/bin/python3

alphabet = 'abcdefghijklmnopqrstuvwxyz'

chars = list(alphabet)

words = ['alpha','bravo','charlie','delta','echo','foxtrot','golf','hotel','india','juliett','kilo','lima','mike','november','oscar','papa','quebec','romeo','sierra','tango','uniform','victor','whiskey','xray','yankee','zulu']

for w in words:
    # loop through each word in the list
    for cw in w:
        # remove each character the first time we see it
        if cw in chars:
            chars.remove(cw)
        print(chars)
    print(w)

print(chars)

Turns out that it really doesn't use up everything until the very end, but by the time you get to the word papa all but four letters have been used already.

['q', 's', 'w', 'y', 'z']
['q', 'w', 'y', 'z']
['q', 'w', 'y', 'z']
['q', 'w', 'y', 'z']
['q', 'w', 'y', 'z']
oscar
['q', 'w', 'y', 'z']
['q', 'w', 'y', 'z']
['q', 'w', 'y', 'z']
['q', 'w', 'y', 'z']
papa

So as a way to work through all the letters of the alphabet it's really not the most efficient way to go, but perhaps there are better phrase combinations than the quick brown fox?
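One way to compare candidate phrases is to count how many letters remain unused after each word. A small sketch that does this for both the NATO list and the fox pangram:

```python
def letters_left_after(words):
    """Return the count of still-unused letters after each word."""
    unseen = set('abcdefghijklmnopqrstuvwxyz')
    remaining = []
    for w in words:
        unseen -= set(w)
        remaining.append(len(unseen))
    return remaining

fox = 'the quick brown fox jumps over the lazy dog'.split()
nato = ['alpha', 'bravo', 'charlie', 'delta', 'echo', 'foxtrot',
        'golf', 'hotel', 'india', 'juliett', 'kilo', 'lima', 'mike',
        'november', 'oscar', 'papa', 'quebec', 'romeo', 'sierra',
        'tango', 'uniform', 'victor', 'whiskey', 'xray', 'yankee', 'zulu']

print(letters_left_after(fox))   # reaches zero by the ninth word
print(letters_left_after(nato))  # doesn't reach zero until zulu
```

The fox phrase wins on word count, of course, but the NATO list has the advantage of exercising every letter in a fixed, memorable order.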

Fakes abound!

It's already making the rounds in various online news outlets that Reddit banned deepfakes (AI-assisted fake pornographic videos), and naturally it's causing all manner of consternation as people on every side of the issue get twisted up and yell at each other incoherently.  What's slipping through the cracks, however, is that there is also technology out there to fake voices, and while it's not great, it's not as absolutely terrible as one might expect.  Since I have no desire to see myself superimposed on the body of another, I figured I might as well see how good a computer was at faking my voice, since so many things take only very brief conversations to authorize these days.

In order to prime the software you have to record yourself reading a bunch of sentences, enough material for at least 30 seconds according to the prompts.  Once you have that corpus ready, you tell the service to go build your voice (I was imagining Bene Gesserit Voice training while it processed), and when it's done you can type in anything you want and the synthesized version of your voice spits out the phrase, for better or for worse.

Non-generated

Generated with Lyrebird.ai


Naturally there are some modulated sounds in the generated one; however, having reviewed recorded phone calls of myself, it sure could pass for me if the phone mic was bad.  What is scary is that it correctly hit the emphasis I naturally put on some words, enough that I suspect it might have done even better had I not been sick when recording and had a proper soundproof room to do it in.  For about 30 minutes of screwing around on the site re-recording my various gaffes, I think it did an admirable job of spoofing my voice, and I suspect that given enough time to refine the software it could get pretty good.  Fortunately I'm broke compared to the Hollywood folks, who are turning coal into diamonds right now worrying about faking technology producing sex tapes they never actually starred in.
